• ImplyingImplications@lemmy.ca
    1 month ago

    Because AI needs a lot of training data to reliably generate appropriate output. It’s far easier to collect millions of Reddit posts than millions of research papers.

    Even then, LLMs simply generate text with no idea what the text means. They just know those words have a high probability of matching the expected response; nothing in the process checks that what was generated is factual.
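    A toy sketch of that point (pure frequency counting, nowhere near a real LLM; the corpus and function names are made up for illustration): the model picks the statistically most common continuation, with no notion of truth anywhere.

    ```python
    from collections import Counter, defaultdict

    # Tiny "training corpus": the model will learn which word tends
    # to follow which, and nothing else.
    corpus = "the sky is blue the sky is blue the sky is green".split()

    # Count observed next-word frequencies for each word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def most_likely_next(word):
        # Return the most frequent continuation seen in training.
        # Note: this checks probability, not factuality.
        return follows[word].most_common(1)[0][0]

    print(most_likely_next("is"))  # "blue" — more frequent, not "more true"
    ```

    Scale that idea up by many orders of magnitude and you get fluent text, but the underlying objective is still "what word usually comes next", not "what is correct".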

      • ulkesh@beehaw.org
        1 month ago

        Because we have brains capable of critical thinking. It makes no sense to compare the human brain to LLMs in their current infancy and inanity.