My meme/shitposting alt, other @Deebsters are available.

  • 7 Posts
  • 134 Comments
Joined 3 years ago
Cake day: July 13th, 2021

  • Deebster@lemmy.ml to 196@lemmy.blahaj.zone · Rule · 2 points · 1 month ago

    Hmm, I think they’re close enough to be able to say a neural network is modelled on how a brain works - it’s not the same, but then you reach the other side of the semantics coin (like the “can a submarine swim” question).
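    For a crude illustration of the analogy: an artificial neuron just takes a weighted sum of its inputs and squashes it through a nonlinearity, loosely echoing a biological neuron firing once its summed input crosses a threshold. A minimal sketch in Python, with made-up inputs, weights, and bias:

    ```python
    import math

    def neuron(inputs, weights, bias):
        """One artificial neuron: weighted sum of inputs + sigmoid activation.
        Loosely analogous to a biological neuron firing once its
        summed input crosses a threshold."""
        z = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1 / (1 + math.exp(-z))  # sigmoid squashes output into (0, 1)

    # Invented example values, purely for illustration
    print(neuron([0.5, -1.0, 2.0], [0.9, 0.3, 0.4], bias=-0.5))
    ```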

    The plasticity part is an interesting point, and I’d need to research that to respond properly. I don’t know, for example, if they freeze the model because otherwise input would ruin it (the internet teaching them to be sweaty racists, for example), or because it’s so expensive/slow to train, or because error rates would be too high, or because it’s simply impossible, etc.

    When talking to laymen I’ve explained LLMs as a glorified text autocomplete, but there’s some discussion on the boundary of science and philosophy asking whether intelligence is a side effect of being able to predict well.
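    The “glorified autocomplete” framing can be made concrete: at its core, generation is just repeatedly picking a likely next token given the text so far. A toy sketch, where the bigram table and its probabilities are invented purely for illustration (a real LLM learns a distribution over tokens from vast amounts of text):

    ```python
    # Invented toy bigram table: word -> {next word: probability}
    bigrams = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "sat": {"down": 1.0},
    }

    def autocomplete(word, steps=3):
        out = [word]
        for _ in range(steps):
            options = bigrams.get(out[-1])
            if not options:
                break
            # Greedy decoding: always take the highest-probability next word
            out.append(max(options, key=options.get))
        return " ".join(out)

    print(autocomplete("the"))  # -> "the cat sat down"
    ```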


  • Deebster@lemmy.ml to 196@lemmy.blahaj.zone · Rule · 2 points · edited · 1 month ago

    I agree with your broad point, but absolutely not in this case. Large Language Models are 100% AI, they’re fairly cutting edge in the field, they’re based on how human brains work, and even a few of the computer scientists working on them have wondered if this is genuine intelligence.

    On the spectrum from scripted behaviour in Doom up to sci-fi depictions of sentient silicon-based minds, I think we’re past the halfway point.