• MacN'Cheezus@lemmy.todayOP

    I don’t think that’s a problem with the model itself, but with the fact that it was heavily censored and lobotomized in order to achieve maximum political correctness so they could avoid another Tay incident.

    • lud@lemm.ee

      It makes sense that they do that, since the media and randoms on the internet treat everything ChatGPT and Bing Chat say as if it were as valid as info from official OpenAI and MS spokespersons.

    • Sigh_Bafanada@lemmy.world

      Thing is, there wasn’t even a chance of having a full Tay incident. The problem with Tay was that it was a learning model, so people could teach it to be more messed up.

      Meanwhile, ChatGPT doesn’t learn from its conversations; it only knows its fixed training dataset (hence why its knowledge stops at September 2021), so the heavy censorship was more likely meant to avoid much more minor incidents, which imo is dumb.
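
      To make the distinction concrete, here’s a rough toy sketch (class and method names made up purely for illustration) of the difference between a Tay-style online learner, whose “knowledge” is mutated by every user message, and a ChatGPT-style frozen model, whose knowledge is fixed once training ends:

      ```python
      class OnlineLearningBot:
          """Tay-style: every user message becomes part of what it knows."""
          def __init__(self):
              self.learned_phrases = []

          def chat(self, user_message: str) -> str:
              self.learned_phrases.append(user_message)  # users can steer it anywhere
              return self.learned_phrases[-1]             # parrots what it was just taught


      class FrozenModelBot:
          """ChatGPT-style: replies draw only on a fixed, pre-trained dataset."""
          def __init__(self, training_data: list[str]):
              self.training_data = tuple(training_data)   # immutable after "training"

          def chat(self, user_message: str) -> str:
              # User input shapes the reply but never changes what the bot "knows".
              return self.training_data[hash(user_message) % len(self.training_data)]


      tay_like = OnlineLearningBot()
      gpt_like = FrozenModelBot(["facts up to September 2021"])
      ```

      The point of the sketch: only the first kind can be “taught” to misbehave by users; the second can at worst be coaxed into regurgitating something already in its training data.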