It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It doesn’t generate them well on its own, but if I give it the bulk of the info, it makes it pretty af.

Also just talking and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry about whether I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

  • sub_o@beehaw.org · 11 months ago

    https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt

    I’ve never used ChatGPT, so I don’t know if there’s an offline version. I assume everything you type in is in turn used to train the model, so using it will probably leak sensitive information.

    Also, from what I’ve read, the replies are convincing but can sometimes be very wrong, so if you’re relying on it for machinery, medical stuff, etc., it could end up being fatal.

    • lloram239@feddit.de · 11 months ago (edited)

      > I’ve never used ChatGPT, so I don’t know if there’s an offline version.

      There is no offline version of ChatGPT itself, but many competing LLMs are available to run locally, e.g. Facebook just released Llama 2, and llama.cpp is a popular way to run those models. The smaller models work reasonably well on modern consumer hardware; the bigger ones less so.
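      As a rough sketch, getting llama.cpp running locally looks something like this (the model file name below is just an example of a quantized build; pick one sized for your hardware):

      ```shell
      # Build llama.cpp from source (needs git and a C/C++ toolchain)
      git clone https://github.com/ggerganov/llama.cpp
      cd llama.cpp
      make

      # Download a quantized model file into ./models first -- the file
      # name here is an example, not a specific recommendation.
      # Smaller quantizations use less RAM at some cost in quality.

      # Run a prompt entirely offline -- nothing you type leaves your machine,
      # which is the main privacy advantage over a hosted service
      ./main -m ./models/llama-2-7b-chat.Q4_K_M.gguf -p "Hello" -n 128
      ```

      Since everything runs on your own machine, the data-leak concern from the Samsung story above doesn’t apply, though you trade that for lower output quality on consumer hardware.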

      > but could sometimes be very wrong

      They are mostly correct when you stay within the bounds of the training material. They become complete fiction when you go outside it or try to dig too deep (e.g. a summary of a popular movie will be fine, specific lines of dialogue will be made up, and a summary of a less popular movie might be invented entirely).