• magnetichuman@fedia.io

I don’t fear skilled professionals using GenAI to boost their productivity. I do fear organisations using GenAI to replace skilled professionals.

    • HumanPenguin@feddit.uk

      This. It’s like any tool: it comes down to the skill/knowledge/experience of the user to evaluate the result.

      But as soon as management/government start seeing it as a cheat to reduce hiring, it becomes a danger.

      • SoleInvictus@lemmy.blahaj.zone

        I think the issue with this particular tool is it can authoritatively provide incorrect, entirely fabricated information or a gross misinterpretation of factual information.

        In any field I’ve worked in, I’ve often had to refer to reference material, as I simply can’t remember everything. I have to use my experience and critical thinking skills to determine whether I’m using the correct material. What I have not had to do is further determine whether my reference material has simply made up a convincing, “correct-sounding” answer. Yes, there are errors and corrections to material over time, but never before has the entire reference been suspect yet continued to be used.

        • Swedneck@discuss.tchncs.de

          I maintain that AI companies could improve their stuff a huge amount by simply forcing the model to prefix “I think” to all statements. It’s sorta like how a calculator shouldn’t show more digits than it can confidently produce: if the precision is only 4 decimals, then don’t show 8.

      • Digestive_Biscuit@feddit.uk

        Imagine an AI with a model trained exclusively on a specific set of medical books, the same set of books all doctors already have access to. While there’s still room for error, it would guide the doctor to a very familiar reference. No internet junk, social media, etc.

        Exactly as you say. It’s a tool, not a replacement. Certainly not in healthcare anyway.

      • stupidcasey@lemmy.world

        I would prefer this to no healthcare until it’s too late, which seems to be the option in places with free healthcare.

        • Echo Dot@feddit.uk

          Yeah, we should all use the corporate system, which is brilliant. As long as you’re rich. Easy solution: just be rich and you’re fine.

          Thank you for your unhelpful and ignorant comment.

    • GreatAlbatross@feddit.ukM

      And I also fear overburdened professionals not having time to second-guess ML hallucinations.

      • Echo Dot@feddit.uk

        You were probably already at risk of a misstep then. If they don’t have time to think about the output, they probably didn’t have time before AI came along either, so the AI isn’t really adding to the issue here.

  • Mrkawfee@feddit.uk

    Using Generative AI as a substitute for professional judgement is a disaster waiting to happen. LLMs aren’t sentient and will frequently hallucinate answers. It’s only a matter of time before incorrect output will lead to catastrophic consequences and the idiot who trusted the LLM, not the LLM itself, will be responsible.

    • Echo Dot@feddit.uk

      If you read the article, that’s not what’s happening here.

      Doctors are just using AI like they use any tool: to inform their decisions.

  • streetlights@lemmy.world

    20 years ago there were complaints that GPs were using Google; now it’s normal. Can’t help but feel the same will happen here.

    • TheGrandNagus@lemmy.world

      You’re right. Within 10 seconds I just found an article from 2006 saying just that. Earlier ones likely exist.

    • Swedneck@discuss.tchncs.de

      To be fair, back then Google just showed you what you searched for; I’m not as happy about people googling stuff these days. With AI we already know that it tends to make shit up, and it might very well only get worse as models start being trained on their own output.

      • Echo Dot@feddit.uk

        Actually, hallucinations have gone down as AI training has improved, mostly through things like prompting models to provide evidence. When you prompt them to provide evidence, they don’t hallucinate in the first place.

        The problem really comes down to the way the older AIs were originally trained. They were basically trained on data where a question was asked and a response was given. Nowhere in the data set was there a question answered with “I’m sorry, I do not know,” so the AI was unintentionally taught that it is never acceptable not to answer a question. More modern AIs have been trained in a better way and told that it is acceptable not to answer. Combined with the fact that they can now perform internet searches, they can, like a human, go look up data when they recognize it isn’t in their training set.

        That being said, Google’s AI is an idiot.

  • Echo Dot@feddit.uk

    The headline and the article are completely mismatched.

    Basically all the article is saying is that doctors sometimes use AI. Which is a bit like saying sometimes doctors look things up in books. Yeah, course they do.

    If somebody comes in with a sore throat and the AI prescribes morphine, the doctor is probably smart enough not to do that, so I don’t really think there’s a major issue here. They are skilled medical professionals; they’re not blindly following the AI.