and a subsequent update to the headline, which reads like something of a backpedal from the CEO:

Update: Tinybuild CEO Alex Nichiporchik says a recent talk that indicated the publisher uses AI to monitor staff was “hypothetical.”

Update (07/14/23): In a separate response sent directly to Why Now Gaming, Nichiporchik said the HR portion of his presentation was purely a “hypothetical” and that Tinybuild doesn’t use AI tools to monitor staff.

“The HR part of my presentation was a hypothetical, hence the Black Mirror reference. I could’ve made it more clear for when viewing out of context,” reads the statement. “We do not monitor employees or use AI to identify problematic ones. The presentation explored how AI tools can be used, and some get into creepy territory. I wanted to explore how they can be used for good.”

  • Megaman_EXE@beehaw.org

    The company I work for uses ActivityWatch to monitor our productivity. The program isn't very accurate, but they seem to take it as gospel. So I've had to set up ways (sketched at the end of this comment) to prevent it from making me appear away when I'm actually still at my PC working.

    These kinds of micromanaging steps only widen the employer/employee divide. In my eyes, a good employer would talk with an employee who isn't meeting their standards and work with them to improve things, or offer other potential solutions.

    From my experience they want us to be robots, not humans.

    Edit: I also forgot to mention that they monitor us with this tool without employees' knowledge. They'll only reveal it once they feel there's a need to fix an "issue" (which might not even be an issue).

    There’s also been a rumor going around that they are checking our webcams without our knowledge but I can’t confirm if this is true.

    It’s not great overall

  • SteleTrovilo@beehaw.org

    I hope they don’t do this. Tinybuild publishes some very good games, but ChatGPT is only useful for entertainment. It’s too prone to errors - which it presents confidently as facts - for any legitimate purpose.

  • TheTrueLinuxDev@beehaw.org

    This is a Pandora's box situation: when the potential for malicious use of AI outweighs the good on the preponderance of the evidence, one has to conclude that banning it for monitoring purposes is necessary. This would have an immense impact on disabled workers, for instance.

  • Gork@beehaw.org

    Umm… how exactly can you use an invasive, privacy-eliminating AI monitoring system “for good”?