cross-posted to:
- [email protected]
Excerpt:
To underline Blanchfield’s point, the ChatGPT book selection process was found to be unreliable and inconsistent when repeated by Popular Science. “A repeat inquiry regarding ‘The Kite Runner,’ for example, gives contradictory answers,” the Popular Science reporters noted. “In one response, ChatGPT deems Khaled Hosseini’s novel to contain ‘little to no explicit sexual content.’ Upon a separate follow-up, the LLM affirms the book ‘does contain a description of a sexual assault.’”
This is transparent as hell. It reminds me of a TV show where a bunch of idiots plot a murder and decide that if they all pull the trigger together, none of them is "technically" the murderer. Of course, that just meant they were all culpable.
It’s only a few layers of abstraction above “we didn’t ban these books, we flipped a coin to decide whether to ban them and fate chose tails…”
Pathetic.