- cross-posted to:
- [email protected]
Excerpt:
To underline Blanchfield’s point, the ChatGPT book selection process was found to be unreliable and inconsistent when repeated by Popular Science. “A repeat inquiry regarding ‘The Kite Runner,’ for example, gives contradictory answers,” the Popular Science reporters noted. “In one response, ChatGPT deems Khaled Hosseini’s novel to contain ‘little to no explicit sexual content.’ Upon a separate follow-up, the LLM affirms the book ‘does contain a description of a sexual assault.’”
I basically agree with you, but as for your example: that's because ChatGPT wasn't built to return local results, or even recent ones.
So of course it fails spectacularly at that task. It has no way to research it.