Yet calling the simple rules that govern video game enemies “AI” is not controversial. Since when does something have to not be fake to be called AI?
Good point. Thinking about it, though, I’d consider those rules closer to AI than LLMs, because they are logical rules based on “understanding” the input data, in the sense of using it coherently, the way a human would. LLMs are just sophisticated versions of the proverbial monkeys with typewriters that eventually produce the works of Shakespeare by pure chance.

Except that they have a bazillion switches to adjust, are trained toward a desired output, and the generated output is then shaped by some admittedly impressive grammar filters to impress humans.

However, no one can explain how a given result came to pass (traceable exceptions being the subject of ongoing research), and no one can predict the output for an input that hasn’t been tested yet (or for identical input after the model has been altered, however slightly).

Calling it AI contributes to manslaughter, as evidenced by, e.g., Tesla “Autopilot” murdering people.

PS: I know Tesla’s murder system is not an LLM, but it’s a very good example of how misnaming things causes deaths. Obligatory fuck the muskrat