I’m not sure what AI is these days, but according to Merriam-Webster it’s “the capability of computer systems or algorithms to imitate intelligent human behavior.” So it’s debatable.
I don’t think it’s just marketing bullshit to think of LLMs as AI… The research community generally does, too. The AI section on arXiv is usually where you find LLM papers, for example.
That’s not a crazy hype claim like the “AGI” thing, either… It doesn’t suggest sentience or consciousness or any particular semblance of life (and I’d disagree with MW that it needs to be “human” in any way)… It’s just a technical term for systems that exhibit behaviors learned from training data rather than explicitly programmed.
Basically, whenever we find that a human ability can be automated, the goalposts of the “AI” buzzword are silently moved to include it.