So it was the physics Nobel… I see why the Nature News coverage called it “scooped” by machine learning pioneers
Since the news coverage tried to be sensational about it… I tried to see what Hinton meant by fearing the consequences. I believe he is genuinely trying to prevent AI development from proceeding without proper regulation. This is a policy paper he was involved in (https://managing-ai-risks.com/), and it does mention some genuine concerns. Quoting them:
“AI systems threaten to amplify social injustice, erode social stability, and weaken our shared understanding of reality that is foundational to society. They could also enable large-scale criminal or terrorist activities. Especially in the hands of a few powerful actors, AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance”
Like, bruh, people have already lost jobs because of ChatGPT, which can’t even do math properly on its own…
Also, there’s some irony in the preprint including the quote “Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long.”, considering that one serious risk of AI development is its climate impact.