Pretty fucking cool that we're postulating dreams aren't like machine learning algorithms, in that they aren't primarily for adapting to what's experienced during waking consciousness, but instead serve to keep subconscious, lower-level instinctual brain functions from over-controlling our perception of reality.
The opposite is happening with LLMs, where the model treats its hallucinations, derived from obfuscated data, as truthful regardless of where it pulled that data from.
I think the key problem with LLMs is that they have no grounding in physical reality. They're just trained on a whole bunch of text data, and the topology of the network ends up being moulded to represent the patterns in that data. I suspect that what's really needed is to train models on interactions with the physical world first, to create an internal representation of how it works, the same way children do. Once a model develops an intuition for how the world works, it could then be taught language in that context.
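To make that a bit more concrete, here's a rough sketch of what "world model first, language second" could look like. Everything here is a made-up toy example in PyTorch, with random placeholder data, not a real training recipe: stage 1 learns to predict the next state from (state, action) pairs, stage 2 freezes that and trains a language head on top of the learned representation.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, HIDDEN, VOCAB = 16, 4, 64, 100

class WorldModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, HIDDEN), nn.ReLU())
        self.predictor = nn.Linear(HIDDEN, STATE_DIM)  # predicts the next state

    def forward(self, state, action):
        h = self.encoder(torch.cat([state, action], dim=-1))
        return self.predictor(h), h  # next-state prediction + internal representation

world = WorldModel()
opt = torch.optim.Adam(world.parameters(), lr=1e-3)

# Stage 1: learn dynamics from (state, action, next_state) interaction data.
for _ in range(200):
    s = torch.randn(32, STATE_DIM)                # toy "observations"
    a = torch.randn(32, ACTION_DIM)               # toy "actions"
    s_next = s + 0.1 * a.sum(-1, keepdim=True)    # stand-in for real physics
    pred, _ = world(s, a)
    loss = nn.functional.mse_loss(pred, s_next)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the world model and train a language head on top of its
# representation, so the tokens are grounded in the learned dynamics.
for p in world.parameters():
    p.requires_grad = False

lang_head = nn.Linear(HIDDEN, VOCAB)
lang_opt = torch.optim.Adam(lang_head.parameters(), lr=1e-3)

for _ in range(200):
    s = torch.randn(32, STATE_DIM)
    a = torch.randn(32, ACTION_DIM)
    target_tokens = torch.randint(0, VOCAB, (32,))  # toy "descriptions" of the scene
    _, h = world(s, a)
    logits = lang_head(h)
    loss = nn.functional.cross_entropy(logits, target_tokens)
    lang_opt.zero_grad(); loss.backward(); lang_opt.step()
```

Obviously a real version would use actual sensory/interaction data and a much bigger model, but the ordering is the point: the representation comes from predicting the world, and language gets attached to it afterwards.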