Greg Rutkowski, a digital artist known for his epic fantasy style, opposes AI art, but his name and style have been frequently used by AI art generators without his consent. In response, his work was removed from the training data for Stable Diffusion 2.0. However, the community has now created a LoRA model that emulates Rutkowski’s style against his wishes. While some argue this is unethical, others justify it on the grounds that Rutkowski’s art was already widely used in Stable Diffusion 1.5. The debate highlights the blurry line between innovation and infringement in the emerging field of AI art.
Thanks for the clarification!
LLMs have indeed shown interesting behaviors, but from my experience with the technology and how it works, I would say that any claim of intelligence in a system that is only an LLM is suspect, and would require extraordinary evidence to rule out mistaken anthropomorphizing.
I don’t think an LLM alone can be intelligent… but I do think it can be the central building block for a sentient, self-aware, intelligent system.
Humans can be thought of as a set of field-specific neural networks, tied together by a looping, self-evaluating, multi-modal LLM that we call “consciousness”. The ability of an LLM to consume its own output is what allows it to serve as that consciousness loop, and the fact that current LLMs are trained on human language, with all its human nuance, is an extra bonus.
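To make the “loop” idea concrete, here’s a minimal sketch in Python. Both `generate` and `get_inputs` are hypothetical stand-ins (any LLM call, and anything gathering output from the other field-specific networks); the only point is that each iteration’s output gets fed back in as part of the next input:

```python
# Minimal sketch of a self-consuming "consciousness loop".
# generate(prompt) -> str is a hypothetical wrapper around any LLM;
# get_inputs() -> str is a hypothetical source of multi-modal input
# from the other field-specific networks (vision, hearing, etc.).

def consciousness_loop(generate, get_inputs, initial_state: str = ""):
    state = initial_state  # the running inner monologue
    while True:
        inputs = get_inputs()  # fresh input from the other networks
        prompt = (f"Previous thought: {state}\n"
                  f"New input: {inputs}\n"
                  f"Next thought:")
        state = generate(prompt)  # the LLM consumes its own last output
        yield state               # expose the current "thought"
```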
Other non-text, multi-modal neural networks capable of consuming their own output could probably also be developed and put in a loop, but right now we have LLMs, and we kind of understand most of what they’re saying, and they kind of understand most of what we’re saying, so that makes communication easier.
I mean, it is anthropomorphizing, but in this case I think it makes sense, because it’s also anthropogenic: these LLMs are trained on human language.
Absolutely agreed with most of that. I think that LLMs and similar technologies are incredible and have great potential as components of artificial intelligences. By themselves, LLMs are more akin to the “virtual intelligences” portrayed in the Mass Effect games, though currently with fewer guardrails against hallucination.
I suspect there may be a few other concurrent “loops” running in our meat computers, likely not as well described by the LLM comparison (though some might be), and their inefficiency and poor fidelity likely end up being part of what makes our consciousness. Otherwise, your approximation makes a lot of sense. There’s still a lot to learn about our meat computers, but I really do hope we, as a species, succeed in making the world a bit less lonely (by helping other intelligences emerge).
There is some discussion about people “with an internal monologue” and people “without”. I wonder if those might be different ways of running that loop, or maybe in some people one loop takes over the others… and the whole “dissociative identity disorder” thing could be multiple loops competing to be the main one at different times.
Related to fidelity, some time ago I read an interesting thing: consciousness seems to correlate with brainwaves being out of sync; when they synchronize, people go unconscious (as in deep sleep or generalized seizures). Coming from a background in electronics, I had always assumed the opposite (system clock and such), but apparently our consciousness emerges from the asynchronous differences, meaning the inefficiencies and poor fidelity might be a feature, not a bug.
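As an aside, the “in sync vs. out of sync” idea can be made concrete with something like the Kuramoto model of coupled oscillators, where a single coupling strength decides whether the oscillators drift independently or phase-lock. This is just an illustrative toy, not a brain model, and the parameter values are arbitrary:

```python
import numpy as np

# Toy illustration of synchronization (Kuramoto model), not a brain model.
# N oscillators with random natural frequencies; coupling K pulls their
# phases together. The order parameter r is near 0 when phases are
# scattered (asynchronous) and near 1 when they lock in sync.

def kuramoto_order(K: float, N: int = 100, steps: int = 2000, dt: float = 0.01):
    rng = np.random.default_rng(0)
    omega = rng.normal(0.0, 1.0, N)        # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)   # initial phases
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))  # r * e^(i*psi)
        # each oscillator is pulled toward the mean phase, scaled by K and r
        theta += dt * (omega + K * np.abs(mean_field)
                       * np.sin(np.angle(mean_field) - theta))
    return np.abs(np.mean(np.exp(1j * theta)))  # final order parameter r

print(kuramoto_order(K=0.5))  # weak coupling: r stays low (out of sync)
print(kuramoto_order(K=4.0))  # strong coupling: r approaches 1 (in sync)
```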
Anyway, right now, as someone suffering from insomnia, I’d happily merge with some AI just to get a “pause” button.