Scientists have built a framework that gives generative AI systems like DALL·E 3 and Stable Diffusion a major efficiency boost by condensing them into smaller models, without compromising output quality.
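The blurb doesn’t name the technique, but condensing a large generative model into a smaller one is typically done via knowledge distillation: a small student network is trained to reproduce the outputs of a frozen teacher. Below is a minimal sketch of that idea in PyTorch; `TeacherNet`, `StudentNet`, and the toy dimensions are hypothetical stand-ins, not the actual DALL·E 3 or Stable Diffusion architectures.

```python
# Minimal knowledge-distillation sketch. TeacherNet/StudentNet and the
# toy dimensions are hypothetical stand-ins, not real diffusion models.
import torch
import torch.nn as nn

class TeacherNet(nn.Module):
    """Stand-in for a large pretrained model (the 'teacher')."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))

    def forward(self, x):
        return self.net(x)

class StudentNet(nn.Module):
    """Much smaller model that learns to mimic the teacher."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 64))

    def forward(self, x):
        return self.net(x)

teacher = TeacherNet().eval()   # frozen: we only read its outputs
student = StudentNet()
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(100):
    x = torch.randn(32, 64)            # stand-in inputs (e.g. noised latents)
    with torch.no_grad():
        target = teacher(x)            # teacher's output is the training target
    loss = loss_fn(student(x), target) # student is pushed to match the teacher
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Real diffusion-model distillation matches richer targets (e.g. denoising predictions across timesteps), but the teacher-as-training-target structure is the same.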
You’re building beautiful straw men. They’re lies, but great job.
I said originally that we need to improve how the AI interprets its model, not just build even bigger models that will invariably carry the same flaws they have now.
Deterministic reliability is the end goal of that.
Will scaling the model up actually help? Right now we’re dealing with LLMs trained on effectively the entire internet; it’s hard to grow that much further.
Finding a better way to process that model would be a much more substantive achievement, so that when particular details are needed, it isn’t just random chance whether it gets them right.
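For what it’s worth, one narrow slice of “deterministic reliability” already exists at the decoding layer: greedy (argmax) decoding removes sampling randomness, though it doesn’t fix errors baked into the model’s probabilities. A minimal sketch, with hypothetical logits standing in for a real model’s output:

```python
# Sketch: why sampled decoding is "random chance" and argmax is not.
# The logits are a hypothetical stand-in for a real model's output.
import torch

logits = torch.tensor([2.0, 1.5, 0.2])  # scores over three candidate tokens

# Temperature sampling: identical logits can yield different tokens per call.
probs = torch.softmax(logits / 0.8, dim=-1)
sampled = torch.multinomial(probs, num_samples=1)

# Greedy decoding: always the highest-scoring token, run after run.
greedy = torch.argmax(logits)

print("sampled:", sampled.item(), "greedy:", greedy.item())
```

Determinism of this kind only guarantees the same answer every time, not the right one.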
Where exactly did you write anything about interpretation? Getting “details right” by processing faster? I would hardly call that “interpretation”; that’s just being wrong faster.
It is far off. It’s like saying you have all the knowledge of physics because you skimmed a textbook once.
Interpretation is also a problem that can be solved; current models already understand quite a lot of nuance, subtext, and implicit context.
But you’re moving the goalposts here. We started at “they don’t get better, they’re at a plateau” and now you’re aiming for perfection.