Scientists have built a framework that gives generative AI systems like DALL·E 3 and Stable Diffusion a major efficiency boost by condensing them into smaller models, without compromising output quality.
Will increasing the model size actually help? Right now we’re dealing with LLMs that have been trained on essentially the entire internet. It’s difficult to scale beyond that.
Building a better way to process that model would be a much more substantive achievement, so that when particular details are needed, getting them right isn’t just random chance.
Where exactly did you write anything about interpretation? Getting “details right” by processing faster? I would hardly call that “interpretation”; that’s just being wrong faster.