A detailed analysis of the DeepMind/Meta study: how large language models achieve unprecedented compression rates on text, image, and audio data - and the implications of these results
It looks like they did it both ways (“raw rate” vs “adjusted rate”):
For the adjusted compression rate, the model's size is also added to the compressed size, i.e., it becomes (compressed size + number of model parameters) / raw size. This metric shows the impact of the model's size on compression performance: a very large model might compress the data better than a smaller one, but once its own size is taken into account, the smaller model might come out ahead.
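To make the two metrics concrete, here is a minimal sketch (my own illustration, not code from the paper), assuming the model's footprint is counted as parameter count times bytes per parameter; the numbers in the example are made up:

```python
def raw_compression_rate(compressed_bytes: int, raw_bytes: int) -> float:
    """Raw rate: size of the compressed output relative to the original data."""
    return compressed_bytes / raw_bytes


def adjusted_compression_rate(compressed_bytes: int, raw_bytes: int,
                              num_parameters: int,
                              bytes_per_parameter: int = 2) -> float:
    """Adjusted rate: the model itself is counted as part of the compressed
    output, so a huge model is penalized even if its raw rate is excellent.
    bytes_per_parameter = 2 assumes float16 weights (an assumption, not
    stated in the comment above)."""
    model_bytes = num_parameters * bytes_per_parameter
    return (compressed_bytes + model_bytes) / raw_bytes


# Hypothetical example: a 1 GB corpus compressed to 150 MB by a 7B-parameter model.
print(raw_compression_rate(150_000_000, 1_000_000_000))                      # 0.15
print(adjusted_compression_rate(150_000_000, 1_000_000_000, 7_000_000_000))  # 14.15
```

With made-up numbers like these, the adjusted rate blows past 1.0, which is exactly the point the metric is meant to expose.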
Yes. They also mention that using such large models for compression is not practical, because their size dwarfs any amount of data you might want to compress. But the result gives a good picture of how well such large models generalize, and how accurately they can predict the next tokens even for image/audio data.
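For intuition on why good next-token prediction translates into compression (a rough sketch of the information-theoretic idea, not the paper's actual coder): each token can in principle be encoded in about -log2(p) bits, where p is the probability the model assigned to the token that actually occurred, so summing those lengths approximates what an arithmetic coder driven by the model would achieve:

```python
import math

def ideal_compressed_bits(next_token_probs):
    """Sum of -log2(p) over a sequence: the code length (in bits) that an
    arithmetic coder driven by the model's predictions would approach.
    Each entry is the probability the model assigned to the token that
    actually occurred (hypothetical interface, for illustration only)."""
    return sum(-math.log2(p) for p in next_token_probs)

# A model that is 90% confident about each byte spends ~0.15 bits/byte,
# versus 8 bits/byte for storing the raw bytes with no model at all.
print(ideal_compressed_bits([0.9] * 1000))  # ~152 bits for 1000 tokens
```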