As Mozilla envisions Firefox’s future, we are focused on building a browser that empowers you to choose your own path and gives you the freedom to explore.
Well, I’m guessing they actually did test local AI on 4 GB and 8 GB RAM laptops and realized it would be an awful user experience. It’s just too slow.
I wish they’d rolled it in as an option, though.
They wanted to use fast, small language models, not LLMs like Llama.
Llamafile with the TinyLlama model is about 640 MB. It could be gated behind a flag, or shipped as an extension.
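If it were an extension, it could presumably just talk to a llamafile already running locally in server mode. Here's a minimal sketch, assuming the server exposes the OpenAI-compatible chat endpoint that llama.cpp's server provides; the port, model id, and function name are placeholders, not anything Mozilla has actually specified:

```typescript
// Rough sketch: extension-side call to a locally running llamafile server.
// Assumes it was started with something like `./tinyllama.llamafile --server`
// and listens on localhost:8080 (placeholder port).
async function summarizeLocally(text: string): Promise<string> {
  const response = await fetch("http://localhost:8080/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "tinyllama", // placeholder model id
      messages: [
        { role: "system", content: "Summarize the following page text." },
        { role: "user", content: text },
      ],
      max_tokens: 256,
    }),
  });
  const data = await response.json();
  // OpenAI-style response shape: first choice's message content.
  return data.choices[0].message.content;
}
```

Nothing Firefox-specific there, which is sort of the point: the browser would only need a flag to wire page text into a local endpoint like this.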