Over time, Lemmy instances are going to keep acquiring more and more data. Even in the best case, where they aren’t caching content and are only storing data posted to communities local to the server, storage requirements will still grow virtually without limit. Eventually, it may no longer be economically feasible to host all of the infrastructure needed to keep expanding the server’s storage. What happens at that point? Will servers begin to periodically purge old content? I’m concerned that there will be a permanent horizon (and as Lemmy becomes more popular, the rate of growth in storage requirements will also increase, shortening the distance to this horizon) beyond which old – and still very useful – data will cease to exist. Is there any plan to archive this old data?
Pictrs 0.4 recently added support for object storage. This is fantastic, because object storage is dirt cheap compared to traditional block storage (like a VM filesystem). This helps a lot for image storage, which is a large part of the problem, but it’s not the whole problem.
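For reference, here is roughly what pointing pict-rs at S3-compatible object storage looks like. This is just a sketch based on the pict-rs 0.4 environment-variable config as I remember it; the endpoint, bucket, and credentials are placeholders, so check the pict-rs docs for your version before trusting the exact key names:

```
# Sketch: pict-rs 0.4 object-storage config via environment variables.
# Key names are from the pict-rs docs as I recall them; all values are placeholders.
export PICTRS__STORE__TYPE=object_storage
export PICTRS__STORE__ENDPOINT=https://s3.example.com   # any S3-compatible provider
export PICTRS__STORE__BUCKET_NAME=my-lemmy-images
export PICTRS__STORE__REGION=us-east-1
export PICTRS__STORE__ACCESS_KEY=AKIA_PLACEHOLDER
export PICTRS__STORE__SECRET_KEY=SECRET_PLACEHOLDER
pict-rs run
```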
I know Lemmy uses Postgres for everything else, but they should really invest time into moving towards something more sustainable for long-term/permanent hosting. Paid Postgres services are obscenely upcharged and prohibitively expensive, so that’s not an option.
I’m armchair-architecting here, so I’m not sure what that would look like for Lemmy (Cloudflare KV? Redis?)
Still, even my own private instance has been growing at a rate of about 700MB per day, and I don’t even subscribe to that many things. I can’t imagine what the major instances are dealing with. This isn’t sustainable unless we want to start purging old data, which will kill Lemmy long term.
EDIT: Turns out ~90% of my Lemmy data is just for debugging and not needed:
https://github.com/LemmyNet/lemmy/issues/3103#issuecomment-1631643416
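If you want to see where your own instance’s disk usage is going, a quick query against the standard Postgres catalog views lists the largest tables (the `lemmy` user/database names below are just my setup’s defaults):

```
# List the ten largest tables in the Lemmy database by total size (table + indexes).
psql -U lemmy -d lemmy -c "
  SELECT relname AS table_name,
         pg_size_pretty(pg_total_relation_size(relid)) AS total_size
  FROM pg_catalog.pg_statio_user_tables
  ORDER BY pg_total_relation_size(relid) DESC
  LIMIT 10;"
```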
The largest table holds data that is only needed by Lemmy briefly. There is a scheduled job to clear it… Every 6 months. There are active discussions on how best to handle this.
On my instance I’ve set a cronjob to delete everything but the most recent 100k rows of that table every hour.
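For anyone who wants to do the same, this is the shape of it. I’m assuming the table in question is `activity`, per the linked issue – verify the table name in your own database before deleting anything:

```
# Hourly crontab entry: keep only the newest 100k rows of the activity table.
# Table name is assumed from the linked issue; NOT IN avoids relying on gapless ids.
0 * * * * psql -U lemmy -d lemmy -c "DELETE FROM activity WHERE id NOT IN (SELECT id FROM activity ORDER BY id DESC LIMIT 100000);"
```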
I saw that issue, and then I saw people having problems after clearing it, so I’m just going to wait until they figure that out in a stable version. Looking forward to it though!
@[email protected] @[email protected] Can either of you link to that discussion please?
It looks like the issue I was referring to has since been edited, as it’s not actually relevant to clearing this database bloat:
https://github.com/LemmyNet/lemmy/issues/3103
Isn’t it mostly pictures and movies taking up space? Posts and comments that are just text don’t take up much.
I’d be fine with text being kept forever while pictures and movies get deleted after a while.
Just think of all those old, helpful forum posts from years past with tinypic and Photobucket links that are now dead. I agree memes can probably die off over time, but anything informative would be bad to lose imo.
There is a good writeup on how to do the migration here. I went through it myself since I host my tiny Lemmy instance on an AWS EC2 instance. It went pretty smoothly, but obviously larger instances will have to take a longer downtime to perform the migration.
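For the curious, the heart of that migration is pict-rs’s built-in store migration command. The rough shape below follows the pict-rs 0.4 docs as I remember them; flags vary between versions and every value here is a placeholder, so check `pict-rs migrate-store --help` before running anything:

```
# Sketch: migrate existing pict-rs files into S3-compatible object storage.
# Flag names per pict-rs 0.4 as I recall; verify against your version's --help.
pict-rs \
  migrate-store \
  filesystem -p /var/lib/pict-rs/files \
  object-storage \
    -e https://s3.example.com \
    -b my-lemmy-images \
    -r us-east-1 \
    -a AKIA_PLACEHOLDER \
    -s SECRET_PLACEHOLDER
```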
Hey, that’s a Vultr guide! I use Vultr, thanks!
By the way, how are your costs on EC2? My understanding is that hosting on EC2 would be cost-prohibitive from data transfer costs alone, not to mention their monthly rates for instances are pretty much always above the cost of a comparable VPS.
Now if only someone could do this for the Postgres data. I wonder if S3FS would be able to handle the load of a running database, that would be a nice way to save costs.
Is the 700MB just the Postgres data, or everything including the images?
I’m under the impression that text should be very cheap to store inside postgres.
Keep in mind that you are also storing metadata for each post (e.g. creation time), relations (e.g. which user posted it), and an index.
Might not be much now but these things really add up over the years.
Yes, but those are generally a couple of bytes each at most. The average comment will be less than 1KB, and the metadata that goes with it will be barely more.
On the other hand, most images will be around 1MB, roughly 1000x larger. Sure, it depends on the type of instance, but text should be a long way from filling a hard drive. From what I’ve seen on GitHub, the database size is actually mostly debugging information, which might explain the weirdness.
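To put made-up but plausible numbers on it (both daily rates below are pure assumptions for illustration):

```
# Back-of-envelope: text vs. image growth for a hypothetical mid-sized instance.
# Assumed rates: 5,000 comments/day at ~1 KB each, 500 images/day at ~1 MB each.
echo "text per year:   $(( 5000 * 365 / 1024 )) MB"   # ~1.8 GB
echo "images per year: $(( 500 * 365 )) MB"           # ~180 GB
```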