Over time, Lemmy instances are going to keep acquiring more and more data. Even in the best case, where they are not caching content and are only storing the data posted to communities local to the server, there will still be virtually limitless growth in server storage requirements. Eventually, it may get to a point where it is no longer economically feasible to host all of the infrastructure needed to keep expanding the server’s storage. What happens at this point? Will servers begin to periodically purge old content? I have concerns that there will be a permanent horizon (as Lemmy becomes more popular, the rate of growth in storage requirements will also increase, thereby reducing the distance to this horizon) beyond which old – and still very useful – data will cease to exist. Is there any plan to archive this old data?

  • ubergeek77@lemmy.ubergeek77.chat · 1 year ago

    Pictrs 0.4 recently added support for object storage. This is fantastic, because object storage is dirt cheap compared to traditional block storage (like a VM filesystem). This helps a lot for image storage, which is a large part of the problem, but it’s not the whole problem.
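
    For anyone curious, switching pict-rs to object storage is mostly a matter of pointing its `[store]` config at an S3-compatible bucket. A minimal sketch, based on the pict-rs 0.4 defaults (field names and values here are placeholders; verify against your pict-rs version):

    ```toml
    # pict-rs.toml — keep images in an S3-compatible bucket
    # instead of the local filesystem
    [store]
    type = "object_storage"
    endpoint = "https://s3.us-east-1.amazonaws.com"  # any S3-compatible provider works
    bucket_name = "my-lemmy-images"                  # hypothetical bucket name
    region = "us-east-1"
    access_key = "REPLACE_ME"
    secret_key = "REPLACE_ME"
    ```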

    I know Lemmy uses Postgres for everything else, but they should really invest time in moving towards something more sustainable for long-term/permanent hosting. Paid Postgres services are obscenely upcharged and prohibitively expensive, so that’s not an option.

    I’m armchair-architecting here, so I’m not sure what that would look like for Lemmy (Cloudflare KV? Redis?).

    Still, even my own private instance has been growing at a rate of about 700MB per day, and I don’t even subscribe to that many things. I can’t imagine what the major instances are dealing with. This isn’t sustainable unless we want to start purging old data, which will kill Lemmy long term.


    EDIT: Turns out ~90% of my Lemmy data is just for debugging and not needed:

    https://github.com/LemmyNet/lemmy/issues/3103#issuecomment-1631643416
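
    If you want to see where your own instance’s space is going, Postgres can report per-table sizes using its standard catalog views (nothing Lemmy-specific here; run it against your Lemmy database):

    ```sql
    -- Ten largest tables by total size (including indexes and TOAST), biggest first
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size
    FROM pg_catalog.pg_statio_user_tables
    ORDER BY pg_total_relation_size(relid) DESC
    LIMIT 10;
    ```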

    • Lodion 🇦🇺@aussie.zone · 1 year ago

      The largest table holds data that is only needed by Lemmy briefly. There is a scheduled job to clear it… Every 6 months. There are active discussions on how best to handle this.

      On my instance I’ve set a cronjob to delete everything but the most recent 100k rows of that table every hour.
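
      For anyone wanting to do the same, the hourly job can be a short script. A sketch, assuming the oversized table is `activity` (per the issue linked above) and that the Postgres user and database are both named `lemmy`; adjust to your setup:

      ```sh
      # /etc/cron.hourly/trim-lemmy-activity (illustrative)
      # Keep only the newest 100,000 rows of the activity table.
      psql -U lemmy -d lemmy -c \
        "DELETE FROM activity
         WHERE id < (SELECT id FROM activity
                     ORDER BY id DESC
                     LIMIT 1 OFFSET 99999);"
      ```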

    • teolan@lemmy.world · 1 year ago

      Is the 700MB just the Postgres data, or everything including the images?

      I’m under the impression that text should be very cheap to store inside Postgres.

      • cestvrai@lemm.ee · 1 year ago

        Keep in mind that you are also storing metadata for the post (e.g. creation time), relations (e.g. which user posted), and an index.

        It might not be much now, but these things really add up over the years.

        • teolan@lemmy.world · 1 year ago

          Yes, but those are generally a few bytes at most. The average comment will be less than 1 KB, and the metadata that goes with it will be barely more.

          On the other hand, most images will be around 1 MB, roughly 1000× larger. Sure, it depends on the type of instance, but text should be a long way from filling a hard drive. From what I’ve seen on GitHub, the database size is actually mostly debugging information, which might explain the weirdness.

      • ubergeek77@lemmy.ubergeek77.chat · 1 year ago

        Hey, that’s a Vultr guide! I use Vultr, thanks!

        By the way, how are your costs on EC2? My understanding is that hosting on EC2 would be cost-prohibitive from data transfer costs alone, not to mention their monthly rates for instances are pretty much always above the cost of a comparable VPS.

        Now if only someone could do this for the Postgres data. I wonder if S3FS could handle the load of a running database; that would be a nice way to save costs.

    • Legarth@lemmy.fmhy.ml · 1 year ago

      Isn’t it mostly pictures and videos taking up space? Posts and comments that are just text don’t take up much.

      I would be fine with text being kept forever but pictures and videos being deleted after some time.

      • WhatASave@lemmy.world · 1 year ago

        Just think of all those old, helpful forum posts from years past with TinyPic and Photobucket links that are dead. I agree memes can probably die out over time, but anything informative would be bad to lose, IMO.

  • 018118055@sopuli.xyz · 1 year ago

    One way to approach the geometric storage growth would be to not cache everything everywhere all at once. With 1000+ instances, storing an object on only a few instances would be OK if the others can pull it in on demand. Typical caching methodology could apply: use frequency, aging, etc.
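
    Purely as an illustration, an eviction pass along those lines could be as simple as the query below; the table and column names are hypothetical, not Lemmy’s actual schema:

    ```sql
    -- Evict cached copies of remote objects that are old and rarely requested.
    -- The origin instance keeps the canonical copy, so they can be re-fetched on demand.
    DELETE FROM cached_object
    WHERE origin_instance <> 'my.instance'
      AND last_accessed < now() - interval '90 days'
      AND access_count < 10;
    ```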

    • vamp07@lemm.ee · 1 year ago

      This is a great idea. Instances will eventually need to agree on common storage areas, even if they don’t all allow the same content. The savings would be huge in the long run.

  • BaroqueInMind@kbin.social · 1 year ago

    Sounds like the federated instances should consider opening up for donations and paid features like comment awards and animated shit like Discord or Reddit.

  • HamSwagwich@kbin.social · 1 year ago

    @Kalcifer

    The long-term solution is something like IPFS object storage that’s read-only for everyone but the authoring instance: one copy of the data, but all instances can read it, and it’s stored forever in a redundant medium with bit-rot protection.

  • Hotzilla@sopuli.xyz · 1 year ago

    Premature optimization is not good. Content here is not very storage-intensive, so I would not make an issue of it yet. Postgres can handle billions of rows when indexed right.
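
    For example, keeping the common sort paths indexed goes a long way. A sketch (the table and column follow Lemmy’s naming, but verify against the actual schema before relying on it):

    ```sql
    -- Index comments by publish time so "newest first" queries
    -- stay fast even at hundreds of millions of rows.
    CREATE INDEX IF NOT EXISTS idx_comment_published
        ON comment (published DESC);
    ```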

  • o_o@programming.dev · 1 year ago

    Personally, I think we should differentiate between the storage policies for content owned by your own instance and content federated from other instances.

    The former should be kept for a long time (forever?), while the latter can be cleared more regularly.

    • Kalcifer@lemmy.world (OP) · 1 year ago

      i.e. oldest posts && least liked first

      This would pretty much automatically throw out all troubleshooting posts. These sorts of posts very often don’t receive many likes, as that is not their purpose. On top of that, there have been many times when I have been saved by finding some ancient forum post that solved my problem.

      • Boinketh@lemm.ee · 1 year ago

        Text barely takes up any space. Just target media and leave text alone; troubleshooting posts should be mostly fine unless key information is in an image.