• Dave@lemmy.nz · 8 months ago

    The blog post states:

    We build the AI Assistant using a flexible, solution-independent approach which gives you a choice between multiple large language models (LLM) and services. It can be fully hosted within your instance, processing all requests in-house, or powered by an external service.

    So it sounds like you pick what works for you. I’d guess that on a Raspberry Pi, on-board processing would be both slow and poor quality, but I’ll probably give it a go anyway.

    • pixxelkick@lemmy.world · 8 months ago

      Yeah, sorry, I was specifically referring to the on-prem LLM if that wasn’t clear, and how much juice running that thing takes.

      • Dave@lemmy.nz · 8 months ago

        Some of the other Nextcloud stuff (like the chat features) isn’t suitable for a Raspberry Pi; I expect this will be the same. It’s released though, right? Might have to have a play.