I’ve gone down a rabbit hole here.

I’ve been looking at LK-99, the potential room-temperature superconductor, lately. Then I came across an AI chat and decided to test it: I asked it to propose a room-temperature superconductor, and it suggested (NdBaCaCuO)_7(SrCuO_2)_2 along with a means of production, which got me thinking. It’s just a system for spotting patterns and answering questions. I’m not saying it has made anything new, but it seems to me a chat AI will eventually be able to suggest a new material fairly easily.

Has AI actually discovered or invented anything outside of its own computer industry, and how close are we to it doing things humans haven’t done before?

  • Botree@lemmy.world · 11 months ago

    The “breakthrough” likely happened a long time ago, but as with all tech, it only recently became accessible to the general public. An LLM on its own isn’t even that sophisticated to begin with.

    AI assistants will soon be an essential part of our lives. They will handle grocery shopping based on your dietary requirements, conduct basic health diagnoses, create personalized software, books, music, and movies on the fly, do your taxes, and offer financial advice. All of this is already happening.

    • nandeEbisu@lemmy.world · 11 months ago

      I assume you are referring to transformers, which appeared in the literature around 2017. Attention on its own is significantly older, but wasn’t really used in anything close to a large language model until the early-to-mid 2010s.

      While attention is fairly simple, a trait which helps it parallelize and scale well, a lot of recent research concerns how the text is presented to the model and the size of the models. There is also a lot of sophistication around instruction tuning and alignment, which is how you get from simple text continuation to something that can answer questions. I don’t think you could build something like ChatGPT from the 2017 “Attention Is All You Need” paper alone.
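      The simplicity mentioned above can be seen in a minimal sketch of scaled dot-product attention, the core operation from the 2017 paper (a toy NumPy version, not any production implementation; the shapes and random inputs here are just illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted average of the values

# toy self-attention: 3 tokens, embedding dimension 4
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (3, 4): one mixed representation per token
```

      Everything here is matrix multiplies and a softmax, which is why it maps so well onto GPUs; the hard-won parts of an actual LLM (tokenization, positional encoding, instruction tuning) sit around this core rather than inside it.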

      I suspect that publicly released models lag whatever Google or OpenAI has figured out by six months to a year, especially because there is now a lot of shareholder pressure around releasing LLM-based products. Advances developed in the open-source community, like applying LoRA and quantization in various contexts, have a significantly shorter time between development and release.
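      For context on why LoRA spread so quickly in the open-source community: it fine-tunes a frozen weight matrix by learning only a small low-rank correction. A minimal NumPy sketch of the idea (the dimensions and initialization scale are illustrative assumptions, not values from any real model):

```python
import numpy as np

rng = np.random.default_rng(1)

d, r = 8, 2                             # model dim 8, LoRA rank 2 (r << d)
W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable rank-r factor
B = np.zeros((d, r))                    # zero-init, so W is unchanged at the start

W_adapted = W + B @ A                   # effective weight after adaptation
x = rng.standard_normal(d)
# before any training, B = 0 means the adapted model matches the base model
assert np.allclose(W @ x, W_adapted @ x)

# trainable parameters: 2 * d * r = 32, versus d * d = 64 for full fine-tuning;
# at realistic dimensions (d in the thousands) the savings are far larger
```

      Training only A and B instead of W is what lets hobbyists adapt large models on consumer GPUs, which is a big part of the short development-to-release cycle mentioned above.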