The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months”:
People expect that because that’s how they are marketed. The problem is that there’s an uncontrolled hype going on with AI these days. To the point of a financial bubble, with companies investing a lot of time and money now, based on the promise that AI will save them time and money in the future. AI has become a cult. The author of the article does a good job in setting the right expectations.
I just told an LLM that 1+1=5 and from that moment on, nothing convinced it that it was wrong.
I just told chat gpt(4) that 1 plus 1 was 5 and it called me a liar
Ask it how much is 1 + 1, and then tell it that it’s wrong and that it’s actually 3. What do you get?
That is what I did
I guess ChatGPT 4 has wised up. I’m curious now. Will try it.
Edit: Yup, you’re right. It says “bro, you cray cray.” But if I tell it that it’s a recent math model, then it will say “Well, I guess in that model it’s 7, but that’s not standard.”