1 min read

AI's slow(er than expected) takeoff...

Dwarkesh has a strong essay about AI's slower-than-expected takeoff, and it contains an interesting insight: AI is not good at the kind of iterative learning that humans excel at. #ai #llm
But the fundamental problem is that LLMs don’t get better over time the way a human would. The lack of continual learning is a huge huge problem. The LLM baseline at many tasks might be higher than an average human's. But there’s no way to give a model high level feedback. You’re stuck with the abilities you get out of the box. You can keep messing around with the system prompt. In practice this just doesn’t produce anything even close to the kind of learning and improvement that human employees experience.

The reason humans are so useful is not mainly their raw intelligence. It’s their ability to build up context, interrogate their own failures, and pick up small improvements and efficiencies as they practice a task.

Why I have slightly longer timelines than some of my guests

Dwarkesh of the eponymous podcast has a strong insight here. It ties into the concept of 'intelligence' itself, which requires ongoing processing: in a human, those associations build on themselves over time, whereas retraining a model need not reproduce the same paths again. I need to mull over the implications of this.
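
To make the gap concrete for myself, here is a minimal toy sketch (my own illustration, not anything from the essay; all names are made up) of what prompt-level feedback amounts to: notes pile up as text while the model's weights stay frozen, which is roughly why it falls short of the compounding improvement a human gets from practice.

```python
# Toy illustration: "feedback" that only edits the prompt never changes the
# model itself, so nothing compounds the way human on-the-job learning does.

from dataclasses import dataclass, field
from typing import List


@dataclass
class FrozenModel:
    """Stands in for an LLM whose weights are fixed after training."""
    weights: tuple = (0.1, 0.2, 0.3)  # never updated after deployment

    def answer(self, system_prompt: str, question: str) -> str:
        # The only lever available at inference time is the text we prepend.
        return f"[weights={self.weights}] {system_prompt!r} -> {question}"


@dataclass
class PromptOnlyLearner:
    """'Learns' by accumulating feedback notes into the system prompt."""
    model: FrozenModel
    notes: List[str] = field(default_factory=list)

    def give_feedback(self, note: str) -> None:
        # Feedback is stored as text, not distilled into the model itself.
        self.notes.append(note)

    def answer(self, question: str) -> str:
        system_prompt = "Follow these notes: " + "; ".join(self.notes)
        return self.model.answer(system_prompt, question)


if __name__ == "__main__":
    learner = PromptOnlyLearner(FrozenModel())
    learner.give_feedback("Prefer shorter answers.")
    learner.give_feedback("Cite sources when possible.")
    print(learner.answer("Summarize the quarterly report."))
    # The weights in the output never change, no matter how much feedback
    # accumulates: the notes list grows, the model does not.
```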