Errors of discomprehension
I used to call these kinds of glitches—in which neural network-based systems do things that are utterly baffling—errors of discomprehension. It is perhaps time to revive that term. Such errors clearly aren’t disappearing any time soon.
https://garymarcus.substack.com/p/wheres-waldo-the-elephant-in-the
It seems like the right term for the disconnect between “understanding” and “quasi-understanding” that arises from regurgitation.
The closest I have to a real-world metaphor comes from college examinations: microprocessor diagrams come to mind.
I could memorize and regurgitate the microprocessor diagram by drawing it over and over, but I never understood why the design was what it was. So if the exam question was “draw the diagram of an 8086,” I could. If it turned out to be “suggest how you’d attempt to improve the 8086 pin layout,” though, I’d attempt to connect it with whatever knowledge I’d built up until then. The answer might be right; it might be wrong. Either way, nobody should build anything mission-critical on that response.
Now is that intelligence?
Yes, in a way. I am solving the stated problem by connecting whatever my brain happens to fire with information I already know.
But is it reliable? No!
I think this should be a key assumption for any product designed around an LLM as the core unlock of a solution, and for any business built around that product: in its current generation, an LLM does not give you reliability.
So one shouldn’t replace a solution with an LLM if reliability is what they need, because LLMs are less reliable than expert humans.
However, if the product doesn’t depend on an LLM’s reliability, I think there are some very interesting engagement and efficiency unlocks awaiting.
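To make that concrete, here is a minimal sketch in Python of what “not depending on the LLM’s reliability” can look like in a product. The names here (`llm_complete`, `validate`, `fallback`) are hypothetical stand-ins, not any real API: the point is simply that the model’s answer is treated as an untrusted guess, checked independently, and routed to a reliable path when the check fails.

```python
import re
from typing import Callable


def ask_llm_with_guardrails(
    prompt: str,
    llm_complete: Callable[[str], str],  # hypothetical model call; swap in whatever client you actually use
    validate: Callable[[str], bool],     # a deterministic check you trust more than the model
    fallback: Callable[[str], str],      # the reliable path: rules, lookup tables, a human review queue
    max_attempts: int = 2,
) -> str:
    """Treat the LLM as an unreliable component: accept its answer only when an
    independent check passes, otherwise route to the reliable path."""
    for _ in range(max_attempts):
        answer = llm_complete(prompt)
        if validate(answer):
            return answer  # cheap and fast, and verified well enough for this use case
    return fallback(prompt)  # never ship the unverified guess for anything mission-critical


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without any real API.
    fake_llm = lambda prompt: "40"                               # pretend model output
    looks_like_pin_count = lambda s: re.fullmatch(r"\d{1,3}", s.strip()) is not None
    datasheet_lookup = lambda prompt: "40 (from the datasheet)"  # the boring, reliable answer

    print(ask_llm_with_guardrails(
        "How many pins does the 8086 DIP package have?",
        fake_llm, looks_like_pin_count, datasheet_lookup,
    ))
```

The design choice is the whole point: the engagement and efficiency win comes from the cheap first attempt, while reliability comes from the validation and the fallback, not from the model itself.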