Disruption, Dependencies and the Rabbit R1 - A product management lens on Rabbit R1 and Humane
David Pierce doesn't bury the lede:
Artificial intelligence might someday make technology easier to use and even do things on your behalf. All the Rabbit R1 does right now is make me tear my hair out.
...
The Rabbit R1 continues to be panned. Here's one review in video form by MKBHD:
and another by Dave2D:
AI is not a feature, but a technology towards an end
I will admit that I was hoping the R1 would be a better product than the Humane AI Pin, since it had the potential to be a low-end disruption to the smartphone. One potential product thesis might have been:
The smartphone is wildly overpriced for some tasks, such as:
- being a clock
- setting timers
- answering simple questions
- taking notes
- setting reminders
- adding tasks
- acting as a music streaming / playback remote
As a result, there is a market for a low-end disruptor hardware product that leverages AI either as a cost-structure advantage or as a way to serve these use cases well enough, at a significantly lower price than a smartphone.
Humane decided that it wanted to create an entirely different class of device that supplants the smartphone by using a different set of constraints to achieve the same tasks. Rabbit R1 seems to have traveled down that path as well, as evidenced by the use cases they've chosen: Uber, DoorDash, and MidJourney.
In other words, they leaned on AI as the feature rather than as a technology for achieving something else.
Nota bene: the low-end features I laid out don't need AI. I chose them because AI can take what would otherwise be basic features, like note taking, and make them significantly better by leveraging voice, transcription, and summarization, potentially making them even more useful than what's on a smartphone today.
Let's be gracious and assume they thought they could make these use cases better with AI. Even then, they made the other cardinal mistake of product development: they stacked bets.
Successful, viable, delightful products are not made on stacked bets
Like with Humane, Rabbit is making a bet on AI systems all the way down, and until all those systems get better, none of them will work very well.
Both Humane and Rabbit seem to have made stacked bets. In product development, identifying these dependencies is one of the first steps toward providing a robust solution.
The robustness of a solution is inversely proportional to the variability of its dependencies.
In this case, Rabbit decided that it would bet on:
- LLMs to transcribe intent (safe)
- LLMs to answer questions (risky)
- LAMs to achieve tasks (very risky)
- LAMs to scale (very risky)
- Voice to be the primary input (risky)
I am sure there are more. However, the probability of overall success is the product of the probabilities of each of these bets, and as a result they were painting themselves into a corner, as the sketch below illustrates.
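To make the arithmetic concrete, here is a minimal sketch in Python. The per-bet probabilities are purely illustrative assumptions on my part, not estimates of Rabbit's actual odds; the point is only that multiplying even generous per-bet odds leaves a small joint probability that the whole experience works.

```python
# Stacked bets: the product works only if every dependency pays off.
# The per-bet probabilities below are illustrative assumptions, not data.
bets = {
    "LLMs transcribe intent (safe)": 0.95,
    "LLMs answer questions (risky)": 0.70,
    "LAMs achieve tasks (very risky)": 0.40,
    "LAMs scale (very risky)": 0.40,
    "Voice as the primary input (risky)": 0.70,
}

overall = 1.0
for bet, p in bets.items():
    overall *= p
    print(f"{bet}: {p:.0%} -> running odds of success: {overall:.1%}")

# Even with generous individual odds, the joint probability that
# everything works at once comes out to roughly 7%.
print(f"Probability that every bet pays off: {overall:.1%}")
```

Improving any single bet moves the needle only a little; it is the compounding across all of them that sinks the experience.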

I am not sure which of these risks, if retired, would turn this around from a useless gadget not worth $200 into a useful utility. However, as someone who develops products, it's easy to see how much risk they were taking on without nailing one delightful use case.
That said, the road to successful products is paved with failures. So, kudos to Rabbit and Humane for trying. However, I do wish Silicon Valley would improve its understanding of the development process. Some of these mistakes are avoidable simply by having better clarity on the four variables of product management: customer, problem, solution, and business.