Artificial intelligence might someday make technology easier to use and even do things on your behalf. All the Rabbit R1 does right now is make me tear my hair out.
“You’re holding a taco.”
My Rabbit R1 told me that the other day. I was sitting at a table at a cafe on the National Mall in Washington, DC, and I had just picked up this $199 orange rectangle of a gadget and pointed it at the food in my hand. With unwavering, absolute confidence, the R1 told me it was a taco.
It was a Dorito. A Cool Ranch Dorito, to be specific. I’d shown the R1 the bag just a few seconds earlier and asked about the calories. (The R1 got that bit right.) I moved the chip around and tried again — still taco. I could not convince this AI-powered gadget, theoretically at the cutting edge of a technological revolution, that I was holding a chip.
Over and over in my testing of the R1, I’ve run into moments like the taco encounter, where the whole thing just feels broken. It misidentified a red dog toy as a stress ball, then as a tomato, then as a red bell pepper that it assured me is totally safe to eat. I’d start playing a song on the R1, and then the device would stop responding but keep playing so that I couldn’t even pause it or turn the volume down.
For a while, the R1 couldn’t even tell the time or the weather. Rabbit finally fixed that with a software update on Tuesday, and the company promised many more updates to come — though now, instead of the weather being wrong by thousands of miles, it gives me the weather from about 15 miles away. I guess that counts for something.
Ever since the R1 debuted at CES, with a keynote filled with big promises and impressive demos, this device has been sold as a super-clever, ultra-helpful AI assistant. Rather than just answer ChatGPT-style questions, it was supposed to do just about everything your phone can do, only faster. A few months later, the device on my desk bears no resemblance to the one we were told about, the one more than 100,000 people preordered based on promises and demos.
After reviewing the Humane AI Pin and finding it woefully unable to execute its ambition, I was excited about the R1. It’s cheaper, more whimsical, and less ambitious. After using the R1, I feel like Humane at least deserves credit for trying. The R1 is underwhelming, underpowered, and undercooked. It can’t do much of anything. It doesn’t even know what a taco looks like.
On the LAM
The most intriguing tech in the R1 is what Rabbit calls the “Large Action Model,” or LAM. Where a large language model, or LLM, is all about analyzing and creating text, the LAM is supposed to be about doing stuff. The model learns how an app works in order to be able to navigate it on your behalf. In a LAM-powered world, you’d use Photoshop just by saying “remove that lady from the background” or make a spreadsheet by telling your device to pull the last six quarters of earnings from the investor website.
There is basically no evidence of a LAM at work in the R1. The device only currently connects to four apps: Uber, DoorDash, Midjourney, and Spotify. You connect to them by opening up Rabbit’s web app, called Rabbithole, and logging in to each service individually. When you go to do so, Rabbit opens up a virtual browser inside the app and logs you in directly — you’re not logging in to a service provided by DoorDash but rather literally in to DoorDash’s website while Rabbit snoops on the process. Rabbit says it protects your credentials, but the process just feels icky and insecure.
I logged in to them all anyway, for journalism. Except for Midjourney, which I never managed to get into because I couldn’t get past the CAPTCHA systems that obviously thought I was a bot. The connection doesn’t do much anyway: the R1 won’t show you the images or even send them to you. It’s just typing an image prompt and pressing enter.
I’d love to tell you how Uber and DoorDash work better once you’re logged in, but I never got either one to successfully do anything. Every time I pressed that side button on the R1 — which activates the microphone — and asked it to order food, it spat back a warning about how “DoorDash may take a while to load on RabbitOS” and then, a second later, told me there was an issue and to try again. (If you have to include that disclaimer, you probably haven’t finished your product.) Same thing for Uber — though I was occasionally able to at least get to the point where I said my starting and ending addresses loudly and in full before it failed. So far, Rabbit has gotten me zero rides and zero meals.
Spotify was the integration I was most interested in. I’ve used Spotify forever and was eager to try a dedicated device for listening to music and podcasts. I connected my Bluetooth headphones and dove in, but the Spotify connection is so hilariously inept that I gave up almost immediately. If I ask for specific songs or to just play songs by an artist, it mostly succeeds — though I do often get lullaby instrumental versions, covers, or other weirdness. When I say, “Play my Discover Weekly playlist,” it plays “Can You Discover?” by Discovery, which is apparently a song and band that exists but is definitely not what I’m looking for. When I ask for the Armchair Expert podcast, it plays “How Far I’ll Go” from the Moana soundtrack. Sometimes it plays a song called “Armchair Expert,” by the artist Voltorb.
Not only is this wrong — it’s actually dumber than I expected. If you go to Spotify and search “Discover Weekly” or “Armchair Expert,” the correct results show up first. So even if all Rabbit was doing was searching the app and clicking play for me — which is totally possible without AI and works great through the off-the-shelf automation software Rabbit is using for part of the process — it should still land on the right thing. The R1 mostly whiffs.
About a third of the time, I’ll ask the R1 to play something, it’ll pop up with a cheery confirmation — “Getting the music going now!” — and then nothing will happen. This happened in my testing across all of the R1’s features and reminded me a lot of the Humane AI Pin. You say something, and it thinks, thinks, thinks, and fails. No reason given. No noise letting you know. Just back to the bouncing logo homescreen as if everything’s A-okay.
The long and short of it is this: all the coolest, most ambitious, most interesting, and differentiating things about the R1 don’t work. They mostly don’t even exist. When I first got a demo of the device at CES, founder and CEO Jesse Lyu blamed the Wi-Fi for the fact that his R1 couldn’t do most of the things he’d just said it could do. Now I think the Wi-Fi might have been fine.
Without the LAM, what you’re left with in the R1 is a voice assistant in a box. The smartest thing Rabbit did with the R1 was work with Perplexity, the AI search engine, so that the R1 can deliver more or less real-time information about news, sports scores, and more. If you view the R1 as a dedicated Perplexity machine, it’s not bad! Though Perplexity is still wrong a lot. When I asked whether the Celtics were playing one night, the R1 said no, the next game isn’t until April 29th — which was true, except that it was already the evening of April 29th and the game was well underway. Like with Humane, Rabbit is making a bet on AI systems all the way down, and until all those systems get better, none of them will work very well.
For basic things, the kinds of trivia and information you’d ask ChatGPT, the R1 does as well as anything else — which is to say, not that well. Sometimes it’s right, and sometimes it’s wrong. Sometimes it’s fast — at its best, it’s noticeably faster than the AI Pin — but sometimes it’s slow, or it just fails entirely. It’s helpful that the R1 has both a speaker and a screen, so you can listen to some responses and see others, and I liked being able to say “save that as a note” after a particularly long diatribe and have the whole thing dumped into the Rabbithole. There’s a handy note-taking and research device somewhere inside the R1, I suspect.
To that point, actually: my single favorite feature of the R1 is its voice recorder. You just press the button and say, “Start the voice recorder,” and it records your audio, summarizes it with AI, and dumps it into the Rabbithole. $200 is pretty steep for a voice recorder, but the R1’s mic is great, and I’ve been using it a bunch to record to-do lists, diary entries, and the like.
The most enjoyable time I spent with the R1 was running around the National Mall in Washington, DC, pointing the R1’s camera at a bunch of landmarks and asking it for information via the Vision feature. It did pretty well knowing which large president was which, when memorials were built, that sort of thing. You could almost use it as an AI tour guide. But if you’re pointing the camera at anything other than a globally known, constantly photographed structure, the results are all over the place. Sometimes, I would hold up a can of beer, and it would tell me it was Bud Light; other times, it would tell me it was just a colorful can. If I held up a can of shaving cream, it identified it correctly; if I covered the Barbasol logo, it identified it as deodorant or “sensitive skin spray,” whatever that is. It could never tell me how much things cost and whether they had good reviews or help me buy them. Sometimes, it became really, really convinced my Dorito was a taco.
For the first few days of my testing, the battery life was truly disastrous. I’d kill the thing in an hour of use, and it would go from full to dead in six hours of sitting untouched on my desk. This week’s update improved the standby battery life substantially, but I can still basically watch the numbers tick down as I play music or ask questions. This’ll die way before your phone does.
A vision in orange
Just for fun, let’s ratchet the R1’s ambitions all the way down. Past “The Future of Computing,” past “Cool Device for ChatGPT,” and even past “Useful For Any Purpose At All.” It’s not even a gadget anymore, just a $200 desk ornament slash fidget toy. In that light, there is something decidedly different — and almost delightful — about the R1. A rectangle three inches tall and wide by a half-inch deep, its plastic body feels smooth and nice in my hand. The orange color is loud and bold and stands out in the sea of black and white gadgets. The plasticky case picks up fingerprints easily, but I really like the way it looks.
I also like the combination of features here. The press-to-talk button is a good thing, giving you a physical way to know when it’s listening. The screen / speaker combo is the right one because sometimes I want to hear the temperature and, other times, I want to see the forecast. I even like that the R1 has a scroll wheel, which is utterly superfluous but fun to mess around with.
As I’ve been testing the R1, I’ve been trying to decide whether Humane’s approach or Rabbit’s has a better chance as AI improves. (Right now, it’s easy: don’t buy either one.) In the near term, I’d probably bet on Rabbit — Humane’s wearable and screen-free approach is so much more ambitious, and solving its thermal issues and interface challenges will be tricky. Rabbit is so much simpler an idea that it ought to be simpler to improve.
But where Humane is trying to build an entirely new category and is building enough features to maybe actually one day be a primary device, Rabbit is on an inevitable collision course with your smartphone. You know, the other handheld device in your pocket that is practically guaranteed to get a giant infusion of AI this year? The AI Pin is a wearable trying to keep your hands out of your pockets and your eyes off a screen. The R1 is just a worse and less functional version of your smartphone — as some folks have discovered, the device is basically just an Android phone with a custom launcher and only one app, and there’s nothing about the device itself that makes it worth grabbing over your phone.
Lyu and the Rabbit team have been saying since the beginning that this is only the very beginning of the Rabbit journey and that they know there’s a lot of work left to do both for the R1 and for the AI industry as a whole. They’ve also been saying that the only way for things to get better is for people to use the products, which makes the R1 sound like an intentional bait-and-switch to get thousands of people to pay money to beta-test a product. That feels cruel. And $199 for this thing feels like a waste of money.
AI is moving fast, so maybe in six months, all these gadgets will be great and I’ll tell you to go buy them. But I’m quickly running out of hope for that and for the whole idea of dedicated AI hardware. I suspect we’re likely to see a slew of new ideas about how to interact with the AI on your phone, whether it’s headphones with better microphones or smartwatches that can show you the readout from ChatGPT. The Meta Smart Glasses are doing a really good job of extending your smartphone’s capabilities with new inputs and outputs, and I hope we see more devices like that. But until the hardware, software, and AI all get better and more differentiated, I just don’t think we’re getting better than smartphones. The AI gadget revolution might not stand a chance. The Rabbit R1 sure doesn’t.
Photography by David Pierce / The Verge