The Large Action Model (LAM) for AI is here, packaged in a bright red handheld device.
Rabbit R1 is a new $199 handheld AI device that points to a future where AI not only understands the spoken word but also executes tasks by operating software interfaces on our behalf.
The R1: More Than Just a Device
At first glance, the Rabbit R1 might remind you of a handheld console from the ’90s, but it’s so much more. It’s a compact, standalone device, half the size of an iPhone, equipped with a 2.88-inch touchscreen, a rotating camera, and a unique scroll wheel/button for navigation. Its design, a collaboration with Teenage Engineering, emphasizes both aesthetics and functionality.
Rabbit OS
Rabbit OS is powered by a “Large Action Model” (LAM). Unlike conventional large language models, which generate text, the LAM acts as a universal controller for apps, streamlining tasks across various platforms. Whether it’s controlling music, ordering a car, or managing groceries, Rabbit OS handles it all through a single interface.
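To make the "universal controller" idea concrete, here is a minimal Python sketch of the pattern: one entry point that routes a recognized intent to whichever app can act on it. The adapter names and intents are hypothetical illustrations, not Rabbit's actual implementation.

```python
# Sketch of a "universal controller": many apps behind one interface.
# MusicAdapter/RideAdapter and the intent strings are invented for illustration.
from abc import ABC, abstractmethod


class AppAdapter(ABC):
    """One adapter per third-party app; the controller only sees this interface."""

    @abstractmethod
    def can_handle(self, intent: str) -> bool: ...

    @abstractmethod
    def execute(self, intent: str, args: dict) -> str: ...


class MusicAdapter(AppAdapter):
    def can_handle(self, intent: str) -> bool:
        return intent == "play_music"

    def execute(self, intent: str, args: dict) -> str:
        return f"Playing {args.get('track', 'something')}"


class RideAdapter(AppAdapter):
    def can_handle(self, intent: str) -> bool:
        return intent == "order_ride"

    def execute(self, intent: str, args: dict) -> str:
        return f"Ride requested to {args.get('destination', 'home')}"


class UniversalController:
    """Single entry point: take an intent and route it to an app that can act on it."""

    def __init__(self, adapters: list[AppAdapter]):
        self.adapters = adapters

    def handle(self, intent: str, args: dict) -> str:
        for adapter in self.adapters:
            if adapter.can_handle(intent):
                return adapter.execute(intent, args)
        return "No app available for that request"


controller = UniversalController([MusicAdapter(), RideAdapter()])
print(controller.handle("play_music", {"track": "Bohemian Rhapsody"}))
print(controller.handle("order_ride", {"destination": "the airport"}))
```

The point of the pattern is that adding a new capability means adding a new adapter, not redesigning the interface the user talks to.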
A New Era of App Interaction
Rabbit’s approach is ingenious. Instead of building new APIs or courting developer support for a new OS, Rabbit trained its model to use existing apps. The LAM learned app functionality by observing how humans interact with those apps, picking up on settings screens, order confirmations, and navigation flows.
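The sketch below illustrates the learning-from-demonstration idea in the simplest possible form: recording a human's UI actions for a task and replaying them later. The class and field names are hypothetical; a real system would pair each action with a screen observation and train a model that generalizes rather than replaying steps verbatim.

```python
# Toy sketch of learning app workflows from recorded human demonstrations.
# UIAction, Demonstration, and ActionModel are invented names for illustration only.
from dataclasses import dataclass, field


@dataclass
class UIAction:
    kind: str          # e.g. "tap", "type", "scroll"
    target: str        # UI element the human interacted with
    value: str = ""    # text entered, if any


@dataclass
class Demonstration:
    task: str                                        # e.g. "order groceries"
    actions: list[UIAction] = field(default_factory=list)


class ActionModel:
    """Stand-in for a Large Action Model: stores one demonstration per task
    and replays the recorded action sequence on request."""

    def __init__(self):
        self.demos: dict[str, Demonstration] = {}

    def learn(self, demo: Demonstration) -> None:
        self.demos[demo.task] = demo

    def act(self, task: str) -> list[UIAction]:
        demo = self.demos.get(task)
        return demo.actions if demo else []


model = ActionModel()
model.learn(Demonstration(
    task="order a car",
    actions=[
        UIAction("tap", "destination_field"),
        UIAction("type", "destination_field", "airport"),
        UIAction("tap", "confirm_ride_button"),
    ],
))
for step in model.act("order a car"):
    print(step)
```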
Rabbit R1 represents a significant step towards a future where our digital interactions are more streamlined and efficient. It’s not just a gadget; it’s a potential game-changer in how humans will interact with AI and digital tools.
For more innovation insights, subscribe here: https://lnkd.in/gNihwQQX