From Personal Robots to Intelligent Services
KIKI, a personal robot system designed with former Google X researchers. The challenge was designing for interactions you couldn't predict.
About ten years ago, I worked with former Google X Everyday Robotics researchers to design a personal robot system. This was my introduction to deep reinforcement learning, the same family of techniques that now helps power Claude, ChatGPT, and the other LLMs we work with today.
I was responsible for the research, the interaction design (both physical and digital), and the overall value proposition. It was a startup, so everything mattered.
The main thing I learned: when you're designing intelligent systems, you can't predict what the interaction will be once the product ships. Certain behaviors emerge. The system learns. The user adapts. The interaction becomes something you didn't design for.
Families could tune KIKI's personality across six dimensions using the HEXACO framework, creating guardrails while allowing emergent behavior.
In KIKI's case, we designed for guardrails from the start. We created a personality graph where families could control the robot's behavioral boundaries across six dimensions based on the HEXACO personality framework: Honesty-Humility, Emotionality, Extraversion, Agreeableness, Conscientiousness, and Openness to Experience. Families could tune how introverted or extroverted KIKI would be, how agreeable, how emotionally reactive. This allowed for emergence, but always kept a human in the loop to modulate the experience.
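The idea of family-set guardrails bounding an emergent behavior can be sketched in code. This is a minimal, hypothetical illustration, not KIKI's actual implementation: the dimension names follow HEXACO, but the `PersonalityBounds` structure, the 0-to-1 scale, and the clamping step are assumptions made for clarity.

```python
from dataclasses import dataclass

# The six HEXACO dimensions the family could tune.
HEXACO_DIMENSIONS = (
    "honesty_humility", "emotionality", "extraversion",
    "agreeableness", "conscientiousness", "openness",
)

@dataclass
class PersonalityBounds:
    """Family-set guardrails for one dimension, on a hypothetical 0.0-1.0 scale."""
    low: float
    high: float

def clamp_trait(learned_value: float, bounds: PersonalityBounds) -> float:
    """Keep an emergent (learned) trait value inside the family's guardrails."""
    return max(bounds.low, min(bounds.high, learned_value))

# Default: every dimension is unconstrained.
guardrails = {dim: PersonalityBounds(0.0, 1.0) for dim in HEXACO_DIMENSIONS}

# A family allows only moderate-to-high extraversion.
guardrails["extraversion"] = PersonalityBounds(0.4, 0.8)

# The learning system proposes 0.95; the guardrail pulls it back to 0.8.
expressed = clamp_trait(0.95, guardrails["extraversion"])
```

The key design point is in the last line: the learning system is free to drift, but whatever it proposes is always filtered through bounds a human set, so emergence never escapes the family's control.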
The design challenge wasn't just building a robot; it was building one that families could shape to fit their lives.
With a robot, this design challenge of emergent systems is tangible: you can see the system acting in the world. But as I've moved into designing complex services (tech platforms, financial systems, healthcare experiences), these lessons about emergent behavior have become even more relevant. I design for alternative paths, for edge cases, and for the moments when the system doesn't behave as expected. Critically, I also design tools that let end users shape the experience they want, or that feels safe to them, after the service has shipped.
This showed up in my work building the Responsible Mixed Reality team at Microsoft, and it shows up in my current work designing safeguards for agentic servicing in financial services. The principle is the same: intelligent systems require intelligent constraints, and the user needs to stay in control.