We are a design and research lab building natural interfaces that make complex AI workflows simple and affordable for everyone. Our flagship project is a friendly offline AI assistant named Modu.
Building the OS for local AI
We’re building the interface layer for an on‑device future, where people interact with local AI through intuitive design and clear, unassuming language - not parameter counts. Today, most on‑device AI interfaces are built for developers first. We design and test new UX systems that make local AI work seamlessly and feel enjoyable for anyone to use.
There are clear advantages to using open-source local models to build applications: they are private, run offline on your own hardware, can be tailored to your specific needs, and stay affordable over time because you are not locked into a vendor's metered cloud pricing. Yet most people are still unaware of local models, or assume they are either too technical to set up or too limited to be useful in everyday life, even though they increasingly match the sophistication of closed models.
Our thesis is that this is largely a design challenge, and that the value proposition of local models can be made clear through creative design that emphasizes practical value and instant usability. This is why we are designing an on-device copilot that understands the context of your hardware, screen, and actions, offers proactive recommendations, and embeds itself in your daily routine - all without sending data to the cloud or being eye-wateringly expensive.
But building great local AI isn’t just about the interface - it’s about delivering a complete, ready-to-use experience from day one. On their own, local models still fall short of what people now expect from everyday AI tools. It is only when you layer on web search, multimodality, large-scale file context, powerful RAG, and rich MCP connections that they become viable for real, day-to-day workflows.
People naturally expect immediate functionality without having to tinker with servers, config files, or API keys. Our team aims to deliver that "out-of-the-box" functionality, where web search, multimodality, and other key capabilities come pre-bundled in local AI apps.
To deliver that kind of out-of-the-box experience on real-world machines, we also have to be realistic about the limits of today's local models - after all, not everyone owns an M4 Max or RTX 4090. This is why we are building the technical foundations that let users switch effortlessly between local and cloud models, and actively tracking new research to optimize a hybrid approach that dynamically routes each task to a local or cloud model based on the user's hardware constraints.
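To make that routing idea concrete, here is a minimal sketch in TypeScript. Every name and threshold in it is an illustrative assumption rather than something from our codebase: it simply estimates whether a small quantized model plausibly fits within the machine's memory budget for the task at hand, and falls back to the cloud when it does not.

```ts
// Hypothetical hybrid router - names and numbers are assumptions
// for illustration, not measured benchmarks.
type Backend = "local" | "cloud";

interface HardwareProfile {
  totalRamGb: number; // system memory
  gpuVramGb: number;  // 0 on machines without a usable GPU
}

interface Task {
  contextTokens: number; // prompt plus retrieved file/RAG context
}

// Rough rule of thumb: a small 4-bit quantized model wants ~3 GB for its
// weights, and long contexts add KV-cache memory on top of that.
function routeTask(task: Task, hw: HardwareProfile): Backend {
  const memoryBudgetGb = Math.max(hw.gpuVramGb, hw.totalRamGb / 2);
  const estimatedNeedGb = 3 + task.contextTokens / 50_000; // toy estimate

  // Prefer local whenever the model plausibly fits: private, offline, free.
  return estimatedNeedGb <= memoryBudgetGb ? "local" : "cloud";
}

// An 8 GB laptop summarizing a short note stays local...
console.log(routeTask({ contextTokens: 2_000 }, { totalRamGb: 8, gpuVramGb: 0 })); // "local"
// ...while a 200k-token question on the same machine routes to the cloud.
console.log(routeTask({ contextTokens: 200_000 }, { totalRamGb: 8, gpuVramGb: 0 })); // "cloud"
```

A real router would weigh more than memory - latency, battery, task type, and the user's privacy preferences all matter - but the core decision stays this simple: keep work on-device whenever the hardware allows it.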
Who are we?
We were the first Korean team to receive the Medici Grant from Danielle Strachman, GP of the 1517 Fund, co-founder of the Thiel Fellowship, and early backer of Ethereum, Figma, Loom, and others.
Our previous projects placed 8th in Y Combinator’s SUS Pitch Competition, were nominated for Site of the Year by Framer, won a global case competition hosted by UC Berkeley, were featured on the front page of read.cv, and reached #1 trending on the leaderboard of Korea's Product Hunt.
If you’re a student based in Seoul (or not) with even a passing interest in building meaningful tools for students, get in touch.
Seoul Student Company is run by Joonseo Chang. Illustrations by the great noona Hannah Lee 🌼.