I use AI all day, but the work kept scattering.
One thread for coding. Another for research. Another for a product idea. Another buried inside a model provider's app. Some context in ChatGPT, some in Claude, some in a local coding session, some in screenshots, some gone forever because I opened the wrong tab.
That is fine when AI is occasional.
It breaks when AI becomes part of how you work every day.
I did not need another chatbot demo. I needed a home base: a clean, hosted assistant surface for coding questions, research, idea exploration, and quick thinking.
So I built Sharon Chat.
The Problem Was Not Access to Models
Everyone has access to models now.
That is not the bottleneck.
The bottleneck is the working surface around the models. Where do your conversations live? How quickly can you start? Can you switch models without switching products? Can you keep the interface quiet enough to think? Can you turn the assistant into your own workflow instead of adapting yourself to someone else's product roadmap?
Most people treat chat as temporary. Ask, answer, close the tab.
But if AI is part of your operating rhythm, chat becomes infrastructure. It is where rough ideas get shaped, where research gets compressed, where technical questions get unstuck, and where a half-formed thought becomes the next task.
That deserves a product surface you control.
What Sharon Chat Does
Sharon Chat is a hosted personal AI chat environment.
It is intentionally simple:
- ask coding, research, and strategy questions
- keep the interface clean and app-like
- use a modern AI SDK stack instead of a pile of one-off scripts
- keep the surface hackable so it can evolve with my workflow
The point is not to compete with ChatGPT or Claude. That would be the wrong frame.
The point is ownership.
I wanted an assistant surface that belongs to my workflow, sits next to the rest of my product ecosystem, and can be shaped as my needs change.
That is why this product matters to me. It is not a chatbot as a destination. It is a workbench.
Why I Built It on Vercel's Chatbot Template
I started from Vercel's open-source Chatbot template because it already makes the right infrastructure choices for a serious assistant surface.
The template gives you:
- Next.js App Router
- Vercel AI SDK
- Auth.js
- shadcn/ui
- data persistence
- Vercel Blob support
- AI Gateway for model routing
That matters because a chat product looks simple until you build the boring parts. Auth. Persistence. Model routing. File storage. UI state. Deployment. Environment variables. You can waste days rebuilding scaffolding before you even get to the product behavior.
For Sharon Chat, I did not want to prove I could rebuild the boilerplate. I wanted a working surface I could adapt quickly.
The repo history also reflects the model direction I wanted: I moved the default model to Kimi K2.5 in April 2026. That choice fits the product's purpose: fast, practical daily work, not model-leaderboard theater.
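One detail worth showing: AI Gateway addresses models with provider-prefixed ids of the form `provider/model`, which is what makes "switch models without switching products" a one-line change. The sketch below is illustrative, not the template's actual code; the helper name and the exact Kimi slug (`moonshotai/kimi-k2.5`) are assumptions.

```typescript
// Sketch, assuming Vercel AI Gateway's "provider/model" id convention.
// The slug "moonshotai/kimi-k2.5" is a guess at the Kimi K2.5 id.

type GatewayModel = { provider: string; model: string };

// Split a gateway model id into provider and model parts, failing
// loudly on malformed ids so a bad config surfaces at startup.
function parseGatewayModelId(id: string): GatewayModel {
  const parts = id.split("/");
  if (parts.length !== 2 || !parts[0] || !parts[1]) {
    throw new Error(`invalid gateway model id: "${id}"`);
  }
  return { provider: parts[0], model: parts[1] };
}

// Keeping the default in one constant (or an env var) means swapping
// the daily-driver model touches exactly one line.
const DEFAULT_CHAT_MODEL = "moonshotai/kimi-k2.5";
const { provider, model } = parseGatewayModelId(DEFAULT_CHAT_MODEL);
console.log(provider, model); // → moonshotai kimi-k2.5
```

In the template itself, an id like this is handed to the AI SDK's model resolution; validating it up front just keeps a typo in config from surfacing mid-conversation.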
The Human Value
The value is not "chat with AI."
The value is reducing the friction between thought and useful next step.
When I am building, I do not want to stop and decide which product to open. I do not want the interface to push me into a feed, a store, or a feature I do not need. I want a quiet place to ask the next question and keep moving.
For founders and operators, that matters more than it sounds.
Most of the day is not deep strategy. It is small decisions:
- clarify this idea
- compare these options
- turn this rough thought into a plan
- explain this error
- rewrite this message
- summarize this research
- pressure-test this product angle
Those moments compound. If the assistant surface is slow, noisy, or scattered, the habit breaks. If it is close, clean, and yours, it becomes part of the way you think.
That is what I wanted Sharon Chat to be.
What Makes It Different From a Demo
A demo proves a feature.
A product surface changes behavior.
Sharon Chat is small, but it is built as a real surface in my ecosystem. It has a live URL. It has a thumbnail, a project card, and a place next to Orbit AI, Jobot, CheckApp, and Open Agents. It is one part of a broader pattern: I keep turning repeated AI behavior into owned infrastructure.
Jobot handles the job-search pipeline.
Orbit AI handles customer memory for agents.
CheckApp handles content quality before publish.
Open Agents turns repo tasks into reviewable work.
Sharon Chat is the daily thinking surface between all of that.
It is the place I go when the work is not yet a workflow, a repo task, or a product. It is where the idea starts.
The Takeaway
Not every AI product needs to be a SaaS company.
Sometimes the most useful product is a private surface that matches how you actually work.
That is the lesson from Sharon Chat. The model matters. The stack matters. But the real question is simpler:
Where does your AI work live?
If the answer is "wherever the last tab happened to be," you do not have a workflow. You have fragments.
I built Sharon Chat to give those fragments a home.
Links
- Live app: chat.sharonsciammas.com
- Source template fork: github.com/sharonds/chatbot
- Related: I Built Orbit AI Because Agents Need Customer Memory, Not Another Prompt
- Related: I Built a Live Coding Agent on Vercel. The Point Wasn't the Code.