I kept building agents that could do the work, but not remember the relationship.
Jobot could read job alerts, scrape descriptions, score roles, and move opportunities through a pipeline. CheckApp could review content before publish. My Open Agents fork could edit code in a sandbox and open pull requests.
Each project was different, but the same problem kept showing up underneath: once an agent starts acting on behalf of a person or a team, it needs structured memory.
Not vibes.
Not a longer prompt.
Not a folder of notes.
Customer memory.
Who is this person? Which company do they belong to? What happened last time? Which deal are they attached to? What stage is it in? Who owns the relationship? What should happen next? Can an agent write that record safely? Can another agent read it later without guessing?
That is the problem I built Orbit AI to solve.
Orbit AI is open-source CRM infrastructure for AI agents and developers. It is not a hosted CRM product. It is not a UI. It is the memory layer I wanted under agentic applications: typed contacts, companies, deals, notes, tasks, activities, tags, webhooks, imports, products, payments, contracts, and sequences, exposed through the surfaces builders actually use.
REST API. TypeScript SDK. CLI. MCP server. Direct core access. Same model underneath.
This post is the build story: why I built it, what was harder than expected, and why I think agent products need CRM-shaped memory before they need another clever prompt.
The Problem Was Not "CRM"
I did not wake up wanting to build a CRM.
I was building agent workflows and kept noticing that the valuable work happened after the first tool call. An agent could find a lead, draft a message, summarize a thread, or recommend a next step. But if the result disappeared into a chat transcript, the workflow was fragile.
The next run would ask the same question again.
The next agent would not know what already happened.
The next system would need another custom integration.
That is where prompts stop being enough. A prompt can tell an agent how to behave in this moment. It does not give the product durable state. It does not create a tenant boundary. It does not define who can read which contact. It does not tell you whether a deal moved from discovery to proposal. It does not give you a typed activity log that another system can trust.
Vector memory is useful, but it is not a CRM model. A semantic blob can help retrieve context. It cannot replace structured records for contacts, companies, deals, stages, owners, notes, activities, and tasks.
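To make "CRM-shaped" concrete, here is a minimal sketch of what structured records look like compared to a semantic blob. The type and field names below are illustrative assumptions, not Orbit's actual schema:

```typescript
// Hypothetical record shapes illustrating structured CRM memory.
// Names are assumptions for this sketch, not Orbit's real entities.
interface Contact {
  id: string;
  organizationId: string;   // tenant boundary
  companyId?: string;
  email: string;
  ownerId: string;          // who owns the relationship
}

interface Deal {
  id: string;
  organizationId: string;
  contactId: string;
  stageId: "discovery" | "proposal" | "closed_won" | "closed_lost";
  amount: number;
}

interface Activity {
  id: string;
  organizationId: string;
  dealId: string;
  kind: "call" | "email" | "meeting" | "note";
  occurredAt: string;       // ISO timestamp another system can trust
  summary: string;
}

// An agent's output becomes a typed record, not a line in a transcript.
const activity: Activity = {
  id: "act_1",
  organizationId: "org_1",
  dealId: "deal_1",
  kind: "meeting",
  occurredAt: new Date("2025-01-15T10:00:00Z").toISOString(),
  summary: "Discovery call; budget confirmed, send proposal next.",
};
```

A vector store can retrieve the summary text; only the typed record tells a later workflow which deal, which tenant, and which stage it belongs to.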
Hosted CRMs solve the record problem, but they create a different one. If I am building an agent-native product, I do not always want to force users into another CRM account. I want infrastructure that can run beside the app, inside the same deployment boundary, with the same auth model and datastore decisions.
So the question became more specific:
What would CRM infrastructure look like if it were designed for agents first?
The Shape of the Answer
Orbit starts with a simple belief: one customer memory model should be reachable through multiple trusted surfaces.
The core package owns the domain model and storage adapters. Around that, Orbit exposes thin surfaces:
```
              MCP hosts / agents
                       |
                       v
                 @orbit-ai/mcp
                       |
SDK clients ---> @orbit-ai/api <--- CLI scripts
     |                 |
     |                 v
     +--------> @orbit-ai/core <--- starter apps
                      |
           +----------+----------+
           |                     |
        SQLite               Postgres
    local dev/tests      production + RLS
```
That architecture matters because different builders need different entry points.
If you are writing server-side TypeScript, use the SDK. If another service needs JSON over HTTP, use the REST API. If a script or CI job needs to operate on records, use the CLI. If Claude, Cursor, Copilot, or another MCP host needs customer tools, use the MCP server. If you are inside trusted server-side code and want no network boundary, use direct core access.
The point is not to have five separate products. The point is to keep one entity model and make every surface speak it consistently.
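The pattern behind "one model, many surfaces" can be sketched with a shared transport interface. Every name here, the classes, methods, and endpoint path, is a hypothetical illustration of the approach, not Orbit's real SDK:

```typescript
// Sketch: the same contact create expressed through two surfaces that
// share one contract. All identifiers are assumptions for illustration.
interface Transport {
  request(method: string, path: string, body?: unknown): Promise<unknown>;
}

// HTTP transport: JSON over the REST API (requires Node 18+ for fetch).
class HttpTransport implements Transport {
  constructor(private baseUrl: string, private token: string) {}
  async request(method: string, path: string, body?: unknown) {
    const res = await fetch(this.baseUrl + path, {
      method,
      headers: {
        authorization: `Bearer ${this.token}`,
        "content-type": "application/json",
      },
      body: body ? JSON.stringify(body) : undefined,
    });
    return res.json();
  }
}

// Direct transport: same contract, no network hop -- calls core in-process.
class DirectTransport implements Transport {
  constructor(
    private core: { create(entity: string, data: unknown): Promise<unknown> },
  ) {}
  async request(method: string, path: string, body?: unknown) {
    // A real router would dispatch on method/path; this handles one case.
    if (method === "POST" && path === "/contacts") {
      return this.core.create("contact", body);
    }
    throw new Error(`unhandled: ${method} ${path}`);
  }
}

// Callers only ever see the Transport interface, so both modes stay consistent.
async function createContact(t: Transport, email: string) {
  return t.request("POST", "/contacts", { email });
}
```

The design choice is that consistency lives in the interface: a surface cannot drift from the model without failing the shared contract.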
That consistency became the hard part.
The First Real Bug Was a Language Problem
The most important bug I hit was not glamorous. It was serialization.
Internally, the core services use TypeScript and Drizzle-style camelCase fields. Public API consumers expect snake_case fields like organization_id, stage_id, and request_id. That sounds like a small naming issue until you realize it can break the contract between every surface.
In PR #44, I fixed the API/SDK boundary with a bidirectional serialization layer. API responses now serialize internal records into the public shape. Request bodies deserialize public inputs back into the internal shape. DirectTransport had to match HTTP mode, not invent its own behavior.
That PR also fixed a more serious security issue: a webhook signing secret field was leaking through DirectTransport. The fix stripped sensitive fields consistently across transports.
The lesson was blunt: if an agent can write customer data, naming is not cosmetic. The contract is the product.
An agent does not care that your ORM says organizationId while your API says organization_id. It will call whatever surface you expose. If the SDK, API, CLI, and MCP server disagree, the product becomes unreliable in exactly the place where agents need determinism.
That is why Orbit treats serialization as infrastructure, not glue code.
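Here is a minimal sketch of what a bidirectional serialization boundary with consistent redaction can look like. The helper names and field list are assumptions for illustration; the real mapping in Orbit is per-entity and typed:

```typescript
// Sketch of a camelCase <-> snake_case boundary with redaction applied
// on every transport, not just HTTP. Names are illustrative assumptions.
const toSnake = (s: string) => s.replace(/[A-Z]/g, (c) => "_" + c.toLowerCase());
const toCamel = (s: string) => s.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());

type Rec = Record<string, unknown>;

// Internal record -> public shape, stripping sensitive fields first.
function serialize(internal: Rec, redact: string[] = []): Rec {
  const out: Rec = {};
  for (const [k, v] of Object.entries(internal)) {
    if (redact.includes(k)) continue; // strip secrets on EVERY transport
    out[toSnake(k)] = v;
  }
  return out;
}

// Public request body -> internal shape.
function deserialize(publicBody: Rec): Rec {
  const out: Rec = {};
  for (const [k, v] of Object.entries(publicBody)) out[toCamel(k)] = v;
  return out;
}

// The signing secret never crosses the boundary, HTTP or direct.
const wire = serialize(
  { organizationId: "org_1", stageId: "stage_2", signingSecret: "whsec_..." },
  ["signingSecret"],
);
// wire -> { organization_id: "org_1", stage_id: "stage_2" }
```

The important property is that redaction and renaming happen in one place, so DirectTransport cannot accidentally expose a field that HTTP would have hidden.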
The Launch Gate Became 14 Journeys
Once the contract was cleaner, I needed to prove the surfaces worked together.
Unit tests were not enough. They could tell me a service worked in isolation, but they could not answer the question I actually cared about:
Can a builder use Orbit end to end?
So PR #48 added a private @orbit-ai/e2e package with 14 launch-gate journeys.
Those journeys covered the real paths:
- orbit init scaffolding config files
- SQLite adapter setup for local development
- contacts, companies, and deals CRUD across SDK HTTP, SDK direct, raw API, CLI, and MCP
- moving a deal between pipeline stages
- schema inspection and custom fields
- migration preview, apply, and destructive gates
- SDK HTTP auth, pagination, and typed errors
- SDK direct-core mode
- MCP tool registration and JSON-RPC calls
- Gmail, Google Calendar, and Stripe connector configuration
The PR body reported 1,725 passing tests and 16 e2e tests at that point. More important than the number was the kind of coverage: the tests followed the surfaces a real builder would touch.
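The core assertion behind those journeys can be sketched as a cross-surface consistency check: create a record through one surface, read it back through the others, and require identical public shapes. The interfaces below are assumptions about the pattern, not the real @orbit-ai/e2e harness:

```typescript
// Sketch of a launch-gate style check: every surface must return the
// same public shape for the same record. Names are illustrative.
interface Surface {
  name: string;
  getContact(id: string): Promise<Record<string, unknown>>;
}

async function assertConsistent(surfaces: Surface[], id: string): Promise<void> {
  const views = await Promise.all(surfaces.map((s) => s.getContact(id)));
  const canonical = JSON.stringify(views[0]);
  for (let i = 1; i < views.length; i++) {
    if (JSON.stringify(views[i]) !== canonical) {
      throw new Error(`${surfaces[i].name} disagrees with ${surfaces[0].name}`);
    }
  }
}
```

A check like this fails exactly when the serialization contract drifts on one surface, which is the failure mode unit tests kept missing.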
That changed how I thought about Orbit. It was no longer "do the packages build?" It was "can the same customer record survive across all the ways an agent or developer might touch it?"
That is the correct bar for agent infrastructure.
Why MCP Is a First-Class Surface
Orbit includes an MCP server because agents should not have to fake CRM access through brittle browser automation or one-off scripts.
The MCP package exposes 23 built-in tools over stdio or HTTP transport. The tools cover the core CRM operations: search, get, create, update, delete, relationships, bulk operations, pipelines, deal movement, activities, schema, imports, exports, sequences, reports, dashboard summaries, and assignment.
That matters because MCP turns customer memory into something an agent can use directly.
An agent can call create_record to add a contact. It can call move_deal_stage to update a pipeline. It can call log_activity after a meeting. It can call get_dashboard_summary before deciding what to do next.
But the trust boundary matters. Orbit's HTTP MCP transport requires bearer auth per request. Direct mode bypasses HTTP auth, rate limiting, and scope enforcement, so it is only for trusted local embeddings. That distinction is not an implementation detail. It is part of the product.
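For a feel of what that looks like on the wire, here is an illustrative MCP-style JSON-RPC payload and the auth header an HTTP transport would require. The tool names come from the post; the exact argument shapes are assumptions:

```typescript
// Illustrative JSON-RPC 2.0 payload an MCP host might send to invoke a
// tool. The arguments object is an assumption, not Orbit's exact schema.
const callCreateRecord = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "create_record",
    arguments: { entity: "contact", data: { email: "lead@example.com" } },
  },
};

// Over HTTP transport, every request carries bearer auth. Direct mode
// skips this boundary entirely, so it belongs only in trusted embeddings.
function mcpHeaders(token: string) {
  return {
    authorization: `Bearer ${token}`,
    "content-type": "application/json",
  };
}
```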
The more powerful agents become, the more explicit these boundaries need to be.
What Is Actually Alpha-Ready
Orbit is public alpha software. I want to be precise about that because open-source trust depends on honest boundaries.
The source is public at github.com/sharonds/orbit-ai. The live product page is goorbit.cc. The repo is currently source-first, which means the packages are not published to npm yet. Until the alpha publish is ready, you clone the source, install with pnpm, and build the workspace locally.
The alpha foundation includes:
- type-safe CRM entities
- shared schemas, IDs, validation, pagination, and error contracts
- REST API with auth, scopes, request IDs, idempotency, rate limiting, and payload limits
- TypeScript SDK over HTTP or DirectTransport
- CLI commands for local and operational workflows
- MCP server with 23 built-in tools
- SQLite adapter for local development and tests
- Postgres-family path for production adapters
- Gmail, Google Calendar, and Stripe connector packages
- deterministic demo seed data
- starter scaffolder through @orbit-ai/create-orbit-app
The alpha gaps are real too. Some advanced routes and workflows are intentionally incomplete. Multi-instance stores, richer connector workflows, batch write implementation, and package publishing still need more work.
That is why the site says "source-first alpha." I would rather be explicit than pretend it is a finished hosted platform.
As I write this, the alpha release work is still visible in public. PR #74 is the Changesets release PR for the alpha package versions. That is the unglamorous part of launching open-source infrastructure: version packages, verify artifacts, document the release path, and resist the urge to make the landing page sound more finished than the code is.
The landing site followed the same rule. It is not a waitlist page. It is not a hosted SaaS funnel. It says what Orbit is, what works now, what is future work, and how to clone the source.
Why This Is Not Another Hosted CRM
The easiest way to explain Orbit is to say "CRM for agents." That is directionally right, but it can also mislead people.
Orbit is not trying to replace HubSpot, Salesforce, or Attio as a team-facing CRM UI. It is not asking your team to move into a new dashboard. It is not another vendor account that becomes the center of the workflow.
Orbit is lower in the stack.
It is for builders who need CRM-shaped data inside their own product: contacts, companies, deals, activities, tasks, notes, ownership, scopes, tenant context, and agent access. You deploy it beside your app. You decide the datastore. You decide the auth boundary. You decide which surfaces are exposed.
That is the difference between a product your team adopts and infrastructure your product depends on.
For agent-native software, that distinction matters. The agent is not just reading from the CRM. The agent may be part of the workflow that creates the customer record, updates the deal, logs the activity, or triggers the next step.
If the memory layer is outside your control, the agent workflow inherits that boundary. Sometimes that is fine. Sometimes it is exactly the wrong abstraction.
What Building Orbit Taught Me
Orbit clarified something I had been circling through Jobot, CheckApp, and Open Agents.
The next useful layer in AI products is not always a smarter model. Often it is boring infrastructure that makes the model's work durable, auditable, and safe to repeat.
Agents need tools, but tools are not enough. They need records. They need scopes. They need tenant boundaries. They need predictable error contracts. They need a way to write something today and have another workflow trust it tomorrow.
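"Predictable error contracts" can be made concrete with a small sketch. The shape below mirrors the public snake_case convention described earlier, but the codes and fields are illustrative assumptions, not Orbit's actual error schema:

```typescript
// Sketch of a uniform error contract every surface returns, so an agent
// can log, retry, or escalate without parsing prose. Codes are assumed.
interface ApiError {
  code: "not_found" | "forbidden" | "rate_limited" | "validation_failed";
  message: string;
  request_id: string;   // correlates the failure across surfaces and logs
  retryable: boolean;
}

// A deterministic retry decision, instead of guessing from message text.
function isRetryable(e: ApiError): boolean {
  return e.retryable || e.code === "rate_limited";
}
```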
That is what customer memory means to me now.
It is not "remember everything." It is remember the right things in the right shape, behind the right boundary.
That is the bet behind Orbit AI. Agents can already act. The missing layer is the structured memory that lets those actions compound.
Build the prompt if you need a better first move.
Build the memory layer if you want the second, third, and hundredth move to make sense.
Links
- Orbit AI live site: goorbit.cc
- Orbit AI source: github.com/sharonds/orbit-ai
- Orbit AI security post: What I Changed After Making Orbit AI Public
- Jobot build post: I Build AI for Clients. My Job Search Was Still Manual. So I Built an Agent.
- Open Agents deploy post: I Deployed Vercel Open Agents. Here Are the 4 Bugs I Found.