§ The Platform · The OS and the hardware it runs on

TabTab OS.
The soul of every Tab.

TabTab OS is the AI Agents Operating System that runs your Tab — six coordinated agents handling development, marketing, sales, outreach, support, and operations. It runs locally on dedicated hardware, hosted at the Bench.

Every TabTab deployment runs on dedicated AI hardware — your data, your runtime, your platform. We provision, configure, and operate the hardware on your behalf. The local-first architecture is deliberate: most agent operations run on local LLMs on the Tab’s hardware, with cloud APIs used selectively. As accelerated compute matures and clustering technologies improve, the ecosystem scales through hardware aggregation rather than cloud API dependency. This is the architectural foundation of our local-first thesis.

Dedicated hardware at the Bench — the physical home of every Tab
Fig. 00 / Premise

The OS runs on dedicated hardware today.

The principles of TabTab OS — local LLMs, owned data, no cloud lock-in, sealed nodes — work on any sufficiently capable local-first hardware. For the initial Tab cohort, we chose dedicated local hardware for three reasons:

i. Unified memory architecture

Unified memory shares a single pool of RAM among the CPU, GPU, and Neural Engine, so model weights are never copied between devices. 30B+ parameter models run at full context locally on a single machine smaller than a hardcover book.

ii. Compact form factor

Dedicated hardware sits on a shelf for years without maintenance. Tabs need infrastructure, not workstations: no server racks, no machine room, no IT department to maintain it.

iii. Sealed supply chain

New machines ship with predictable security guarantees, hardware enclaves, and Tailscale support. One vendor, one supply chain, one threat model.

Fig. I / Specifications

The hardware.

Every Tab runs on a dedicated node — hosted, managed, and monitored at the Bench in Irvine. One machine per Tab. No multi-tenancy.

Spec: Tab · Stack
Form factor: Compact desktop node
Unified memory: 24 GB
Storage: 512 GB SSD
GPU: 16-core
Neural engine: 16-core
Power: AC · always-on
Location: The Bench, Irvine
Access: Remote dashboard
§ 02 — Tab · Stack

Why dedicated hardware.

TAB·STACK · HOSTED · 24GB UNIFIED · 512GB SSD

Hosted at the Bench. Runs the directors and heavy brains. Indexes the vault around the clock. Active cooling for sustained inference.

Thermal headroom

Active cooling allows sustained multi-hour inference runs — massive vector embeddings, overnight vault indexing — without thermal throttling.

Maximum compute density

No screen, no battery, no keyboard. Every gram is inference power. Ideal for running 30B+ parameter models at full context locally.

Always-on architecture

Hosted at the Bench, monitored 24/7. Runs headless, SSH-disabled, Tailscale-only. The silent anchor of your org chart.

Fig. IV / Security

The security “sandwich”.
Four layers deep.

Every TabTab node ships with defense-in-depth security that eliminates the most common attack vectors in AI agent deployments. Your Tab’s data never leaves its dedicated node unless you say so.

i.

Hardware seal

SSH disabled at factory. No open ports. Tailscale loopback binding only. The node is invisible to the public internet.
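
In practice, "no open ports" means every agent service binds to the tailnet address rather than all interfaces. A minimal sketch of that binding discipline in Python — the address and function name are illustrative, not TabTab OS internals:

```python
import socket

# Hypothetical tailnet address; real Tailscale nodes get one in 100.64.0.0/10.
TAILSCALE_ADDR = "100.64.0.1"

def make_listener(addr: str = TAILSCALE_ADDR, port: int = 0) -> socket.socket:
    """Bind a service to one address only. Nothing listens on 0.0.0.0,
    so the port simply does not exist on any public interface."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((addr, port))
    s.listen()
    return s
```

A scanner probing the machine's public addresses sees nothing, because the socket was never bound there in the first place.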

ii.

Dirty-data protocol

Infrastructure email accounts isolate inbound data. Verified sender lists prevent prompt injection from malicious emails reaching agent context.
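
The allowlist check itself is deliberately simple. A hedged sketch of the idea — the addresses and function name are hypothetical, not the actual TabTab OS implementation, and a real deployment would also check authentication results (DKIM/SPF) rather than trusting the From header alone:

```python
# Hypothetical verified-sender list.
VERIFIED_SENDERS = {"founder@example.com", "ops@example.com"}

def admit_to_agent_context(sender: str) -> bool:
    """Mail from unverified senders stays quarantined in the
    infrastructure inbox and is never concatenated into an agent prompt."""
    return sender.strip().lower() in VERIFIED_SENDERS
```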

iii.

Bicameral architecture

The Steward handles strategy via a strict JSON schema. The Builder handles execution. They communicate through structured handoffs — no free-form LLM-to-LLM chatter that can be hijacked.
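
The handoff discipline can be sketched in a few lines: the Builder parses the Steward's output against a fixed schema and rejects anything else, so free-form text never crosses the boundary. The field names here are illustrative, not the actual TabTab OS schema:

```python
import json

# Illustrative handoff schema: a flat JSON object with exactly these string fields.
HANDOFF_FIELDS = {"task", "priority", "deadline"}

def parse_handoff(raw: str) -> dict:
    """Accept only a flat JSON object matching the schema; anything else
    (extra keys, nested objects, prose) is rejected outright."""
    msg = json.loads(raw)
    if not isinstance(msg, dict) or set(msg) != HANDOFF_FIELDS:
        raise ValueError("handoff does not match schema")
    if not all(isinstance(v, str) for v in msg.values()):
        raise ValueError("handoff values must be strings")
    return msg
```

An injected instruction smuggled into an extra field fails the exact-schema check and never reaches the Builder.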

iv.

Mesh encryption

WireGuard encryption via Tailscale. All node-to-node traffic is end-to-end encrypted. Zero data traverses the public internet in cleartext.

Fig. V / Positioning

The architecture spectrum.
Where TabTab sits.

Every AI deployment falls somewhere on this spectrum. Tabs sit deliberately in the middle — dedicated hardware, your data, but with the operational simplicity of a managed service.

DIY · Build it yourself

Self-hosted

  • Full control
  • No support
  • Weeks of setup
  • You are the sysadmin

TabTab · The middle

The Tab

  • Pre-configured hardware
  • White-glove onboarding
  • Local-first, your data
  • Sealed and supported

Cloud VPS · Rent a box

Noisy neighbors

  • Root password trust
  • Latency to your files
  • Monthly hosting fees
  • Provider reads metadata

Cloud API · SaaS

Zero hardware

  • Data leaves premises
  • Per-token pricing
  • Vendor lock-in
  • You don't own the model

Fig. VI / Roadmap

Where this is going.

The current architecture — one node, one Tab — is the starting point, not the ceiling. Unified memory architecture and the emerging ecosystem around local model clustering (Exo, RDMA-style interconnects, Thunderbolt fabric) point toward a future where multiple machines can pool inference capacity without cloud dependency.

We are not shipping distributed inference today. What we are doing is making every architectural decision with that future in mind: standardized hardware, local-first model execution, sealed nodes with mesh networking already in place. When the clustering layer is production-ready, every Tab in the portfolio is positioned to benefit without re-architecture.

This is the difference between a platform that happens to run locally and a platform designed for a local-first future. The hardware decisions today are the infrastructure decisions of tomorrow.

§ Get started

Your Tab,
sealed at the Bench.

Hardware procured new. Configured at the Bench. Sealed and deployed. Three-hour onboarding loads your knowledge vault. By Friday, your Tab is operating.

System operational
Build v1.0.0-rc
Tabs deploy from Irvine, CA
Continental US & Canada
QSBS-eligible · Delaware C-corp
© MMXXVI TabTab LLC · Irvine, California
Powered by TabTab OS