Mirage: A Unified Virtual Filesystem Workspace

A simulated environment where AI agents reach all of their data through one filesystem and bash.

npm install @struktoai/mirage-node      # Node, servers, CLIs
npm install @struktoai/mirage-browser   # Browser & edge
npm install @struktoai/mirage-agents    # OpenAI / Vercel AI / LangChain / Mastra
import { Workspace, RAMResource, S3Resource } from "@struktoai/mirage-node";

// Mount resources side-by-side as one filesystem
const ws = new Workspace({
  "/data": new RAMResource(),
  "/s3":   new S3Resource({ bucket: "my-bucket" }),
});

// Read, write, and pipe across services with bash
await ws.execute("cp /s3/report.csv /data/report.csv");
const { stdout } = await ws.execute("grep alert /s3/log.jsonl | wc -l");

// Snapshot and clone like git
ws.snapshot("demo.tar");

Capabilities

Polymorphic

Bash on Every Format

Mirage makes standard bash tools work on every format. cat, grep, head, and wc parse .parquet, .csv, .json, .mp3, .wav, .h5, and more.
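One way to picture this polymorphism is a decoder table keyed by file extension, so standard line-oriented tools only ever see text. The sketch below is illustrative and self-contained; the decoder names and table are assumptions, not Mirage's internals.

```typescript
// Hypothetical sketch of extension-based dispatch: each format decodes to a
// textual view before cat/grep/head/wc operate on it.
type Decoder = (bytes: Uint8Array) => string;

const decoders: Record<string, Decoder> = {
  // Plain text passes through unchanged.
  ".txt": (bytes) => new TextDecoder().decode(bytes),
  // JSON is pretty-printed so grep and head work line by line.
  ".json": (bytes) =>
    JSON.stringify(JSON.parse(new TextDecoder().decode(bytes)), null, 2),
  // Binary formats (.parquet, .mp3, .h5, ...) would decode to text here.
};

function polymorphicCat(path: string, bytes: Uint8Array): string {
  const ext = path.slice(path.lastIndexOf("."));
  const decode = decoders[ext] ?? ((b: Uint8Array) => new TextDecoder().decode(b));
  return decode(bytes);
}
```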

Heterogeneous

Pipe Across Systems

Mirage pipes one bash command across heterogeneous backends. Stitch S3, Google Drive, GitHub, Slack, Postgres, and Redis with Unix-like pipes.
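Conceptually, a cross-backend pipe is just stages of line filters where the source and sink live in different systems. Below is a minimal self-contained sketch of that idea; the Maps stand in for an S3 mount and a RAM mount, and none of this is Mirage's actual plumbing.

```typescript
// Illustrative: the equivalent of `grep alert /s3/log.jsonl | wc -l > /data/count`,
// with plain Maps standing in for heterogeneous backends.
const s3 = new Map<string, string>([
  ["log.jsonl", '{"level":"alert"}\n{"level":"info"}\n{"level":"alert"}'],
]);
const ram = new Map<string, string>();

// grep: keep lines containing the pattern.
const grep = (pattern: string) => (lines: string[]) =>
  lines.filter((l) => l.includes(pattern));
// wc -l: count lines.
const wcL = (lines: string[]) => [String(lines.length)];

// Read from one backend, pipe through the stages, write to another.
const piped = wcL(grep("alert")(s3.get("log.jsonl")!.split("\n")));
ram.set("count", piped.join("\n"));
```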

Versioned

Snapshot & Rollback

Mirage versions workspaces like git. Snapshot at any step, clone into parallel runs, and roll back to any prior version with a single API call.
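The snapshot/clone/rollback semantics can be sketched as a versioned key-value store: snapshots are frozen copies of the state, and clones are independent forks. This is a conceptual sketch under assumed names (`VersionedWorkspace` is hypothetical), not Mirage's implementation.

```typescript
// Illustrative: a workspace as a Map of paths to contents, with git-like
// snapshot, rollback, and clone operations.
class VersionedWorkspace {
  private files = new Map<string, string>();
  private snapshots = new Map<string, Map<string, string>>();

  write(path: string, data: string) { this.files.set(path, data); }
  read(path: string) { return this.files.get(path); }

  // Snapshot: record a copy of the current state under a label.
  snapshot(label: string) { this.snapshots.set(label, new Map(this.files)); }

  // Rollback: restore the state recorded under a label.
  rollback(label: string) {
    const snap = this.snapshots.get(label);
    if (!snap) throw new Error(`no snapshot ${label}`);
    this.files = new Map(snap);
  }

  // Clone: a parallel workspace starting from the same state.
  clone(): VersionedWorkspace {
    const ws = new VersionedWorkspace();
    ws.files = new Map(this.files);
    return ws;
  }
}
```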

Cached

Two-Layer Cache

Mirage caches repeated reads in a two-layer index and file cache. Calls to S3, Drive, or Slack collapse into local lookups, so agent loops stay fast and cheap.

Portable

Workspace as a Tar

Mirage workspaces are portable as a single .tar. The entire mounted state travels with the file, so agent runs hop across hosts without restart.
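The portability idea reduces to a round trip: pack the mounted state into one payload on the source host, ship it, and rehydrate it on the destination. The sketch below uses JSON purely as a stand-in serialization; Mirage's actual .tar layout is not shown here.

```typescript
// Illustrative: workspace state packed into a single portable blob.
type State = Record<string, string>;

// "Pack" the mounted state into one payload, like a .tar.
function pack(files: Map<string, string>): string {
  return JSON.stringify(Object.fromEntries(files));
}

// Rehydrate the workspace on the destination host without restarting the run.
function unpack(blob: string): Map<string, string> {
  return new Map(Object.entries(JSON.parse(blob) as State));
}
```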

Embeddable

Drop into Your Stack

Mirage drops directly into your stack. Embed a Workspace inside FastAPI, Express, or browser apps, and wire into OpenAI Agents SDK, Vercel AI SDK, LangChain, and other agent frameworks.

Python & TypeScript SDKs · Browser support · Snapshot & clone

Mount different resources as one filesystem. A simulated environment where AI agents reach all of their data through a single bash tool. Embed a workspace inside FastAPI, Express, browser apps, or any async runtime. Snapshot and clone workspaces like git to move agent runs without restarting systems.

Mountable resources

Amazon S3
Google Cloud Storage
Cloudflare R2
Supabase
Google Drive
Google Docs
Google Sheets
Gmail
GitHub
Linear
Notion
Trello
Slack
Discord
Telegram
MongoDB
Redis
PostgreSQL

AI agent frameworks

OpenAI Agents SDK
Vercel AI SDK
LangChain
Pydantic AI
CAMEL
Mastra
OpenHands
Claude Code
Codex
Cursor
Cline

Architecture

AI agent and application code talks to Mirage's bash and VFS layer. A dispatcher and two-layer cache route reads and writes to the backing infrastructure and remote services.


How does Mirage work?

  1.

    Declare a workspace

    In Python or TypeScript, mount the resources your agent needs: an S3 bucket at /s3, a Google Drive folder at /drive, a GitHub repo at /github, and RAM at /data, side by side under a single root.

  2.

    Hand the workspace to your agent

    Adapters wire the workspace into OpenAI Agents SDK, Vercel AI SDK, LangChain, Pydantic AI, CAMEL, Mastra, or OpenHands. The agent runs against the same mount tree it would in bash.

  3.

    Read, write, and pipe across services

    Agents reuse the bash vocabulary they already know. A two-layer cache keeps repeated work off the network. Workspaces snapshot, clone, and version like a filesystem.

Frequently Asked Questions

What is Mirage?

Mirage is a unified virtual filesystem and simulated environment for AI agents. It mounts services and data sources like S3, Google Drive, GitHub, Notion, Redis, and Postgres side-by-side as one filesystem. Agents reach every backend with the same handful of Unix-like tools, and pipelines compose across services as naturally as on a local disk.

Why a filesystem, not yet another SDK or MCP?

Modern LLMs are most fluent in bash and the filesystem semantics that come with it. Mirage exposes every backend through the same handful of Unix-like tools, so agents reason about one abstraction instead of N SDKs and M MCPs. Any LLM that already knows bash can use Mirage out of the box, with zero new vocabulary.

Which resources can Mirage mount?

RAM, disk, Redis, S3 / R2 / OCI / Supabase / GCS, Gmail / GDrive / GDocs / GSheets / GSlides, GitHub / Linear / Notion / Trello, Slack / Discord / Telegram / Email, MongoDB, SSH, and more, mounted side-by-side under a single root.

How do I use Mirage in my code?

Mirage ships Python (mirage-ai) and TypeScript (@struktoai/mirage-node, @struktoai/mirage-browser, @struktoai/mirage-core) SDKs, and a CLI. Embed a Workspace directly in FastAPI, Express, browser apps, or any async runtime. Adapters drop the workspace into OpenAI Agents SDK, Vercel AI SDK, LangChain, Pydantic AI, CAMEL, Mastra, and OpenHands.

Is Mirage open source?

Yes. Mirage is open source on GitHub at github.com/strukto-ai/mirage.

Try Mirage now

Open source, Python and TypeScript SDKs, plus a CLI.