v0.4.2 — Now with Unified CLI

The Semantic Core for Agentic AI

The first MCP-native semantic density engine. Zero semantic loss with 30–90% token savings. Pure signal, zero noise.

Automatic Install
$ curl -fsSL omni.weekndlabs.com/install | sh
Package Manager
$ brew install fajarhide/tap/omni

Agent Autopilot: Always-On Intelligence

Ensure your AI consistently prioritizes OMNI for all environment interactions. Copy this instruction into your agent's Custom Instructions or System Prompt:

"Always prioritize OMNI tools (omni_execute, omni_read_file) for environment interactions. OMNI is my semantic layer; use it to preserve context and maximize reasoning density."

Why OMNI

AI agents are only as smart as the context they receive. OMNI is the missing layer between your tools and your LLM.

< 1ms

Filter Latency

80–99%

Token Efficiency

68 KB

Wasm Footprint

Zero

Configuration Needed

How It Works

OMNI sits between your AI agent and the outside world — silently distilling chaotic output into pure, high-density signal.

Tool Output (git diff · docker build · npm install · SQL queries)
  → OMNI Engine (Zig Core + Wasm · LRU Cache · Semantic Filter)
  → Pure Signal (30–90% fewer tokens · zero noise, full context)
  → AI Agent (Claude · Cursor · Windsurf · any MCP client)

The OMNI Effect

See how OMNI transforms verbose CLI output into actionable intelligence.

✗ Before OMNI ~600 tokens
$ docker build .
Step 1/15 : FROM node:18
 ---> 4567f123a89b
Step 2/15 : WORKDIR /app
 ---> Using cache
 ---> 89ab12cd34ef
Step 3/15 : COPY package*.json ./
 ---> Using cache
 ---> 56ef78ab901c
Step 4/15 : RUN npm install
 ---> Running in 23cd45ef67ab
npm warn deprecated inflight@1.0.6
npm warn deprecated rimraf@3.0.2
npm warn deprecated glob@7.2.3
added 847 packages in 32.451s
 ---> 78ab90cd12ef
... (500+ lines of noise) ...
Step 15/15 : CMD ["node", "server.js"]
 ---> 9012ab34cd56
Successfully built 1234abcd5678
✓ Semantic Core Active 98% saved
$ docker build .
Step 1/15 : FROM node:18 (High Signal)
[OMNI Context Manifest: npm-install (847 packages summarized)]
[OMNI: Dropped 500 lines of noise (Confidence: 0.15)]
Successfully built! ✓


  ✦ Consumes 98% fewer tokens
  ✦ Zero semantic loss (signal remains intact)
  ✦ 150× sharper LLM reasoning
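Conceptually, the distillation shown above is a line-level semantic filter: low-signal lines (layer hashes, cache notices, deprecation warnings) are dropped or summarized, while high-signal lines pass through untouched. A minimal Python sketch of that idea — not OMNI's actual Zig implementation; the patterns here are illustrative only:

```python
import re

# Illustrative noise patterns for docker build output (not OMNI's real rules).
NOISE_PATTERNS = [
    re.compile(r"^\s*--->"),              # layer hashes and cache lines
    re.compile(r"^npm warn deprecated"),  # dependency deprecation warnings
]

def distill(lines: list[str]) -> list[str]:
    """Keep high-signal lines; drop any line matching a noise pattern."""
    return [ln for ln in lines
            if not any(p.search(ln) for p in NOISE_PATTERNS)]

raw = [
    "Step 1/15 : FROM node:18",
    " ---> 4567f123a89b",
    "npm warn deprecated inflight@1.0.6",
    "added 847 packages in 32.451s",
    "Successfully built 1234abcd5678",
]
signal = distill(raw)
# Hashes and warnings are dropped; the build narrative survives intact.
print(signal)
```

The real engine adds scoring, summarization, and caching on top, but the pass-through guarantee is the same: a line that matches no filter is never altered.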

Unified CLI

One binary. Every intelligence tool you need.

omni distill
The core semantic engine — pipe any CLI output and get pure signal.
omni density
Analyze context gain and "Information per Token" metrics.
omni report
Generate a unified system status and performance summary.
omni bench
High-speed benchmark for semantic throughput measurement.
omni generate
Output templates for Claude Code, Antigravity, and others.
omni setup
Interactive guide for integration and standard aliasing.
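The "Information per Token" idea behind `omni density` can be approximated with a naive metric: unique tokens divided by total tokens, so repetitive output scores low and varied, informative output scores high. A hedged sketch — OMNI's actual formula is not documented here, and this ratio is purely illustrative:

```python
def density(text: str) -> float:
    """Naive information-per-token score: unique words / total words.
    Illustrative only -- not OMNI's real metric."""
    tokens = text.split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

noisy = " ---> Using cache\n ---> Using cache\n ---> Using cache"
dense = "Step 4/15 : RUN npm install added 847 packages"
# Repetitive cache lines score low; the informative line scores 1.0.
print(round(density(noisy), 2), round(density(dense), 2))
```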

The Power Comparison

Precise intelligence, not just text processing.

Feature      | OMNI             | RTK             | Snip       | Serena
Engine Speed | < 1ms            | ~10ms           | ~15ms      | ~50ms+
Density Gain | 60–99%           | 10–20%          | ~5%        | Variable
Governance   | SHA-256 Trust    | Basic Rules     | Manual     | None
Auditing     | Deep JSON Report | Basic Stats     | None       | None
Philosophy   | Semantic Layer   | Proxy Filtering | YAML Pipes | Retrieval
Deployment   | 68 KB Wasm       | Native Bin      | Static Bin | Python Pkg
Memory       | Manual (Zero GC) | ARC (Fast)      | GC (Slow)  | GC (Heavy)

Get Started in Seconds

OMNI integrates natively with your favorite AI agents via MCP.

Claude Code / Claude CLI

Register OMNI as an MCP server automatically. This command detects the binary's absolute path and registers the server for you.

omni generate claude-code

Antigravity (Google)

Fully automated setup. Just run the command below, and it will safely patch your `mcp_config.json` without removing other servers.

omni generate antigravity
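For reference, an MCP server entry in `mcp_config.json` generally follows the standard MCP client shape below. The `command` path and `args` here are illustrative placeholders, not necessarily what `omni generate` writes on your machine:

```json
{
  "mcpServers": {
    "omni": {
      "command": "/usr/local/bin/omni",
      "args": ["mcp"]
    }
  }
}
```

The generator merges an entry like this into the existing `mcpServers` object rather than overwriting the file.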

One-Line Installer

The fastest way to get started, served via a global redirect.

curl -fsSL omni.weekndlabs.com/install | sh

Homebrew (Recommended)

Install OMNI and its core components via Homebrew on macOS or Linux.

brew install fajarhide/tap/omni

Pipe Anything

OMNI works with any CLI tool — just pipe the output. No filter match? Passes through unchanged with zero overhead.

git diff | omni distill

Custom Semantic DSL

Define your own filters in `omni_config.json`. No Zig knowledge required. High-speed, SIMD-optimized distillation for any tool.
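A sketch of what such a filter definition might look like. The schema below is hypothetical — the field names (`name`, `match`, `drop`, `summary`) are illustrative guesses, not OMNI's documented `omni_config.json` format:

```json
{
  "filters": [
    {
      "name": "docker-build",
      "match": "Step \\d+/\\d+",
      "drop": ["^\\s*--->", "^npm warn deprecated"],
      "summary": "Dropped {count} low-signal lines"
    }
  ]
}
```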

Roadmap

Building the universal semantic compression layer for all AI agents.

✓ Foundation

Core Engine Ready

  • High-performance Zig core with Wasm target
  • Semantic filters for major development tools
  • Unified Native CLI and MCP Server support
  • Self-hosted metrics and reporting
✓ Intelligence & DSL

Declarative Empowerment

  • High-performance Native Filter DSL (Zig Byte-Engine)
  • Zero-coding semantic rule creation
  • SIMD-optimized string capture & formatting
  • Hot-reloadable JSON-based configurations
◇ Governance & Trust

Secure Data Boundaries

  • SHA-256-based trust verification
  • Project-local security policy enforcement
  • Private execution layer for sensitive data
◇ Scaling & Ecosystem

Distributed Intelligence

  • Browser-based Wasm target for web IDEs
  • Deep session auditing and analytics dashboards
  • Comprehensive plugin SDK for the community