Documentation, software, and frameworks at the intersection of AI and Unix. Focused on safety, operational discipline, and tools that work the way experienced engineers expect.
What AI shells are, what they do, and how to use them safely on Linux and FreeBSD.
An AI command-line shell integrates a large language model (LLM) — such as Anthropic's Claude or OpenAI's GPT — into the traditional Unix shell environment. Instead of memorising exact syntax, you describe what you want in plain language and the AI generates the corresponding command for your review before execution.
A traditional shell such as bash, zsh, or fish runs
programs, automates tasks via scripts, and manages files and processes. An AI shell augments
that environment: generating commands from natural language, explaining what a command does
before you run it, fixing errors, writing pipelines, and summarising logs. The Unix execution
model stays deterministic. The AI operates in the thinking phase, not the execution phase.
Common tools include Claude Code (Anthropic's official agentic CLI), ShellGPT, Amazon Q for command line, Warp AI, and local-model setups via Ollama. They run on Linux, macOS, FreeBSD, and other Unix-like systems.
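The generate-then-review loop described above can be sketched as a small POSIX shell wrapper. Everything here is illustrative: `generate_command` is a hypothetical stub standing in for a real model backend (a curl call to a remote API, or a local model via Ollama), not the interface of any of the tools named above.

```shell
#!/bin/sh
# Minimal propose/review/execute loop (illustrative sketch).

# Hypothetical stub: replace with a real model call (remote API or
# local model via Ollama). Here it just returns a canned command.
generate_command() {
    printf 'ls -l\n'
}

ai() {
    request="$*"
    cmd=$(generate_command "$request")
    printf 'Proposed: %s\n' "$cmd"
    printf 'Run it? [y/N] '
    read -r answer
    case "$answer" in
        y|Y) eval "$cmd" ;;                 # execute only on explicit yes
        *)   echo 'Aborted.'; return 1 ;;   # default is to do nothing
    esac
}
```

The point of the sketch is the ordering: the model only proposes, the human confirms, and the deterministic shell executes — nothing runs without an explicit yes.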
The full introduction covers available tools, install guides for Linux and FreeBSD, local and remote models, security considerations, shell prompt integration, and a complete CLI tool index.
A firewall for AI-generated shell commands. Every proposed action is evaluated against declared policy before anything runs.
Large language models are good at generating shell commands. They are also probabilistic. Unix execution is deterministic and irreversible — a single misplaced flag or path can permanently alter system state. AIShell-Gate closes that gap.
It sits between an AI agent and the operating system as a deterministic policy engine. Every AI-generated command is evaluated against a layered policy stack before a single byte reaches the kernel. Unsafe commands are denied with a reason. Safe commands are allowed with a confirmation level appropriate to their risk. No shell is ever invoked. Default deny. Tamper-evident audit log.
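The "default deny" evaluation can be illustrated with a toy policy check. This is a sketch of the general idea only, not AIShell-Gate's actual policy language or implementation: the command's first word is matched against declared rules, destructive commands are denied with a reason, and anything unmatched falls through to deny.

```shell
#!/bin/sh
# Toy default-deny policy check (illustrative; not AIShell-Gate's code).
# Prints ALLOW and returns 0 for listed read-only commands; otherwise
# prints a reason and returns 1. Unmatched commands hit the default-deny arm.
policy_check() {
    first_word=${1%% *}    # command name = text before the first space
    case "$first_word" in
        ls|cat|grep|head|tail)
            echo 'ALLOW'; return 0 ;;
        rm|dd|mkfs|mkfs.*)
            echo "DENY: destructive command '$first_word'"; return 1 ;;
        *)
            echo 'DENY: not in policy (default deny)'; return 1 ;;
    esac
}
```

A real gate must also inspect flags, paths, pipes, and redirections — matching on the first word alone is nowhere near sufficient — but the shape is the same: every command meets the policy before it meets the kernel.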
An operational safety manual for engineers who already live in the terminal. Not a productivity tutorial.
Most AI tutorials focus on productivity. Almost none focus on safety. Oyster Stew exists to prevent expensive mistakes. It is a field guide for experienced Unix users — engineers, DevOps teams, and system administrators — integrating AI into real shell environments without creating new risks.
Seven chapters covering the core discipline: AI proposes, human reviews, Unix executes.
Includes working ai.sh wrappers for remote APIs and local LLMs, the
pipe-to-review pattern, failure modes, and the guardrails that belong outside the model.
Includes a printable safety checklist suitable for onboarding staff to AI-assisted
shell environments.
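One hedged reading of the pipe-to-review pattern mentioned above (the book defines it properly): model output is written to a file and read by a human before a separate, deliberate execution step, rather than being piped straight into `sh`. `ai_suggest` is a hypothetical stand-in for any command generator.

```shell
#!/bin/sh
# Pipe-to-review, sketched: generate -> inspect -> execute as three
# distinct steps. ai_suggest is a hypothetical stub, not a real tool.
ai_suggest() { printf 'echo resized 3 files\n'; }

proposal=$(mktemp)
ai_suggest "resize all PNGs" > "$proposal"
cat "$proposal"       # review step: read the script before running it
sh "$proposal"        # execute only after review, as a separate action
rm -f "$proposal"
```

The anti-pattern it replaces is `ai_suggest ... | sh`, where generation and execution collapse into one unreviewable step.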
A rigorous framework for the cognitive work that precedes implementation. Hard material, taken seriously.
Something changed in the nature of programming, and it changed faster than the profession has been able to absorb. The primary activity of building software has shifted away from the layer the profession was organised around. The skills that were most valuable in the old configuration are not the most valuable ones in the new one.
The Shape of Intent is a framework for practitioners who have already felt that shift — who have worked with AI enough to sense that old habits do not fit the new medium, and who need the vocabulary, the daily disciplines, and the named reusable patterns that make working with AI reliable rather than intermittently inspired. Organised across five parts: Theory, Practice, Patterns, Application Domains, and Philosophy.