HLVM CLI Reference

Complete reference for the hlvm command-line interface.

Quick Reference

Command             Description
hlvm run            Execute HQL or JavaScript code
hlvm repl           Interactive shell (REPL)
hlvm ask            AI agent task execution
hlvm chat           Plain one-turn AI chat
hlvm model          Model management (list, set, show, pull, rm)
hlvm ai             AI model setup (prefer hlvm model)
hlvm serve          HTTP runtime host
hlvm hql init       Initialize an HQL project
hlvm hql compile    Compile HQL to JS or native binary
hlvm hql publish    Publish an HQL package
hlvm mcp            MCP server management
hlvm ollama         Explicit compatibility bridge to system Ollama
hlvm upgrade        Check for updates
hlvm uninstall      Remove HLVM from system

hlvm run

Execute HQL or JavaScript code from a file or inline expression.

hlvm run <target.hql|target.js>    Run a file
hlvm run '<expression>'            Run an HQL S-expression

Options:

Flag                  Description
--verbose, -v         Enable verbose logging
--time                Show performance timing
--print               Print transpiled JS without executing
--debug               Show detailed debug info and stack traces
--log <namespaces>    Filter logging to specified namespaces
--help, -h            Show help

Examples:

hlvm run '(+ 1 1)'             # Auto-prints: 2
hlvm run hello.hql             # Run file
hlvm run app.js                # Run JavaScript

Single S-expressions auto-print their result. File targets support .hql, .js, and .ts.


hlvm repl

Start the interactive shell. With no arguments, hlvm starts the REPL by default.

hlvm repl [options]

Options:

Flag           Description
--ink          Force Ink REPL (requires interactive terminal)
--no-banner    Skip the startup banner
--help, -h     Show help
--version      Show version

Input routing:

Input              Action
(expression)       HQL code evaluation
(js "code")        JavaScript evaluation
/command           Slash commands
Everything else    AI conversation

hlvm ask

Interactive AI agent for task execution. Runs the full agent orchestration loop with tool calling and planning.

hlvm ask "<query>"

Options:

Flag                              Description
-p, --print                       Non-interactive output (defaults to dontAsk permission mode)
--verbose                         Show agent header, tool labels, stats, and trace output
--output-format <fmt>             Output format: text (default), json, stream-json
--usage                           Show token usage summary after execution
--attach <path>                   Attach a file input (repeatable)
--model <provider/model>          Use a specific AI model
--no-session-persistence          Use an isolated hidden session for this run only
--permission-mode <mode>          Set permission mode (see below)
--allowedTools <name>             Allow specific tool (repeatable)
--disallowedTools <name>          Deny specific tool (repeatable)
--dangerously-skip-permissions    Alias for --permission-mode bypassPermissions
--help, -h                        Show help

Examples:

# Interactive (default)
hlvm ask "list files in src/"

# Non-interactive
hlvm ask -p "analyze code quality"

# Permission modes
hlvm ask --permission-mode acceptEdits "fix the bug"
hlvm ask --permission-mode dontAsk "analyze code"

# Tool permissions
hlvm ask --allowedTools write_file "fix bug"
hlvm ask --disallowedTools shell_exec "analyze code"

# Structured output
hlvm ask --output-format stream-json "count test files"   # NDJSON events
hlvm ask --output-format json "count test files"          # Single JSON result

# Model selection and attachments
hlvm ask --model openai/gpt-4o "summarize this codebase"
hlvm ask --attach ./screenshot.png "describe this UI issue"

# Isolated session
hlvm ask --no-session-persistence "hello"

Output Formats

Format         Description
text           Human-readable streaming text (default)
json           Single JSON object with the final result
stream-json    Newline-delimited JSON events (NDJSON)

stream-json events:

{"type":"token","text":"Hello"}
{"type":"agent_event","event":{"type":"tool_start",...}}
{"type":"final","text":"...","stats":{...},"meta":{...}}

json output:

{"type":"result","result":"...","stats":{...},"meta":{...}}
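
The event shapes above can be consumed programmatically. Here is a minimal Python sketch that folds stream-json lines into the streamed text and the final event; it assumes each line is a standalone JSON object as documented (in practice you would read the lines from the hlvm ask process's stdout):

```python
import json

def parse_stream(lines):
    """Fold NDJSON events from `hlvm ask --output-format stream-json`
    into the streamed text plus the final event."""
    tokens, final = [], None
    for line in lines:
        event = json.loads(line)
        if event["type"] == "token":
            tokens.append(event["text"])       # incremental text
        elif event["type"] == "final":
            final = event                      # carries "text", "stats", "meta"
    return "".join(tokens), final

# Sample lines in the documented event shapes:
sample = [
    '{"type":"token","text":"Hello"}',
    '{"type":"token","text":" world"}',
    '{"type":"final","text":"Hello world","stats":{},"meta":{}}',
]
streamed, final = parse_stream(sample)
print(streamed)        # Hello world
print(final["text"])   # Hello world
```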

Permission Modes

Mode                 L0 (Read)       L1 (Write)           L2 (Destructive)
default              Auto-approve    Prompt               Prompt
acceptEdits          Auto-approve    Auto-approve         Prompt
plan                 Auto-approve    Prompt after plan    Prompt after plan
bypassPermissions    Auto-approve    Auto-approve         Auto-approve
dontAsk              Auto-approve    Auto-deny            Auto-deny

Tool safety levels:

  • L0: Safe read-only (read_file, list_files, search_code)
  • L1: Mutations (write_file, edit_file, shell_exec)
  • L2: High-risk (destructive shell commands, delete operations)

Priority order: deny > allow > mode > default
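
The precedence above can be sketched as a small resolution function. This is an illustrative model of the documented rules (deny > allow > mode > default), not HLVM's actual implementation; tool names and safety levels follow the lists above:

```python
# Illustrative model of ask's permission resolution -- not HLVM internals.
# Per-mode actions for safety levels L0/L1/L2, from the table above.
MODE_ACTIONS = {
    "default":           ["auto-approve", "prompt", "prompt"],
    "acceptEdits":       ["auto-approve", "auto-approve", "prompt"],
    "plan":              ["auto-approve", "prompt after plan", "prompt after plan"],
    "bypassPermissions": ["auto-approve", "auto-approve", "auto-approve"],
    "dontAsk":           ["auto-approve", "auto-deny", "auto-deny"],
}

def resolve(tool, level, mode="default", allowed=(), disallowed=()):
    """Apply the documented priority order: deny > allow > mode > default."""
    if tool in disallowed:
        return "deny"                     # --disallowedTools wins first
    if tool in allowed:
        return "allow"                    # then --allowedTools
    return MODE_ACTIONS[mode][level]      # then the permission mode

print(resolve("shell_exec", 1, disallowed=["shell_exec"]))   # deny
print(resolve("write_file", 1, allowed=["write_file"]))      # allow
print(resolve("write_file", 1, mode="acceptEdits"))          # auto-approve
print(resolve("read_file", 0, mode="dontAsk"))               # auto-approve
```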


hlvm chat

Plain one-turn LLM chat. No agent orchestration, no tool calling.

hlvm chat "<query>"

Options:

Flag                        Description
--model <provider/model>    Use a specific AI model
--help, -h                  Show help

Examples:

hlvm chat "hello"
hlvm chat --model openai/gpt-4o "summarize this repo"

hlvm model

Manage AI models. Inspired by Ollama's CLI.

hlvm model [command]

Subcommands:

Command        Description
(none)         Show current default model and availability
list           List all available models (grouped by provider)
set <name>     Set default model (persisted to ~/.hlvm/settings.json)
show <name>    Show model details (params, capabilities, size)
pull <name>    Download a model (Ollama only)
rm <name>      Remove a model (Ollama only)

Examples:

hlvm model                                         # Show current default
hlvm model list                                    # List all models
hlvm model set claude-code/claude-haiku-4-5-20251001  # Set default
hlvm model show llama3.1:8b                        # Model details
hlvm model pull ollama/llama3.2:latest             # Download
hlvm model rm llama3.2:latest                      # Remove

The set command persists to the same config SSOT used by the REPL model picker, hlvm ask, and the ai() API.


hlvm ai

Hint: Prefer hlvm model for model management. hlvm ai commands still work but will show deprecation hints.

AI model setup and management.

hlvm ai <command>

Subcommands:

Command         Description
setup           Ensure the default model is installed
pull <model>    Download a model (Ollama only)
list            List installed models
downloads       Show active model downloads
browse          Interactive model browser (TUI)
model           Show current default model

Examples:

hlvm ai setup                        # Ensure default model ready
hlvm ai pull ollama/llama3.2:latest  # Download a model
hlvm ai list                         # List installed models
hlvm ai browse                       # Interactive model picker
hlvm ai model                        # Show current model

hlvm serve

Start the HTTP runtime host. Used by GUI clients and host-backed CLI surfaces.

hlvm serve

Starts on port 11435.

Endpoints:

Method    Path                  Description
POST      /api/chat             Submit chat, eval, or agent turns
GET       /api/chat/messages    Read active conversation messages
GET       /api/chat/stream      Subscribe to active conversation updates
GET       /health               Health check

Examples:

hlvm serve

# Health check
curl http://localhost:11435/health

# Evaluate HQL
curl -X POST http://localhost:11435/api/chat \
  -H "Content-Type: application/json" \
  -d '{"mode":"eval","messages":[{"role":"user","content":"(+ 1 2)"}]}'

# Chat
curl -X POST http://localhost:11435/api/chat \
  -H "Content-Type: application/json" \
  -d '{"mode":"chat","messages":[{"role":"user","content":"hello"}]}'

GUI-visible top-level submission uses POST /api/chat. Internal compatibility endpoints may still exist, but they are not part of the public runtime-host contract.
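
The request bodies in the curl examples above can also be built and sent from code. A minimal Python sketch, assuming only the payload shapes shown above; actually calling post_chat() requires a running hlvm serve on port 11435:

```python
import json
import urllib.request

HOST = "http://localhost:11435"

def make_payload(mode, content, role="user"):
    """Build a /api/chat request body in the shape used by the
    curl examples above ("eval" or "chat" mode)."""
    return {"mode": mode, "messages": [{"role": role, "content": content}]}

def post_chat(payload):
    """POST the payload to /api/chat (needs `hlvm serve` running)."""
    req = urllib.request.Request(
        f"{HOST}/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Payloads matching the two curl examples:
eval_body = make_payload("eval", "(+ 1 2)")
chat_body = make_payload("chat", "hello")
print(json.dumps(eval_body))
```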


hlvm hql

HQL language toolchain commands.

hlvm hql init

Initialize a new HQL project.

hlvm hql init [options]

Options:

Flag          Description
-y, --yes     Use default values without prompting
--help, -h    Show help

What gets created:

  • hql.json — Package configuration
  • mod.hql — Sample code (if it doesn't already exist)
  • README.md — Minimal template (if it doesn't already exist)
  • .gitignore — HQL-specific entries

Examples:

hlvm hql init        # Interactive: prompts for name, version, entry point
hlvm hql init -y     # Quick: auto-generates configuration
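
For orientation, a generated hql.json might look like the following. This fragment is illustrative: the field names come from the documented metadata (name, version, and an entry-point export), but the exact keys and values in your project will be whatever init writes:

```json
{
  "name": "my-package",
  "version": "0.1.0",
  "exports": "./mod.hql"
}
```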

hlvm hql compile

Compile HQL to JavaScript or native binary.

hlvm hql compile <file.hql> [options]

Options:

Flag                   Description
--target <target>      Compilation target (default: js)
-o, --output <path>    Output file path
--release              Production build (minified, optimized)
--no-sourcemap         Disable source map generation
--verbose, -v          Enable verbose logging
--time                 Show performance timing
--debug                Show detailed error info
--help, -h             Show help

Targets:

Target         Description
js             JavaScript output (default)
native         Binary for current platform
all            All platforms
linux          Linux x86_64 binary
macos          macOS ARM64 binary (M1/M2/M3/M4)
macos-intel    macOS x86_64 binary (Intel)
windows        Windows x86_64 binary

Examples:

hlvm hql compile app.hql                        # Dev build
hlvm hql compile app.hql --release              # Production build (minified)
hlvm hql compile app.hql --release --no-sourcemap  # Smallest output
hlvm hql compile app.hql --target native        # Native binary
hlvm hql compile app.hql --target all           # All platforms
hlvm hql compile app.hql --target linux         # Cross-compile to Linux
hlvm hql compile app.hql --target native -o myapp  # Custom output name

hlvm hql publish

Publish an HQL package to JSR and/or NPM.

hlvm hql publish [file] [options]

Options:

Flag                     Description
-r, --registry <name>    Target registry: jsr, npm, or all (default: all)
-v, --version <ver>      Explicit version (skips auto-bump)
-y, --yes                Auto-accept defaults (no prompts)
--dry-run                Preview without publishing
--verbose                Enable verbose logging
--help, -h               Show help

Examples:

hlvm hql publish                # Auto-bump + publish to both registries
hlvm hql publish -y             # Non-interactive
hlvm hql publish -r jsr         # JSR only
hlvm hql publish -r npm         # NPM only
hlvm hql publish -v 1.0.0       # Explicit version
hlvm hql publish --dry-run      # Preview only
hlvm hql publish src/lib.hql    # Explicit entry file

Workflow:

  1. Checks for hql.json (prompts to create if missing)
  2. Auto-bumps patch version (unless --version specified)
  3. Builds and publishes to selected registries
  4. Updates hql.json with new version

hlvm mcp

Model Context Protocol server management.

hlvm mcp <command>

Subcommands:

Command                   Description
add <name> -- <cmd...>    Add a stdio MCP server
add <name> --url <url>    Add an HTTP MCP server
list                      List configured servers
remove <name>             Remove a server
login <name>              OAuth authentication for HTTP server
logout <name>             Remove stored OAuth token

Options:

Flag               Description
--env KEY=VALUE    Environment variable (repeatable, for add)

Examples:

hlvm mcp add github -- npx -y @modelcontextprotocol/server-github
hlvm mcp add db --url http://localhost:8080
hlvm mcp add sentry --env SENTRY_TOKEN=abc123 -- npx @sentry/mcp-server
hlvm mcp list
hlvm mcp remove github
hlvm mcp login notion
hlvm mcp logout notion

hlvm ollama

Explicit compatibility bridge to a system Ollama installation.

hlvm ollama serve

This command is never used by HLVM's embedded runtime, bootstrap, or --model auto pipeline. It requires Ollama to be installed on your system. Download from ollama.ai.


hlvm upgrade

Check for updates and show upgrade instructions.

hlvm upgrade [options]

Options:

Flag           Description
-c, --check    Check for updates without installing
--help, -h     Show help

hlvm uninstall

Remove HLVM from the system.

hlvm uninstall [options]

Options:

Flag          Description
-y, --yes     Skip confirmation prompt
--help, -h    Show help

What gets removed:

  • ~/.hlvm/bin/hlvm — The binary
  • ~/.hlvm/ — Config and cache directory

You will need to manually remove the PATH entry from your shell config.


Model Identification

Models use <provider>/<model-name> format:

ollama/llama3.1:8b         # Local Ollama
ollama/llama3.2:latest     # Local Ollama
openai/gpt-4o              # OpenAI
anthropic/claude-3-5-sonnet # Anthropic
google/gemini-2.0-flash    # Google
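
Since local model tags can themselves contain colons (e.g. llama3.1:8b), the provider is everything before the first slash. An illustrative Python sketch of splitting an identifier (not part of the HLVM API):

```python
def split_model_id(model_id):
    """Split '<provider>/<model-name>' on the first slash only,
    so Ollama-style tags like 'llama3.1:8b' stay intact."""
    provider, _, name = model_id.partition("/")
    return provider, name

print(split_model_id("ollama/llama3.1:8b"))    # ('ollama', 'llama3.1:8b')
print(split_model_id("openai/gpt-4o"))         # ('openai', 'gpt-4o')
```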

Environment Variables

Variable                     Description
HLVM_DIR                     Override HLVM config directory (default: ~/.hlvm)
HLVM_AGENT_ENGINE            Select agent engine: sdk or legacy
HLVM_DISABLE_AI_AUTOSTART    Skip default model download
HLVM_FORCE_SETUP             Force first-run setup
HLVM_ASK_FIXTURE_PATH        Testing fixture path (internal)
HLVM_REPL_PORT               Override REPL server port

Configuration Files

File                     Description
~/.hlvm/settings.json    Unified config: model, policy, hooks (replaces config.json)
~/.hlvm/                 Global config and cache directory
.hlvm/prompt.md          Per-project agent instructions
.hlvm/hooks.json         Per-workspace hook overrides (merged under settings.json hooks)
hql.json                 HQL package metadata (name, version, exports)

Exit Codes

Code    Meaning
0       Success
1       General failure