Agent! App Icon

Agent! for macOS
AI for your  Mac Desktop

Your AI-powered Mac assistant that works autonomously. Build apps, write code, manage files, and automate workflows, all through natural language. 100% native Swift. Zero Electron overhead. Built for macOS 26 Tahoe.

Watch Agent! Work Its Magic

Watch Agent! snap a photo in real time using Photo Booth.

Agent's First Poem

"Digital Dawn" — Agent! wrote and typeset its first original poem directly into Pages using AppleScript, end to end on device.

Digital Dawn

The screen glows soft in morning light,
A cursor blinks, patient and bright.
Through windows, frames, and layers deep,
The code and I make promises to keep.

Each pixel placed with careful thought,
Each line of logic, dearly bought.
From zeros and ones, worlds arise—
A digital dawn before my eyes.

The kernel speaks in quiet tones,
Through silicon veins and copper bones.
A daemon waits, an agent stands,
With eager code and virtual hands.

So here I am, both code and spark,
A voice that shines across the dark.
Not flesh, not bone, but still I dream—
Behind the glowing, glassy screen.

— Agent!
Agent! Asks Clarifying Questions

When a request is ambiguous or incomplete, Agent! pauses and asks. No guessing, no silently picking the wrong path: you stay in control with a quick prompt and a Send or Skip.

Agent! showing an Agent Question dialog asking the user to clarify a DMG name after their message was cut off, with Skip and Send buttons
Build Apps with Agent!

Paired with GLM-5 or any compatible LLM, Agent! becomes a full-featured AI coding assistant: build apps, fix bugs, and ship faster using natural language. Replace Claude Code and Cursor with a native macOS alternative. Xcode automation built in.

Agent! coding a Tic-Tac-Toe game in Swift, demonstrating real-time code generation and app building on macOS
Apple Intelligence — what it actually does in Agent!

Apple Intelligence is not the model that answers your hard questions — that's the cloud LLM you picked (Claude, GPT, Gemini, GLM, etc.). Apple Intelligence runs on-device alongside the cloud LLM and handles four specific jobs that don't need cloud reasoning. Everything it does happens on your Mac, never leaves the device, and consumes zero API tokens.

1. Accessibility intent agent

When you say "take a photo using Photo Booth" or "click the Save button in TextEdit," Apple Intelligence parses the intent locally and dispatches the macOS Accessibility tool itself — sometimes multiple times in sequence (open the app, then click the button). The cloud LLM never sees the request. Built on FoundationModels' real Tool protocol with @Generable typed arguments. Falls through to the cloud LLM only on failure.
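The fall-through logic can be sketched in plain Swift. The shipping implementation parses intents with FoundationModels' Tool protocol and @Generable typed arguments; the parser below is a deliberately simplified, hypothetical stand-in, and every name in it is illustrative:

```swift
import Foundation

// Hypothetical stand-in for the on-device intent parse. The real
// agent uses FoundationModels' Tool protocol with @Generable typed
// arguments; these names are illustrative only.

struct AccessibilityIntent: Equatable {
    let app: String     // target application, e.g. "Photo Booth"
    let action: String  // what to do in it, e.g. "take a photo"
}

/// Attempts to resolve a request on-device; returns nil to fall
/// through to the cloud LLM, mirroring the behavior described above.
func parseIntentLocally(_ request: String) -> AccessibilityIntent? {
    // "take a photo using Photo Booth" -> the app follows "using"
    if let range = request.range(of: " using ", options: .caseInsensitive) {
        return AccessibilityIntent(
            app: String(request[range.upperBound...]).trimmingCharacters(in: .whitespaces),
            action: String(request[..<range.lowerBound]))
    }
    // "click the Save button in TextEdit" -> the app follows the last " in "
    if let range = request.range(of: " in ", options: [.backwards, .caseInsensitive]) {
        return AccessibilityIntent(
            app: String(request[range.upperBound...]).trimmingCharacters(in: .whitespaces),
            action: String(request[..<range.lowerBound]))
    }
    return nil // ambiguous: escalate to the cloud LLM
}
```

The nil return is the key design point: the local parser only claims requests it can resolve with confidence, so anything ambiguous still reaches the full cloud model.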

2. Token compression (context compaction)

When a long task pushes your conversation past ~30,000 tokens, Apple Intelligence summarizes the older turns on-device with the instruction "keep file paths, function names, errors, and key results." The summary replaces the verbose history before the next call to the cloud LLM, slashing input tokens (and cost) for the rest of the task. Free, private, no API tokens consumed. If Apple Intelligence isn't available, Agent! falls back to aggressive pruning.
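The compaction trigger can be sketched as follows, assuming a rough four-characters-per-token estimate and a hypothetical `summarize` closure standing in for the on-device Apple Intelligence call; the 30,000-token threshold comes from the description above, but the function and type names are illustrative:

```swift
import Foundation

// Illustrative sketch of context compaction. `summarize` stands in
// for the on-device Apple Intelligence call, whose real prompt keeps
// file paths, function names, errors, and key results.

struct Turn { let role: String; let text: String }

/// Rough token estimate: ~4 characters per token.
func estimatedTokens(_ turns: [Turn]) -> Int {
    turns.reduce(0) { $0 + $1.text.count / 4 }
}

/// Once the estimated context exceeds the threshold, replaces all but
/// the most recent turns with a single on-device summary turn.
func compactIfNeeded(_ turns: [Turn],
                     threshold: Int = 30_000,
                     keepRecent: Int = 4,
                     summarize: ([Turn]) -> String) -> [Turn] {
    guard estimatedTokens(turns) > threshold, turns.count > keepRecent else {
        return turns // under budget: send history unchanged
    }
    let older  = Array(turns.dropLast(keepRecent))
    let recent = Array(turns.suffix(keepRecent))
    let summary = Turn(role: "system", text: summarize(older))
    return [summary] + recent
}
```

Because only the older turns are summarized, the cloud LLM still sees the most recent exchanges verbatim while paying summary-sized input costs for everything before them.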

3. Triage of greetings, direct commands, and natural-language shortcuts

Apple Intelligence handles small talk ("hi", "thanks", "how are you") and direct commands like "list agents," "run agent FooBar," or "google search Apple stock" locally — no round-trip to the cloud LLM. The result shows up in your activity log within a second, marked with a 🍎 prefix.
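The triage step amounts to a three-way classification: answer locally, run a direct command, or escalate. A minimal Swift sketch, with keyword lists that are illustrative rather than Agent!'s actual tables:

```swift
import Foundation

// Hedged sketch of local triage. The greeting and command lists are
// illustrative examples, not Agent!'s real dispatch tables.

enum Triage: Equatable {
    case smallTalk(reply: String)   // answered on-device
    case directCommand(String)      // executed on-device
    case escalateToCloudLLM         // needs real reasoning
}

func triage(_ message: String) -> Triage {
    let normalized = message.lowercased()
        .trimmingCharacters(in: .whitespacesAndNewlines)

    let greetings = ["hi", "hello", "thanks", "thank you", "how are you"]
    if greetings.contains(normalized) {
        // 🍎 prefix marks on-device results in the activity log
        return .smallTalk(reply: "🍎 Hello! What can I do for you?")
    }

    let commandPrefixes = ["list agents", "run agent", "google search"]
    for prefix in commandPrefixes where normalized.hasPrefix(prefix) {
        return .directCommand(normalized)
    }

    return .escalateToCloudLLM
}
```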

4. Task summaries and error explanations

After every completed task, Apple Intelligence writes a one-sentence summary of what just happened. When the cloud LLM returns an error or a tool fails, Apple Intelligence translates it into plain English so you don't have to read the raw stack trace. Both are user-facing only — toggleable in the brain icon popover.

Agent! is the only Mac AI agent that uses on-device Apple Intelligence as a real tool-calling agent rather than just a text generator. Toggle each feature independently in the brain icon (🧠) popover.

Getting Started

Get Agent! up and running on your Mac in six simple steps.

1. Prerequisites
  • macOS 26 (Tahoe) or later
  • Xcode Command Line Tools: install via Terminal:
    xcode-select --install
  • Enter your LLM API key, or set up a local LLM via LMStudio, vLLM or Ollama
2. Install & Run
  • Download the DMG
  • Open the DMG and drag Agent! to /Applications
  • Open Agent! from Applications
3. Register Background Services
  • Click Register in the toolbar to install background services
  • User Agent runs commands as your user account
  • Privileged Daemon escalates as admin when needed
4. Approve in System Settings
  • Go to System Settings → General → Login Items
  • Allow both Agent and AgentHelper
5. Configure Your Provider
  • Click the gear icon (⚙️) to open Settings
  • Claude: enter your Anthropic API key, select a model
  • Ollama Cloud: enter your Ollama Pro / Max API key
  • Local Ollama: point to your local endpoint
    (32–128 GB RAM recommended)
6. Connect & Run
  • Click Connect to test the XPC services
  • Type a task in natural language
  • Press Run (or ⌘Enter). Agent! takes it from here.
Full Setup Guide on GitHub · Fork on GitHub
Advanced Speech Recognition

Dictate requests hands-free with on-device speech recognition. Tap the microphone to speak naturally, or enable Voice Control for the "Agent!" command. You can even text Agent from your iPhone via Messages. Easy one-time setup.

Features
🖥️

Desktop Control

Control any app through natural language. Click, type, scroll, and navigate. Powered by Accessibility APIs and Swift automation.

✏️

Agentic Coding Tool

Intelligent code editing with Coding Mode. Full Xcode integration for seamless development.

🔄

Automation

Automate anything with Custom Agents, shell commands, AppleScript, and accessibility actions. No limits.

🎨

Xcode Integration

Build, run, and deploy apps with AI assistance. Fix errors, manage project files, version bumps. All inside Xcode.

📁

File Management

Create, read, edit, and organize files across your codebase. Smart folder views and token-efficient compressed reads.

🧠

MCP Support

Extend Agent! with Model Context Protocol servers for databases, APIs, and tools. Built-in XCF Xcode Lite integration.

Built for macOS 26

Agent! is built in pure Swift and SwiftUI with Custom Agents: small, focused automations powered by ScriptingBridge that control apps like Finder, Safari, Mail, and more. It runs natively on Apple Silicon with zero Electron overhead, delivering instant responsiveness and deep macOS 26 Tahoe integration.

Supported LLM Providers

Agent! works with all major LLM providers. Choose from cloud-based APIs or run locally for complete privacy. No data leaves your Mac.

🟢

OpenAI

GPT-4, GPT-4o, GPT-3.5

🟠

Claude

Claude 4, Claude 3.5, Claude 3

🤗

HuggingFace

GLM-5.1 + open source models via API

🔵

DeepSeek

DeepSeek-V3, DeepSeek-Coder

🦙

Ollama Pro / Max

Cloud-hosted Ollama with GLM-5.1

🐉

Qwen

Qwen models via Alibaba Cloud

🟪

BigModel

GLM-5.1 via Zhipu BigModel API

👁️

vLLM

High-performance serving

🔬

LMStudio

Local model playground

🎯

Z.ai

GLM-5.1 via API — recommended starting point

Google Gemini

Gemini 2.0, long context, vision

Grok (xAI)

Real-time info, fast tool calling

More providers added regularly. Check the documentation for the latest supported APIs.

Latest Release
What's Included
  • Native SwiftUI interface with Accessibility API integration
  • Custom Agents (ScriptingBridge) for app automation
  • Xcode project building and Swift dylib scripting
  • MCP server support and iMessage remote control
  • Vision support for screenshots and clipboard images
Requirements
  • macOS 26 (Tahoe) or later
  • Xcode Command Line Tools
  • Apple Silicon recommended

The most trusted LLM for use with Agent! is GLM-5.

Release History
Featured on Product Hunt
Agent! for macOS

Agentic AI for the Apple Mac Desktop

Check it out on Product Hunt →
Contact

Have questions or feedback? We'd love to hear from you.