Fleet feels like a deliberately minimal, low-latency editor front end coupled to a full IntelliJ code-processing backend, rather than a faster VS Code or an Electron-based fork. Interaction happens through a lightweight editing surface that can run purely locally or delegate compute to remote Fleet servers; the visible UI is distinct from the IntelliJ engine it drives. The primary developer value is the split of responsibilities: an uncluttered, fast editor for day-to-day editing, with on-demand, engine-level code analysis and task execution provided by a distributed IntelliJ processing layer.
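To make that split concrete, here is a minimal Kotlin sketch of the client/engine separation the review describes. All names (CodeEngine, LocalEngine, RemoteEngine, EditorSurface) and the endpoint URL are hypothetical illustrations, not Fleet's actual API.

```kotlin
// Hypothetical sketch of the editor/engine split; these types are NOT Fleet's API.
import kotlinx.coroutines.*

interface CodeEngine {
    // Heavy, language-aware work lives behind this boundary.
    suspend fun analyze(path: String): List<String>
}

class LocalEngine : CodeEngine {
    override suspend fun analyze(path: String): List<String> =
        withContext(Dispatchers.Default) { listOf("ok: $path") } // in-process analysis
}

class RemoteEngine(private val endpoint: String) : CodeEngine {
    override suspend fun analyze(path: String): List<String> =
        withContext(Dispatchers.IO) { listOf("ok via $endpoint: $path") } // network call stub
}

class EditorSurface(private val engine: CodeEngine, private val scope: CoroutineScope) {
    fun onFileSaved(path: String) {
        // The edit loop never blocks: analysis is handed off to the engine.
        scope.launch { println("diagnostics for $path -> ${engine.analyze(path)}") }
    }
}

fun main() = runBlocking {
    // Swap in RemoteEngine("https://fleet.example") to model cloud offload.
    EditorSurface(LocalEngine(), this).onFileSaved("src/Main.kt")
}
```

The design point is simply that the engine is an interface the editor talks to asynchronously, so exchanging local for remote compute does not change the editing surface.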
Intelligence & Context Management
Fleet’s AI capability is implemented as server-side services layered on top of the IntelliJ code-processing engine. Rather than shipping a bundled LLM, Fleet exposes an AI Assistant (server-side) and a workflow layer oriented around agentic jobs: structured task definitions, programmatic context assembly, asynchronous runs, and isolated execution contexts. Context assembly is explicit: the system selects and packages code fragments and metadata for each task run rather than relying on a fixed single-file view.
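A small sketch of what explicit context assembly can look like, under stated assumptions: the task object carries exactly the fragments and metadata it was given, rather than an implicit "whatever file is open" view. ContextFragment, AgentTask, and assembleTask are hypothetical names, not Fleet's real types.

```kotlin
// Hypothetical types illustrating explicit, per-run context packaging.
import java.util.UUID

data class ContextFragment(
    val path: String,      // provenance: where the fragment came from
    val startLine: Int,
    val endLine: Int,
    val text: String,
)

data class AgentTask(
    val id: String,
    val instruction: String,
    val context: List<ContextFragment>, // exactly what the run may see
    val metadata: Map<String, String>,  // e.g. language, project hints
)

// Package only the fragments a given run actually needs.
fun assembleTask(instruction: String, fragments: List<ContextFragment>): AgentTask =
    AgentTask(
        id = UUID.randomUUID().toString(),
        instruction = instruction,
        context = fragments,
        metadata = mapOf("language" to "kotlin"),
    )
```

Because the context is an explicit value rather than ambient editor state, each run's inputs can in principle be logged, diffed, and audited, which is what separates this model from a fixed single-file view.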
Indexing and semantic analysis are delegated to the IntelliJ engine running either locally or in the Fleet distributed backend; Fleet therefore leverages the engine’s language-aware parsing, type resolution, and project model to build the context slices, with provenance, that AI tasks consume. There are no published native model bindings or token-window claims for Fleet in 2026; long-context reasoning is handled operationally by assembling targeted code slices and running asynchronous agentic tasks rather than by depending on a single oversized context window. The AI Assistant is a server-side dependency and may cease functioning where Fleet’s hosted services are unavailable; local-only editing and engine-driven analysis remain usable on existing installs.
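That operational pattern, targeted slices plus asynchronous runs instead of one oversized window, can be sketched as follows. runAgentOnSlice is a stand-in for a server-side agent call; nothing here is Fleet's actual interface.

```kotlin
// Sketch: fan a large job out over targeted slices, one asynchronous run each,
// then merge the results. An assumption-level model, not Fleet's API.
import kotlinx.coroutines.*

suspend fun runAgentOnSlice(slice: String): String {
    delay(10) // stand-in for a remote, server-side agent run
    return "reviewed ${slice.length} chars"
}

suspend fun reviewLargeChange(slices: List<String>): List<String> = coroutineScope {
    slices
        .map { slice -> async { runAgentOnSlice(slice) } } // independent runs
        .awaitAll()                                        // merge when all finish
}

fun main() = runBlocking {
    val slices = listOf("fun a() {}", "fun b() {}", "class C")
    println(reviewLargeChange(slices))
}
```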
Key Workflow Tools
- AI Chat pane (added in 1.47): an integrated conversational UI that surfaces engine-derived context and accepts structured task requests; the pane itself is a client front end to Fleet’s server-side assistant.
- Distributed editing/runtime model: the editor separates UI I/O from the IntelliJ code-processing engine, allowing local edit responsiveness with optional cloud offload for heavy analysis or agent runs.
- Asynchronous run isolation UI: task runs are isolated from the active editor state and can execute in background contexts (useful for agentic workflows and CI-like checks), preserving the local workspace; see the sketch after this list.
- Editor ergonomics updates: multiline-comment folding (1.47), new UI elements (1.44), improved Markdown preview (1.45), and language additions (Zig), all practical editor features focused on readability and light editing.
- Build-system conveniences: auto-import for Gradle (1.45) and project-aware refactor hints driven by the IntelliJ backend rather than client-only heuristics.
- Runtime plugin channel: the fleet.run.engine plugin remains a delivery path for engine-side improvements (latest recorded update 261.180.5, 2026-01-06), enabling orchestration between the editor client and backend services.
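As referenced in the run-isolation bullet above, here is a minimal model of keeping a background run independent of ongoing edits: the run receives an immutable snapshot of the workspace, so continued editing cannot change what it observes. Workspace and its methods are hypothetical, not Fleet's API.

```kotlin
// Hypothetical model of run isolation via an immutable workspace snapshot.
import kotlinx.coroutines.*

class Workspace {
    private val files = mutableMapOf<String, String>()
    fun edit(path: String, text: String) { files[path] = text }
    fun snapshot(): Map<String, String> = files.toMap() // defensive copy
}

fun main() = runBlocking {
    val ws = Workspace()
    ws.edit("Main.kt", "fun main() {}")

    val frozen = ws.snapshot()            // state handed to the background run
    val job = launch(Dispatchers.Default) {
        delay(20)                         // stand-in for an agentic check
        println("run saw: $frozen")       // still the pre-edit state
    }

    ws.edit("Main.kt", "fun main() { println(42) }") // editing continues freely
    job.join()
}
```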
Model Ecosystem & Security
Fleet does not publish a bundled set of native LLMs. Industry-standard models in 2026 include GPT-5, Claude 4.5 Sonnet, and Gemini 3.0, but Fleet provides no explicit native bindings to them or any guarantee of a default model choice. Deployment supports local execution for the editor/engine components and cloud offload for server-side services; the AI Assistant, however, is explicitly a server-side service and therefore depends on Fleet-hosted endpoints.
Privacy and compliance specifics are not enumerated: there is no public statement in the available material about Zero Data Retention, SOC 2, ISO certification, or native local LLM runtimes. Organizations requiring auditable, certified data handling or guaranteed on-prem-only inference should treat Fleet’s server-side AI features as a dependency to verify independently. MCP (Model Context Protocol) support is not specified.
The Verdict
Technical recommendation: choose Fleet when you want a lightweight editor with native access to IntelliJ’s language-aware code-processing capabilities and a distributed execution model that separates UI responsiveness from heavy analysis tasks. Fleet’s architecture gives it a clear advantage over an IDE plugin: it is a standalone client with native integration to the IntelliJ engine for parsing, refactoring, and project model access, not an add-on constrained by a host IDE’s lifecycle. That native integration yields more reliable project model information and safer, engine-level refactors than a third-party plugin can usually provide.
Caveats: Fleet’s AI Assistant is server-side and lacks published, persistent model bindings or compliance guarantees; the product was discontinued for new downloads after 2025-12-22, and server-dependent features may degrade for organizations that cannot host the backend. For teams that require long-term, on-prem model control and certified data handling, a conventional IDE plus an in-house AI plugin tied to verifiable on-prem inference will be a more predictable choice. For fast, cross-platform editing with engine-grade analysis and optional cloud agent runs, Fleet is technically compelling, provided you validate the availability of the Fleet backend and AI services for your operational horizon.