Security & Deployment

Production-Ready Security Foundation

Letting AI read from and write to enterprise systems means every operation can affect real data. Security isn't an add-on; it's the platform's foundational architecture.

Identity Authentication & Data Isolation

available

Each user can only access their own conversations, files, and knowledge bases, and uploaded files are stored with user-level isolation. SSE streaming supports token authentication for secure real-time communication. First launch guides you through admin account creation; no config-file editing is needed.

Admin Panel

available

Operations Overview

User count, conversation count, message count, and token consumption statistics, plus 14-day activity trend charts, model usage distribution, and a token consumption breakdown by Agent.

Connector Statistics

Per-connector statistics: call volume, success rate, average latency, and last call time.

User Management

Search, pagination, creation, editing, role switching, password reset, and account enable/disable.

Operation Confirmation Gate

coming

The Agent automatically pauses before executing data modifications, approval initiations, and similar sensitive operations, and sends a confirmation request to designated personnel. Four responses are available:

Approve this time: approve the current operation
Always approve: auto-approve subsequent calls of this operation
Reject this time: reject the current operation
Always reject: auto-reject subsequent calls of this operation

Critical for Hub mode: when the Agent reads from CRM, writes to ERP, and sends notifications via Feishu, each modification point in the cross-system chain can require user confirmation.

Configurable per Action: GET requests default to pass-through, POST/PUT/DELETE default to requiring confirmation.
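The gate logic above can be sketched as a small policy table: method-based defaults, overridden by any remembered "always approve" / "always reject" choice. Names and return values here are hypothetical; the real configuration format may differ.

```python
from enum import Enum

class Decision(Enum):
    PASS_THROUGH = "pass_through"
    REQUIRE_CONFIRMATION = "require_confirmation"

# Default per-method policy: reads pass through, writes pause for confirmation.
DEFAULTS = {
    "GET": Decision.PASS_THROUGH,
    "POST": Decision.REQUIRE_CONFIRMATION,
    "PUT": Decision.REQUIRE_CONFIRMATION,
    "DELETE": Decision.REQUIRE_CONFIRMATION,
}

# "Always approve" / "Always reject" choices remembered per (connector, action).
remembered: dict[tuple[str, str], bool] = {}

def gate(connector: str, action: str, method: str) -> str:
    """Decide whether an Action runs immediately or waits for confirmation."""
    key = (connector, action)
    if key in remembered:
        return "auto_approved" if remembered[key] else "auto_rejected"
    policy = DEFAULTS.get(method.upper(), Decision.REQUIRE_CONFIRMATION)
    if policy is Decision.PASS_THROUGH:
        return "executed"
    return "awaiting_confirmation"
```

Unknown methods fall back to requiring confirmation, which keeps the gate fail-safe for any new Action type.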

Audit Logging

coming

A complete record of every operation: timestamp, user, connector, Action, parameters, and response. Supports conditional filtering and export, meeting classified-protection and compliance audit requirements.
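The record fields listed above map naturally onto a flat, append-only entry. A minimal sketch, with hypothetical field and function names (the real storage schema may differ):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditEntry:
    """One immutable audit record, mirroring the fields listed above."""
    timestamp: str
    user: str
    connector: str
    action: str
    parameters: dict
    response_status: int

def matches(entry: AuditEntry, **conditions) -> bool:
    """Conditional filtering: every given field must equal the entry's value."""
    return all(getattr(entry, k) == v for k, v in conditions.items())
```

Export then reduces to serializing the filtered entries, since each record is self-contained.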

Organization & Multi-Tenancy

coming

Organization-level resource management: admins configure connectors, Agents, and knowledge bases, then publish to organization members. Three visibility levels: private, organization-shared, and public. Each member accesses shared resources with their own identity and credentials.
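The three visibility levels imply a simple access rule: public is open to everyone, organization-shared is scoped to the owner's organization, and private is owner-only. A hypothetical sketch (names are illustrative, not the product's API):

```python
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"
    ORG_SHARED = "organization"
    PUBLIC = "public"

def can_access(resource_owner: str, visibility: Visibility,
               user: str, user_org: str, owner_org: str) -> bool:
    """Access check for the three visibility levels described above."""
    if visibility is Visibility.PUBLIC:
        return True
    if visibility is Visibility.ORG_SHARED:
        return user_org == owner_org
    return user == resource_owner  # PRIVATE
```

Note that visibility only governs who can see a shared resource; per the text, each member still calls connectors with their own identity and credentials.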

Deployment Options

available

Self-Hosted (Currently Recommended)

A single process with SQLite and zero external dependencies. Requires Python 3.11+ and Node.js 18+.

git clone https://github.com/fim-ai/fim-agent.git
cd fim-agent && cp example.env .env && ./start.sh

Only LLM_API_KEY is required.

Docker Deployment

coming

Docker Compose setup with API + SQLite + optional Langfuse. Suitable for standardized delivery and operations.

On-Premise Private Deployment

coming

For government, finance, and other clients with strict data residency requirements. All dependencies can be installed offline, and air-gapped environments are supported. Compatible with domestic trusted computing platforms.

Model Compatibility

available

Compatible with any provider supporting the OpenAI /v1/chat/completions interface: OpenAI, Anthropic, DeepSeek, Qwen, Ollama, vLLM, etc. Switch by changing LLM_BASE_URL and LLM_MODEL; no business-logic changes needed.

Multi-model configuration: assign different models by role (general / fast / vision / compact). The Agent can switch models between steps as needed.
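Role-based model selection can be sketched as a lookup with fallback to the general model. Only LLM_BASE_URL and LLM_MODEL are named in the text; the per-role variable names below (LLM_MODEL_FAST, etc.) are assumptions for illustration.

```python
import os

# Hypothetical role -> env-var mapping; variable names beyond LLM_MODEL
# are assumptions, not documented configuration.
ROLE_ENV = {
    "general": "LLM_MODEL",
    "fast": "LLM_MODEL_FAST",
    "vision": "LLM_MODEL_VISION",
    "compact": "LLM_MODEL_COMPACT",
}

def model_for(role: str) -> str:
    """Pick the model for a role, falling back to the general LLM_MODEL."""
    default = os.environ.get("LLM_MODEL", "")
    return os.environ.get(ROLE_ENV.get(role, "LLM_MODEL"), default) or default
```

With this shape, a deployment that sets only LLM_MODEL uses one model everywhere, and each role can be upgraded independently later.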

Developers

Get started in 3 minutes

git clone https://github.com/fim-ai/fim-agent.git && ./start.sh

Enterprise

Learn how FIM Agent fits your enterprise. Get a customized deployment plan.