Production-Ready Security Foundation
Letting AI read and write enterprise systems means every operation can affect real data. Security isn't an add-on — it's the platform's foundational architecture.
Identity Authentication & Data Isolation (Available)
Each user can only access their own conversations, files, and knowledge bases. Uploaded files are stored with user-level isolation. SSE streaming supports token authentication for secure real-time communication. First launch walks you through admin account creation; no config file editing is needed.
Admin Panel (Available)
Operations Overview
User count, conversation count, message count, and token-consumption statistics; 14-day activity trend charts; model usage distribution; and token consumption broken down by Agent.
Connector Statistics
Per-connector call volume, success rate, average latency, and last-call time.
User Management
Search, pagination, create/edit, role switching, password reset, and account enable/disable.
Operation Confirmation Gate (Coming Soon)
The Agent automatically pauses before executing data modifications, approval initiations, and similar operations, and sends a confirmation request to designated personnel.
Critical for Hub mode: when the Agent reads from a CRM, writes to an ERP, and sends notifications via Feishu, each modification point in the cross-system chain can require user confirmation.
Configurable per Action: GET requests pass through by default, while POST/PUT/DELETE default to requiring confirmation.
Audit Logging (Coming Soon)
A complete record of every operation: timestamp, user, connector, Action, parameters, and response. Supports conditional filtering and export, meeting classified protection and compliance audit requirements.
Organization & Multi-Tenancy (Coming Soon)
Organization-level resource management: admins configure connectors, Agents, and knowledge bases, then publish them to organization members. Three visibility levels: private, organization-shared, and public. Each member accesses shared resources with their own identity and credentials.
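The three visibility levels reduce to a simple access rule: public resources are open to all, organization-shared resources require matching orgs, and private resources are owner-only. A hypothetical check (names and signature are assumptions for illustration):

```python
from enum import Enum

class Visibility(Enum):
    PRIVATE = "private"
    ORGANIZATION = "organization"  # shared within the owner's org
    PUBLIC = "public"

def can_access(viewer_id: str, viewer_org: str,
               owner_id: str, owner_org: str,
               visibility: Visibility) -> bool:
    """Access check for the three visibility levels."""
    if visibility is Visibility.PUBLIC:
        return True
    if visibility is Visibility.ORGANIZATION:
        return viewer_org == owner_org
    return viewer_id == owner_id  # PRIVATE: owner only
```

Note this governs only *visibility*; per the text above, each member still authenticates to the underlying systems with their own credentials.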
Deployment Options
Self-Hosted (Currently Recommended)
Single process + SQLite, zero external dependencies. Requires Python 3.11+ and Node.js 18+.
git clone https://github.com/fim-ai/fim-agent.git
cd fim-agent && cp example.env .env && ./start.sh
Only LLM_API_KEY is required.
Docker Deployment
Docker Compose setup with API + SQLite + optional Langfuse. Suitable for standardized delivery and operations.
On-Premise Private Deployment
For government, finance, and other clients with strict data residency requirements. All dependencies can be installed offline, and air-gapped environments are supported. Compatible with domestic trusted computing platforms.
Model Compatibility (Available)
Compatible with any provider that exposes an OpenAI-style /v1/chat/completions endpoint: OpenAI, Anthropic, DeepSeek, Qwen, Ollama, vLLM, etc. Switch by changing LLM_BASE_URL and LLM_MODEL; no business logic changes needed.
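Because every provider speaks the same wire format, the request itself never changes; only LLM_BASE_URL, LLM_MODEL, and LLM_API_KEY do. A stdlib sketch that builds such a request (it assumes, following the OpenAI convention, that LLM_BASE_URL already includes the /v1 prefix; the function name is illustrative):

```python
import json
import os
from urllib.request import Request

def build_chat_request(messages: list[dict]) -> Request:
    """Build a chat-completions request for any OpenAI-compatible API.

    Swapping providers means changing only the three env vars;
    the payload and endpoint path stay identical.
    """
    base_url = os.environ["LLM_BASE_URL"].rstrip("/")
    payload = {"model": os.environ["LLM_MODEL"], "messages": messages}
    return Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['LLM_API_KEY']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Pointing LLM_BASE_URL at, say, a local Ollama or vLLM server is enough to switch backends.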
Multi-model configuration: assign different models by role (general / fast / vision / compact). The Agent can switch models between steps as needed.
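Role-based selection can be sketched as per-role overrides falling back to the default model. Only LLM_MODEL comes from this document; the per-role env-var naming (e.g. LLM_MODEL_FAST) is an assumption for illustration:

```python
import os

def resolve_model(role: str) -> str:
    """Resolve which model serves a given role for the current step.

    Checks a hypothetical per-role override (LLM_MODEL_<ROLE>) and
    falls back to the default LLM_MODEL when no override is set.
    """
    override = os.environ.get(f"LLM_MODEL_{role.upper()}")
    return override or os.environ["LLM_MODEL"]
```

With this shape, a cheap model can handle "fast" steps (classification, routing) while the default model handles everything else.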