Usage & Enterprise Capabilities
Key Benefits
- Unified AI Access: One dashboard for GPT-4, Claude, Gemini, Llama, and more.
- Visual Workflow Engine: Drag-and-drop builder for complex, multi-step AI processes.
- Total Cost Control: Real-time spend tracking and alerts across all AI providers.
- Collaborative & Private: Team features with E2E encryption and self-hosting options.
- Prompt Engineering Suite: Version, test, and optimize prompts in a dedicated workspace.
Production Architecture Overview
- Core Server: The main backend application (Node.js/Python).
- PostgreSQL: Primary database for storing workflows, user data, and metadata.
- Redis: For real-time features, session management, and caching model responses.
- Object Storage (S3/MinIO): For storing file uploads, generated assets, and workflow artifacts.
- Reverse Proxy (NGINX/Traefik): For SSL termination, load balancing, and secure access.
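As a sketch of the reverse-proxy layer, a minimal NGINX server block might look like the following. The domain `core.example.com` and the certificate paths are placeholders, and the upstream port 3000 matches the Docker Compose setup below:

```nginx
# Terminate TLS and proxy to the Core server (placeholder domain and cert paths)
server {
    listen 443 ssl;
    server_name core.example.com;

    ssl_certificate     /etc/letsencrypt/live/core.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/core.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # WebSocket upgrade headers for real-time collaboration features
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# Redirect plain HTTP to HTTPS
server {
    listen 80;
    server_name core.example.com;
    return 301 https://$host$request_uri;
}
```

Traefik users would express the same thing with router and TLS labels on the `core` container instead.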
Implementation Blueprint
Prerequisites
sudo apt update && sudo apt upgrade -y
sudo apt install docker.io docker-compose -y
sudo systemctl enable docker
sudo systemctl start docker
Docker Compose Production Setup
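The compose file below expects a 32-character `ENCRYPTION_KEY`. One way to generate one (assuming `openssl`, which ships with stock Ubuntu) is:

```shell
# 16 random bytes, hex-encoded = exactly 32 characters
openssl rand -hex 16
```

Substitute the output for the `your-32-char-encryption-key-here` placeholder, and treat the database and MinIO passwords the same way.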
version: '3.8'
services:
  core:
    image: ghcr.io/core-ai/core-server:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://core:password@db:5432/core
      - REDIS_URL=redis://redis:6379
      - STORAGE_ENDPOINT=http://minio:9000
      - STORAGE_ACCESS_KEY=coreaccesskey
      - STORAGE_SECRET_KEY=coresecretkey
      - ENCRYPTION_KEY=your-32-char-encryption-key-here
    depends_on:
      - db
      - redis
      - minio
    volumes:
      - core_data:/app/data
    restart: always
  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=core
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=core
    volumes:
      - pg_data:/var/lib/postgresql/data
    restart: always
  redis:
    image: redis:7-alpine
    restart: always
  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      - MINIO_ROOT_USER=coreaccesskey
      - MINIO_ROOT_PASSWORD=coresecretkey
    volumes:
      - minio_data:/data
    restart: always
volumes:
  core_data:
  pg_data:
  minio_data:
Kubernetes Production Deployment (Recommended)
# Deploy Core with its dependencies using a Helm chart or manifests
kubectl apply -f core-namespace.yaml
kubectl apply -f core-postgresql.yaml
kubectl apply -f core-redis.yaml
kubectl apply -f core-minio.yaml
kubectl apply -f core-server.yaml
- Horizontal Scaling: Easily scale the Core server pods based on user load and workflow execution demand.
- High Availability: Run redundant instances of Core and its databases for fault tolerance.
- Managed Secrets: Use Kubernetes Secrets for secure storage of API keys and encryption keys.
- Efficient Resource Use: Allocate specific resources to Core, especially for memory-intensive model interactions.
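For the managed-secrets point above, a sketch of a Kubernetes Secret manifest might look like this. The `core` namespace, the `core-secrets` name, and the key names are assumptions to match the Compose setup; `stringData` lets you write plain text and have the API server base64-encode it on admission:

```yaml
# Hypothetical Secret consumed by the core-server Deployment via envFrom or secretKeyRef
apiVersion: v1
kind: Secret
metadata:
  name: core-secrets
  namespace: core
type: Opaque
stringData:
  DATABASE_URL: postgresql://core:change-me@core-postgresql:5432/core
  STORAGE_SECRET_KEY: change-me
  ENCRYPTION_KEY: replace-with-a-real-32-char-key
```

In practice, prefer applying such a manifest from a sealed-secrets or external-secrets pipeline rather than committing plaintext values to a repository.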
Scaling Strategy
- Database Optimization: Use read replicas for PostgreSQL to handle analytics queries and separate them from main operations.
- Redis Clustering: Implement a Redis cluster for high-throughput caching and real-time synchronization in large teams.
- Job Queue for Workflows: Integrate a dedicated job queue (e.g., BullMQ with Redis) for managing long-running, complex AI workflows asynchronously.
- CDN for Static Assets: Serve the Core frontend application from a global CDN to ensure fast load times for distributed teams.
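For the Redis clustering item, the relevant `redis.conf` directives are shown below as a minimal fragment. Each cluster node needs its own copy, and a replicated three-shard cluster requires at least six nodes:

```conf
# Enable cluster mode on this node
cluster-enabled yes
# Per-node cluster state file, maintained by Redis itself
cluster-config-file nodes.conf
# Milliseconds before an unreachable node is considered failed
cluster-node-timeout 5000
# Persist data so cached state survives restarts
appendonly yes
```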
Backup & Safety
- Comprehensive Backups: Implement automated, encrypted backups for PostgreSQL data, MinIO object storage, and Core's local configuration volumes.
- Disaster Recovery Plan: Maintain a documented procedure for restoring the entire Core platform from backups.
- Network Security: Deploy Core within a private VPC or behind a VPN. Use the reverse proxy to enforce HTTPS and implement strict firewall rules.
- Secret Rotation: Establish a routine for rotating database passwords, storage credentials, and the master encryption key.
- Audit Logging: Ensure all API calls, workflow executions, and user management actions are logged for security and compliance auditing.
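The encrypted-backup step for PostgreSQL can be sketched as below. The dump normally comes from `pg_dump` (shown as a comment, since it needs a reachable database); the inline passphrase is illustrative only, and in production it should be read from a secret store:

```shell
# Produce the dump against the live database:
#   pg_dump "$DATABASE_URL" > core_backup.sql
# For this sketch, stand in a tiny file for the dump:
printf 'SELECT 1;\n' > core_backup.sql

# Encrypt with AES-256 (key derived via PBKDF2 from the passphrase)
BACKUP_PASSPHRASE="change-me"
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in core_backup.sql -out core_backup.sql.enc \
  -pass pass:"$BACKUP_PASSPHRASE"

# Restore check: decrypt and compare against the original
openssl enc -d -aes-256-cbc -pbkdf2 \
  -in core_backup.sql.enc -out core_backup.restored.sql \
  -pass pass:"$BACKUP_PASSPHRASE"
cmp core_backup.sql core_backup.restored.sql && echo "round-trip OK"
```

Running the restore check as part of the backup job is what turns a backup routine into a disaster-recovery procedure you can actually trust.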
Recommended Hosting for Core
For systems like Core, we recommend high-performance VPS hosting. Hostinger offers dedicated setups for open-source tools with one-click installer scripts and 24/7 priority support.
Explore Alternative AI Infrastructure
OpenClaw
OpenClaw is an open-source platform for autonomous AI workflows, data processing, and automation. It is production-ready, scalable, and suitable for enterprise and research deployments.
Ollama
Ollama is an open-source tool that allows you to run, create, and share large language models locally on your own hardware.