Usage & Enterprise Capabilities

Best for:
  • AI Research & Development Teams
  • Product Teams Building AI Features
  • Data Science and ML Engineering Groups
  • Content Creation and Marketing Agencies
  • Startups and Indie Developers
Core is the central nervous system for your AI operations, designed to eliminate the friction of switching between dozens of different AI tools, platforms, and API consoles. It provides a single, cohesive environment where you can access all major AI models, build complex multi-model workflows visually, manage costs, collaborate with your team, and maintain full control over your data and intellectual property.
The platform operates on a local-first principle, ensuring your prompts, workflows, and sensitive data remain on your machine by default. Optional end-to-end encrypted cloud sync enables seamless team collaboration without sacrificing privacy. Core's open-source foundation means no hidden costs, no vendor lock-in, and the freedom to self-host the entire platform on your own infrastructure for maximum sovereignty.
Self-hosting Core transforms how your team interacts with AI, providing a private, powerful, and unified command center that streamlines development, reduces costs, and accelerates innovation.

Key Benefits

  • Unified AI Access: One dashboard for GPT-4, Claude, Gemini, Llama, and more.
  • Visual Workflow Engine: Drag-and-drop builder for complex, multi-step AI processes.
  • Total Cost Control: Real-time spend tracking and alerts across all AI providers.
  • Collaborative & Private: Team features with E2E encryption and self-hosting options.
  • Prompt Engineering Suite: Version, test, and optimize prompts in a dedicated workspace.

Production Architecture Overview

A production-grade Core self-hosted setup involves:
  • Core Server: The main backend application (Node.js/Python).
  • PostgreSQL: Primary database for storing workflows, user data, and metadata.
  • Redis: For real-time features, session management, and caching model responses.
  • Object Storage (S3/MinIO): For storing file uploads, generated assets, and workflow artifacts.
  • Reverse Proxy (NGINX/Traefik): For SSL termination, load balancing, and secure access.
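
As a sketch of the reverse-proxy layer, an NGINX server block might terminate TLS and forward traffic to the Core server on port 3000. The domain and certificate paths below are placeholders to adapt to your environment:

```nginx
server {
    listen 443 ssl;
    server_name core.example.com;  # placeholder domain

    ssl_certificate     /etc/letsencrypt/live/core.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/core.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket upgrade for real-time collaboration features
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Pair this with a plain port-80 server block that redirects to HTTPS so all traffic is encrypted.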

Implementation Blueprint

Prerequisites

sudo apt update && sudo apt upgrade -y
sudo apt install docker.io docker-compose -y
sudo systemctl enable docker
sudo systemctl start docker

Docker Compose Production Setup

This configuration runs Core with its required dependencies.
version: '3.8'

services:
  core:
    image: ghcr.io/core-ai/core-server:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://core:password@db:5432/core  # replace "password" with a strong secret
      - REDIS_URL=redis://redis:6379
      - STORAGE_ENDPOINT=http://minio:9000
      - STORAGE_ACCESS_KEY=coreaccesskey  # must match MINIO_ROOT_USER below
      - STORAGE_SECRET_KEY=coresecretkey  # must match MINIO_ROOT_PASSWORD below
      - ENCRYPTION_KEY=your-32-char-encryption-key-here  # generate a random 32-character key
    depends_on:
      - db
      - redis
      - minio
    volumes:
      - core_data:/app/data
    restart: always

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=core
      - POSTGRES_PASSWORD=password  # keep in sync with DATABASE_URL in the core service
      - POSTGRES_DB=core
    volumes:
      - pg_data:/var/lib/postgresql/data
    restart: always

  redis:
    image: redis:7-alpine
    restart: always

  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    environment:
      - MINIO_ROOT_USER=coreaccesskey
      - MINIO_ROOT_PASSWORD=coresecretkey
    volumes:
      - minio_data:/data
    restart: always

volumes:
  core_data:
  pg_data:
  minio_data:
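
Before the first `docker-compose up`, replace the placeholder credentials with real secrets. The ENCRYPTION_KEY in particular must be exactly 32 characters; a quick way to generate one (assuming OpenSSL is installed) is:

```shell
# 16 random bytes, hex-encoded, yield exactly 32 characters
ENCRYPTION_KEY=$(openssl rand -hex 16)
echo "${#ENCRYPTION_KEY}"   # prints 32

# A strong database password can be generated the same way
POSTGRES_PASSWORD=$(openssl rand -base64 24)
```

Store these in an `.env` file referenced by the compose file rather than hard-coding them, and keep that file out of version control.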

Kubernetes Production Deployment (Recommended)

Core is built for cloud-native environments and scales efficiently within Kubernetes.
# Deploy Core with its dependencies using a Helm chart or manifests
kubectl apply -f core-namespace.yaml
kubectl apply -f core-postgresql.yaml
kubectl apply -f core-redis.yaml
kubectl apply -f core-minio.yaml
kubectl apply -f core-server.yaml
Benefits:
  • Horizontal Scaling: Easily scale the Core server pods based on user load and workflow execution demand.
  • High Availability: Run redundant instances of Core and its databases for fault tolerance.
  • Managed Secrets: Use Kubernetes Secrets for secure storage of API keys and encryption keys.
  • Efficient Resource Use: Allocate specific resources to Core, especially for memory-intensive model interactions.
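
To illustrate the Managed Secrets point, a minimal Secret manifest might look like the sketch below. The names, namespace, and key fields are assumptions; real values should come from your secrets pipeline (or a tool like Sealed Secrets), never a committed file:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: core-secrets
  namespace: core        # assumes the namespace from core-namespace.yaml
type: Opaque
stringData:
  DATABASE_URL: postgresql://core:change-me@core-postgresql:5432/core
  ENCRYPTION_KEY: change-me-32-character-key-here!
```

The Core server Deployment can then pull these values via `envFrom.secretRef` instead of inlining them in the manifest.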

Scaling Strategy

  • Database Optimization: Use read replicas for PostgreSQL to handle analytics queries and separate them from main operations.
  • Redis Clustering: Implement a Redis cluster for high-throughput caching and real-time synchronization in large teams.
  • Job Queue for Workflows: Integrate a dedicated job queue (e.g., BullMQ with Redis) for managing long-running, complex AI workflows asynchronously.
  • CDN for Static Assets: Serve the Core frontend application from a global CDN to ensure fast load times for distributed teams.
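
Horizontal scaling of the server pods can be automated with a HorizontalPodAutoscaler; the sketch below assumes the Deployment from core-server.yaml is named `core-server` and runs in a `core` namespace:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: core-server
  namespace: core
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: core-server
  minReplicas: 2          # keep at least two pods for availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

CPU is only a starting signal; for workflow-heavy deployments, a custom metric such as job-queue depth is often a better scaling trigger.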

Backup & Safety

  • Comprehensive Backups: Implement automated, encrypted backups for PostgreSQL data, MinIO object storage, and Core's local configuration volumes.
  • Disaster Recovery Plan: Maintain a documented procedure for restoring the entire Core platform from backups.
  • Network Security: Deploy Core within a private VPC or behind a VPN. Use the reverse proxy to enforce HTTPS and implement strict firewall rules.
  • Secret Rotation: Establish a routine for rotating database passwords, storage credentials, and the master encryption key.
  • Audit Logging: Ensure all API calls, workflow executions, and user management actions are logged for security and compliance auditing.
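
The backup point can be sketched as a small script run nightly from cron or a systemd timer. The service name, backup path, and passphrase file below are assumptions to adapt to the compose setup above:

```shell
#!/bin/sh
# Produce a dated, compressed, encrypted PostgreSQL dump.
# Assumes the docker-compose stack above and a passphrase file at /root/.backup-pass.

backup_name() {
  echo "core-backup-$(date +%Y-%m-%d).sql.gz"
}

run_backup() {
  # Dump and compress the database from the db service
  docker compose exec -T db pg_dump -U core core | gzip > "/backups/$(backup_name)"
  # Symmetric encryption before shipping the file off-site
  gpg --batch --symmetric --passphrase-file /root/.backup-pass "/backups/$(backup_name)"
}
```

Wire `run_backup` into your scheduler, mirror the MinIO buckets separately (e.g. with `mc mirror`), and test restores regularly; an unverified backup is not a backup.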

Recommended Hosting for Core

For systems like Core, we recommend high-performance VPS hosting. Hostinger offers dedicated setups for open-source tools with one-click installer scripts and 24/7 priority support.

Get Started on Hostinger

Explore Alternative AI Infrastructure

OpenClaw

OpenClaw is an open-source platform for autonomous AI workflows, data processing, and automation. It is production-ready, scalable, and suitable for enterprise and research deployments.

Ollama

Ollama is an open-source tool that allows you to run, create, and share large language models locally on your own hardware.

LLaMA-3.1-8B

Llama 3.1 8B is Meta's state-of-the-art small model, featuring an expanded 128k context window and significantly enhanced reasoning for agentic workflows.

Technical Support

Stuck on Implementation?

If you're facing issues deploying this tool or need a managed setup on Hostinger, our engineers are here to help. We also specialize in developing high-performance custom web applications and designing end-to-end automation workflows.


Managed Setup & Infra

Production-ready deployment on Hostinger, AWS, or Private VPS.

Custom Web Applications

We build bespoke tools and web dashboards from scratch.

Workflow Automation

End-to-end automated pipelines and technical process scaling.

  • Faster Implementation: Rapid Deployment
  • 100% Free Audit & Review: Technical Analysis