
Installation

BotServer installs itself automatically through the bootstrap process. Just run the binary.

System Requirements

| Resource | Minimum | Production |
|----------|---------|------------|
| OS | Linux, macOS, Windows | Linux (Ubuntu/Debian) |
| RAM | 4GB | 16GB+ |
| Disk | 10GB | 100GB SSD |
| CPU | 1 core | 2+ cores |
| GPU | None | RTX 3060+ (12GB VRAM) for local LLM |

Quick Start

./botserver

The bootstrap process automatically:

  1. Detects your system (OS/architecture)
  2. Creates botserver-stack/ directory structure
  3. Downloads PostgreSQL, Drive, Cache, LLM server
  4. Initializes database and storage
  5. Deploys default bot
  6. Starts all services

First run takes 2-5 minutes.
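Since the first bootstrap can take a few minutes, a quick probe of the default UI port (8080) tells you when the stack is ready. This is an illustrative check, not a botserver command:

```shell
# Probe the default UI port (8080). On a fresh install, allow the
# 2-5 minute bootstrap to finish before expecting success.
if curl -fsS -m 5 http://localhost:8080 >/dev/null 2>&1; then
  echo "UI is up"
else
  echo "UI not reachable yet"
fi
```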

Using Existing Services

If you have existing infrastructure, configure it in your bot’s config.csv:

name,value
database-url,postgres://myuser:mypass@myhost:5432/mydb
drive-server,http://my-drive:9000
drive-accesskey,my-access-key
drive-secret,my-secret-key
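A malformed line in config.csv is easy to miss. Here is a hypothetical sanity check (not a botserver command) that flags any line missing a comma-separated name,value pair:

```shell
# Print any config.csv line that lacks a comma; awk exits non-zero
# if at least one bad line was found.
check_config() {
  awk -F',' 'NF < 2 { print "bad line " NR ": " $0; bad = 1 } END { exit bad }' "$1"
}

# Example against an inline sample; point it at your bot's config.csv instead.
printf 'name,value\ndatabase-url,postgres://myuser:mypass@myhost:5432/mydb\n' > /tmp/sample-config.csv
check_config /tmp/sample-config.csv && echo "config OK"
```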

Default Ports

| Service | Port | Config Key |
|---------|------|------------|
| UI Server | 8080 | server-port |
| PostgreSQL | 5432 | DATABASE_URL |
| Drive API | 9000 | DRIVE_SERVER |
| Drive Console | 9001 | - |
| LLM Server | 8081 | llm-server-port |
| Embedding | 8082 | embedding-url |
| Cache | 6379 | Internal |
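Before starting, you can see which of these default ports already have a listener, using bash's built-in `/dev/tcp` redirection (an illustrative sketch; connection refused simply prints "free"):

```shell
# Probe each default port on localhost; requires bash for /dev/tcp.
for svc in "UI Server:8080" "PostgreSQL:5432" "Drive API:9000" \
           "LLM Server:8081" "Embedding:8082" "Cache:6379"; do
  name=${svc%:*}; port=${svc##*:}
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "$name (port $port): listening"
  else
    echo "$name (port $port): free"
  fi
done
```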

Verify Installation

# Check services
./botserver status

# Test database
psql $DATABASE_URL -c "SELECT version();"

# Test LLM
curl http://localhost:8081/v1/models

# Open UI
open http://localhost:8080

Bot Deployment

Bots deploy to object storage (not local filesystem):

mybot.gbai → creates 'mybot' bucket in drive

The work/ folder is for internal use only.

S3 Sync for Development

Use S3-compatible tools for local editing:

  • Cyberduck (GUI)
  • rclone (CLI)
  • WinSCP (Windows)
# rclone sync example (rclone has no watch mode; re-run after each edit,
# or wrap it in a file watcher)
rclone sync ./mybot.gbai drive:mybot

Synced edits reload without a restart.
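The `drive:` remote used above must exist in rclone's configuration. A minimal sketch of an S3-compatible remote, assuming the drive endpoint and credentials from your config.csv (remote name and values are illustrative; `rclone config` generates this interactively):

```ini
[drive]
type = s3
provider = Minio
endpoint = http://my-drive:9000
access_key_id = my-access-key
secret_access_key = my-secret-key
```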

Memory Optimization

For limited RAM systems:

name,value
llm-server-ctx-size,2048
llm-server-parallel,2

Use quantized models (Q3_K_M, Q4_K_M) for smaller memory footprint.

GPU Setup

For GPU acceleration:

name,value
llm-server-gpu-layers,35

Requires CUDA installed and 12GB+ VRAM.
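Before enabling GPU offload, you can confirm a usable GPU and its VRAM with `nvidia-smi`, which ships with the NVIDIA driver (an illustrative check):

```shell
# Report GPU name and total VRAM if an NVIDIA driver is present;
# otherwise fall back to recommending CPU-only mode.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.total --format=csv,noheader
else
  echo "No NVIDIA driver found; set llm-server-gpu-layers,0 to stay on CPU"
fi
```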

Deployment Options

| Method | Use Case | Guide |
|--------|----------|-------|
| Local | Development, single instance | This page |
| Docker | Production, microservices | Docker Deployment |
| LXC | Isolated components, Linux | Container Deployment |

Troubleshooting

| Issue | Solution |
|-------|----------|
| Database connection | Check DATABASE_URL; verify PostgreSQL is running |
| Port conflict | Change the port in config or stop the conflicting service |
| Memory issues | Reduce llm-server-ctx-size; use a quantized model |
| GPU not detected | Verify CUDA; set llm-server-gpu-layers,0 for CPU-only mode |

Next Steps