# Installation

BotServer installs itself automatically through its bootstrap process: just run the binary.
## System Requirements
| Resource | Minimum | Production |
|---|---|---|
| OS | Linux, macOS, Windows | Linux (Ubuntu/Debian) |
| RAM | 4GB | 16GB+ |
| Disk | 10GB | 100GB SSD |
| CPU | 1 core | 2+ cores |
| GPU | None | RTX 3060+ (12GB VRAM) for local LLM |
## Quick Start

```bash
./botserver
```
The bootstrap process automatically:

- Detects your system (OS/architecture)
- Creates the `botserver-stack/` directory structure
- Downloads PostgreSQL, Drive, Cache, and LLM server
- Initializes the database and storage
- Deploys the default bot
- Starts all services

The first run takes 2-5 minutes.
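A quick way to confirm the bootstrap completed is to check for the generated directory. This is a minimal sketch; the exact contents of `botserver-stack/` vary by platform:

```bash
# Returns success once the bootstrap has created its directory.
bootstrap_done() { [ -d botserver-stack ]; }

if bootstrap_done; then
  echo "bootstrap complete:"
  ls botserver-stack
else
  echo "not bootstrapped yet - run ./botserver first"
fi
```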
## Using Existing Services

If you have existing infrastructure, configure it in your bot's config.csv:

```csv
name,value
database-url,postgres://myuser:mypass@myhost:5432/mydb
drive-server,http://my-drive:9000
drive-accesskey,my-access-key
drive-secret,my-secret-key
```
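Because config.csv is plain name,value pairs, a single value can be pulled out with standard tools. A sketch, assuming the two-column format shown above (the file path and `get_config` helper are illustrative, not part of BotServer):

```bash
# Write a sample config.csv like the one above (illustrative path).
cat > /tmp/sample-config.csv <<'EOF'
name,value
database-url,postgres://myuser:mypass@myhost:5432/mydb
drive-server,http://my-drive:9000
EOF

# get_config KEY - print the value column for a given name.
get_config() {
  awk -F, -v k="$1" '$1 == k { print $2 }' /tmp/sample-config.csv
}

get_config database-url   # prints: postgres://myuser:mypass@myhost:5432/mydb
```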
## Default Ports
| Service | Port | Config Key |
|---|---|---|
| UI Server | 8080 | server-port |
| PostgreSQL | 5432 | DATABASE_URL |
| Drive API | 9000 | DRIVE_SERVER |
| Drive Console | 9001 | - |
| LLM Server | 8081 | llm-server-port |
| Embedding | 8082 | embedding-url |
| Cache | 6379 | Internal |
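Before changing `server-port` or debugging a port conflict, it can help to probe which of the default ports are already in use. This sketch relies on bash's `/dev/tcp`; whether each port reports in use depends on what is running locally:

```bash
# Succeeds if something is listening on 127.0.0.1:$1.
check_port() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 8080 5432 9000 9001 8081 8082 6379; do
  if check_port "$port"; then
    echo "$port: in use"
  else
    echo "$port: free"
  fi
done
```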
## Verify Installation

```bash
# Check services
./botserver status

# Test database
psql "$DATABASE_URL" -c "SELECT version();"

# Test LLM
curl http://localhost:8081/v1/models

# Open UI (macOS; use xdg-open on Linux)
open http://localhost:8080
```
## Bot Deployment

Bots deploy to object storage, not the local filesystem:

```text
mybot.gbai → creates 'mybot' bucket in drive
```

The work/ folder is for internal use only.
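The bucket name is derived from the package folder name by dropping the `.gbai` extension; for example:

```bash
# Map a .gbai package name to its bucket name.
pkg="mybot.gbai"
bucket=$(basename "$pkg" .gbai)
echo "$bucket"   # prints: mybot
```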
## S3 Sync for Development

Use S3-compatible tools for local editing:

- Cyberduck (GUI)
- rclone (CLI)
- WinSCP (Windows)

```bash
# rclone sync example (runs once; re-run after each edit)
rclone sync ./mybot.gbai drive:mybot
```

Synced edits reload without a restart.
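Since a plain `rclone sync` copies once per invocation, one option during development is to re-run it on an interval. A sketch, assuming an rclone remote named `drive` is already configured (the `sync_once` helper is illustrative):

```bash
# Run one sync pass; degrade gracefully when rclone is unavailable.
sync_once() {
  if command -v rclone >/dev/null 2>&1; then
    rclone sync ./mybot.gbai drive:mybot || echo "sync failed"
  else
    echo "rclone not installed"
  fi
}

# Re-sync every 5 seconds while editing (Ctrl-C to stop):
# while true; do sync_once; sleep 5; done
sync_once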
Memory Optimization
For limited RAM systems:
name,value
llm-server-ctx-size,2048
llm-server-parallel,2
Use quantized models (Q3_K_M, Q4_K_M) for smaller memory footprint.
GPU Setup
For GPU acceleration:
name,value
llm-server-gpu-layers,35
Requires CUDA installed and 12GB+ VRAM.
Deployment Options
| Method | Use Case | Guide |
|---|---|---|
| Local | Development, single instance | This page |
| Docker | Production, microservices | Docker Deployment |
| LXC | Isolated components, Linux | Container Deployment |
Troubleshooting
| Issue | Solution |
|---|---|
| Database connection | Check DATABASE_URL, verify PostgreSQL running |
| Port conflict | Change port in config or stop conflicting service |
| Memory issues | Reduce llm-server-ctx-size, use quantized model |
| GPU not detected | Verify CUDA, set llm-server-gpu-layers,0 for CPU |
Next Steps
- Quick Start Guide - Create your first bot
- First Conversation - Test your bot
- Configuration Reference - All settings