EXECUTIVE VISION: GENERAL BOTS PLATFORM

OPEN SOURCE ENTERPRISE AI PLATFORM

General Bots 6.1 delivers enterprise-grade AI capabilities with full data sovereignty. Own your infrastructure, control your data, deploy anywhere.


INVESTOR HIGHLIGHTS

| Differentiator | What Sets Us Apart |
|---|---|
| LLM Orchestrator | Single PROMPT.md per project defines bot personality across all channels and features |
| Hybrid AI+Human | Seamless bot-to-human handoff with AI-assisted attendants |
| True Omnichannel | WhatsApp, Teams, Instagram, Telegram, Web - ONE codebase, ONE personality |
| No Vendor Lock-in | Self-hosted, export anytime, migrate freely |
| Industry-Standard CRM | Built-in collections, scheduling, follow-ups, sales automation |
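
For illustration, a minimal PROMPT.md might read as follows. This is a hypothetical sketch, not a template shipped with the product:

# PROMPT.md - defines the bot personality for every channel
You are the support assistant for Example Corp.
Tone: friendly, concise, professional.
Ground answers in the knowledge base whenever possible; when
confidence is low, offer to transfer the customer to a human attendant.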

FEATURE OVERVIEW

| Capability | What It Does | Business Impact | Time to Value |
|---|---|---|---|
| AI-Powered Conversations | Multi-channel bot orchestration with any LLM (OpenAI, Anthropic, Groq, local models) | Significant reduction in customer service costs | < 1 hour |
| LLM-Assisted Attendants | Real-time tips, smart replies, message polish, sentiment analysis for human agents | 50% faster response times | Immediate |
| Transfer to Human | Seamless bot-to-human handoff with full context | Hybrid support workflows | 5 minutes |
| WhatsApp Attendant | Human agents respond via WhatsApp with /commands | Mobile workforce support | Immediate |
| Knowledge Bases | Vector-indexed document collections with semantic search (Qdrant/FAISS) | Faster information retrieval | 15 minutes |
| Email Automation | Full IMAP/SMTP integration with intelligent routing | Automated inbox management | 5 minutes |
| LLM-Assisted BASIC | Plain English programming with LLM code generation | No programming skills needed | Immediate |
| Drive Integration | S3-compatible storage with automatic document processing | Scalable storage | 2 minutes |
| Enterprise Security | Argon2 hashing, JWT tokens, TLS everywhere | Bank-grade security out of the box | Built-in |
| Instant Theming | CSS-based UI customization | Brand consistency | < 30 seconds |
| Compliance Ready | Built-in attendance, audit logs, GDPR/LGPD/HIPAA support | Regulatory compliance | Pre-configured |
| NVIDIA GPU Support | CUDA acceleration for LLM operations | Faster AI responses | When available |
| Omnichannel | WhatsApp, Teams, Instagram, Telegram, Slack, Web - ONE codebase | Unified customer engagement | Single deploy |
| Calendar Management | Full scheduling, meeting coordination, availability tracking | Efficient scheduling | 3 minutes |
| Task Automation | Cron-based scheduling, workflow orchestration | 24/7 automation | 5 minutes |
| Whiteboard Collaboration | Real-time collaborative drawing and diagramming | Visual team collaboration | Instant |
| Video Conferencing | LiveKit WebRTC integration for meetings | High-quality meetings | 10 minutes |
| Analytics Dashboard | Real-time metrics, usage patterns, performance monitoring | Data-driven decisions | Built-in |
| Automated Reports | Scheduled reports, custom metrics, export to PDF/Excel | Executive visibility | 2 minutes |
| Backup & Restore | Automated backups, point-in-time recovery, export as ZIP | Data protection | Automatic |
| Monitoring & Alerts | System health, performance metrics, custom alerts | High availability | Pre-configured |
| Document Processing | OCR, PDF extraction, Excel parsing, image analysis | Document automation | Automatic |
| Migration Tools | Import from Office 365, Google Workspace, Slack | Seamless transition | < 1 day |
| API Gateway | REST, GraphQL, Webhooks, WebSocket support | Integration ready | Ready |
| User Directory | LDAP/AD replacement, SSO, group management | Central authentication | 15 minutes |
| Voice Processing | Speech-to-text, text-to-speech, voice commands | Voice interfaces | 5 minutes |
| CRM Automations | Follow-ups, collections, scheduling, sales pipelines | Revenue operations | 10 minutes |

LLM-ASSISTED ATTENDANT FEATURES

When conversations transfer from bot to human, the LLM continues working as a copilot:

| Feature | Description | Investor Appeal |
|---|---|---|
| Real-time Tips | AI analyzes customer messages and provides actionable tips | "AI copilot for support" |
| Message Polish | One-click grammar and tone improvement | Reduces training costs |
| Smart Replies | 3 contextual reply suggestions | 50% faster responses |
| Auto Summary | Instant conversation summary on handoff | Faster transfers |
| Sentiment Analysis | Real-time emotional state + escalation warnings | Prevents churn |

WhatsApp Commands for Attendants:

/queue    - View waiting customers
/take     - Take next conversation
/tips     - Get AI tips
/polish   - Improve message
/replies  - Get suggestions
/summary  - Conversation summary
/resolve  - Mark complete

DEPLOYMENT OPTIONS

Option 1: Pragmatismo Managed Hosting

  • Fully managed infrastructure
  • Access via: YourCompany.pragmatismo.com.br
  • Professional support included
  • Complete data ownership

Option 2: Self-Hosted

  • Deploy on your own infrastructure
  • Full control over hardware and configuration
  • Access via your own domain
  • No external dependencies

Option 3: Hybrid Deployment

  • Run locally with cloud backup
  • Export everything as ZIP anytime
  • Move between hosting options freely
  • No vendor lock-in

COMPETITIVE ADVANTAGE

| vs. Competitor | General Bots Advantage |
|---|---|
| Zendesk/Intercom | Self-hosted, AI-first, no per-seat pricing |
| Dialogflow | Native WhatsApp, human handoff, CRM built-in |
| Botpress | LLM orchestration, multi-channel, enterprise features |
| n8n/Zapier | Conversational AI, not just workflows |
| Custom Development | 90% faster deployment, proven architecture |

TECHNICAL ARCHITECTURE

| Component | Technology | Performance |
|---|---|---|
| Core Runtime | Rust + Tokio | Millions of concurrent connections |
| Database | PostgreSQL + Diesel | Sub-millisecond queries |
| Vector Search | Qdrant/FAISS | 100M+ documents indexed |
| Caching | Redis + semantic cache | 95% cache hit ratio |
| Message Queue | Built-in async channels | Zero-latency routing |
| File Processing | Parallel PDF/DOC/Excel extraction + OCR | 1,000 docs/minute |
| Security Layer | TLS 1.3 + Argon2 + JWT | Enterprise-grade security |
| Video Infrastructure | LiveKit WebRTC | 4K video, 50 ms latency |
| Time-Series Metrics | InfluxDB 3 | 2.5M+ points/sec ingestion |
| Backup System | Incremental snapshots | RPO < 1 hour |
| API Gateway | Axum + Tower middleware | 100K requests/second |
| Task Scheduler | Cron + async workers | Millisecond precision |

FEATURE TIERS

Core Edition (Default)

  • UI Server
  • Console Interface
  • Chat functionality
  • Automation engine
  • Task management
  • Drive integration
  • LLM support
  • Redis caching
  • Directory services

Standard Edition

  • All Core features plus:
  • Email integration (IMAP/SMTP)
  • Calendar management
  • Video meetings (LiveKit)
  • Enhanced automation

Enterprise Edition

  • All Standard features plus:
  • Compliance monitoring (LGPD/GDPR/HIPAA)
  • Attendance tracking with LLM assist
  • Vector database (Qdrant)
  • NVIDIA GPU acceleration
  • Advanced monitoring
  • gRPC support
  • Multi-channel messaging (WhatsApp, Teams, Instagram)
  • Human handoff with AI copilot
  • CRM automations (collections, scheduling, sales)

Full Edition

  • All features enabled
  • Complete platform capabilities

QUICK START

# config.csv - Enable all features
name,value
crm-enabled,true
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true
# attendant.csv - Configure your team
id,name,channel,preferences,department,aliases
att-001,John Smith,all,sales,commercial,john;johnny
att-002,Maria Santos,whatsapp,support,customer-service,maria

Result: Full hybrid AI+Human support system in minutes.

COMPLIANCE & PRIVACY

General Bots includes built-in compliance templates:

Privacy Rights Center (privacy.gbai)

  • Data Access Requests - LGPD Art. 18 / GDPR Art. 15
  • Data Rectification - LGPD Art. 18 III / GDPR Art. 16
  • Data Erasure - LGPD Art. 18 VI / GDPR Art. 17 (Right to be Forgotten)
  • Data Portability - LGPD Art. 18 V / GDPR Art. 20
  • Consent Management - LGPD Art. 8 / GDPR Art. 7
  • Processing Objection - LGPD Art. 18 IV / GDPR Art. 21

Supported Frameworks

  • LGPD (Lei Geral de Proteção de Dados - Brazil)
  • GDPR (General Data Protection Regulation - EU)
  • HIPAA (Health Insurance Portability and Accountability Act)
  • CCPA (California Consumer Privacy Act)
  • SOC 2 (Service Organization Control)
  • ISO 27001 (Information Security Management)

QUICK START

# Install botserver
cargo install botserver

# Initialize your deployment
botserver --init my-company

# Start the server
botserver --start

PLATFORM COMPARISON

| Aspect | Traditional SaaS | General Bots |
|---|---|---|
| Licensing | Per-user monthly fees | Open source (AGPL) |
| Data Location | Vendor cloud | Your choice |
| Customization | Limited | Unlimited |
| AI Models | Fixed provider | Any provider |
| Source Code | Closed | Open |
| Vendor Lock-in | High | None |
| Data Portability | Often difficult | Full export anytime |

INTEGRATION CAPABILITIES

LLM Providers

  • OpenAI (GPT-5, o3)
  • Anthropic (Claude Sonnet 4.5, Opus 4.5)
  • Meta (Llama)
  • DeepSeek
  • Local models via Ollama
  • Any OpenAI-compatible API

Communication Channels

  • WhatsApp Business
  • Microsoft Teams
  • Telegram
  • Slack
  • Instagram
  • Web chat
  • SMS

Storage Backends

  • AWS S3
  • MinIO
  • Any S3-compatible storage
  • Local filesystem

Directory Services

  • Built-in user management
  • LDAP integration
  • Active Directory
  • OAuth/OIDC SSO

ABOUT PRAGMATISMO

Pragmatismo develops General Bots as an open-source platform for enterprise AI and automation. Our focus is on delivering practical, production-ready solutions that organizations can deploy and customize to meet their specific needs.

Repository: github.com/GeneralBots/botserver

License: AGPL-3.0


QUICK START

Ready to see it in action? Skip to the hands-on guide:

⚡ Quick Start: Run Your First Bot in 5 Minutes →

Or continue reading for the full journey:

| Path | Time | Best For |
|---|---|---|
| Quick Start | 5 min | Developers who want to dive in immediately |
| Introduction | 10 min | Understanding the "No Forms" philosophy |
| Chapter 01 | 15 min | Complete installation and first conversation |

NEXT STEPS

Chapter 01: Run and Talk →

General Bots Roadmap 2018-2026


Timeline Overview

| Period | Focus | Features | Key Deliverables |
|---|---|---|---|
| 2018-2024 | v1-v5 Pre-LLM | 12 ✅ | Package System, TALK/HEAR, NLP/BERT, GPT-3.5, QR CODE, SET SCHEDULE |
| 2024 | v6 Foundation | 8 ✅ | Rust Core, Migration v5→v6, New Architecture, PostgreSQL, Vault, Minimal Flow |
| 2025 H1 | Rust Migration | 10 ✅ | BASIC Engine, Channels, Drive (S3), Email, REST API, WhatsApp, Telegram, PDF |
| 2025 H2 | Features & AUTOTASK | 19 ✅ | Tasks AI, KB, Vector DB, Tools, Generators, Multimodal, NVIDIA GPU/LXC, Paper, Research, Calendar, Meet |
| 2026 Q1 | Tasks AI GO ⭐ | 12 📋 | Production Autonomous, Gmail, Outlook, Google Drive, OneDrive, Google/Outlook Calendars, Transfer to Human |
| 2026 Q2 | Collaboration | 10 📋 | Multi-Agent, Teams, Google Meet, Zoom, Slack, Discord, Docker, Compliance, Marketplace |
| 2026 Q3 | Workflow & CRM | 2 📋 | Workflow Designer, CRM Integration |
| 2026 Q4 | Enterprise | 4 📋 | Mobile Apps, Enterprise SSO, White Label, Advanced Monitoring |

Total: 77 Features (49 Complete ✅ • 28 Planned 📋)


Feature Highlights

Tasks (AI Autonomous) GO

The flagship feature enabling fully autonomous AI task execution:

  • Human provides intent in natural language
  • AI creates execution plan
  • AI generates code/content
  • AI deploys result
  • Human reviews and approves

Available in: 2025 H2 (scaffolding), Q1 2026 (production)

Generators

| Generator | Purpose |
|---|---|
| BOT | Conversational bots |
| APP | Full applications |
| SITE | HTMX websites |
| GENERAL | General content |
| LANDPAGE | Landing pages |

Technology Stack

Backend: Rust, Actix-Web, Tokio, SQLx
Database: PostgreSQL
Storage: MinIO (S3-compatible)
Cache: Valkey (Redis alternative)
UI: HTMX, Askama templates
Desktop/Mobile: Tauri


Status Legend

| Status | Description |
|---|---|
| Complete (✅) | Feature is fully implemented and available |
| In Progress | Feature is currently being developed |
| Planned (📋) | Feature is scheduled for future development |


Introduction to General Bots

⚡ Want to skip ahead? Quick Start → gets you running in 5 minutes.

Build conversational AI bots in minutes, not months. General Bots lets you create intelligent chatbots by writing simple BASIC scripts and dropping in your documents. No complex frameworks, no cloud dependencies, no AI expertise required.

| Your Goal | Go To |
|---|---|
| Run a bot NOW | Quick Start |
| Understand the vision | Keep reading below |
| Write your first script | Chapter 06: BASIC Dialogs |
| Add documents to bot | Chapter 02: Packages |

The No Forms Movement

General Bots in 2017

Since 2017, Pragmatismo has championed the No Forms philosophy. The idea is simple but revolutionary:

People should converse, not fill forms.

Traditional software forces users into rigid forms with dropdowns, checkboxes, and validation errors. But humans don’t communicate that way. We talk. We explain. We ask questions.

General Bots was born from this vision: replace forms with conversations.

Before: The Form Experience

┌─────────────────────────────────────────┐
│ Customer Support Form                    │
├─────────────────────────────────────────┤
│ Name: [_______________]                  │
│ Email: [_______________]                 │
│ Department: [Select ▼]                   │
│ Priority: ○ Low ○ Medium ○ High         │
│ Subject: [_______________]               │
│ Description:                             │
│ [                                    ]   │
│ [                                    ]   │
│                                          │
│ Attachments: [Choose File]               │
│                                          │
│ [Submit]                                 │
│                                          │
│ ⚠️ Error: Email format invalid           │
│ ⚠️ Error: Description required           │
└─────────────────────────────────────────┘

Problems:

  • Intimidating for users
  • Requires learning the interface
  • Validation errors frustrate
  • No guidance or context
  • One-size-fits-all approach

After: The Conversation Experience
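
An illustrative transcript (the original page renders this as an image):

User: Hi, I have a billing problem
Bot: I can help with that! What's your name?
User: Sarah Chen
Bot: Thanks, Sarah. Which email is on your account?
User: sarah@example.com
Bot: Got it. How urgent is this - can it wait a day or two?
User: It's pretty urgent
Bot: Understood. I've opened a high-priority billing ticket for you.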

Benefits:

  • Natural and intuitive
  • Guides users step by step
  • Adapts to each situation
  • No errors, just clarifications
  • Feels like talking to a human

Projections, Not Screens

The No Forms philosophy extends beyond chat. In General Bots:

  • Visualizations replace dashboards - Data is projected contextually, not displayed in static grids
  • Conversations replace menus - Ask for what you need, don’t hunt through options
  • AI handles complexity - The system adapts, users don’t configure
  • Voice-first design - Everything works without a screen

This is why General Bots focuses on:

  1. Conversational interfaces - Chat, voice, messaging
  2. Contextual projections - Show relevant info when needed
  3. Minimal UI - The less interface, the better
  4. AI interpretation - Understand intent, not just input

Quick Example

Want a student enrollment bot? Here’s all you need:

  1. Drop your documents in a .gbkb folder:

     edu.gbkb/
       enrollment-policy.pdf
       course-catalog.pdf

  2. Write a simple tool (optional):

     ' enrollment.bas
     PARAM name, email, course
     SAVE "enrollments.csv", name, email, course
     TALK "Welcome to " + course + "!"

  3. Chat naturally:

     User: I want to enroll in computer science
     Bot: I'll help you enroll! What's your name?
     User: Sarah Chen
     Bot: Welcome to Computer Science, Sarah!

No form. No UI. Just conversation.

What Makes General Bots Different

Just Run It

./botserver

That’s it. No Kubernetes, no cloud accounts. The bootstrap process installs everything locally in 2-5 minutes. PostgreSQL, vector database, object storage, cache - all configured automatically with secure credentials stored in Vault.

Real BASIC, Real Simple

We brought BASIC back for conversational AI. See our complete keyword reference:

' save-note.bas - A simple tool
PARAM topic, content
SAVE "notes.csv", topic, content, NOW()
TALK "Note saved!"

Four lines. That’s a working tool the AI can call automatically.

Documents = Knowledge

Drop PDFs, Word docs, or text files into .gbkb/ folders. They’re instantly searchable. No preprocessing, no configuration, no pipelines. The bot answers questions from your documents automatically.

Tools = Functions

Create .bas files that the AI discovers and calls automatically. Need to save data? Send emails? Call APIs? Just write a tool. The AI figures out when and how to use it.
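
For example, a tool that calls an external API might look like the sketch below. The endpoint and parameter names are illustrative; the CALL keyword appears again in Chapter 01:

' check-order.bas - hypothetical API-calling tool
PARAM order_id AS string
DESCRIPTION "Looks up the status of an order"

result = CALL "/api/orders/status", order_id
TALK "Your order status: " + result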

Architecture at a Glance

General Bots is a single binary that includes everything:

General Bots Architecture

One process, one port, one command to run. Deploy anywhere - laptop, server, LXC container.

Real-World Use Cases

Customer Support Bot

documents: FAQs, policies, procedures
tools: ticket creation, status lookup
result: 24/7 support that actually helps

Employee Assistant

documents: HR policies, IT guides, company info
tools: leave requests, equipment orders
result: Instant answers, automated workflows

Sales Catalog Bot

documents: product specs, pricing sheets
tools: quote generation, order placement
result: Interactive product discovery

Meeting Assistant

documents: agenda, previous minutes
tools: action item tracking, scheduling
result: AI-powered meeting facilitator

The Package System

Bots are organized as packages - just folders with a naming convention:

my-bot.gbai/                    # Package root
├── my-bot.gbdialog/            # BASIC scripts
│   └── start.bas               # Entry point
├── my-bot.gbkb/                # Knowledge base
│   ├── policies/               # Document collection
│   └── procedures/             # Another collection
└── my-bot.gbot/                # Configuration
    └── config.csv              # Bot settings

Copy the folder to deploy. That’s it. No XML, no JSON schemas, no build process.

Getting Started in 3 Steps

1. Install (2 minutes)

wget https://github.com/GeneralBots/botserver/releases/latest/botserver
chmod +x botserver
./botserver

2. Open Browser

http://localhost:8080

3. Start Chatting

The default bot is ready. Ask it anything. Modify templates/default.gbai/ to customize.

Core Philosophy

  1. No Forms - Conversations replace forms everywhere
  2. Simplicity First - If it needs documentation, it’s too complex
  3. Everything Included - No external dependencies to manage
  4. Production Ready - Secure, scalable, enterprise-grade from day one
  5. AI Does the Work - Don’t write logic the LLM can handle
  6. Projections Over Screens - Show data contextually, not in dashboards

Technical Highlights

  • Language: Written in Rust for performance and safety
  • Database: PostgreSQL with Diesel ORM
  • Cache: Redis-compatible cache for sessions
  • Storage: S3-compatible object store (MinIO)
  • Vectors: Qdrant for semantic search
  • Security: Vault for secrets, Argon2 passwords, AES encryption
  • Identity: Zitadel for authentication and MFA
  • LLM: OpenAI API, Anthropic, Groq, or local models
  • Scripting: Rhai-powered BASIC interpreter

A Brief History

2017 - Pragmatismo launches General Bots with the No Forms manifesto. The vision: conversational interfaces should replace traditional forms in enterprise software.

2018-2020 - Node.js implementation gains traction. Hundreds of bots deployed across banking, healthcare, education, and government sectors in Brazil and beyond.

2021-2023 - Major enterprises adopt General Bots for customer service automation. The platform handles millions of conversations.

2024 - Complete rewrite in Rust for performance, security, and reliability. Version 6.0 introduces the new architecture with integrated services.

Today - General Bots powers conversational AI for organizations worldwide, staying true to the original vision: people should converse, not fill forms.

What’s Next?

Community

General Bots is open source (AGPL-3.0) developed by Pragmatismo.com.br and contributors worldwide.

  • GitHub: https://github.com/GeneralBots/botserver
  • Version: 6.1.0
  • Status: Production Ready

Ready to build your bot? Turn to Chapter 01 and let’s go!


General Bots
Built with ❤️ from Brazil since 2017

Chapter 01: Run and Talk

⚡ In a hurry? Jump straight to Quick Start — you’ll be chatting with your bot in 5 minutes.

Get General Bots running and have your first conversation.


What You’ll Achieve

By the end of this chapter, you will:

  • Have General Bots running on your machine
  • Understand what happens during bootstrap
  • Complete your first conversation with a bot
  • Know how sessions and channels work

Choose Your Path

| If You Want To… | Go To |
|---|---|
| Get running immediately | Quick Start |
| Understand the architecture first | Overview |
| See all installation options | Installation |
| Learn about conversations | First Conversation |
| Understand session management | Sessions and Channels |

The 30-Second Version

./botserver

Open http://localhost:9000. Start chatting. That’s it.

Everything installs automatically on first run—PostgreSQL, storage, cache, and your first bot.


How It Works


Topics in This Chapter

Overview

What General Bots does and how it fits together.

Quick Start

The fastest path from zero to running bot.

Installation

Detailed setup options including LXC containers and production deployment.

First Conversation

Understanding how the bot responds and learns.

Sessions and Channels

How conversations are managed across WhatsApp, Web, Telegram, and more.


Coming From Executive Vision?

If you just read the Executive Vision, here’s what to know:

  1. Everything in that feature table? It’s all included in the single botserver binary
  2. No configuration needed — Bootstrap detects your system and sets everything up
  3. Start simple — Run it, chat with it, then customize

The philosophy is: get running first, understand later.


Prerequisites

  • Operating System: Linux, macOS, or Windows (WSL2 recommended)
  • Disk Space: ~2GB for botserver-stack
  • RAM: 4GB minimum, 8GB recommended
  • Ports: 8080 (web), 5432 (database), 9000 (storage)

No Docker required. No cloud accounts. No API keys to start.


Next Step

Quick Start →



Overview

botserver is an open-source conversational AI platform built in Rust that enables developers to create, deploy, and manage intelligent bots with minimal configuration. This chapter provides a comprehensive introduction to the platform’s architecture, capabilities, and design philosophy.

Core Philosophy

botserver was designed around five guiding principles that shape every aspect of the platform. Zero Configuration means the system works out of the box with sensible defaults, eliminating lengthy setup processes. The Package-Based approach ensures bots are self-contained in .gbai folders that can be copied and deployed anywhere. BASIC Scripting provides simple, accessible programming for conversation flows that non-programmers can understand and modify. Multi-Channel support means you deploy once and run everywhere across Web, WhatsApp, Teams, and other platforms. Knowledge-First design provides built-in document management and semantic search as core capabilities rather than afterthoughts.

Architecture Overview

botserver uses a modular architecture organized into three distinct layers that work together to provide a complete conversational AI platform.

Storage Layer

The storage layer handles all data persistence needs. The SQL database stores structured data including users, sessions, and configurations using PostgreSQL with the Diesel ORM. Object storage provides S3-compatible file storage for documents and uploads, typically using MinIO for self-hosted deployments. The high-performance cache layer handles sessions and frequently accessed data using a Redis-compatible store. An optional vector database enables semantic search capabilities for knowledge bases using Qdrant or similar vector stores.
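
When pointing botserver at existing infrastructure instead of the bundled stack, the storage layer is configured through config.csv keys (placeholder values shown; the same keys appear in the Installation chapter):

name,value
database-url,postgres://user:pass@host:5432/botserver
drive-server,http://my-drive:9000
drive-accesskey,my-access-key
drive-secret,my-secret-key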

Application Layer

The application layer contains the core bot functionality. The Bot Engine processes conversations and manages state across interactions. The BASIC Interpreter executes conversation scripts written in the General Bots dialect of BASIC. The Package Manager handles bot deployment, lifecycle management, and hot-reloading of changes. Channel Adapters connect to various messaging platforms, translating between platform-specific formats and the internal message representation.

Service Layer

The service layer provides the infrastructure that supports bot operations. The UI Server handles HTTP API requests and WebSocket connections for real-time chat interfaces. The Scheduler executes cron-based tasks for automation and maintenance. LLM Integration connects to language models whether hosted locally or in the cloud. Authentication integrates with directory services for user management and access control.

Key Features

Conversation Management

botserver provides comprehensive conversation management capabilities. Sessions persist across interactions, maintaining context and state throughout multi-turn dialogs. The context management system tracks conversation history and user information across interactions. Parallel conversation handling allows a single bot instance to manage thousands of simultaneous conversations efficiently.

Knowledge Base System

The knowledge base system turns your documents into searchable, AI-accessible information. Document ingestion supports PDF, TXT, MD, and DOCX formats with automatic text extraction. The indexing pipeline processes documents into searchable chunks stored in the vector database. Semantic search finds relevant information based on meaning rather than just keyword matching. Context injection automatically provides relevant document excerpts to the LLM when generating responses.

BASIC Scripting Language

The BASIC scripting language makes bot development accessible to everyone. The simple syntax allows non-programmers to read and modify conversation flows. Built-in keywords handle common tasks like sending messages, saving data, and calling APIs. The tool integration system lets you create callable functions that the AI can invoke automatically. Event-driven programming support enables reactive bots that respond to schedules, webhooks, and system events.
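
The sketch below ties these ideas together. It is illustrative, built only from keywords shown elsewhere in this book (PARAM, LIKE, DESCRIPTION, SAVE, TALK, NOW):

' feedback.bas - a tool the AI can invoke automatically
PARAM rating AS string    LIKE "5"              DESCRIPTION "Score from 1 to 5"
PARAM comment AS string   LIKE "Great service"  DESCRIPTION "Free-form feedback"

DESCRIPTION "Records customer feedback"

SAVE "feedback.csv", rating, comment, NOW()
TALK "Thanks for your feedback!"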

Multi-Channel Support

Deploy your bot once and reach users across multiple channels. The web chat interface provides an embeddable widget for websites. WhatsApp Business API integration enables customer service on the world’s most popular messaging platform. Microsoft Teams support brings your bot into enterprise collaboration spaces. Email integration allows conversational interactions through traditional email. SMS support via providers enables text message interactions for users without data connectivity.

Enterprise Features

botserver includes capabilities required for enterprise deployments. Multi-tenancy support allows a single installation to serve multiple organizations with complete isolation. Role-based access control restricts actions based on user roles and permissions. Comprehensive audit logging tracks all actions for compliance and debugging. Horizontal scaling distributes load across multiple instances. High availability configurations ensure continuous operation even during failures.

System Requirements

Minimum Requirements

For development and testing purposes, botserver runs comfortably on modest hardware. You need at least 4GB of RAM to run all components. A single CPU core is sufficient for light workloads. Reserve at least 10GB of disk space for the application, databases, and documents. The platform runs on Linux, macOS, or Windows operating systems.

Production deployments benefit from more substantial resources. Plan for 16GB of RAM to handle concurrent users and large knowledge bases. Two or more CPU cores improve response times under load. Use 100GB of SSD storage for better database and file access performance. Linux servers running Ubuntu or Debian provide the most tested and reliable environment. For local LLM hosting, an NVIDIA RTX 3060 or better GPU with at least 12GB of VRAM enables on-premises inference without cloud API dependencies.

Configuration

Bot configuration uses config.csv files with key-value parameters. Server settings like server_host and server_port control where the UI server listens. LLM configuration through llm-url and llm-model specifies which language model to use. Email settings including email-from and email-server enable outbound email functionality. UI customization parameters like theme-color1, theme-color2, theme-title, and theme-logo brand the interface. Conversation settings such as episodic-memory-history and episodic-memory-threshold tune how context is managed. Refer to the config.csv files in bot packages for the complete list of available parameters.
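
A hedged example using only the keys named above (the llm-model and email values are placeholders):

name,value
server_host,0.0.0.0
server_port,8080
llm-url,http://localhost:8081
llm-model,llama3
email-from,bot@example.com
email-server,smtp.example.com
theme-color1,#0d2b55
theme-color2,#ffffff
theme-title,My Bot
theme-logo,logo.png
episodic-memory-history,10
episodic-memory-threshold,4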

Bot Package Structure

Each bot is a self-contained .gbai folder that includes everything needed for deployment. The structure organizes different aspects of the bot into subfolders with specific naming conventions.

mybot.gbai/
  mybot.gbot/
    config.csv
  mybot.gbdialog/
    start.bas
    tools/
  mybot.gbkb/
    documents/
  mybot.gbtheme/
    styles/

The .gbot subfolder contains configuration files including the main config.csv. The .gbdialog subfolder holds BASIC scripts with start.bas serving as the entry point and additional scripts providing tools. The .gbkb subfolder stores knowledge base documents organized into topical folders. The optional .gbtheme subfolder contains CSS and assets for UI customization.

Deployment Models

Standalone Server

The standalone deployment model runs a single botserver instance serving multiple bots. This approach provides the simplest setup with shared resources across bots. Standalone deployment works best for small to medium deployments where isolation between bots is not critical.

LXC Containers

Linux containers provide lightweight virtualization for bot isolation. Each bot or group of bots runs in its own container with dedicated resources. LXC deployment offers easy management through standard container tooling while maintaining lower overhead than full virtual machines.

Embedded Mode

Embedded deployment integrates botserver into existing applications as a library. This mode provides programmatic control over bot behavior and direct integration with application logic. Custom integrations can use the embedded mode to add conversational capabilities to any Rust application.

Getting Started

Installation begins by downloading and running the botserver binary. The bootstrap process automatically downloads all required components to the botserver-stack/ directory, including database binaries, the object storage server, cache server, LLM runtime, and other dependencies.

Bot deployment uses object storage buckets. Each bot receives its own bucket for file storage. Bots are deployed to the drive rather than the work folder, which is reserved for internal operations as documented in the gbapp chapter.

After startup, access the UI interface at http://localhost:9000 to interact with your bots and monitor their operation.

Use Cases

Customer Support

Customer support bots automate FAQ responses and ticket handling. Load your support documentation, policies, and procedures into knowledge bases. Create tools for ticket creation and status lookup. The result is 24/7 support that handles common questions automatically while escalating complex issues to human agents.

Internal Tools

Employee assistant bots streamline internal operations. Knowledge bases contain HR policies, IT guides, and company information. Tools enable leave requests, equipment orders, and other common workflows. Employees get instant answers and automated processing for routine requests.

Educational Applications

Educational bots provide interactive learning experiences. Course materials and reference documents become searchable knowledge bases. Tools handle quiz administration, progress tracking, and enrollment. Students receive personalized guidance and immediate feedback.

Healthcare Applications

Healthcare bots assist with patient engagement while maintaining compliance. Appointment scheduling, medication reminders, and symptom checking tools automate routine interactions. Knowledge bases contain patient education materials. All interactions maintain audit trails for regulatory compliance.

Security Features

botserver implements comprehensive security at every layer. Authentication integrates with directory services for centralized user management. SSL/TLS encryption protects all network communications. Session tokens use cryptographically secure generation and validation. Input sanitization prevents injection attacks across all user inputs. SQL injection prevention uses parameterized queries throughout. XSS protection sanitizes output displayed to users. Rate limiting prevents abuse and denial of service attacks. Audit logging records all significant actions for compliance and forensics.

Monitoring and Operations

Health Checks

Health monitoring endpoints report component status for operational awareness. Database connectivity checks verify the storage layer is operational. Storage availability checks ensure object storage is accessible. Cache performance metrics track response times and hit rates.
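
For example, using the commands shown in the Installation chapter (the HTTPS port assumes the default certificate setup):

# Component status via the CLI
./botserver status tables    # PostgreSQL
./botserver status drive     # Drive storage
./botserver status cache     # Cache component

# HTTP health endpoint
curl -k https://127.0.0.1:8443/health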

Metrics

Operational metrics provide visibility into bot performance. Conversation counts show usage patterns over time. Response time measurements identify performance issues. Error rates highlight problems requiring attention. Resource usage tracking helps capacity planning.

Logging

Structured logging facilitates troubleshooting and analysis. Configurable log levels from ERROR through DEBUG control verbosity. Automatic rotation and archival prevent disk exhaustion. Search and filtering tools help locate specific events in large log files.

Extensibility

Channel Adapters

New messaging channels integrate through the adapter system. WebSocket protocols enable real-time bidirectional communication. REST API integration supports request-response style platforms. Custom protocols can be implemented for specialized messaging systems.

Storage Backends

Storage is abstracted to support multiple backend options. S3-compatible storage works with AWS, MinIO, and other providers. Database adapters could support different SQL databases. Cache providers can be swapped while maintaining the same interface.

Community and Support

Documentation

Comprehensive documentation covers all aspects of the platform. The User Guide walks through common tasks and best practices. The API Reference documents all endpoints and parameters. The BASIC Language Reference details every keyword and syntax rule. The Deployment Guide covers production installation and configuration.

Resources

Example bots in the templates/ directory demonstrate common patterns. Test suites verify functionality and provide usage examples. Migration tools help transition from other platforms to General Bots.

Contributing

General Bots is open source under the AGPL (GNU Affero General Public License). The GitHub repository hosts all development activity. Issue tracking manages bug reports and feature requests. Pull requests from the community are welcome and encouraged.

Codebase Statistics

The General Bots workspace contains the following lines of code by language:

| Language | Lines of Code |
|---|---|
| HTML | 318,676 |
| Rust | 232,015 |
| Markdown | 135,130 |
| SVG | 47,196 |
| JSON | 45,743 |
| CSS | 40,476 |
| SQL | 26,242 |
| JavaScript | 16,257 |
| TOML | 5,640 |
| YAML | 4,762 |
| Shell | 3,602 |
| TypeScript | 13 |
| Total | 875,752 |

Summary

botserver provides a complete platform for building conversational AI applications. The combination of simple BASIC scripting, automatic setup, and enterprise features bridges the gap between simple chatbots and complex AI systems. The focus on packages, minimal configuration, and multi-channel support makes botserver suitable for both rapid prototyping and production deployments serving millions of users.

Quick Start

Installation in 3 Steps

1. Run botserver

./botserver

That’s it! No configuration needed.

2. Wait for Bootstrap (2-5 minutes)

You’ll see:

botserver starting...
Bootstrap: Detecting system...
Installing PostgreSQL...
   Database created
   Schema initialized
Installing Drive...
   Object storage ready
   Buckets created
Installing Cache...
   Cache server running
Creating bots from templates...
   default.gbai deployed
   announcements.gbai deployed
botserver ready at http://localhost:9000

3. Open Browser

http://localhost:9000

Start chatting with your bot!


What Just Happened?

Bootstrap Flow

The automatic bootstrap process:

  1. Detected your OS (Linux/macOS/Windows)
  2. Downloaded PostgreSQL database to botserver-stack/
  3. Downloaded drive (S3-compatible storage) to botserver-stack/
  4. Downloaded cache component to botserver-stack/
  5. Generated secure credentials
  6. Created database schema
  7. Deployed default bots to object storage
  8. Started UI server on port 8080

Zero manual configuration required!

Using Existing Services

If you already have PostgreSQL or drive storage running, configure them in config.csv of your bot:

name,value
database-url,postgres://myuser:mypass@myhost:5432/mydb
drive-server,http://my-drive:9000
drive-accesskey,my-access-key
drive-secret,my-secret-key

Step 2: Write a Simple Tool

How Tools Work

Tool Execution Flow

Tools are just .bas files. Create enrollment.bas:

' Student enrollment tool
PARAM name, email, course
DESCRIPTION "Processes student enrollment"

SAVE "enrollments.csv", name, email, course, NOW()
TALK "Welcome to " + course + ", " + name + "!"

The LLM automatically discovers this tool and knows when to call it!


Step 3: Add Knowledge Base

Drop documents in a .gbkb/ folder:

mybot.gbai/
  mybot.gbkb/
    docs/
      manual.pdf
      faq.docx
      guide.txt

The bot automatically:

  • Indexes documents with vector embeddings
  • Answers questions from the content
  • Updates when files change

Container Deployment (LXC)

For production isolation, botserver supports LXC (Linux Containers):

# Create container
lxc-create -n botserver -t download -- -d ubuntu -r jammy -a amd64

# Start and attach
lxc-start -n botserver
lxc-attach -n botserver

# Install botserver inside container
./botserver

Benefits:

  • Process isolation
  • Resource control
  • Easy management
  • Lightweight virtualization

Optional Components

After installation, add more features:

./botserver install email      # Email server
./botserver install directory  # Identity provider
./botserver install llm        # Local LLM server (offline mode)
./botserver install meeting    # Video conferencing

Example Bot Structure

mybot.gbai/
  mybot.gbdialog/          # Dialog scripts
    start.bas            # Entry point (required)
    get-weather.bas      # Tool (auto-discovered)
    send-email.bas       # Tool (auto-discovered)
  mybot.gbkb/              # Knowledge base
    docs/                # Document collection
    faq/                 # FAQ collection
  mybot.gbot/              # Configuration
    config.csv           # Bot parameters
  mybot.gbtheme/           # UI theme (optional)
    custom.css

Deploy new bots by uploading to object storage (creates a new bucket), not the local filesystem. The work/ folder is for internal use only.

Local Development with Auto-Sync

Edit bot files locally and sync automatically to drive storage:

Free S3 Sync Tools:

  • Cyberduck - GUI file browser (Windows/Mac/Linux)
  • rclone - Command-line sync (All platforms)
  • WinSCP - File manager with S3 (Windows)
  • S3 Browser - Freeware S3 client (Windows)

Quick Setup with rclone:

# Configure for drive storage
rclone config  # Follow prompts for S3-compatible storage

# Auto-sync local edits to bucket
rclone sync ./mybot.gbai drive:mybot --watch

Now when you:

  • Edit .csv → Bot config reloads automatically
  • Edit .bas → Scripts compile automatically
  • Add docs to .gbkb/ → Knowledge base updates

How It Really Works

You DON’T write complex dialog flows. Instead:

1. Add Documents

mybot.gbkb/
  policies/enrollment-policy.pdf
  catalog/courses.pdf

2. Create Tools (Optional)

' enrollment.bas - just define what it does
PARAM name AS string
PARAM course AS string
SAVE "enrollments.csv", name, course

3. Start Chatting!

User: I want to enroll in computer science
Bot: I'll help you enroll! What's your name?
User: John Smith
Bot: [Automatically calls enrollment.bas with collected params]
     Welcome to Computer Science, John Smith!

The LLM handles ALL conversation logic automatically!


Configuration (Optional)

Configure per-bot settings in config.csv:

name,value
server_port,8080
llm-url,http://localhost:8081
episodic-memory-threshold,4
theme-color1,#0d2b55

Troubleshooting

Port 8080 in use?

Edit templates/default.gbai/default.gbot/config.csv:

name,value
server_port,3000

Clean install?

# Remove everything and start fresh
rm -rf botserver-stack/
rm .env
./botserver  # Will regenerate everything

Check component status

./botserver status tables    # PostgreSQL
./botserver status drive     # Drive storage
./botserver status cache     # Cache component



The Magic Formula

Documents + Tools + LLM = Intelligent Bot

What You DON’T Need:

  • IF/THEN logic
  • Intent detection
  • Dialog flow charts
  • State machines
  • Complex routing

What You DO:

  • Drop documents in .gbkb/
  • Create simple .bas tools (optional)
  • Start chatting!

The LLM understands context, calls tools, searches documents, and maintains conversation naturally.


Philosophy

  1. Just Run It - No manual configuration
  2. Simple Scripts - BASIC-like language anyone can learn
  3. Automatic Discovery - Tools and KBs auto-detected
  4. Secure by Default - Credentials auto-generated
  5. Production Ready - Built for real-world use

Real Example: Education Bot

  1. Add course materials:

    edu.gbkb/
      courses/computer-science.pdf
      policies/enrollment.pdf
    
  2. Create enrollment tool:

    ' enrollment.bas
    PARAM name AS string
    PARAM course AS string
    SAVE "enrollments.csv", name, course
    
  3. Just chat:

    User: What courses do you offer?
    Bot: [Searches PDFs] We offer Computer Science, Data Science...
    
    User: I want to enroll
    Bot: [Calls enrollment.bas] Let me help you enroll...
    

No programming logic needed - the LLM handles everything!

Installation

botserver installs itself automatically through the bootstrap process. Just run the binary.

Runtime Dependencies

Before running the botserver binary, you must install required system libraries on the host system.

Download and run the dependency installer:

curl -fsSL https://raw.githubusercontent.com/GeneralBots/botserver/main/scripts/install-dependencies.sh | sudo bash

Or if you have the script locally:

sudo ./scripts/install-dependencies.sh

Manual Install (Debian/Ubuntu)

sudo apt update
sudo apt install -y \
    libpq5 \
    libssl3 \
    liblzma5 \
    zlib1g \
    ca-certificates \
    curl \
    wget \
    libabseil-dev \
    libclang-dev \
    pkg-config

# For container support (LXC)
sudo snap install lxd
sudo lxd init --auto

Manual Install (Fedora/RHEL)

sudo dnf install -y \
    libpq \
    openssl-libs \
    xz-libs \
    zlib \
    ca-certificates \
    curl \
    wget \
    lxc

⚠️ Common Error: If you see error while loading shared libraries: libpq.so.5, install libpq5 (Debian/Ubuntu) or libpq (Fedora/RHEL).

System Requirements

| Resource | Minimum | Production |
|---|---|---|
| OS | Linux, macOS, Windows | Linux (Ubuntu/Debian) |
| RAM | 4GB | 16GB+ |
| Disk | 10GB | 100GB SSD |
| CPU | 1 core | 2+ cores |
| GPU | None | RTX 3060+ (12GB VRAM) for local LLM |

Quick Start

./botserver

The bootstrap process automatically:

  1. Detects your system (OS/architecture)
  2. Creates botserver-stack/ directory structure
  3. Downloads PostgreSQL, Drive, Cache, LLM server
  4. Initializes database and storage
  5. Deploys default bot
  6. Starts all services

First run takes 2-5 minutes.

Using Existing Services

If you have existing infrastructure, configure it in your bot’s config.csv:

name,value
database-url,postgres://myuser:mypass@myhost:5432/mydb
drive-server,http://my-drive:9000
drive-accesskey,my-access-key
drive-secret,my-secret-key

Default Ports

| Service | Port | Config Key |
|---|---|---|
| UI Server | 8080 | server-port |
| PostgreSQL | 5432 | DATABASE_URL |
| Drive API | 9000 | DRIVE_SERVER |
| Drive Console | 9001 | - |
| LLM Server | 8081 | llm-server-port |
| Embedding | 8082 | embedding-url |
| Cache | 6379 | Internal |

Verify Installation

# Check services
./botserver status

# Test database
psql $DATABASE_URL -c "SELECT version();"

# Test LLM
curl http://localhost:8081/v1/models

# Open UI
open http://localhost:9000

Bot Deployment

Bots deploy to object storage (not local filesystem):

mybot.gbai → creates 'mybot' bucket in drive

The work/ folder is for internal use only.

S3 Sync for Development

Use S3-compatible tools for local editing:

  • Cyberduck (GUI)
  • rclone (CLI)
  • WinSCP (Windows)
# rclone sync example
rclone sync ./mybot.gbai drive:mybot --watch

Edits sync automatically - changes reload without restart.

Memory Optimization

For limited RAM systems:

name,value
llm-server-ctx-size,2048
llm-server-parallel,2

Use quantized models (Q3_K_M, Q4_K_M) for smaller memory footprint.

GPU Setup

For GPU acceleration:

name,value
llm-server-gpu-layers,35

Requires CUDA installed and 12GB+ VRAM.

Deployment Options

| Method | Use Case | Guide |
|---|---|---|
| Local | Development, single instance | This page |
| Docker | Production, microservices | Docker Deployment |
| LXC | Isolated components, Linux | Container Deployment |
| Brother Mode | Container managing host containers | See below |

Container-on-Host (Brother Mode)

You can run botserver inside a container (Docker/LXC) while letting it manage other containers directly on the host system. This is useful for CI/CD pipelines or managing “host” deployment from a restricted environment.

Requirements:

  • Mount host’s LXD socket to container
  • Run container as privileged (if accessing host devices)

Docker Run Example:

docker run -d \
  --name botserver \
  --network host \
  --privileged \
  -v /var/snap/lxd/common/lxd/unix.socket:/var/snap/lxd/common/lxd/unix.socket \
  -e VAULT_ADDR="https://127.0.0.1:8200" \
  -e VAULT_TOKEN="<your-token>" \
  botserver:latest

Note: For non-snap LXD, use -v /var/lib/lxd/unix.socket:/var/lib/lxd/unix.socket instead.

The installer detects if it is running in a container but needs to manage the host (brother mode) and will configure the host’s LXD/LXC environment safely.

⚠️ IMPORTANT: Container create commands (botserver install ... --container) must be run from the host system, not inside a container.

Example: Create Vault and VectorDB

# Run on HOST system
botserver install vault --container --tenant mycompany
botserver install vector_db --container --tenant mycompany

# Verify containers
lxc list | grep mycompany

Troubleshooting

| Issue | Solution |
|---|---|
| Database connection | Check DATABASE_URL, verify PostgreSQL is running |
| Port conflict | Change port in config or stop the conflicting service |
| Memory issues | Reduce llm-server-ctx-size, use a quantized model |
| GPU not detected | Verify CUDA; set llm-server-gpu-layers,0 for CPU |


Configuring Local Development Access

After bootstrap, botserver services are immediately accessible via IP addresses - no configuration required. For those who prefer friendly hostnames, optional DNS setup is also available.

Zero Configuration: IP Access (Default)

botserver certificates include 127.0.0.1 as a Subject Alternative Name (SAN), so mTLS works immediately via IP address without any system changes.

Service Ports

| Component | Description | IP:Port |
|---|---|---|
| api | Main botserver API | 127.0.0.1:8443 (HTTPS) / 127.0.0.1:9000 (HTTP) |
| tables | PostgreSQL database | 127.0.0.1:5432 |
| drive | Object storage (S3-compatible) | 127.0.0.1:9000 |
| cache | Redis cache | 127.0.0.1:6379 |
| vectordb | Vector database | 127.0.0.1:6333 |
| vault | Secrets management | 127.0.0.1:8200 |
| llm | Local LLM server | 127.0.0.1:8081 |
| embedding | Embedding server | 127.0.0.1:8082 |
| directory | Authentication/Identity | 127.0.0.1:8085 |
| email | Email server | 127.0.0.1:25 (SMTP) / 127.0.0.1:993 (IMAP) |
| meet | Video conferencing | 127.0.0.1:7880 |

Quick Test

# Test API (no config needed)
curl -k https://127.0.0.1:8443/health

# With mTLS client certificate
curl --cert ./botserver-stack/conf/system/certificates/api/client.crt \
     --key ./botserver-stack/conf/system/certificates/api/client.key \
     --cacert ./botserver-stack/conf/system/certificates/ca/ca.crt \
     https://127.0.0.1:8443/api/v1/status

Code Examples

Python

import requests

response = requests.get(
    "https://127.0.0.1:8443/api/v1/bots",
    cert=("./botserver-stack/conf/system/certificates/api/client.crt",
          "./botserver-stack/conf/system/certificates/api/client.key"),
    verify="./botserver-stack/conf/system/certificates/ca/ca.crt"
)
print(response.json())

Node.js

const https = require('https');
const fs = require('fs');

const options = {
  hostname: '127.0.0.1',
  port: 8443,
  path: '/api/v1/bots',
  method: 'GET',
  cert: fs.readFileSync('./botserver-stack/conf/system/certificates/api/client.crt'),
  key: fs.readFileSync('./botserver-stack/conf/system/certificates/api/client.key'),
  ca: fs.readFileSync('./botserver-stack/conf/system/certificates/ca/ca.crt')
};

https.request(options, (res) => {
  res.on('data', (d) => process.stdout.write(d));
}).end();

Rust

use reqwest::Certificate;
use std::fs;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cert = fs::read("./botserver-stack/conf/system/certificates/api/client.crt")?;
    let key = fs::read("./botserver-stack/conf/system/certificates/api/client.key")?;
    let ca = fs::read("./botserver-stack/conf/system/certificates/ca/ca.crt")?;
    
    let identity = reqwest::Identity::from_pem(&[cert, key].concat())?;
    let ca_cert = Certificate::from_pem(&ca)?;
    
    let client = reqwest::Client::builder()
        .identity(identity)
        .add_root_certificate(ca_cert)
        .build()?;
    
    let response = client
        .get("https://127.0.0.1:8443/api/v1/bots")
        .send()
        .await?;
    
    println!("{}", response.text().await?);
    Ok(())
}

Remote Server Access

If botserver runs on a different machine (e.g., 192.168.1.100), regenerate certificates with additional IP SANs:

./botserver regenerate-certs --san-ip 192.168.1.100 --san-ip 10.0.0.50

Or configure before bootstrap in botserver-stack/conf/system/ca-config.toml:

[certificates.api]
san_names = ["localhost", "api.botserver.local", "127.0.0.1", "192.168.1.100"]

Optional: Hostname Access

For browser access with friendly URLs, configure your system to resolve *.botserver.local hostnames.

Hostname Mapping

| Component | Hostname |
|---|---|
| api | api.botserver.local |
| tables | tables.botserver.local |
| drive | drive.botserver.local |
| cache | cache.botserver.local |
| vectordb | vectordb.botserver.local |
| vault | vault.botserver.local |
| llm | llm.botserver.local |
| embedding | embedding.botserver.local |
| directory | directory.botserver.local |
| email | email.botserver.local |
| meet | meet.botserver.local |

Option 1: Edit hosts file (Simplest)

Windows

Open Notepad as Administrator, edit C:\Windows\System32\drivers\etc\hosts:

127.0.0.1 botserver.local
127.0.0.1 api.botserver.local
127.0.0.1 tables.botserver.local
127.0.0.1 drive.botserver.local
127.0.0.1 cache.botserver.local
127.0.0.1 vectordb.botserver.local
127.0.0.1 vault.botserver.local
127.0.0.1 llm.botserver.local
127.0.0.1 embedding.botserver.local
127.0.0.1 directory.botserver.local
127.0.0.1 email.botserver.local
127.0.0.1 meet.botserver.local

PowerShell one-liner (run as Administrator):

@"
127.0.0.1 botserver.local
127.0.0.1 api.botserver.local
127.0.0.1 tables.botserver.local
127.0.0.1 drive.botserver.local
127.0.0.1 cache.botserver.local
127.0.0.1 vectordb.botserver.local
127.0.0.1 vault.botserver.local
127.0.0.1 llm.botserver.local
127.0.0.1 embedding.botserver.local
127.0.0.1 directory.botserver.local
127.0.0.1 email.botserver.local
127.0.0.1 meet.botserver.local
"@ | Add-Content C:\Windows\System32\drivers\etc\hosts

Linux

sudo tee -a /etc/hosts << 'EOF'
127.0.0.1 botserver.local
127.0.0.1 api.botserver.local
127.0.0.1 tables.botserver.local
127.0.0.1 drive.botserver.local
127.0.0.1 cache.botserver.local
127.0.0.1 vectordb.botserver.local
127.0.0.1 vault.botserver.local
127.0.0.1 llm.botserver.local
127.0.0.1 embedding.botserver.local
127.0.0.1 directory.botserver.local
127.0.0.1 email.botserver.local
127.0.0.1 meet.botserver.local
EOF

macOS

sudo tee -a /etc/hosts << 'EOF'
127.0.0.1 botserver.local
127.0.0.1 api.botserver.local
127.0.0.1 tables.botserver.local
127.0.0.1 drive.botserver.local
127.0.0.1 cache.botserver.local
127.0.0.1 vectordb.botserver.local
127.0.0.1 vault.botserver.local
127.0.0.1 llm.botserver.local
127.0.0.1 embedding.botserver.local
127.0.0.1 directory.botserver.local
127.0.0.1 email.botserver.local
127.0.0.1 meet.botserver.local
EOF

# Flush DNS cache
sudo dscacheutil -flushcache
sudo killall -HUP mDNSResponder

Option 2: Use botserver’s CoreDNS

botserver runs CoreDNS on port 53. Point your system to use it as DNS server.

Windows

# Get active interface
$interface = (Get-NetAdapter | Where-Object {$_.Status -eq "Up"}).Name

# Set DNS servers
Set-DnsClientServerAddress -InterfaceAlias $interface -ServerAddresses ("127.0.0.1","8.8.8.8")

Linux (systemd-resolved)

sudo mkdir -p /etc/systemd/resolved.conf.d/
sudo tee /etc/systemd/resolved.conf.d/botserver.conf << 'EOF'
[Resolve]
DNS=127.0.0.1
FallbackDNS=8.8.8.8 8.8.4.4
Domains=~botserver.local
EOF
sudo systemctl restart systemd-resolved

macOS

sudo mkdir -p /etc/resolver
sudo tee /etc/resolver/botserver.local << 'EOF'
nameserver 127.0.0.1
EOF

This routes only *.botserver.local queries to botserver’s DNS.


Trusting Self-Signed Certificates

For browser access without warnings, trust the CA certificate:

Windows

Import-Certificate -FilePath ".\botserver-stack\conf\system\certificates\ca\ca.crt" -CertStoreLocation Cert:\LocalMachine\Root

Linux

# Ubuntu/Debian
sudo cp ./botserver-stack/conf/system/certificates/ca/ca.crt /usr/local/share/ca-certificates/botserver-ca.crt
sudo update-ca-certificates

# Fedora/RHEL
sudo cp ./botserver-stack/conf/system/certificates/ca/ca.crt /etc/pki/ca-trust/source/anchors/botserver-ca.crt
sudo update-ca-trust

macOS

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ./botserver-stack/conf/system/certificates/ca/ca.crt

Firefox

Firefox uses its own certificate store:

  1. SettingsPrivacy & SecurityView Certificates
  2. AuthoritiesImport
  3. Select botserver-stack/conf/system/certificates/ca/ca.crt
  4. Check “Trust this CA to identify websites”

Troubleshooting

DNS Not Resolving

# Check CoreDNS is running
./botserver status dns

# Test DNS directly
dig @127.0.0.1 api.botserver.local

macOS .local Conflicts

macOS reserves .local for Bonjour. Use the /etc/resolver/ method which doesn’t conflict.

Reverting Changes

# Remove hosts entries (Linux/macOS)
sudo sed -i '/botserver\.local/d' /etc/hosts

# Remove macOS resolver
sudo rm /etc/resolver/botserver.local

# Reset DNS (Linux)
sudo rm /etc/systemd/resolved.conf.d/botserver.conf
sudo systemctl restart systemd-resolved

# Reset DNS (Windows)
Set-DnsClientServerAddress -InterfaceAlias $interface -ResetServerAddresses

Summary

| Access Method | Configuration | Best For |
|---|---|---|
| IP Address | None | Default - works immediately; scripts, APIs |
| hosts file | Minimal | Browser access with friendly URLs |
| CoreDNS | Low | Development teams, wildcard subdomains |

Default recommendation: Use IP addresses for development. No configuration needed.

First Conversation

After botserver starts, you can immediately start chatting with your bot. No programming required!

Just Start Talking

Open your browser to http://localhost:9000 and start chatting:

You: Hi!
Bot: Hello! How can I help you today?

You: I want to enroll in a course
Bot: I'll help you with enrollment. Let me collect your information...
     [Bot automatically calls enrollment.bas tool]

You: What documents do you have?
Bot: [Searches .gbkb/ folders and answers from your documents]

That’s it! The LLM handles everything automatically.

How It Works

Drop Documents in .gbkb/

mybot.gbai/
  mybot.gbkb/
    policies/
      enrollment-policy.pdf
      course-catalog.pdf
    faqs/
      student-faq.docx
      payment-guide.txt

The bot automatically indexes all documents, creates vector embeddings, searches when users ask questions, and provides accurate answers from your content.

Create Tools as .bas Files

Create enrollment.bas:

PARAM name AS string          LIKE "John Smith"           DESCRIPTION "Student full name"
PARAM email AS string         LIKE "john@example.com"     DESCRIPTION "Contact email"
PARAM course AS string        LIKE "Computer Science"     DESCRIPTION "Course to enroll in"

DESCRIPTION "Enrollment tool - collects student information and processes enrollment"

SAVE "enrollments.csv", name, email, course, NOW()
TALK "Enrollment complete! Welcome to " + course + ", " + name

The LLM automatically discovers this tool, knows when to call it based on user intent, collects required parameters through natural conversation, and executes the tool when all information is gathered.

The LLM Does Everything Else

You don’t need to write IF/THEN logic, intent detection, dialog flows, state machines, or complex routing. The LLM automatically understands user intent, calls appropriate tools, searches knowledge bases, maintains conversation context, and handles follow-up questions naturally.

Real Example Conversation

With enrollment.bas tool and course documents in .gbkb/:

User: Hello, I'd like to know about your computer science program

Bot: I'd be happy to help you learn about our Computer Science program! 
     Based on our course catalog, the program offers a comprehensive 
     4-year curriculum covering software development, algorithms, 
     databases, and AI. Would you like to know about specific courses 
     or the enrollment process?

User: Yes, I want to enroll

Bot: Great! I'll help you enroll in the Computer Science program. 
     Let me collect your information. What's your full name?

User: John Smith

Bot: Thank you, John. What email address should we use for contact?

User: john.smith@gmail.com

Bot: Perfect! I'm enrolling you in Computer Science now...
     [Executes enrollment.bas tool]
     Enrollment complete! Welcome to Computer Science, John Smith!
     You'll receive a confirmation at john.smith@gmail.com.

User: What are the payment options?

Bot: [Searches payment-guide.txt from .gbkb/]
     According to our payment guide, we offer three options:
     1. Full payment with 10% discount
     2. Monthly installments over 12 months
     3. Semester-based payments
     Which option would you prefer?

The Magic Formula

Documents in .gbkb/ combined with tools in .bas files and the LLM create an intelligent bot. No complex programming needed!

Creating Your First Bot

Start by creating the directory structure for your bot with folders for dialog scripts, knowledge base documents, and bot configuration. Add your documents to the .gbkb/ directory including PDFs, Word documents, text files, and Markdown files. Optionally create tools as .bas files to handle specific actions like processing forms or calling APIs. Then restart botserver and start chatting. The LLM will answer questions from your documents, call your tools when appropriate, and handle the entire conversation naturally.

mkdir -p mybot.gbai/mybot.gbdialog
mkdir -p mybot.gbai/mybot.gbkb/docs
mkdir -p mybot.gbai/mybot.gbot

Example tool in mybot.gbdialog/my-tool.bas:

PARAM user_name AS string
PARAM request AS string

DESCRIPTION "Handles user requests"

result = CALL "/api/process", user_name, request
TALK "Done! " + result

No Programming Required

Traditional chatbots require complex logic with IF/THEN statements, intent detection, and multi-step dialog management. With botserver, you simply create the tool with parameters and a description, and the LLM handles all the conversation logic automatically.

Traditional approach (don’t do this):

' Complex multi-step dialog
IF intent = "enrollment" THEN
    TALK "Let me help you enroll. What's your name?"
    HEAR name
    TALK "What's your email?"
    HEAR email
    ' ... lots more code ...
ENDIF

botserver approach (just create the tool):

' In enrollment.bas - becomes a tool automatically
PARAM name AS string
PARAM email AS string
DESCRIPTION "Collects enrollment information"

SAVE "enrollments.csv", name, email
TALK "Successfully enrolled " + name

What Can You Build?

A customer support bot uses product manuals in .gbkb/ and a create-ticket.bas tool, allowing the LLM to answer questions and create support tickets automatically.

An HR assistant combines the employee handbook in .gbkb/ with a leave-request.bas tool so the LLM can explain policies and process leave requests.

An education platform stores course materials in .gbkb/ and provides enrollment.bas and submit-assignment.bas tools, enabling the LLM to teach content and manage student tasks.

A sales assistant uses product catalogs in .gbkb/ with a create-quote.bas tool, allowing the LLM to answer product questions and generate quotes.

Advanced Features

Dynamic Tool Loading

The LLM can load tools based on context. In start.bas, you simply specify which knowledge bases to use, and tools in .gbdialog/ are auto-discovered. The LLM handles the conversation naturally without explicit HEAR statements.
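
A minimal start.bas sketch (the knowledge base names here are illustrative):

' start.bas - runs when a session begins
' Choose which knowledge bases this bot searches (names are examples)
USE KB "policies"
USE KB "products"
' Tools in .gbdialog/ are auto-discovered; no HEAR loop is needed here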

Multi-Language Support

The LLM handles multiple languages automatically. Users can write in Portuguese, Chinese, or any other language, and the bot responds appropriately in the same language.

Context Awareness

The LLM maintains conversation context throughout the interaction. If a user starts to enroll but then asks about prerequisites, the bot handles the tangent and can return to the enrollment process afterward.

Tips for Success

Organize documents clearly by creating folders for policies, products, FAQs, and tutorials within your .gbkb/ directory. This helps the LLM find relevant information quickly.

Name tools descriptively with names like enrollment.bas, create-ticket.bas, and schedule-meeting.bas. The LLM understands what each tool does from its name and description.

Always add descriptions to tools using the DESCRIPTION keyword. A good description like “This tool processes student enrollment for courses” helps the LLM know when to use the tool.

Let the LLM work without trying to control every aspect of the conversation. Allow it to rephrase responses naturally, handle unexpected questions, and maintain conversation flow on its own.

Next Steps

The Quick Start guide walks you through building your first bot. The Packages chapter explains the package structure in detail. The Tool Definition documentation covers creating sophisticated tools. The Knowledge Base chapter describes document management and indexing.

Remember: Just add documents and tools, and the LLM does the rest!

Sessions and Channels

Every conversation has memory. Sessions are the beating heart of botserver because they remember who you are, what you have said, and where you left off. Even if you close your browser and come back tomorrow, your conversation continues right where it paused.

Session Manager

What Is a Session?

A session is a persistent conversation container that tracks everything about an ongoing interaction. This includes who is talking through user identity, what has been said through message history, the current state including variables and context, any active tools and knowledge bases, and the bot configuration in use. Think of it like a phone call that can pause and resume anytime without losing the thread of conversation.

How Sessions Start

UI Interface

When a user opens http://localhost:9000, the browser receives a session ID (a UUID) along with a session token. These are stored in localStorage for persistence across page loads. The session itself is created in PostgreSQL for durability and cached for fast access during active conversations.

API Access

Programmatic access to sessions uses the REST API. A POST request to /api/session returns a session ID and secret token. Subsequent requests include the token in the Authorization header as a Bearer token to maintain the session context.

# Get new session
curl -X POST http://localhost:9000/api/session
# Returns: {"session_id": "uuid-here", "token": "secret-token"}

# Use session
curl -H "Authorization: Bearer secret-token" \
     http://localhost:9000/api/chat

Anonymous vs Authenticated

Sessions come in two flavors depending on user identity. Anonymous sessions are auto-created with temporary identities for users who have not logged in. Authenticated sessions link to a user account and maintain permanent history that persists indefinitely.

Session Lifecycle

Session States

Sessions move through several states during their existence. Active sessions have no timeout while the user is actively chatting. Idle sessions time out after 30 minutes by default, though this is configurable. Expired sessions are removed after 7 days for anonymous users, while authenticated sessions never expire automatically.

What Gets Stored

PostgreSQL (Permanent Storage)

The database stores the authoritative session record. The sessions table tracks the unique ID, optional user reference, which bot is being used, creation timestamp, and last activity time. The messages table stores each message with its session reference, role (user, assistant, or system), content, and timestamp. The session_state table holds variables as JSONB data and tracks the current knowledge base and tool context.

Cache (Fast Access)

The cache layer provides rapid access to active session data. Recent messages, current variables, active knowledge bases and tools, and last activity timestamps are all cached under keys prefixed with the session UUID. This caching ensures responsive conversations without constant database queries.

Session Variables

Variables set in BASIC scripts persist across messages automatically. When you store a variable in one message, you can retrieve it in a later message whether that is minutes or days later.

' First message
name = HEAR
SET user_name = name

' Later message (minutes or days later)
GET user_name
TALK "Welcome back, " + user_name

Storage happens automatically through several layers. Writes go to cache immediately for fast access. Every message triggers persistence to PostgreSQL for durability. If the cache misses, data restores automatically from the database.

Context Management

Each session maintains its own isolated context. When one session loads a knowledge base, other sessions remain unaffected. This isolation ensures users see only the information relevant to their conversation.

' Session A
USE KB "policies"
' Only this session sees policies

' Session B (different user)
USE KB "products"  
' This session only sees products

Session contexts include active knowledge bases, loaded tools, LLM configuration overrides, and custom prompts. All of these are scoped to the individual session and do not leak between users.

Multi-Bot Sessions

Different bots create entirely separate sessions. A user visiting /default gets one session connected to the default bot, while visiting /support creates a different session for the support bot. Each bot session is completely independent with its own conversation history, knowledge bases, configuration, and no data sharing between them.
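
A sketch of what this looks like over the API, assuming bots are addressed by URL path as described above (the exact route shape may differ in your deployment):

# Hypothetical routes - each bot path yields its own independent session
curl -X POST http://localhost:9000/default/api/session
curl -X POST http://localhost:9000/support/api/session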

Session Security

Token Generation

Session tokens use cryptographically secure random generation with 256-bit entropy. Tokens are encoded in URL-safe base64 format and are unique per session. This makes tokens effectively impossible to guess or predict.
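
You can produce a token of the same shape by hand. This shell sketch is illustrative only and is not how botserver issues tokens:

# 32 random bytes (256 bits), base64-encoded, then made URL-safe
openssl rand -base64 32 | tr '+/' '-_' | tr -d '='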

Token Validation

Every request undergoes validation to ensure security. The system verifies that the token exists, has not expired, matches the claimed session, and that the session is still active. Any failure in this chain rejects the request.

Security Features

Multiple security measures protect sessions. Unguessable tokens prevent session hijacking. New tokens for each session prevent session fixation attacks. Automatic cleanup removes old sessions to prevent accumulation. Rate limiting per session prevents abuse.

Debugging Sessions

View Current Session

Within a BASIC script, you can access session information directly.

session_id = GET "session.id"
TALK "Session: " + session_id

Database Inspection

Direct database queries help debug session issues. You can find all active sessions by querying for recent activity, or view message history for a specific session ordered by timestamp.
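
For example, hedged sketches of such queries; the actual table and column names may differ from what is assumed here:

# Sessions active in the last 30 minutes (column names assumed)
psql -c "SELECT id, last_activity FROM sessions WHERE last_activity > now() - interval '30 minutes';"

# Message history for one session, ordered by timestamp (column names assumed)
psql -c "SELECT role, content, created_at FROM messages WHERE session_id = 'uuid-here' ORDER BY created_at;"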

Cache Inspection

The cache contents can be examined using the valkey-cli tool. List all session keys or retrieve specific session data like variables or context directly from the cache.
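
For instance, assuming keys are prefixed with the session UUID as described earlier (the exact key layout may vary):

# List cached keys for one session (key pattern assumed)
valkey-cli KEYS "session-uuid-here:*"

# Retrieve one cached value for that session (key name assumed)
valkey-cli GET "session-uuid-here:variables"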

Session Limits

Default limits control resource usage, though all are configurable. Message history keeps the last 50 messages in context. Variable storage allows up to 1MB per session. File uploads accept up to 10MB per file. Each server handles up to 1000 concurrent sessions. Rate limiting restricts each session to 60 messages per minute.

Advanced Features

Session Persistence

Sessions persist across server restarts through the cache and database layers. When users reconnect after a restart, their session state restores automatically. This happens transparently without any action required from users or bot developers.

Session Context Isolation

Each session maintains its own context for knowledge base and tool usage. When you load a knowledge base or enable a tool, the change affects only the current session. Other users in other sessions remain unaffected by your context changes.

' Each session has isolated context
USE KB "docs"
' Only affects current session

How It Works Automatically

Sessions require zero configuration from bot developers. Creation happens automatically on the first request from any client. Storage to database and cache happens automatically as conversations progress. Cleanup runs automatically after sessions expire. Security through token generation happens automatically without any setup. Multi-channel support through automatic adapter selection means the same session infrastructure works across all platforms.

You never need to manage sessions directly. Just use the conversation keywords like TALK, HEAR, SET, and GET. Everything else happens behind the scenes.

Common Patterns

Welcome Back

Personalize greetings by remembering when users last visited. Store the last visit timestamp and check for it on subsequent sessions to customize the welcome message.

last_visit = GET BOT MEMORY "last_visit_" + session_id
IF last_visit THEN
  TALK "Welcome back! Last seen: " + last_visit
ELSE
  TALK "Welcome! First time here?"
END IF
SET BOT MEMORY "last_visit_" + session_id, NOW()

Progressive Disclosure

Reveal more features as users become more engaged by tracking message count and adjusting guidance accordingly.

msg_count = GET "session.message_count"
IF msg_count < 3 THEN
  TALK "I can help with basic questions"
ELSE IF msg_count < 10 THEN
  TALK "Try our advanced features!"
ELSE
  TALK "You're a power user! Check tools menu"
END IF

Multi-User Support

Each user automatically receives their own isolated session. The system handles user separation without any explicit code required. Simply write your dialog logic and trust that each user’s data remains private to their session.

Troubleshooting

If sessions are not persisting, check that PostgreSQL is running and accessible. Verify that the cache server is reachable. Look for disk space issues that might prevent database writes.

If sessions expire too soon, adjust the timeout setting in config.csv. Check that server clocks are synchronized. Monitor for memory pressure that might cause early cache eviction.
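
As a sketch, assuming config.csv holds name,value pairs, the entry might look like the following; the parameter name here is hypothetical, so check your installation’s configuration reference for the real key:

name,value
session-timeout-minutes,60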

If you cannot resume a session, the token might have become invalid through expiration or corruption. The session could have passed its expiration window. Database connection issues can also prevent session restoration.

Write Once, Run Everywhere

The same BASIC script runs across all channels including the UI interface, mobile apps, WhatsApp, Microsoft Teams, email conversations, and voice assistants. Your investment in dialog development pays off everywhere because each channel adapter handles the platform specifics while you focus on conversation logic.

' This same script works everywhere

TALK "Hello! How can I help?"
answer = HEAR
TALK "I understand you need help with: " + answer

Summary

Sessions and channels work together seamlessly in botserver. Sessions handle state management automatically across any channel, persist data reliably through cache and database layers, and scale efficiently to thousands of concurrent conversations. You focus on writing the conversation flow while the system handles memory management and multi-channel delivery transparently.

Chapter 2: Architecture & Packages

Architecture and deployment reference for developers.

Overview

botserver is built in Rust with a modular architecture. Extend it by creating custom keywords, services, or entire applications.

Architecture

┌─────────────────────────────────────────┐
│              Web Server (Axum)          │
├─────────────────────────────────────────┤
│         BASIC Runtime (Rhai)            │
├──────────┬──────────┬──────────┬────────┤
│   LLM    │ Storage  │  Vector  │ Cache  │
│ Service  │ (MinIO)  │ (Qdrant) │(Valkey)│
├──────────┴──────────┴──────────┴────────┤
│            PostgreSQL                   │
└─────────────────────────────────────────┘

Deployment Options

Method | Use Case            | Guide
-------|---------------------|----------------------
Local  | Development         | Installation
Docker | Production          | Docker Deployment
LXC    | Isolated components | Container Deployment

Module Structure

Module     | Purpose
-----------|--------------------------
web_server | HTTP/WebSocket handling
basic      | BASIC language runtime
llm        | LLM provider integration
drive      | Object storage
shared     | Database models

Creating Custom Keywords

// In src/basic/keywords/my_keyword.rs
pub fn my_keyword(context: &mut EvalContext) -> Result<Dynamic, Box<EvalError>> {
    // Your keyword logic
    Ok(Dynamic::from("result"))
}

Register in keywords/mod.rs and rebuild.
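
The registration itself is ordinary Rust module wiring; a sketch, mirroring the pattern the existing keywords in that file use:

// In src/basic/keywords/mod.rs - illustrative wiring; follow the existing entries
pub mod my_keyword;
pub use my_keyword::my_keyword;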

Autonomous Task AI

General Bots enables autonomous task execution where the machine does the work:

Human describes intent → AI plans → AI generates → AI deploys → AI monitors

Key concepts:

  • Intent Compilation - LLM translates natural language to execution plans
  • CREATE SITE - Generates HTMX apps bound to botserver API
  • .gbdrive - Cloud-synced workspace for all task files
  • Autonomous Execution - System runs plans with approval gates

See Autonomous Task AI for complete documentation.

Architecture Overview

botserver follows a modular architecture designed for scalability, maintainability, and extensibility. Each module handles specific responsibilities and communicates through well-defined interfaces. This chapter provides a comprehensive tour of the system architecture and how components work together.

Core Architecture

The architecture diagrams below illustrate the major components and their relationships.

Data Flow Architecture

[Diagram: botserver Data Flow Architecture]

System Architecture

[Diagram: botserver System Architecture]

Module Dependency Graph

[Diagram: Module Dependency Graph]

Module Organization

The codebase is organized into modules that group related functionality together. Each module has clear responsibilities and well-defined interfaces with other modules.

Data Flow Through Modules

[Diagram: Data Flow Through Modules]

Core Modules

The auth/ module handles authentication and authorization throughout the system. It manages user accounts and group memberships, implements role-based access control (RBAC), handles JWT token generation and validation, provides OAuth integration for external identity providers, and supports two-factor authentication for enhanced security.

The automation/ module provides the workflow automation engine. It handles process automation for complex multi-step operations, manages scheduled tasks that run at specified intervals, enables event-driven automation that responds to system events, orchestrates workflows across multiple services, and integrates with external systems for extended capabilities.

The basic/ module implements the BASIC dialect interpreter and runtime environment. It provides keyword implementations for all BASIC commands, handles script compilation from source to executable form, manages variables and their scopes, implements flow control structures like loops and conditionals, integrates with external tools the LLM can invoke, and provides comprehensive error handling with helpful messages.

The bootstrap/ module handles system initialization and startup procedures. It verifies all required components are available, sequences service startup in the correct order, runs database migrations to update schema, deploys default templates for new installations, performs health checks to ensure system readiness, and loads configuration from files and environment variables.

The bot/ module manages bot instances and their interactions. It handles the bot lifecycle including creation, mounting, and unmounting. It processes conversations between users and bots, handles user input and routes it appropriately, coordinates response generation from various sources, manages multi-bot deployments on a single server, and ensures session isolation between different users and bots.

Communication Modules

The channels/ module provides multi-channel messaging adapters that allow bots to communicate across different platforms. Supported channels include the web interface for browser-based chat, WhatsApp Business API for messaging app integration, Microsoft Teams for enterprise collaboration, Slack for team communication, Instagram for social media engagement, SMS for text messaging, and voice for telephone interactions.

The meet/ module enables real-time communication features. It provides video conferencing capabilities for face-to-face meetings, voice calling for audio-only communication, screen sharing for presentations and collaboration, recording functionality for meeting archives, transcription services for accessibility, and meeting scheduling integration with calendars.

The web_server/ module implements the HTTP server and web interface. It serves static files for the UI, handles WebSocket connections for real-time chat, routes REST API requests to appropriate handlers, manages CORS policies for browser security, and processes requests and responses throughout the system.

AI and Knowledge Modules

The llm/ module provides large language model integration. It handles model selection based on configuration and requirements, formats prompts according to model expectations, manages token counting and context limits, streams responses for real-time display, tracks API costs for budgeting, and implements model fallbacks when primary providers are unavailable.

The llm_models/ module contains specific implementations for different model providers. OpenAI integration supports GPT-5 and o3 models. Anthropic integration provides access to Claude Sonnet 4.5 and Opus 4.5 models. Google integration enables Gemini model usage. Meta integration supports Llama models for local deployment. Local model support allows self-hosted inference. Custom model implementations can be added for specialized providers.

The prompt_manager/ module provides centralized prompt management capabilities. It maintains prompt templates for consistent interactions, handles variable substitution in prompts, optimizes prompts for specific models, supports version control of prompt changes, enables A/B testing of different approaches, and tracks prompt performance metrics.

The context/ module manages conversation context throughout interactions. It optimizes the context window to fit within model limits, manages conversation history retention, compresses context when necessary to preserve information, filters context for relevance to current queries, and tracks multi-turn conversations across messages.

Storage and Data Modules

The drive/ module handles file and document management. It supports file upload and download operations, processes documents for indexing and search, maintains version control of files, manages sharing permissions between users, enforces quota limits on storage usage, and indexes content for search functionality.

The drive_monitor/ module provides storage monitoring and synchronization. It detects changes to files for automatic processing, synchronizes content across storage locations, resolves conflicts when multiple changes occur, manages backups of important data, and provides analytics on storage usage patterns.

The package_manager/ module handles bot package management. It loads packages from storage into the runtime, resolves dependencies between packages, manages package versions and updates, supports hot reload of changed packages without restart, and validates packages before deployment.

Processing Modules

The engines/ module contains various processing engines for different tasks. The rule engine evaluates business rules and conditions. The workflow engine orchestrates complex processes. The event processor handles system and external events. The message queue manages asynchronous communication. The job scheduler executes background tasks.

The calendar_engine/ module provides calendar and scheduling functionality. It manages events and appointments, checks availability for scheduling, coordinates meetings between participants, sends reminders for upcoming events, and handles timezone conversions correctly.

The task_engine/ module implements the task management system. It creates tasks from user requests or automation, assigns tasks to appropriate parties, tracks task status through completion, manages dependencies between tasks, and sends notifications about task updates.

The email/ module provides email integration capabilities. It sends email via SMTP protocols, receives email via IMAP connections, manages email templates for consistent formatting, tracks email delivery and opens, and handles bounced emails appropriately.

Utility Modules

The session/ module manages user sessions throughout their interactions. It creates sessions for new users, persists session state to storage, enforces session timeouts for security, handles concurrent sessions from the same user, and recovers sessions after server restarts.

The config/ module handles configuration management. It loads configuration from files and databases, reads environment variables for deployment settings, supports hot reload of configuration changes, validates configuration values, and provides sensible defaults for optional settings.

The shared/ module contains shared utilities and models used across the system. It defines database models for persistence, provides common types used throughout the codebase, implements helper functions for repeated tasks, centralizes constants and magic values, and defines error types for consistent error handling.

The compliance/ module implements regulatory compliance features. It ensures GDPR compliance for data protection, enforces data retention policies, maintains comprehensive audit logging, provides privacy controls for sensitive data, and manages user consent records.

The nvidia/ module provides GPU acceleration support for local model inference. It integrates with CUDA for GPU computation, runs model inference on GPU hardware, batches requests for efficient processing, and optimizes performance for available hardware.

The ui_tree/ module manages UI component trees for the interface. It maintains a virtual DOM for efficient updates, manages component lifecycles, handles state across components, processes events from user interactions, and optimizes rendering performance.

The web_automation/ module provides web scraping and automation capabilities. It automates browser interactions for data gathering, extracts content from web pages, fills forms programmatically, captures screenshots for documentation, and monitors pages for changes.

Data Flow

Request Processing Pipeline

When a user sends a message, it flows through several processing stages:

  1. The Channel Adapter receives the user input from the appropriate platform.
  2. The Session Manager identifies the existing session or creates a new one.
  3. The Context Manager loads conversation history and relevant context.
  4. The BASIC Interpreter executes the dialog script that handles the message.
  5. If needed, LLM Integration processes natural language to understand intent.
  6. The Knowledge Base provides relevant information from loaded documents.
  7. The Response Generator formats the output for the user.
  8. The Channel Adapter delivers the response back through the original platform.

Storage Architecture

The primary database uses PostgreSQL to store structured data including user accounts, bot configurations, session data, conversation history, and system metadata. The Diesel ORM provides type-safe database access.

Object storage using Drive provides S3-compatible storage for files including user uploads, processed documents, media files, system backups, and application logs.

The cache layer provides fast access to frequently needed data. It stores session information for quick retrieval, caches commonly accessed data, implements rate limiting counters, holds temporary processing data, and supports pub/sub messaging between components.

The vector database uses Qdrant to store document embeddings for semantic search. It maintains the semantic search index, stores knowledge base vectors, and performs similarity matching for relevant content retrieval.

Security Architecture

Authentication Flow

The authentication process follows a secure sequence. Users provide credentials through the login interface. The auth module validates credentials against stored records. Upon successful validation, a JWT token is issued. Each subsequent request includes this token for verification. A session is established to maintain state. Permissions are checked before any operation is performed.

Data Protection

Data protection operates at multiple layers. Encryption at rest protects data stored in the database and files. Encryption in transit using TLS/SSL protects data during transmission. Sensitive data masking prevents exposure in logs and displays. PII detection identifies and protects personal information. Secure key management protects cryptographic keys from exposure.

Access Control

Access control mechanisms ensure appropriate authorization. Role-based permissions determine what actions users can perform. Resource-level authorization controls access to specific objects. API rate limiting prevents abuse and ensures fair usage. IP allowlisting restricts access to known addresses when configured. Comprehensive audit logging records all significant actions.

Deployment Architecture

Container Structure

Production deployments typically use containers for isolation and portability. The main application container runs the botserver binary. PostgreSQL runs in a separate database container. Drive storage uses an S3-compatible container like MinIO. The cache layer uses Valkey in its own container. Qdrant provides vector database functionality in another container. Nginx serves as a reverse proxy for external traffic.

Scaling Strategy

The system scales to handle increased load through several mechanisms. Horizontal scaling adds more web server instances behind a load balancer. Read replicas for the database handle query load. Distributed cache spreads session data across nodes. Load balancing distributes requests across available instances. Auto-scaling policies adjust capacity based on demand.

High Availability

High availability configurations ensure continuous operation. Multi-zone deployment protects against facility failures. Database replication maintains copies of data. Storage redundancy prevents data loss. Health monitoring detects problems quickly. Automatic failover redirects traffic when components fail.

Performance Optimization

Caching Strategy

Caching improves response times throughout the system. Response caching stores generated responses for reuse. Query result caching avoids repeated database queries. Static asset caching serves files directly from cache. API response caching stores external API results. Knowledge base caching keeps frequently accessed content in memory.

Async Processing

Asynchronous processing improves throughput and responsiveness. Background jobs handle long-running tasks without blocking. Message queues decouple producers from consumers. Event-driven architecture responds to changes efficiently. Non-blocking I/O maximizes resource utilization. Worker pools distribute processing across threads.

Resource Management

Careful resource management ensures efficient operation. Connection pooling reuses database connections. Memory management prevents leaks and excessive usage. Token optimization minimizes LLM API costs. Query optimization reduces database load. Lazy loading defers work until necessary.

Monitoring and Observability

Metrics Collection

Comprehensive metrics provide visibility into system behavior. System metrics track CPU, memory, and disk usage. Application metrics measure request rates and latencies. Business metrics track user engagement and outcomes. User analytics show usage patterns. Performance tracking identifies bottlenecks.

Logging

Structured logging supports debugging and analysis. All logs use consistent structured formats. Log aggregation collects logs from all components. Error tracking captures and groups exceptions. Audit trails record security-relevant events. Debug logging provides detailed information when needed.

Health Checks

Health checks ensure system availability and readiness. Liveness probes confirm the application is running. Readiness probes verify the application can serve requests. Dependency checks validate external services are available. Performance monitoring tracks response times. The alert system notifies operators of problems.

Extension Points

Plugin System

The system provides extension points for customization. Custom keywords extend the BASIC language with new capabilities. External tools integrate third-party services. API integrations connect to external systems. Custom channels add support for new platforms. Model providers integrate additional LLM services.

Webhook Support

Webhooks enable event-driven integrations. Incoming webhooks accept notifications from external systems. Outgoing webhooks notify external systems of events. Event subscriptions define what events trigger webhooks. Callback handling processes webhook responses. Retry mechanisms ensure delivery despite transient failures.

API Integration

Multiple API protocols support different integration needs. The REST API provides standard HTTP access. GraphQL support is planned for flexible queries. WebSocket connections enable real-time bidirectional communication. gRPC support is planned for high-performance integrations. OpenAPI specifications document all endpoints.

Development Workflow

Local Development

Setting up a local development environment follows a straightforward process. First, clone the repository to your machine. Install required dependencies using Cargo and system packages. Configure environment variables for local services. Run database migrations to set up the schema. Start the required services like PostgreSQL and cache. Load default templates for testing.
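
Condensed into commands, a sketch of that flow (see Building from Source for the full prerequisites):

# Minimal local setup sketch - assumes Rust and system dependencies are installed
git clone --recursive https://github.com/GeneralBots/gb.git
cd gb
cargo build --release --no-default-features
./target/release/botserver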

Testing Strategy

Testing ensures code quality at multiple levels. Unit tests verify individual functions and methods. Integration tests check interactions between components. End-to-end tests validate complete user workflows. Load testing measures performance under stress. Security testing identifies vulnerabilities.

CI/CD Pipeline

Continuous integration and deployment automates quality assurance. Automated testing runs on every commit. Code quality checks enforce standards. Security scanning identifies known vulnerabilities. The build process produces deployable artifacts. Deployment automation pushes releases to environments.

Future Architecture Plans

Planned Enhancements

Future development will expand system capabilities. Microservices migration will enable independent scaling of components. Kubernetes native deployment will simplify orchestration. Multi-region support will improve global performance. Edge deployment will reduce latency for distributed users. Serverless functions will enable elastic scaling for specific workloads.

Performance Goals

Performance targets guide optimization efforts. Response times should be sub-100ms for typical requests. The system should support 10,000 or more concurrent users. Uptime should reach 99.99% for production deployments. Elastic scaling should handle traffic spikes automatically. Global CDN integration should improve worldwide access times.

Module Structure

botserver is a single Rust crate (not a workspace) with multiple modules. The application is defined in Cargo.toml as the botserver crate, version 6.0.8.

Main Entry Points

The primary entry point is src/main.rs, which starts the Axum web server and initializes all components. The public library interface in src/lib.rs exports all major modules for external use.

Core Modules

The following modules are exported in src/lib.rs and comprise the core functionality:

User & Bot Management

The auth module handles user authentication, password hashing using Argon2, and session token management. The bot module manages bot lifecycle, configuration, and runtime operations. The session module provides user session handling and state management across conversations.

Conversation & Scripting

The basic module implements the BASIC-like scripting language interpreter for .gbdialog files. The context module manages conversation context and memory throughout user interactions. The channels module provides multi-channel support for web, voice, and various messaging platforms.

Knowledge & AI

The llm module provides LLM provider integration for OpenAI and local models. The llm_models module contains model-specific implementations and configurations. The nvidia module offers NVIDIA GPU acceleration support for local inference.

Infrastructure

The bootstrap module handles system initialization and the auto-bootstrap process. The package_manager module manages component installation and lifecycle. The config module provides application configuration and environment management. The shared module contains shared utilities, database models, and common types used throughout the codebase. The web_server module implements the Axum-based HTTP server and API endpoints.

Features & Integration

The automation module provides scheduled tasks and event-driven triggers. The drive_monitor module handles file system monitoring and change detection. The email module provides email integration via IMAP and SMTP as a conditional feature. The file module handles file processing and operations. The meet module integrates video meeting functionality through LiveKit.

Testing & Development

The tests module contains test utilities and test suites for validating functionality across the codebase.

Internal Modules

Several directories exist in src/ that are either internal implementations or not fully integrated into the public API.

The api/ directory contains the api/drive subdirectory with drive-related API code. The drive/ directory provides drive (S3-compatible) integration and vector database functionality through vectordb.rs. The ui/ directory contains UI-related modules including drive.rs, stream.rs, sync.rs, and local-sync.rs. The ui_tree/ directory provides UI tree structure functionality used in main.rs but not exported in lib.rs. The prompt_manager/ directory stores the prompt library and is not a Rust module but contains prompts.csv. The riot_compiler/ directory contains a Riot.js component compiler that exists but is currently unused. The web_automation/ directory is an empty placeholder for future functionality.

Dependency Management

All dependencies are managed through a single Cargo.toml at the project root.

The web framework layer uses axum, tower, and tower-http for HTTP handling. The async runtime is tokio for concurrent operations. Database access uses diesel for PostgreSQL and redis for cache component connectivity. AI and ML functionality relies on qdrant-client for vector database operations as an optional feature. Storage operations use aws-sdk-s3 for drive and S3-compatible storage backends. Scripting uses rhai as the BASIC-like language runtime. Security features include argon2 for password hashing and aes-gcm for encryption. Desktop support uses tauri as an optional feature.

Feature Flags

The crate supports optional features for customizing builds:

[features]
default = ["desktop"]
vectordb = ["qdrant-client"]
email = ["imap"]
desktop = ["dep:tauri", "dep:tauri-plugin-dialog", "dep:tauri-plugin-opener"]

Building

To build the project with different configurations:

# Standard build
cargo build --release

# Build without desktop features
cargo build --release --no-default-features

# Build with vector database support
cargo build --release --features vectordb

# Build with all features
cargo build --release --all-features

Module Organization Pattern

Most modules follow a consistent structure with a mod.rs file containing the main module implementation and a module_name.test.rs file for module-specific tests. Some modules have additional submodules or specialized files such as drive/vectordb.rs and ui/drive.rs for feature-specific functionality.
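
For example, a typical module layout, using session as an illustration:

src/
  session/
    mod.rs            # main module implementation
    session.test.rs   # module-specific tests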

Service Layer

botserver’s service layer is organized into functional modules that handle specific aspects of the platform. Each module encapsulates related functionality and provides a clear API for interaction with other parts of the system. This chapter describes each service module and its responsibilities within the overall architecture.

Core Service Modules

Authentication and Security

The auth module provides secure user authentication and session management throughout the platform. Password hashing uses the Argon2 algorithm for secure password storage that resists both CPU and GPU-based attacks. Session token generation creates and validates unique tokens for maintaining authenticated state. User verification authenticates users against the database using stored credentials. Bot authentication manages bot-level authentication for API access, allowing bots to make authenticated requests to external services.

The module’s key responsibilities include hashing passwords with Argon2 before storage, generating cryptographically secure session tokens, validating user credentials during login, and managing the complete session lifecycle from creation through expiration.

Bot Management

The bot module handles bot lifecycle and configuration throughout the system. Bot creation initializes new bot instances with their required components. Configuration management loads and applies bot settings from config.csv files. Bot state tracking monitors bot status and health for operational awareness. Multi-tenant support isolates bots by tenant to prevent data leakage between organizations.

This module creates and deletes bot instances, loads bot configuration from the database, manages bot lifecycle including start, stop, and restart operations, and associates bots with users and sessions for proper isolation.

Session Management

The session module maintains user conversation state across interactions. Session storage persists conversation context to both cache and database. State management tracks user progress through dialogs and remembers variable values. Session cleanup removes expired sessions to free resources. Multi-user support isolates sessions by user to ensure privacy.

The module creates new sessions when users connect, stores and retrieves session variables, maintains conversation history for context, and cleans up abandoned sessions after timeout periods.

Conversation and Scripting Services

BASIC Interpreter

The basic module implements the BASIC-like scripting language for .gbdialog files. Script parsing reads BASIC dialog scripts and converts them to executable form. The execution engine powered by the Rhai scripting engine runs the parsed scripts. Keyword implementation provides custom keywords like TALK, HEAR, and LLM for bot functionality. Variable management handles script variables and maintains execution context across statements.

This module loads and parses .gbdialog scripts from bot packages, executes BASIC commands in sequence, provides custom keywords that extend the language for bot functionality, and manages script execution context including variables and flow control state.

Context Management

The context module manages conversation context and memory for LLM interactions. Conversation history storage maintains the message history for each session. Context retrieval loads relevant context for LLM calls based on the current query. Memory management limits context size to fit within model token limits. Context compaction summarizes old conversations to preserve meaning while reducing tokens.

The module appends messages to conversation history as they occur, retrieves appropriate context for LLM queries, implements context window management to stay within limits, and provides context to knowledge base queries for improved relevance.

Channel Abstraction

The channels module provides a unified interface for multiple communication channels. The web interface enables browser-based chat through the default UI. WebSocket support provides real-time bidirectional communication for responsive interactions. Voice integration handles audio input and output for voice-enabled bots. Platform adapters provide an extensible channel system for adding new platforms.

This module abstracts channel-specific implementations behind a common interface, routes messages to appropriate handlers based on channel type, formats responses appropriately for specific channels, and handles channel-specific features like typing indicators and read receipts.

AI and Knowledge Services

LLM Integration

The llm module integrates with large language models for natural language understanding and generation. Provider abstraction supports multiple LLM providers through a common interface. API communication handles API calls to LLM services including authentication and rate limiting. Streaming responses support token streaming for real-time response display. Error handling provides graceful degradation when API calls fail.

The module sends prompts to LLM providers using appropriate formats, parses and streams responses back to callers, handles API authentication and key management, and manages rate limiting with automatic retries when necessary.

LLM Models

The llm_models module contains model-specific implementations for different providers. Model configurations define parameters and capabilities for different models. Prompt templates handle model-specific prompt formatting requirements. Token counting estimates token usage before making API calls. Model selection chooses the appropriate model for each task based on requirements.

This module defines model capabilities and limits for each supported model, formats prompts according to each model’s expectations, calculates token costs for usage tracking, and selects optimal models for specific query types.

NVIDIA Integration

The nvidia module provides GPU acceleration support for local model inference. GPU detection identifies available NVIDIA GPUs in the system. Acceleration enables GPU-accelerated inference for local models. Resource management allocates GPU resources among concurrent requests.

Infrastructure Services

Bootstrap

The bootstrap module handles system initialization and first-time setup. Component installation downloads and installs required components including PostgreSQL, cache, and drive storage. Database setup creates schemas and applies migrations to prepare the database. Credential generation creates secure passwords for all services. Environment configuration writes .env files with generated settings. Template upload deploys bot templates to storage for immediate use.

The module detects installation mode to determine whether it is running locally or in containers, installs and starts all system components in the correct order, initializes the database with migrations and seed data, configures drive storage with appropriate buckets, and creates default bots from included templates.

Package Manager

The package_manager module manages component installation and lifecycle. The component registry tracks available components and their versions. Installation downloads and installs components from configured sources. Lifecycle management starts, stops, and restarts components as needed. Dependency resolution ensures components start in the correct order based on their dependencies.

Managed components include tables for PostgreSQL database, cache for Valkey caching, drive for S3-compatible object storage, llm for local LLM server, email for email server integration, proxy for reverse proxy functionality, directory for LDAP directory services, alm for application lifecycle management, dns for DNS server operations, meeting for LiveKit video conferencing, and vector_db for Qdrant vector database functionality.

Configuration

The config module loads and validates application configuration. Environment variables load from .env files and system environment. Validation ensures all required configuration is present before startup. Defaults provide sensible values for optional settings. Type safety parses configuration into strongly-typed structs for compile-time checking.

The module loads DATABASE_URL, DRIVE_SERVER, API keys, and other settings, validates configuration completeness at startup, provides configuration access to other modules through a shared struct, and handles configuration errors with helpful messages.
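
A hedged .env sketch; the values below are placeholders for illustration only, since bootstrap generates real credentials automatically:

# Placeholder values - bootstrap writes the real ones
DATABASE_URL=postgres://botserver:change-me@localhost:5432/botserver
DRIVE_SERVER=http://localhost:9001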

Shared Utilities

The shared module contains common functionality used across the system. Database models define the Diesel schema and models for all tables. Connection pooling manages R2D2 connection pools for efficient database access. Utilities provide common helper functions for repeated tasks. Types define shared type definitions used throughout the codebase.

This module defines the database schema with Diesel macros, provides database connection helpers for consistent access patterns, implements common utility functions for string manipulation and data transformation, and shares types across modules to ensure consistency.

Web Server

The web_server module implements the HTTP API using Axum. API routes define RESTful endpoints for bot interaction and management. The WebSocket handler manages real-time communication channels. Static files serve web UI assets for the browser interface. CORS configuration enables cross-origin resource sharing for embedded deployments. Middleware handles logging, authentication, and error handling for all requests.

This module defines API routes and their handlers, processes HTTP requests and generates responses, manages WebSocket connections for real-time chat, and serves static web interface files for the UI.

Feature Services

Automation

The automation module provides scheduled and event-driven task execution. Cron scheduling runs tasks on defined schedules using standard cron syntax. Event triggers react to system events by executing associated handlers. Background jobs execute long-running tasks without blocking the main thread. Job management tracks running jobs and allows cancellation when needed.
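
Standard cron syntax encodes five time fields, for example:

# minute hour day-of-month month day-of-week
# "0 9 * * 1-5"  - every weekday at 09:00
# "*/15 * * * *" - every 15 minutes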

Drive Monitor

The drive_monitor module watches for file system changes in bot packages. File watching detects file creation, modification, and deletion events. Event processing handles file change events by triggering appropriate actions. Automatic indexing adds new documents to the knowledge base when they appear in monitored directories.

Email Integration

The email module handles email communication as an optional feature. IMAP support reads emails from configured inbox folders. SMTP support sends emails via the Lettre library. Email parsing extracts text content and attachments from received messages. Template rendering generates HTML emails from templates with variable substitution.

File Handling

The file module processes various file types for knowledge base ingestion. PDF extraction pulls text from PDF documents using pdf-extract. Document parsing handles various document formats including Word and plain text. File upload processes multipart file uploads from users. Storage integration saves processed files to drive storage for persistence.

Meeting Integration

The meet module integrates with LiveKit for video conferencing capabilities. Room creation establishes meeting rooms with appropriate settings. Token generation creates access tokens for meeting participants. Participant management tracks who is in each meeting. Recording captures meeting sessions for later review.

Storage Services

Drive

The drive module provides S3-compatible object storage integration. Drive integration uses the AWS SDK S3 client for compatibility with various providers. Bucket management creates and manages storage buckets for different bots. Object operations handle upload, download, and delete operations for files. Vector database integration connects to Qdrant for semantic search functionality.

UI Components

The ui module contains UI-related functionality for the web interface. Drive UI provides a file browser interface for managing documents. Stream handling implements server-sent events for real-time updates. Sync logic manages synchronization between local and remote files. Local sync enables desktop app file synchronization for offline access.

Testing

The tests module provides test utilities and integration tests for the platform. Test fixtures provide common test data and setup procedures. Integration tests validate end-to-end functionality across modules. Mock services substitute for external dependencies during testing. Test helpers provide utilities for writing consistent, readable tests.

Service Interaction Patterns

Layered Architecture

Services are organized into layers with clear dependencies. The infrastructure layer contains bootstrap, package_manager, config, shared, and web_server modules that provide foundational capabilities. The data layer contains drive, file, and session modules that handle persistence. The domain layer contains bot, auth, context, and basic modules that implement core business logic. The AI layer contains llm, llm_models, and nvidia modules for machine learning integration. The feature layer contains automation, email, meet, and drive_monitor modules that add optional capabilities. The presentation layer contains channels and ui modules that handle user interaction.

Dependency Injection

Services use Rust’s module system and trait-based design for dependency injection. Database connections are shared via connection pools managed by R2D2. Configuration is passed through the AppConfig struct which is initialized at startup and shared immutably. Services access their dependencies through function parameters rather than global state.

Error Handling

All services use anyhow::Result<T> for error handling, allowing errors to propagate up the call stack with context. Each layer adds relevant context to errors before propagating them. Critical services log errors using the log crate with appropriate severity levels. User-facing errors are translated to helpful messages without exposing internal details.
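
A self-contained sketch of that pattern; the types and function names here are invented for illustration:

// Illustrative only - not botserver's actual code
use anyhow::{Context, Result};

struct Session;

fn fetch_from_db(_id: &str) -> Result<Session> {
    // stand-in for a real Diesel query
    Ok(Session)
}

fn load_session(id: &str) -> Result<Session> {
    // each layer adds context before propagating the error
    fetch_from_db(id).with_context(|| format!("while loading session {id}"))
}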

Async Operations

Most services are async and use Tokio as the runtime. This design allows concurrent handling of multiple user sessions without blocking. External API calls run concurrently to minimize latency. Background tasks use Tokio’s task spawning for parallel execution. The async design enables efficient resource utilization even under high load.

Building from Source

This guide covers building botserver from source, including dependencies, feature flags, and platform-specific considerations.

Quick Start

For server deployment (no desktop GUI):

cargo build --release --no-default-features

For desktop application development (requires GTK libraries):

sudo apt install -y \
    libglib2.0-dev \
    libgtk-3-dev \
    libgdk-pixbuf-2.0-dev \
    libcairo2-dev \
    libpango1.0-dev \
    libatk1.0-dev \
    libxkbcommon-dev \
    libxcb-render0-dev \
    libxcb-shape0-dev \
    libxcb-xfixes0-dev
cargo build --release

Prerequisites

System Requirements

  • Operating System: Linux, macOS, or Windows
  • Rust: 1.90 or later (2021 edition)
  • Memory: 4GB RAM minimum (8GB recommended)
  • Disk Space: 8GB for development environment

Install Git

Git is required to clone the repository and manage submodules.

Linux

sudo apt install git

macOS

brew install git

Windows

Download and install from: https://git-scm.com/download/win

Or use winget:

winget install Git.Git

Install Rust

If you don’t have Rust installed:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env

Verify installation:

rustc --version
cargo --version

System Dependencies

Linux (Ubuntu/Debian)

Base dependencies (required for all builds):

sudo apt update
sudo apt install -y \
    clang \
    lld \
    build-essential \
    pkg-config \
    libssl-dev \
    libpq-dev \
    cmake \
    git

Desktop GUI dependencies (required for Tauri/desktop builds):

sudo apt install -y \
    libglib2.0-dev \
    libgtk-3-dev \
    libwebkit2gtk-4.1-dev \
    libjavascriptcoregtk-4.1-dev \
    libayatana-appindicator3-dev \
    librsvg2-dev \
    libsoup-3.0-dev

Note: The webkit2gtk library must be version 4.1, not 4.0. Using the wrong version causes build failures with: error: failed to run custom build command for webkit2gtk-sys v2.0.2

Note: Desktop GUI dependencies are only needed if building with --features desktop. For minimal builds without desktop GUI, these libraries are not required.

Configure Rust to use clang as the linker:

mkdir -p ~/.cargo
cat >> ~/.cargo/config.toml << EOF
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
EOF

Linux (Fedora/RHEL)

Base dependencies (required for all builds):

sudo dnf install -y \
    clang \
    lld \
    gcc \
    gcc-c++ \
    make \
    pkg-config \
    openssl-devel \
    postgresql-devel \
    cmake \
    git

Desktop GUI dependencies (required for Tauri/desktop builds):

sudo dnf install -y \
    glib2-devel \
    gobject-introspection-devel \
    gtk3-devel \
    webkit2gtk3-devel \
    javascriptcore-gtk-devel \
    libappindicator-gtk3-devel \
    librsvg2-devel \
    libsoup3-devel

Note: Desktop GUI dependencies are only needed if building with --features desktop. For minimal builds without desktop GUI, these libraries are not required.

Configure Rust to use clang as the linker:

mkdir -p ~/.cargo
cat >> ~/.cargo/config.toml << EOF
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
EOF

macOS

brew install postgresql openssl cmake git
xcode-select --install

Windows

Install Visual Studio Build Tools with C++ support from: https://visualstudio.microsoft.com/downloads/

Select “Desktop development with C++” workload during installation.

Then install PostgreSQL manually from: https://www.postgresql.org/download/windows/

Clone Repository

git clone --recursive https://github.com/GeneralBots/gb.git
cd gb

If you cloned without --recursive, initialize submodules:

git submodule update --init --recursive

Build Cache with sccache

sccache caches compilation artifacts for faster rebuilds.

Install and configure:

cargo install sccache
mkdir -p ~/.cargo
echo '[build]
rustc-wrapper = "sccache"' >> ~/.cargo/config.toml
export RUSTC_WRAPPER=sccache

Verify cache hits:

sccache --show-stats

Reset the statistics counters if needed:

sccache --zero-stats

Build Configurations

Minimal Build (No Desktop GUI Dependencies)

Build without desktop GUI dependencies:

cargo build --release --no-default-features

This excludes:

  • Desktop GUI (Tauri) - No GTK libraries required
  • Vector database (Qdrant)
  • Email integration (IMAP)

Use this for: Server deployments, Docker containers, CI/CD, or when GTK libraries are not available.

Standard Build (Requires Desktop GUI Dependencies)

Build with default features (includes desktop support):

cargo build --release

Requires: GTK development libraries (see “Desktop GUI dependencies” in prerequisites).

Use this for: Desktop application development or when you need native GUI features.

Feature-Specific Builds

With Vector Database

Enable Qdrant vector database support:

cargo build --release --features vectordb

With Email Support

Enable IMAP email integration:

cargo build --release --features email

Desktop Application

Build as desktop app with Tauri (default):

cargo build --release --features desktop

All Features

Build with all optional features:

cargo build --release --all-features

Feature Flags

botserver supports the following features defined in Cargo.toml:

[features]
default = ["desktop"]
vectordb = ["qdrant-client"]
email = ["imap"]
desktop = ["dep:tauri", "dep:tauri-plugin-dialog", "dep:tauri-plugin-opener"]

Feature Details

| Feature | Dependencies | Purpose |
|---|---|---|
| desktop | tauri, tauri-plugin-dialog, tauri-plugin-opener | Native desktop application with system integration |
| vectordb | qdrant-client | Semantic search with Qdrant vector database |
| email | imap | IMAP email integration for reading emails |
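In code, these flags gate functionality at compile time with cfg attributes. A generic sketch, not taken from the botserver source:

// Compiled only when built with --features vectordb.
#[cfg(feature = "vectordb")]
pub fn search_backend() -> &'static str {
    "qdrant"
}

// Fallback used when the feature is disabled.
#[cfg(not(feature = "vectordb"))]
pub fn search_backend() -> &'static str {
    "keyword-fallback"
}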

Build Profiles

Debug Build

For development with debug symbols and no optimizations:

cargo build

Binary location: target/debug/botserver

Release Build

Optimized for production with LTO and size optimization:

cargo build --release

Binary location: target/release/botserver

The release profile in Cargo.toml uses aggressive optimization:

[profile.release]
lto = true              # Link-time optimization
opt-level = "z"         # Optimize for size
strip = true            # Strip symbols
panic = "abort"         # Abort on panic (smaller binary)
codegen-units = 1       # Better optimization (slower build)

Platform-Specific Builds

Linux

Standard build works on most distributions:

cargo build --release

For static linking (portable binary):

RUSTFLAGS='-C target-feature=+crt-static' cargo build --release --target x86_64-unknown-linux-gnu

macOS

Build for current architecture:

cargo build --release

Build universal binary (Intel + Apple Silicon):

rustup target add x86_64-apple-darwin aarch64-apple-darwin
cargo build --release --target x86_64-apple-darwin
cargo build --release --target aarch64-apple-darwin
lipo -create \
    target/x86_64-apple-darwin/release/botserver \
    target/aarch64-apple-darwin/release/botserver \
    -output botserver-universal

Windows

Build with MSVC toolchain:

cargo build --release

Binary location: target\release\botserver.exe

Cross-Compilation

Install Cross-Compilation Tools

cargo install cross

Build for Linux from macOS/Windows

cross build --release --target x86_64-unknown-linux-gnu

Build for Windows from Linux/macOS

cross build --release --target x86_64-pc-windows-gnu

Troubleshooting

Linker Errors (Linux)

Error: linker 'clang' not found

This occurs when clang/lld is not installed:

sudo apt install clang lld

Then configure Rust to use clang:

mkdir -p ~/.cargo
cat >> ~/.cargo/config.toml << EOF
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]
EOF

OpenSSL Errors

If you encounter OpenSSL linking errors:

Linux:

sudo apt install libssl-dev

macOS:

export OPENSSL_DIR=$(brew --prefix openssl)
cargo build --release

Windows:

# Use vcpkg
vcpkg install openssl:x64-windows
$env:OPENSSL_DIR="C:\vcpkg\installed\x64-windows"
cargo build --release

libsoup-3.0 Dependency Warning

Warning: pkg-config exited with status code 1 when searching for libsoup-3.0

This occurs when the libsoup-3.0 development library is missing; it is required by a transitive dependency of the desktop features.

Solution (Linux):

sudo apt install libsoup-3.0-dev

Solution (Fedora/RHEL):

sudo dnf install libsoup3-devel

Solution (macOS):

brew install libsoup
export PKG_CONFIG_PATH=$(brew --prefix libsoup)/lib/pkgconfig:$PKG_CONFIG_PATH

Note: This library is only needed for desktop/Tauri builds with GTK dependencies. If building with --no-default-features for server-only deployments, this library is not required and the warning can be safely ignored.

PostgreSQL Library Errors

If libpq is not found:

Linux:

sudo apt install libpq-dev

macOS:

brew install postgresql
export PQ_LIB_DIR=$(brew --prefix postgresql)/lib

Windows:

# Ensure PostgreSQL is in PATH
$env:PQ_LIB_DIR="C:\Program Files\PostgreSQL\15\lib"

Out of Memory During Build

Reduce the number of parallel compilation jobs:

cargo build --release -j 2

Or set the same limit via environment variable:

CARGO_BUILD_JOBS=2 cargo build --release

sccache also helps on rebuilds by reusing cached compilation artifacts:

cargo install sccache
export RUSTC_WRAPPER=sccache
cargo build --release

Linker Errors

Ensure you have a C/C++ compiler:

Linux:

sudo apt install build-essential

macOS:

xcode-select --install

Windows: Install Visual Studio Build Tools with C++ support.

Common Build Errors

Error: linker 'clang' not found

Cause: The C/C++ toolchain is missing or not configured.

Solution (Linux):

  1. Install clang and lld:
sudo apt update
sudo apt install -y clang lld build-essential
  2. Configure Rust to use clang:
mkdir -p ~/.cargo
cat > ~/.cargo/config.toml << 'EOF'
[build]
rustflags = ["-C", "linker=clang", "-C", "link-arg=-fuse-ld=lld"]

[target.x86_64-unknown-linux-gnu]
linker = "clang"
EOF
  3. Clean and rebuild:
cargo clean
cargo build --release

Solution (macOS):

xcode-select --install

Solution (Windows):

Install Visual Studio Build Tools with “Desktop development with C++” workload.

Error: could not find native library pq

Cause: PostgreSQL development libraries are missing.

Solution:

Linux:

sudo apt install libpq-dev

macOS:

brew install postgresql
export PQ_LIB_DIR=$(brew --prefix postgresql)/lib

Windows: Install PostgreSQL from postgresql.org

Error: openssl-sys build failures

Cause: OpenSSL headers are missing.

Solution:

Linux:

sudo apt install libssl-dev pkg-config

macOS:

brew install openssl
export OPENSSL_DIR=$(brew --prefix openssl)
export OPENSSL_LIB_DIR=$(brew --prefix openssl)/lib
export OPENSSL_INCLUDE_DIR=$(brew --prefix openssl)/include

Error: Out of memory during build

Cause: Too many parallel compilation jobs.

Solution:

Reduce parallel jobs:

CARGO_BUILD_JOBS=2 cargo build --release

Or limit memory:

ulimit -v 4000000  # Limit to 4GB
cargo build --release

Error: Submodule references not found

Cause: Submodules not initialized.

Solution:

git submodule update --init --recursive

Or re-clone with submodules:

git clone --recursive https://github.com/GeneralBots/gb.git

Error: #[derive(RustEmbed)] folder '$CARGO_MANIFEST_DIR/ui' does not exist

Cause: The rust-embed crate cannot expand the $CARGO_MANIFEST_DIR variable without the interpolate-folder-path feature enabled.

Solution:

Ensure the workspace Cargo.toml has the feature enabled:

rust-embed = { version = "8.5", features = ["interpolate-folder-path"] }

This feature allows rust-embed to expand cargo environment variables like $CARGO_MANIFEST_DIR in the folder path attribute.
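For reference, a typical derive using the interpolated path looks like this (the asset struct name is illustrative; the folder follows the error message above):

use rust_embed::RustEmbed;

// $CARGO_MANIFEST_DIR expands at compile time once
// interpolate-folder-path is enabled.
#[derive(RustEmbed)]
#[folder = "$CARGO_MANIFEST_DIR/ui"]
struct UiAssets;

fn main() {
    // Iterate over the embedded file names.
    for file in UiAssets::iter() {
        println!("embedded: {file}");
    }
}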

Warning: ignoring invalid dependency 'botserver' which is missing a lib target

Cause: The bottest crate incorrectly specifies botserver as a dependency, but botserver is a binary-only crate with no library target.

Solution:

Remove the invalid dependency from bottest/Cargo.toml:

[dependencies]
# Note: botserver is a binary-only crate, tested by spawning the process
botlib = { path = "../botlib", features = ["database"] }

Integration tests should spawn the botserver binary as a separate process rather than linking against it as a library.
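A test in bottest can then look roughly like the following; the relative binary path is an assumption about the workspace layout:

use std::process::Command;

#[test]
fn botserver_reports_version() {
    // Spawn the separately built binary instead of linking against it.
    let output = Command::new("../target/release/botserver")
        .arg("--version")
        .output()
        .expect("failed to spawn botserver binary");
    assert!(output.status.success());
    assert!(String::from_utf8_lossy(&output.stdout).contains("botserver"));
}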

Verify Build

After building, verify the binary works:

./target/release/botserver --version

Expected output: botserver 6.2.0 or similar.

Development Builds

Watch Mode

Auto-rebuild on file changes:

cargo install cargo-watch
cargo watch -x 'build --release'

Check Without Building

Fast syntax and type checking:

cargo check

With specific features:

cargo check --features vectordb,email

Testing

Run All Tests

cargo test

Run Tests for Specific Module

cargo test --package botserver --lib bootstrap::tests

Run Integration Tests

cargo test --test '*'

Code Quality

Format Code

cargo fmt

Lint Code

cargo clippy -- -D warnings

Check Dependencies

cargo tree

Find duplicate dependencies:

cargo tree --duplicates

Security Audit

Run security audit to check for known vulnerabilities in dependencies:

cargo install cargo-audit
cargo audit

This should be run regularly during development to ensure dependencies are secure.

Quick Build Check

Check if everything compiles without building:

cargo check --all-features

This is much faster than a full build and catches most errors.

Build Artifacts

After a successful release build, you’ll have:

  • target/release/botserver - Main executable
  • target/release/build/ - Build script outputs
  • target/release/deps/ - Compiled dependencies

Size Optimization

The release profile already optimizes for size. To further reduce:

Strip Binary Manually

strip target/release/botserver

Use UPX Compression

upx --best --lzma target/release/botserver

Note: UPX may cause issues with some systems. Test thoroughly.

Clean Build

Remove all build artifacts:

cargo clean

CI/CD Builds

For automated builds in CI/CD pipelines:

GitHub Actions

name: Build

on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          submodules: recursive
      
      - name: Install Rust
        uses: dtolnay/rust-toolchain@stable
      
      - name: Install Dependencies
        run: |
          sudo apt update
          sudo apt install -y clang lld build-essential pkg-config libssl-dev libpq-dev cmake
      
      - name: Cache sccache
        uses: actions/cache@v3
        with:
          path: ~/.cache/sccache
          key: ${{ runner.os }}-sccache-${{ hashFiles('**/Cargo.lock') }}
      
      - name: Build
        run: cargo build --release --all-features

LXC Build

Build inside LXC container:

# Create build container
lxc-create -n botserver-build -t download -- -d ubuntu -r jammy -a amd64

# Configure container with build resources
cat >> /var/lib/lxc/botserver-build/config << EOF
lxc.cgroup2.memory.max = 4G
lxc.cgroup2.cpu.max = 400000 100000
EOF

# Start container
lxc-start -n botserver-build

# Install build dependencies
lxc-attach -n botserver-build -- bash -c "
apt-get update
apt-get install -y clang lld build-essential pkg-config libssl-dev libpq-dev cmake curl git
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source \$HOME/.cargo/env
"

# Build botserver
lxc-attach -n botserver-build -- bash -c "
git clone --recursive https://github.com/GeneralBots/gb.git /build
cd /build
source \$HOME/.cargo/env
cargo build --release --no-default-features
"

# Copy binary from container
lxc-attach -n botserver-build -- cat /build/target/release/botserver > /usr/local/bin/botserver
chmod +x /usr/local/bin/botserver

Installation

After building, install system-wide:

sudo install -m 755 target/release/botserver /usr/local/bin/

Or create a symlink:

ln -s $(pwd)/target/release/botserver ~/.local/bin/botserver

Verify installation:

botserver --version

Expected output: botserver 6.2.0 or similar.

Quick Reference

| Command | Purpose |
|---|---|
| cargo build --release | Optimized production build |
| cargo build --release -j 2 | Build with limited parallelism |
| cargo check | Fast syntax/type checking |
| cargo test | Run all tests |
| cargo clippy | Lint code |
| cargo clean | Remove build artifacts |
| CARGO_BUILD_JOBS=2 cargo build | Limit build jobs |
| RUSTC_WRAPPER=sccache cargo build | Use compilation cache |

Next Steps

After building:

  1. Run the bootstrap process to install dependencies
  2. Configure .env file with database credentials
  3. Start botserver and access web interface
  4. Create your first bot from templates

See Chapter 01: Run and Talk for next steps.

Cargo Tools Reference

This chapter documents essential Cargo tools for botserver development, including code coverage, security auditing, performance profiling, and code quality tools.

Overview

The Rust ecosystem provides powerful tools through Cargo extensions. These tools help maintain code quality, identify security vulnerabilities, measure test coverage, and optimize performance.

Code Coverage with cargo-tarpaulin

Installation

cargo install cargo-tarpaulin

Basic Usage

Run code coverage analysis:

cargo tarpaulin

This generates a coverage report showing which lines of code are exercised by tests.

Output Formats

Generate HTML report:

cargo tarpaulin --out Html

Generate multiple formats (coverage report, lcov for CI):

cargo tarpaulin --out Html --out Lcov --out Json

Coverage with Features

Test with specific features enabled:

cargo tarpaulin --features vectordb,email

Test all features:

cargo tarpaulin --all-features

Excluding Files

Exclude test files and generated code from coverage:

cargo tarpaulin --ignore-tests --exclude-files "gen/*" "tests/*"

Coverage Thresholds

Fail if coverage drops below a threshold (useful for CI):

cargo tarpaulin --fail-under 80

Verbose Output

Show detailed coverage per function:

cargo tarpaulin --verbose

Integration with CI

Example GitHub Actions workflow:

- name: Install tarpaulin
  run: cargo install cargo-tarpaulin

- name: Generate coverage
  run: cargo tarpaulin --out Xml --fail-under 70

- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3
  with:
    files: cobertura.xml

Configuration File

Create .tarpaulin.toml for project-wide settings:

[config]
command = "test"
features = "vectordb"
ignore-tests = true
out = ["Html", "Lcov"]
exclude-files = ["gen/*"]
timeout = "120s"

Security Auditing with cargo-audit

Installation

cargo install cargo-audit

Basic Usage

Check for known security vulnerabilities in dependencies:

cargo audit

Continuous Auditing

Run audit as part of CI pipeline to catch new vulnerabilities:

cargo audit --deny warnings

Fix Vulnerabilities

Generate a fix for vulnerable dependencies (when possible):

cargo audit fix

Database Updates

Update the vulnerability database:

cargo audit fetch

Ignore Known Issues

Create .cargo/audit.toml to ignore specific advisories:

[advisories]
ignore = [
    "RUSTSEC-2020-0071",  # Reason for ignoring
]

JSON Output for CI

cargo audit --json > audit-report.json

Dependency Analysis with cargo-deny

Installation

cargo install cargo-deny

Configuration

Create deny.toml:

[advisories]
vulnerability = "deny"
unmaintained = "warn"

[licenses]
unlicensed = "deny"
allow = [
    "MIT",
    "Apache-2.0",
    "BSD-3-Clause",
]

[bans]
multiple-versions = "warn"
deny = [
    { name = "openssl" },  # Prefer rustls
]

[sources]
unknown-registry = "deny"
unknown-git = "deny"

Usage

Check all configured rules:

cargo deny check

Check specific categories:

cargo deny check advisories
cargo deny check licenses
cargo deny check bans
cargo deny check sources

Code Formatting with cargo-fmt

Usage

Format all code:

cargo fmt

Check formatting without changes:

cargo fmt --check

Configuration

Create rustfmt.toml:

edition = "2021"
max_width = 100
tab_spaces = 4
use_small_heuristics = "Default"
reorder_imports = true
group_imports = "StdExternalCrate"

Linting with cargo-clippy

Usage

Run clippy with warnings as errors:

cargo clippy -- -D warnings

Run with all lints:

cargo clippy -- -W clippy::all -W clippy::pedantic

Fix Suggestions Automatically

cargo clippy --fix --allow-dirty

Configuration

Add to Cargo.toml:

[lints.clippy]
all = "warn"
pedantic = "warn"
unwrap_used = "deny"
expect_used = "deny"

Or create .clippy.toml:

avoid-breaking-exported-api = false
msrv = "1.70"

Documentation with cargo-doc

Generate Documentation

cargo doc --open

With private items:

cargo doc --document-private-items --open

Check Documentation

Find broken links and missing docs:

cargo rustdoc -- -D warnings

Benchmarking with cargo-criterion

Installation

cargo install cargo-criterion

Usage

Run benchmarks:

cargo criterion

Benchmark Example

Create benches/my_benchmark.rs:

use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 1,
        1 => 1,
        n => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn benchmark(c: &mut Criterion) {
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(black_box(20))));
}

criterion_group!(benches, benchmark);
criterion_main!(benches);

Dependency Management with cargo-outdated

Installation

cargo install cargo-outdated

Usage

Check for outdated dependencies:

cargo outdated

Show only root dependencies:

cargo outdated --root-deps-only

Binary Size Analysis with cargo-bloat

Installation

cargo install cargo-bloat

Usage

Show largest functions:

cargo bloat --release -n 20

Show largest crates:

cargo bloat --release --crates

Size Comparison

Compare sizes between releases:

cargo bloat --release --crates > before.txt
# Make changes
cargo bloat --release --crates > after.txt
diff before.txt after.txt

Dependency Tree with cargo-tree

Usage

View full dependency tree:

cargo tree

Find duplicate dependencies:

cargo tree --duplicates

Find why a dependency is included:

cargo tree --invert tokio

Watch Mode with cargo-watch

Installation

cargo install cargo-watch

Usage

Auto-rebuild on changes:

cargo watch -x build

Auto-test on changes:

cargo watch -x test

Run multiple commands:

cargo watch -x check -x test -x clippy

Memory Profiling with cargo-valgrind

Installation (Linux)

sudo apt install valgrind
cargo install cargo-valgrind

Usage

cargo valgrind run

LLVM Coverage with cargo-llvm-cov

Installation

cargo install cargo-llvm-cov

Usage

More accurate coverage than tarpaulin for some cases:

cargo llvm-cov

Generate HTML report:

cargo llvm-cov --html

CI Integration

Example complete CI configuration using these tools:

name: CI

on: [push, pull_request]

jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
        with:
          components: rustfmt, clippy
      
      - name: Format check
        run: cargo fmt --check
      
      - name: Clippy
        run: cargo clippy -- -D warnings
      
      - name: Build
        run: cargo build --release
      
      - name: Test
        run: cargo test

  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      
      - name: Install tarpaulin
        run: cargo install cargo-tarpaulin
      
      - name: Coverage
        run: cargo tarpaulin --out Xml --fail-under 70
      
      - uses: codecov/codecov-action@v3

  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dtolnay/rust-toolchain@stable
      
      - name: Install audit
        run: cargo install cargo-audit
      
      - name: Security audit
        run: cargo audit --deny warnings

Quick Reference

| Tool | Purpose | Command |
|---|---|---|
| cargo-tarpaulin | Code coverage | cargo tarpaulin |
| cargo-audit | Security vulnerabilities | cargo audit |
| cargo-deny | License/dependency rules | cargo deny check |
| cargo-fmt | Code formatting | cargo fmt |
| cargo-clippy | Linting | cargo clippy |
| cargo-doc | Documentation | cargo doc --open |
| cargo-criterion | Benchmarking | cargo criterion |
| cargo-outdated | Outdated dependencies | cargo outdated |
| cargo-bloat | Binary size analysis | cargo bloat --release |
| cargo-tree | Dependency tree | cargo tree |
| cargo-watch | Auto-rebuild | cargo watch -x build |
| cargo-llvm-cov | LLVM coverage | cargo llvm-cov |

Installation Script

Install all recommended tools at once:

#!/bin/bash
# install-cargo-tools.sh

cargo install cargo-tarpaulin
cargo install cargo-audit
cargo install cargo-deny
cargo install cargo-outdated
cargo install cargo-bloat
cargo install cargo-watch
cargo install cargo-criterion
cargo install cargo-llvm-cov

Next Steps

After setting up these tools:

  1. Run cargo audit regularly to catch security issues
  2. Add cargo tarpaulin to your CI pipeline
  3. Use cargo clippy before every commit
  4. Set up pre-commit hooks for automatic formatting

See Building from Source for build-specific information.

Container Deployment (LXC)

botserver uses LXC (Linux Containers) for isolated component deployment with system-level containerization.

⚠️ IMPORTANT: All container create and management commands must be run from the host system, not from inside a container. The botserver binary manages LXC containers from the host level.

What is LXC?

  • System containers - Full Linux userspace (lightweight VMs)
  • Shared kernel - More efficient than virtual machines
  • Isolation - Separate processes, networking, filesystems
  • Resource control - CPU, memory, I/O limits

Automatic Setup

Run on the host system:

./botserver --container

This automatically:

  1. Detects LXC/LXD availability
  2. Initializes LXD if needed
  3. Creates Debian 12 containers per component
  4. Mounts directories for persistent data
  5. Configures networking and ports
  6. Installs and starts services

Container Architecture

Container Naming

{tenant}-tables      → PostgreSQL
{tenant}-drive       → S3-compatible storage
{tenant}-cache       → Valkey cache
{tenant}-llm         → LLM server (optional)
{tenant}-email       → Mail server (optional)

Default tenant: default → default-tables, default-drive, etc.

Directory Mounting

Host: botserver-stack/tables/data/  → Container: /opt/gbo/data/
Host: botserver-stack/tables/conf/  → Container: /opt/gbo/conf/
Host: botserver-stack/tables/logs/  → Container: /opt/gbo/logs/

Data persists even if containers are deleted.

Port Forwarding

| Container Port | Host Port | Service |
|---|---|---|
| 5432 | 5432 | PostgreSQL |
| 9000 | 9000 | Drive API |
| 9001 | 9001 | Drive Console |
| 6379 | 6379 | Cache |

Common Operations

Run these commands on the host system:

# List containers
lxc list

# Execute command in container
lxc exec default-tables -- psql -U gbuser botserver

# View logs
lxc exec default-tables -- journalctl -u tables

# Stop/Start
lxc stop default-tables
lxc start default-tables

# Delete (data in mounts persists)
lxc delete default-tables --force

Resource Limits

lxc config set default-tables limits.cpu 2
lxc config set default-tables limits.memory 4GB

Snapshots

# Create
lxc snapshot default-tables backup-2024-01-15

# List
lxc info default-tables

# Restore
lxc restore default-tables backup-2024-01-15

Troubleshooting

| Issue | Solution |
|---|---|
| LXC not installed | sudo snap install lxd && sudo lxd init --auto |
| Permission denied | sudo usermod -aG lxd $USER && newgrp lxd |
| Container won't start | lxc console default-tables --show-log |
| Port in use | sudo netstat -tulpn \| grep PORT |

Container vs Local

| Use Containers When | Use Local When |
|---|---|
| Clean isolation needed | Maximum performance |
| Multiple instances | LXC not available |
| Easy cleanup/reinstall | Simple deployment |
| Security isolation | Direct service access |

Example: Create Vault and VectorDB Containers

Run on the host system:

# Install Vault for secrets management
botserver install vault --container --tenant mycompany

# Install VectorDB (Qdrant) for embeddings
botserver install vector_db --container --tenant mycompany

# Verify containers are running
lxc list | grep mycompany

# Get container IPs
lxc list mycompany-vault -c n4 --format csv
lxc list mycompany-vectordb -c n4 --format csv

# Test services
curl http://<vault-ip>:8200/v1/sys/health
curl http://<vectordb-ip>:6333/health

Migration

Local → Container

Run on the host system:

pg_dump botserver > backup.sql
./botserver --container
lxc exec default-tables -- psql -U gbuser botserver < backup.sql

Container → Local

Run on the host system:

lxc exec default-tables -- pg_dump -U gbuser botserver > backup.sql
./botserver uninstall tables
./botserver install tables --local
psql -U gbuser botserver < backup.sql

Brother Mode Configuration

If you are running botserver itself inside a container (e.g., LXC or Docker) but want it to manage other LXC containers on the host (“Brother Mode”), you must expose the host’s LXD socket.

Required LXD Profile

To allow child containers to communicate with the host LXD daemon, add the lxd-sock proxy device to the default profile. This maps the host’s socket to /tmp/lxd.sock inside the container, avoiding conflicts with missing /var/lib/lxd directories in standard images.

LXD installed via snap uses /var/snap/lxd/common/lxd/unix.socket:

lxc profile device add default lxd-sock proxy \
  connect=unix:/var/snap/lxd/common/lxd/unix.socket \
  listen=unix:/tmp/lxd.sock \
  bind=container \
  uid=0 gid=0 mode=0660

For LXD installed via packages (non-snap), use:

lxc profile device add default lxd-sock proxy \
  connect=unix:/var/lib/lxd/unix.socket \
  listen=unix:/tmp/lxd.sock \
  bind=container \
  uid=0 gid=0 mode=0660

Note: The botserver installer attempts to configure this automatically. If you encounter “socket not found” errors, verify this proxy device exists.

See Also

Docker Deployment

Note: Docker support is currently experimental.

Deployment Options

| Option | Description | Best For |
|---|---|---|
| All-in-One | Single container with all components | Development, testing |
| Microservices | Separate containers per component | Production, scaling |

Option 1: All-in-One Container

Quick Start

docker run -d \
  --name botserver \
  -p 8000:8000 \
  -p 9000:9000 \
  -v botserver-data:/opt/gbo/data \
  -e ADMIN_PASS=your-secure-password \
  pragmatismo/botserver:latest

Docker Compose

version: '3.8'

services:
  botserver:
    image: pragmatismo/botserver:latest
    restart: unless-stopped
    ports:
      - "8000:8000"
      - "9000:9000"
      - "9001:9001"
    volumes:
      - botserver-data:/opt/gbo/data
      - ./work:/opt/gbo/work
    environment:
      - ADMIN_PASS=${ADMIN_PASS:-changeme}
      - DOMAIN=${DOMAIN:-localhost}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  botserver-data:

Resources: 2 CPU cores, 4GB RAM minimum

Option 2: Microservices

version: '3.8'

services:
  postgres:
    image: postgres:16-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: botserver
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: botserver
    networks:
      - gb-network

  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - minio-data:/data
    environment:
      MINIO_ROOT_USER: ${DRIVE_ACCESSKEY}
      MINIO_ROOT_PASSWORD: ${DRIVE_SECRET}
    networks:
      - gb-network

  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - "6333:6333"
    volumes:
      - qdrant-data:/qdrant/storage
    networks:
      - gb-network

  botserver:
    image: pragmatismo/botserver:latest
    depends_on:
      - postgres
      - minio
      - qdrant
    ports:
      - "8000:8000"
    volumes:
      - ./work:/opt/gbo/work
    environment:
      DATABASE_URL: postgres://botserver:${DB_PASSWORD}@postgres:5432/botserver
      DRIVE_URL: http://minio:9000
      DRIVE_ACCESSKEY: ${DRIVE_ACCESSKEY}
      DRIVE_SECRET: ${DRIVE_SECRET}
      QDRANT_URL: http://qdrant:6333
      ADMIN_PASS: ${ADMIN_PASS}
    networks:
      - gb-network

networks:
  gb-network:

volumes:
  postgres-data:
  minio-data:
  qdrant-data:

Environment File (.env)

DB_PASSWORD=secure-db-password
DRIVE_ACCESSKEY=minioadmin
DRIVE_SECRET=secure-minio-secret
ADMIN_PASS=admin-password
DOMAIN=your-domain.com

Kubernetes

apiVersion: apps/v1
kind: Deployment
metadata:
  name: botserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: botserver
  template:
    metadata:
      labels:
        app: botserver
    spec:
      containers:
      - name: botserver
        image: pragmatismo/botserver:latest
        ports:
        - containerPort: 8000
        resources:
          requests:
            memory: "512Mi"
            cpu: "250m"
          limits:
            memory: "2Gi"
            cpu: "1000m"
        livenessProbe:
          httpGet:
            path: /health
            port: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: botserver
spec:
  selector:
    app: botserver
  ports:
  - port: 80
    targetPort: 8000
  type: LoadBalancer

Health Endpoints

| Service | Endpoint |
|---|---|
| botserver | GET /health |
| PostgreSQL | pg_isready |
| MinIO | GET /minio/health/live |
| Qdrant | GET /health |

Troubleshooting

| Issue | Solution |
|---|---|
| Container won't start | docker logs gb-botserver |
| DB connection failed | docker exec -it gb-botserver psql $DATABASE_URL -c "SELECT 1" |
| Memory issues | Increase limits in compose or add deploy.resources.limits.memory |

Migration from Non-Docker

# 1. Backup data
pg_dump botserver > backup.sql
mc cp --recursive /path/to/drive minio/backup/

# 2. Start Docker containers

# 3. Restore
docker exec -i gb-postgres psql -U botserver < backup.sql
docker exec gb-minio mc cp --recursive /backup minio/drive/

See Also

Kubernetes Deployment

Scaling and Load Balancing

General Bots is designed to scale from a single instance to a distributed cluster using LXC containers. This chapter covers auto-scaling, load balancing, sharding strategies, and failover systems.

Scaling Architecture

General Bots uses a horizontal scaling approach with LXC containers:

                    ┌─────────────────┐
                    │   Caddy Proxy   │
                    │  (Load Balancer)│
                    └────────┬────────┘
                             │
         ┌───────────────────┼───────────────────┐
         │                   │                   │
         ▼                   ▼                   ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│  LXC Container  │ │  LXC Container  │ │  LXC Container  │
│   botserver-1   │ │   botserver-2   │ │   botserver-3   │
└────────┬────────┘ └────────┬────────┘ └────────┬────────┘
         │                   │                   │
         └───────────────────┼───────────────────┘
                             │
         ┌───────────────────┼───────────────────┐
         │                   │                   │
         ▼                   ▼                   ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│   PostgreSQL    │ │     Redis       │ │     Qdrant      │
│   (Primary)     │ │   (Cluster)     │ │   (Cluster)     │
└─────────────────┘ └─────────────────┘ └─────────────────┘

Auto-Scaling Configuration

config.csv Parameters

Configure auto-scaling behavior in your bot’s config.csv:

# Auto-scaling settings
scale-enabled,true
scale-min-instances,1
scale-max-instances,10
scale-cpu-threshold,70
scale-memory-threshold,80
scale-request-threshold,1000
scale-cooldown-seconds,300
scale-check-interval,30

| Parameter | Description | Default |
|---|---|---|
| scale-enabled | Enable auto-scaling | false |
| scale-min-instances | Minimum container count | 1 |
| scale-max-instances | Maximum container count | 10 |
| scale-cpu-threshold | CPU % to trigger scale-up | 70 |
| scale-memory-threshold | Memory % to trigger scale-up | 80 |
| scale-request-threshold | Requests/min to trigger scale-up | 1000 |
| scale-cooldown-seconds | Wait time between scaling events | 300 |
| scale-check-interval | Seconds between metric checks | 30 |

Scaling Rules

Define custom scaling rules:

# Scale up when average response time exceeds 2 seconds
scale-rule-response-time,2000
scale-rule-response-action,up

# Scale down when CPU drops below 30%
scale-rule-cpu-low,30
scale-rule-cpu-low-action,down

# Scale up on queue depth
scale-rule-queue-depth,100
scale-rule-queue-action,up

LXC Container Management

Creating Scaled Instances

# Create additional botserver containers
for i in {2..5}; do
  lxc launch images:debian/12 botserver-$i
  lxc config device add botserver-$i port-$((8080+i)) proxy \
    listen=tcp:0.0.0.0:$((8080+i)) connect=tcp:127.0.0.1:9000
done

Container Resource Limits

Set resource limits per container:

# CPU limits (number of cores)
lxc config set botserver-1 limits.cpu 4

# Memory limits
lxc config set botserver-1 limits.memory 8GB

# Disk I/O priority (0-10)
lxc config set botserver-1 limits.disk.priority 5

# Network bandwidth (ingress/egress)
lxc config device set botserver-1 eth0 limits.ingress 100Mbit
lxc config device set botserver-1 eth0 limits.egress 100Mbit

Auto-Scaling Script

Create /opt/gbo/scripts/autoscale.sh:

#!/bin/bash

# Configuration
MIN_INSTANCES=1
MAX_INSTANCES=10
CPU_THRESHOLD=70
SCALE_COOLDOWN=300
LAST_SCALE_FILE="/tmp/last_scale_time"

get_avg_cpu() {
    local total=0
    local count=0
    for container in $(lxc list -c n --format csv | grep "^botserver-"); do
        cpu=$(lxc exec $container -- cat /proc/loadavg | awk '{print $1}')
        total=$(echo "$total + $cpu" | bc)
        count=$((count + 1))
    done
    echo "scale=2; $total / $count * 100" | bc
}

get_instance_count() {
    lxc list -c n --format csv | grep -c "^botserver-"
}

can_scale() {
    if [ ! -f "$LAST_SCALE_FILE" ]; then
        return 0
    fi
    last_scale=$(cat "$LAST_SCALE_FILE")
    now=$(date +%s)
    diff=$((now - last_scale))
    [ $diff -gt $SCALE_COOLDOWN ]
}

scale_up() {
    current=$(get_instance_count)
    if [ $current -ge $MAX_INSTANCES ]; then
        echo "Already at max instances ($MAX_INSTANCES)"
        return 1
    fi
    
    new_id=$((current + 1))
    echo "Scaling up: creating botserver-$new_id"
    
    lxc launch images:debian/12 botserver-$new_id
    lxc config set botserver-$new_id limits.cpu 4
    lxc config set botserver-$new_id limits.memory 8GB
    
    # Copy configuration
    lxc file push /opt/gbo/conf/botserver.env botserver-$new_id/opt/gbo/conf/
    
    # Start botserver
    lxc exec botserver-$new_id -- /opt/gbo/bin/botserver &
    
    # Update load balancer
    update_load_balancer
    
    date +%s > "$LAST_SCALE_FILE"
    echo "Scale up complete"
}

scale_down() {
    current=$(get_instance_count)
    if [ $current -le $MIN_INSTANCES ]; then
        echo "Already at min instances ($MIN_INSTANCES)"
        return 1
    fi
    
    # Remove highest numbered instance
    target="botserver-$current"
    echo "Scaling down: removing $target"
    
    # Drain connections
    lxc exec $target -- /opt/gbo/bin/botserver drain
    sleep 30
    
    # Stop and delete
    lxc stop $target
    lxc delete $target
    
    # Update load balancer
    update_load_balancer
    
    date +%s > "$LAST_SCALE_FILE"
    echo "Scale down complete"
}

update_load_balancer() {
    # Generate upstream list
    upstreams=""
    for container in $(lxc list -c n --format csv | grep "^botserver-"); do
        ip=$(lxc list $container -c 4 --format csv | cut -d' ' -f1)
        upstreams="$upstreams\n        to $ip:9000"
    done
    
    # Update Caddy config
    cat > /opt/gbo/conf/caddy/upstream.conf << EOF
upstream botserver {
    $upstreams
    lb_policy round_robin
    health_uri /api/health
    health_interval 10s
}
EOF
    
    # Reload Caddy
    lxc exec proxy-1 -- caddy reload --config /etc/caddy/Caddyfile
}

# Main loop
while true; do
    avg_cpu=$(get_avg_cpu)
    echo "Average CPU: $avg_cpu%"
    
    if can_scale; then
        if (( $(echo "$avg_cpu > $CPU_THRESHOLD" | bc -l) )); then
            scale_up
        elif (( $(echo "$avg_cpu < 30" | bc -l) )); then
            scale_down
        fi
    fi
    
    sleep 30
done

Load Balancing

Caddy Configuration

Primary load balancer configuration (/opt/gbo/conf/caddy/Caddyfile):

{
    admin off
    auto_https on
}

(common) {
    encode gzip zstd
    header {
        -Server
        X-Content-Type-Options "nosniff"
        X-Frame-Options "DENY"
        Referrer-Policy "strict-origin-when-cross-origin"
    }
}

bot.example.com {
    import common
    
    # Health check endpoint (no load balancing)
    handle /api/health {
        reverse_proxy localhost:9000
    }
    
    # WebSocket connections (sticky sessions)
    handle /ws* {
        reverse_proxy botserver-1:9000 botserver-2:9000 botserver-3:9000 {
            lb_policy cookie
            lb_try_duration 5s
            health_uri /api/health
            health_interval 10s
            health_timeout 5s
        }
    }
    
    # API requests (round robin)
    handle /api/* {
        reverse_proxy botserver-1:9000 botserver-2:9000 botserver-3:9000 {
            lb_policy round_robin
            lb_try_duration 5s
            health_uri /api/health
            health_interval 10s
            fail_duration 30s
        }
    }
    
    # Static files (any instance)
    handle {
        reverse_proxy botserver-1:9000 botserver-2:9000 botserver-3:9000 {
            lb_policy first
        }
    }
}

Load Balancing Policies

| Policy | Description | Use Case |
|---|---|---|
| round_robin | Rotate through backends | General API requests |
| first | Use first available | Static content |
| least_conn | Fewest active connections | Long-running requests |
| ip_hash | Consistent by client IP | Session affinity |
| cookie | Sticky sessions via cookie | WebSocket, stateful |
| random | Random selection | Testing |

Rate Limiting

Configure rate limits in config.csv:

# Rate limiting
rate-limit-enabled,true
rate-limit-requests,100
rate-limit-window,60
rate-limit-burst,20
rate-limit-by,ip

# Per-endpoint limits
rate-limit-api-chat,30
rate-limit-api-files,50
rate-limit-api-auth,10
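A token bucket is the usual model behind settings like these. The sketch below mirrors 100 requests per 60-second window with a burst of 20; it is illustrative only, not the shipped implementation:

use std::time::Instant;

struct TokenBucket {
    capacity: f64,       // burst size
    tokens: f64,
    refill_per_sec: f64, // requests / window
    last: Instant,
}

impl TokenBucket {
    fn new(requests: f64, window_secs: f64, burst: f64) -> Self {
        Self {
            capacity: burst,
            tokens: burst,
            refill_per_sec: requests / window_secs,
            last: Instant::now(),
        }
    }

    fn try_acquire(&mut self) -> bool {
        // Refill proportionally to elapsed time, capped at the burst size.
        let now = Instant::now();
        self.tokens = (self.tokens + (now - self.last).as_secs_f64() * self.refill_per_sec)
            .min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut limiter = TokenBucket::new(100.0, 60.0, 20.0);
    let allowed = (0..30).filter(|_| limiter.try_acquire()).count();
    println!("allowed {allowed} of 30 immediate requests"); // burst caps this at 20
}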

Rate limiting in Caddy:

bot.example.com {
    # Global rate limit
    rate_limit {
        zone global {
            key {remote_host}
            events 100
            window 1m
        }
    }
    
    # Stricter limit for auth endpoints
    handle /api/auth/* {
        rate_limit {
            zone auth {
                key {remote_host}
                events 10
                window 1m
            }
        }
        reverse_proxy botserver:9000
    }
}

Sharding Strategies

Database Sharding Options

Option 1: Tenant-Based Sharding

Each tenant gets their own database:

┌─────────────────┐
│   Router/Proxy  │
└────────┬────────┘
         │
    ┌────┴────┬──────────┐
    │         │          │
    ▼         ▼          ▼
┌───────┐ ┌───────┐ ┌───────┐
│Tenant1│ │Tenant2│ │Tenant3│
│  DB   │ │  DB   │ │  DB   │
└───────┘ └───────┘ └───────┘

Configuration:

# Tenant sharding
shard-strategy,tenant
shard-tenant-db-prefix,gb_tenant_
shard-auto-create,true

Option 2: Hash-Based Sharding

Distribute data by hash of primary key:

User ID: 12345
Hash: 12345 % 4 = 1
Shard: shard-1

Configuration:

# Hash sharding
shard-strategy,hash
shard-count,4
shard-key,user_id
shard-algorithm,modulo
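With the modulo algorithm, routing reduces to a one-liner; this sketch reproduces the example above:

// shard-algorithm,modulo: route by user_id % shard-count.
fn shard_for(user_id: u64, shard_count: u64) -> u64 {
    user_id % shard_count
}

fn main() {
    assert_eq!(shard_for(12345, 4), 1); // matches the example: shard-1
    println!("user 12345 -> shard-{}", shard_for(12345, 4));
}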

Option 3: Range-Based Sharding

Partition by ID ranges:

# Range sharding
shard-strategy,range
shard-ranges,0-999999:shard1,1000000-1999999:shard2,2000000-:shard3

Option 4: Geographic Sharding

Route by user location:

# Geographic sharding
shard-strategy,geo
shard-geo-us,postgres-us.example.com
shard-geo-eu,postgres-eu.example.com
shard-geo-asia,postgres-asia.example.com
shard-default,postgres-us.example.com

Vector Database Sharding (Qdrant)

Qdrant supports automatic sharding:

# Qdrant sharding
qdrant-shard-count,4
qdrant-replication-factor,2
qdrant-write-consistency,majority

Collection creation with sharding:

// In vectordb code
let collection_config = CreateCollection {
    collection_name: format!("kb_{}", bot_id),
    vectors_config: VectorsConfig::Single(VectorParams {
        size: 384,
        distance: Distance::Cosine,
    }),
    shard_number: Some(4),
    replication_factor: Some(2),
    write_consistency_factor: Some(1),
    ..Default::default()
};

Redis Cluster

For high-availability caching:

# Redis cluster
cache-mode,cluster
cache-nodes,redis-1:6379,redis-2:6379,redis-3:6379
cache-replicas,1

Failover Systems

Health Checks

Configure health check endpoints:

# Health check configuration
health-enabled,true
health-endpoint,/api/health
health-interval,10
health-timeout,5
health-retries,3

Health check response:

{
  "status": "healthy",
  "version": "6.1.0",
  "uptime": 86400,
  "checks": {
    "database": "ok",
    "cache": "ok",
    "vectordb": "ok",
    "llm": "ok"
  },
  "metrics": {
    "cpu": 45.2,
    "memory": 62.1,
    "connections": 150
  }
}

Automatic Failover

Database Failover (PostgreSQL)

Using Patroni for PostgreSQL HA:

# patroni.yml
scope: botserver-cluster
name: postgres-1

restapi:
  listen: 0.0.0.0:8008
  connect_address: postgres-1:8008

etcd:
  hosts: etcd-1:2379,etcd-2:2379,etcd-3:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      parameters:
        max_connections: 200
        shared_buffers: 2GB

postgresql:
  listen: 0.0.0.0:5432
  connect_address: postgres-1:5432
  data_dir: /var/lib/postgresql/data
  authentication:
    superuser:
      username: postgres
      password: ${POSTGRES_PASSWORD}
    replication:
      username: replicator
      password: ${REPLICATION_PASSWORD}

Cache Failover (Redis Sentinel)

# Redis Sentinel configuration
cache-mode,sentinel
cache-sentinel-master,mymaster
cache-sentinel-nodes,sentinel-1:26379,sentinel-2:26379,sentinel-3:26379

Circuit Breaker

Prevent cascade failures:

# Circuit breaker settings
circuit-breaker-enabled,true
circuit-breaker-threshold,5
circuit-breaker-timeout,30
circuit-breaker-half-open-requests,3

States:

  • Closed: Normal operation
  • Open: Failing, reject requests immediately
  • Half-Open: Testing if service recovered
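A minimal version of that three-state machine, using the thresholds configured above (a sketch, not the shipped implementation):

use std::time::{Duration, Instant};

enum State {
    Closed { failures: u32 },
    Open { since: Instant },
    HalfOpen { probes: u32 },
}

struct CircuitBreaker {
    state: State,
    threshold: u32,          // circuit-breaker-threshold
    timeout: Duration,       // circuit-breaker-timeout
    half_open_requests: u32, // circuit-breaker-half-open-requests
}

impl CircuitBreaker {
    fn allow_request(&mut self) -> bool {
        // Promote Open -> Half-Open once the timeout has elapsed.
        let expired =
            matches!(&self.state, State::Open { since } if since.elapsed() >= self.timeout);
        if expired {
            self.state = State::HalfOpen { probes: 0 };
        }
        match &mut self.state {
            State::Closed { .. } => true,
            State::Open { .. } => false, // fail fast while open
            State::HalfOpen { probes } => {
                *probes += 1;
                *probes <= self.half_open_requests
            }
        }
    }

    fn record(&mut self, success: bool) {
        if success {
            // Simplification: any success closes the circuit.
            self.state = State::Closed { failures: 0 };
            return;
        }
        let tripped = match &mut self.state {
            State::Closed { failures } => {
                *failures += 1;
                *failures >= self.threshold
            }
            _ => true, // a failure while Open/Half-Open (re)opens the breaker
        };
        if tripped {
            self.state = State::Open { since: Instant::now() };
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker {
        state: State::Closed { failures: 0 },
        threshold: 5,
        timeout: Duration::from_secs(30),
        half_open_requests: 3,
    };
    for _ in 0..5 {
        if cb.allow_request() {
            cb.record(false); // five consecutive failures trip the breaker
        }
    }
    assert!(!cb.allow_request()); // now failing fast
}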

Graceful Degradation

Configure fallback behavior:

# Fallback configuration
fallback-llm-enabled,true
fallback-llm-provider,local
fallback-llm-model,DeepSeek-R1-Distill-Qwen-1.5B

fallback-cache-enabled,true
fallback-cache-mode,memory

fallback-vectordb-enabled,true
fallback-vectordb-mode,keyword-search

Monitoring Scaling

Metrics Collection

Key metrics to monitor:

# Scaling metrics
metrics-scaling-enabled,true
metrics-container-count,true
metrics-scaling-events,true
metrics-load-distribution,true

Alerting Rules

Configure alerts for scaling issues:

# alerting-rules.yml
groups:
  - name: scaling
    rules:
      - alert: HighCPUUsage
        expr: avg(cpu_usage) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage detected"
          
      - alert: MaxInstancesReached
        expr: container_count >= max_instances
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Maximum instances reached, cannot scale up"
          
      - alert: ScalingFailed
        expr: scaling_errors > 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Scaling operation failed"

Best Practices

Scaling

  1. Start small - Begin with auto-scaling disabled, monitor patterns first
  2. Set appropriate thresholds - Too low causes thrashing, too high causes poor performance
  3. Use cooldown periods - Prevent rapid scale up/down cycles
  4. Test failover - Regularly test your failover procedures
  5. Monitor costs - More instances = higher infrastructure costs

Load Balancing

  1. Use sticky sessions for WebSockets - Required for real-time features
  2. Enable health checks - Remove unhealthy instances automatically
  3. Configure timeouts - Prevent hanging connections
  4. Use connection pooling - Reduce connection overhead

Sharding

  1. Choose the right strategy - Tenant-based is simplest for SaaS
  2. Plan for rebalancing - Have procedures to move data between shards
  3. Avoid cross-shard queries - Design to minimize these
  4. Monitor shard balance - Uneven distribution causes hotspots

Next Steps

Infrastructure Design

This chapter covers the complete infrastructure design for General Bots, including scaling, security, secrets management, observability, and high availability.

Architecture Overview

General Bots uses a modular architecture where each component runs in isolated LXC containers. This provides isolation where each service has its own filesystem and process space, scalability through adding more containers to handle increased load, security since compromised components cannot affect others, and portability allowing containers to move between hosts easily.

Component Diagram

High Availability Architecture

Infrastructure Architecture

Production-ready infrastructure with automatic scaling, load balancing, and multi-tenant isolation.

Encryption at Rest

All data stored by General Bots is encrypted at rest using AES-256-GCM.

Database Encryption

PostgreSQL uses Transparent Data Encryption (TDE):

# config.csv
encryption-at-rest,true
encryption-algorithm,aes-256-gcm
encryption-key-source,vault

Enable in PostgreSQL:

-- Enable pgcrypto extension
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Encrypted columns use pgp_sym_encrypt
ALTER TABLE bot_memories 
ADD COLUMN value_encrypted bytea;

UPDATE bot_memories 
SET value_encrypted = pgp_sym_encrypt(value, current_setting('app.encryption_key'));

File Storage Encryption

MinIO server-side encryption is enabled using SSE-S3 for automatic encryption or SSE-C for customer-managed keys:

# Enable SSE-S3 encryption
mc encrypt set sse-s3 local/gbo-bucket

# Or use customer-managed keys (SSE-C)
mc encrypt set sse-c local/gbo-bucket

Configuration:

# config.csv
drive-encryption,true
drive-encryption-type,sse-s3
drive-encryption-key,vault:gbo/encryption/drive_key

Redis Encryption

Redis with TLS and encrypted RDB provides secure caching:

# redis.conf
tls-port 6379
port 0
tls-cert-file /opt/gbo/conf/certificates/redis/server.crt
tls-key-file /opt/gbo/conf/certificates/redis/server.key
tls-ca-cert-file /opt/gbo/conf/certificates/ca.crt

# Enable RDB encryption (Redis 7.2+)
rdb-save-incremental-fsync yes

Vector Database Encryption

Qdrant with encrypted storage uses TLS for transport and filesystem-level encryption for data at rest:

# qdrant/config.yaml
storage:
  storage_path: /opt/gbo/data/qdrant
  on_disk_payload: true
  
service:
  enable_tls: true
  
# Disk encryption handled at filesystem level

Filesystem-Level Encryption

For comprehensive encryption, use LUKS on the data partition:

# Create encrypted partition for /opt/gbo/data
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 gbo-data
mkfs.ext4 /dev/mapper/gbo-data
mount /dev/mapper/gbo-data /opt/gbo/data

Media Processing: LiveKit

LiveKit handles all media processing needs for General Bots. WebRTC is native to LiveKit. Recording is built-in via the Egress service. Transcoding uses the Egress service. Streaming and AI integration are built into LiveKit.

LiveKit’s Egress service handles room recording, participant recording, livestreaming to YouTube and Twitch, and track composition.

LiveKit Configuration

# config.csv
meet-provider,livekit
meet-server-url,wss://localhost:7880
meet-api-key,vault:gbo/meet/api_key
meet-api-secret,vault:gbo/meet/api_secret
meet-recording-enabled,true
meet-transcription-enabled,true

Messaging: Redis

General Bots uses Redis for all messaging needs including session state, PubSub for real-time communication, and Streams for persistence:

// Session state
redis::cmd("SET").arg("session:123").arg(state_json)

// PubSub for real-time
redis::cmd("PUBLISH").arg("channel:bot-1").arg(message)

// Streams for persistence (optional)
redis::cmd("XADD").arg("stream:events").arg("*").arg("event").arg(data)

Configuration:

# config.csv
messaging-provider,redis
messaging-persistence,streams
messaging-retention-hours,24

Sharding Strategies

Each tenant or organization gets isolated databases.

Option 1: Multi-Tenant Architecture

Each tenant gets isolated resources with dedicated database schemas, cache namespaces, and vector collections. The router maps tenant IDs to their respective data stores automatically.
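In sketch form, that routing amounts to a lookup from tenant ID to the tenant's isolated data store (names and URLs below are hypothetical; the gb_tenant_ prefix follows the configuration shown earlier):

use std::collections::HashMap;

// Hypothetical router: tenant id -> isolated database URL.
struct TenantRouter {
    databases: HashMap<String, String>,
    default_url: String,
}

impl TenantRouter {
    fn database_url(&self, tenant: &str) -> &str {
        self.databases
            .get(tenant)
            .map(String::as_str)
            .unwrap_or(&self.default_url)
    }
}

fn main() {
    let router = TenantRouter {
        databases: HashMap::from([(
            "mycompany".to_string(),
            "postgres://db-mycompany:5432/gb_tenant_mycompany".to_string(),
        )]),
        default_url: "postgres://db-shared:5432/botserver".to_string(),
    };
    println!("{}", router.database_url("mycompany"));
}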

Key isolation features include database-per-tenant or schema-per-tenant options, namespace isolation in Valkey cache, collection isolation in Qdrant vectors, and bucket isolation in SeaweedFS storage.

Configuration:

# config.csv
shard-strategy,tenant
shard-auto-provision,true
shard-isolation-level,database

Advantages include complete data isolation which is compliance friendly, easy backup and restore per tenant, simplicity, and no cross-tenant queries. Disadvantages include more resources per tenant, complex tenant migration, and connection pool overhead.

Option 2: Hash-Based Sharding

Distribute by user or session ID hash. For example, a user_id of 12345 produces a hash that modulo num_shards equals 2, routing to shard-2.

Configuration:

# config.csv
shard-strategy,hash
shard-count,4
shard-key,user_id
shard-algorithm,consistent-hash

Advantages include even distribution, predictable routing, and good performance for high-volume single-tenant deployments. Disadvantages include complex resharding, difficult cross-shard queries, and no tenant isolation.

Option 3: Time-Based Sharding

For time-series data like logs and analytics:

# config.csv
shard-strategy,time
shard-interval,monthly
shard-retention-months,12
shard-auto-archive,true

This automatically creates partitions named messages_2024_01, messages_2024_02, messages_2024_03, and so on.

Option 4: Geographic Sharding

Route by user location:

# config.csv
shard-strategy,geo
shard-regions,us-east,eu-west,ap-south
shard-default,us-east
shard-detection,ip

Geographic Distribution

The global router uses GeoIP to direct users to the nearest regional cluster. US-East in Virginia runs a full cluster, EU-West in Frankfurt runs a full cluster, and AP-South in Singapore runs a full cluster. Each regional cluster runs independently with data replication between regions for disaster recovery.

Auto-Scaling with LXC

Configuration

# config.csv - Auto-scaling settings
scale-enabled,true
scale-min-instances,1
scale-max-instances,10
scale-cpu-threshold,70
scale-memory-threshold,80
scale-request-threshold,1000
scale-cooldown-seconds,300
scale-check-interval,30

Scaling Rules

| Metric | Scale Up | Scale Down |
|---|---|---|
| CPU | > 70% for 2 min | < 30% for 5 min |
| Memory | > 80% for 2 min | < 40% for 5 min |
| Requests/sec | > 1000 | < 200 |
| Response time | > 2000ms | < 500ms |
| Queue depth | > 100 | < 10 |

Auto-Scale Service

The auto-scaler runs as a systemd service:

# /etc/systemd/system/gbo-autoscale.service
[Unit]
Description=General Bots Auto-Scaler
After=network.target

[Service]
Type=simple
ExecStart=/opt/gbo/scripts/autoscale.sh
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Container Lifecycle

The startup flow begins with creating the LXC container from a template, then configuring resources for CPU, memory, and storage, then starting the botserver binary, and finally marking the container as ready and adding it to the load balancer pool.

The shutdown flow begins with an active container serving requests, then draining to stop accepting new connections, then stopping with a graceful botserver shutdown, and finally deleting or returning the container to the pool.

Load Balancing

Caddy Configuration

{
    admin off
    auto_https on
}

bot.example.com {
    # Rate limiting
    rate_limit {
        zone api {
            key {remote_host}
            events 100
            window 1m
        }
    }
    
    # WebSocket (sticky sessions)
    handle /ws* {
        reverse_proxy botserver-1:9000 botserver-2:9000 {
            lb_policy cookie
            health_uri /api/health
            health_interval 10s
        }
    }
    
    # API (round robin)
    handle /api/* {
        reverse_proxy botserver-1:9000 botserver-2:9000 {
            lb_policy round_robin
            fail_duration 30s
        }
    }
}

Rate Limiting Configuration

# config.csv - Rate limiting
rate-limit-enabled,true
rate-limit-requests,100
rate-limit-window,60
rate-limit-burst,20
rate-limit-by,ip

# Per-endpoint limits
rate-limit-api-chat,30
rate-limit-api-files,50
rate-limit-api-auth,10
rate-limit-api-llm,20

Failover Systems

Health Checks

Every service exposes /health:

{
  "status": "healthy",
  "version": "6.1.0",
  "checks": {
    "database": {"status": "ok", "latency_ms": 5},
    "cache": {"status": "ok", "latency_ms": 2},
    "vectordb": {"status": "ok", "latency_ms": 10},
    "llm": {"status": "ok", "latency_ms": 50}
  }
}

Circuit Breaker

# config.csv
circuit-breaker-enabled,true
circuit-breaker-threshold,5
circuit-breaker-timeout,30
circuit-breaker-half-open-requests,3

The circuit breaker has three states. Closed represents normal operation while counting failures. Open means failing fast and returning errors immediately. Half-Open tests with limited requests before deciding to close or reopen.

Database Failover

PostgreSQL with streaming replication provides high availability.

Database Replication

PostgreSQL replication is managed by Patroni for automatic failover. The Primary serves as the write leader handling all write operations. The Replica provides synchronous replication from the primary for read scaling. Patroni acts as the failover manager performing automatic leader election on failure.

Failover happens automatically within seconds, with clients redirected via the connection pooler.

Graceful Degradation

# config.csv - Fallbacks
fallback-llm-enabled,true
fallback-llm-provider,local
fallback-llm-model,DeepSeek-R1-Distill-Qwen-1.5B

fallback-cache-enabled,true
fallback-cache-mode,memory

fallback-vectordb-enabled,true
fallback-vectordb-mode,keyword-search

Secrets Management (Vault)

Architecture

The minimal .env file contains only Vault connection details. All other secrets are stored in Vault and fetched at runtime. The Vault server stores secrets organized by path including gbo/drive for access keys, gbo/tables for database credentials, gbo/cache for passwords, gbo/directory for client credentials, gbo/email for mail credentials, gbo/llm for provider API keys, gbo/encryption for master and data keys, and gbo/meet for API credentials.

Zitadel vs Vault

Zitadel handles user authentication, OAuth/OIDC, and MFA. Vault handles service credentials, API keys, and encryption keys. Use both together where Zitadel manages user identity and SSO while Vault manages service secrets and encryption keys.

Minimal .env with Vault

# .env - Only Vault and Directory needed
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=hvs.your-token-here

# Directory for user auth (Zitadel)
DIRECTORY_URL=https://localhost:9000
DIRECTORY_CLIENT_ID=your-client-id
DIRECTORY_CLIENT_SECRET=your-client-secret

# All other secrets fetched from Vault at runtime
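Fetching a secret at runtime then reduces to one authenticated HTTP call against Vault's KV v2 engine. A hedged sketch using reqwest; the mount point, path, and key names are assumptions:

use anyhow::{anyhow, Result};

// Reads a KV v2 secret: GET {addr}/v1/secret/data/{path},
// authenticated with the token from the minimal .env.
async fn vault_secret(addr: &str, token: &str, path: &str, key: &str) -> Result<String> {
    let url = format!("{addr}/v1/secret/data/{path}");
    let body: serde_json::Value = reqwest::Client::new()
        .get(&url)
        .header("X-Vault-Token", token)
        .send()
        .await?
        .error_for_status()?
        .json()
        .await?;
    body["data"]["data"][key]
        .as_str()
        .map(str::to_owned)
        .ok_or_else(|| anyhow!("key '{key}' not found at {path}"))
}

#[tokio::main]
async fn main() -> Result<()> {
    // Path gbo/llm and key api_key follow the layout described above.
    let secret =
        vault_secret("https://localhost:8200", "hvs.your-token-here", "gbo/llm", "api_key").await?;
    println!("fetched {} bytes", secret.len());
    Ok(())
}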

Observability

Option 1: InfluxDB + Grafana (Current)

For time-series metrics:

# config.csv
observability-provider,influxdb
observability-url,http://localhost:8086
observability-org,pragmatismo
observability-bucket,metrics

Vector serves as a log and metric aggregator. botserver logs flow to Vector which pipelines them to InfluxDB for metrics storage and Grafana for dashboards.

Vector configuration:

# vector.toml
[sources.botserver_logs]
type = "file"
include = ["/opt/gbo/logs/*.log"]

[transforms.parse_logs]
type = "remap"
inputs = ["botserver_logs"]
source = '''
. = parse_json!(.message)
'''

[sinks.influxdb]
type = "influxdb_metrics"
inputs = ["parse_logs"]
endpoint = "http://localhost:8086"
org = "pragmatismo"
bucket = "metrics"

Replacing log.* Calls with Vector

Instead of replacing all log calls, configure Vector to collect logs from files, parse and enrich them, and route to appropriate sinks:

# Route errors to alerts
[transforms.filter_errors]
type = "filter"
inputs = ["parse_logs"]
condition = '.level == "error"'

[sinks.alertmanager]
type = "http"
inputs = ["filter_errors"]
uri = "http://alertmanager:9093/api/v1/alerts"

Search: Qdrant

Qdrant handles all search needs in General Bots, providing both vector similarity search for semantic queries and payload filtering for keyword-like queries.

Hybrid Search with Qdrant

Qdrant supports hybrid search combining vector similarity with keyword filters:

// Combine vector similarity + keyword filter
let search_request = SearchPoints {
    collection_name: "kb".to_string(),
    vector: query_embedding,
    limit: 10,
    filter: Some(Filter {
        must: vec![
            Condition::Field(FieldCondition {
                key: "content".to_string(),
                r#match: Some(Match::Text("keyword".to_string())),
            }),
        ],
        ..Default::default()
    }),
    ..Default::default()
};

Workflow Scheduling: SET SCHEDULE

General Bots uses the SET SCHEDULE keyword for all scheduling needs:

REM Run every day at 9 AM
SET SCHEDULE "daily-report" TO "0 9 * * *"
    TALK "Running daily report..."
    result = GET "/api/reports/daily"
    SEND MAIL "admin@example.com", "Daily Report", result
END SCHEDULE

MFA with Zitadel

Configuration

MFA is handled transparently by Zitadel:

# config.csv
auth-mfa-enabled,true
auth-mfa-methods,totp,sms,email,whatsapp
auth-mfa-required-for,admin,sensitive-operations
auth-mfa-grace-period-days,7

Zitadel MFA Settings

In the Zitadel console, navigate to Settings then Login Behavior. Enable Multi-Factor Authentication and select allowed methods including TOTP for authenticator apps, SMS, Email, and WebAuthn/FIDO2.

WhatsApp MFA Channel

# config.csv
auth-mfa-whatsapp-enabled,true
auth-mfa-whatsapp-provider,twilio
auth-mfa-whatsapp-template,mfa_code

The flow proceeds as follows: the user logs in with password, Zitadel triggers MFA, a code is sent via WhatsApp, the user enters the code, and the session is established.

Summary: What You Need

PostgreSQL, Redis, Qdrant, MinIO, and Zitadel are required components. Vault, InfluxDB, and LiveKit are recommended for production deployments. Vector is optional for log aggregation.

Next Steps

The Scaling and Load Balancing chapter provides a detailed scaling guide. The Container Deployment chapter covers LXC setup. The Security Features chapter offers a security deep dive. The LLM Providers appendix helps with model selection.

Observability

This chapter describes the observability infrastructure that General Bots provides for monitoring system health, collecting logs, and tracking metrics. The observability system operates automatically without requiring code changes, giving administrators visibility into platform behavior and helping identify issues before they impact users.

Understanding the Observability System

General Bots implements observability through an integrated pipeline that collects, parses, routes, and stores operational data from all system components. The pipeline reads log files from the centralized logs directory within the botserver-stack folder; extracts structured information including log levels, timestamps, and messages; routes different types of data to appropriate destinations, such as alerts for errors and storage for metrics; and enriches entries with contextual information like hostnames and service names.

This automated approach means administrators don’t need to instrument code or configure complex logging frameworks. The system captures operational data from all components using consistent formats and routes it to useful destinations without manual intervention.

Log Directory Organization

The logging system organizes output by component within the ./botserver-stack/logs/ directory. System logs from the main botserver application appear in the system subdirectory. Storage service operations are captured in the drive subdirectory. Database activity from PostgreSQL goes to the tables subdirectory. The cache subdirectory contains logs from the caching layer. LLM server interactions are recorded in the llm subdirectory.

Additional services have their own logging locations. Email service logs appear in the email subdirectory. Identity and authentication events are captured in the directory subdirectory. Vector database operations go to the vectordb subdirectory. Video meeting activities are logged in the meet subdirectory.

This organization makes it straightforward to investigate issues in specific components without wading through unrelated log entries.

Installation and Configuration

The observability component installs automatically during the bootstrap process, ensuring that monitoring begins from the first system start. Administrators who need to install it separately can use the botserver install command with the observability parameter.

Configuration for the observability pipeline resides in the monitoring configuration file within the botserver-stack conf directory. This Vector configuration file controls how logs are collected, parsed, transformed, and routed to their destinations.

Log Format Conventions

botserver generates logs in a standard format that includes the timestamp in ISO 8601 format with millisecond precision, the log level indicating severity, the module path identifying the code location, and the message describing what occurred. This structured format enables automated parsing while remaining human-readable for direct inspection.
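
A line in this format might look like the following; the module path and message are hypothetical, shown only to illustrate the four fields:

2025-03-14T09:21:07.342Z INFO botserver::channels::whatsapp Message dispatched to session 8f3a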

The pipeline parses these logs automatically, extracting fields for indexing and routing. Errors are identified by level and routed to alerting systems while informational messages flow to long-term storage for historical analysis.

Metrics Collection

The platform exposes operational metrics through a Prometheus-compatible endpoint at /api/metrics, enabling integration with standard monitoring infrastructure. Available metrics track log event counts by severity level, error totals broken down by service, currently active session counts, total messages processed since startup, and LLM response latency measurements.

These metrics enable administrators to understand system behavior over time, identify trends that might indicate developing problems, and verify that the platform operates within expected parameters. The Prometheus format ensures compatibility with common visualization and alerting tools.
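
As an illustration, /api/metrics output follows the standard Prometheus text format. The metric names below are hypothetical, shown only to indicate the shape of the data:

# HELP gb_log_events_total Log events by severity
# TYPE gb_log_events_total counter
gb_log_events_total{level="error"} 12
gb_log_events_total{level="info"} 4820
# HELP gb_active_sessions Currently active sessions
# TYPE gb_active_sessions gauge
gb_active_sessions 37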

Alerting Configuration

The observability system can send alerts automatically when error conditions occur. Webhook alerts POST event data to the admin alerts API endpoint, enabling integration with custom alerting systems. Slack integration sends notifications to configured channels when properly configured. Email alerts reach administrators directly when SMTP settings are provided.

Alert thresholds are configurable through the bot’s config.csv file. The CPU threshold setting triggers alerts when processor utilization exceeds the specified percentage. Memory threshold configuration works similarly for RAM usage. Response time thresholds flag slow operations that might indicate performance degradation.

Tuning these thresholds for your environment prevents alert fatigue from false positives while ensuring genuine issues receive attention.
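
As a sketch, threshold settings in config.csv might look like the following; the key names are assumptions for illustration, so check the keys your deployment actually documents:

# config.csv - alert thresholds (illustrative key names)
alert-cpu-threshold,80
alert-memory-threshold,85
alert-response-time-ms,2000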

Dashboard Visualization

A pre-built Grafana dashboard template is available in the templates directory, providing immediate visualization of key metrics. The dashboard includes panels for active sessions showing current load, messages per minute indicating throughput, error rates highlighting problems, and LLM latency percentiles revealing AI response performance.

Importing this dashboard into a Grafana instance connected to your metrics storage creates an operational overview suitable for operations teams and helps during incident investigation.

Log Level Configuration

The logging system supports four severity levels that control which messages are captured and the volume of output generated.

Error level captures failures that require attention, such as database connection losses or file permission problems. Warning level records unexpected conditions that were handled but might indicate developing issues. Info level logs normal operations and key events, providing a record of system activity without excessive detail. Debug level includes detailed flow information useful during development and troubleshooting but too verbose for normal production operation.

The log level setting in config.csv controls the minimum severity that produces output. Setting it to info captures everything except debug messages, providing operational visibility without overwhelming log storage.
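
A minimal sketch of that setting, assuming the key follows the hyphenated convention used elsewhere in this book:

# config.csv - minimum log level (key name assumed)
log-level,info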

Troubleshooting Common Issues

When logs aren’t being collected as expected, several common causes should be investigated. First, verify that the observability service is running and hasn’t crashed or been stopped. Second, check that the log directory permissions allow the collection process to read the files. Third, review the observability service’s own logs for errors that might explain the collection failure.

High log volume can overwhelm storage and make analysis difficult. Raising the log level from debug to info significantly reduces volume by eliminating detailed trace messages. Configuring retention policies in the metrics storage prevents unbounded growth. Filtering debug-level logs before they reach long-term storage reduces costs while preserving important operational data.

Operational Guidelines

Effective observability requires attention to both technical configuration and operational practices. Log content should never include sensitive data like passwords, tokens, or personally identifiable information, as logs often flow to systems with broader access than the application itself.

Using appropriate log levels keeps signal-to-noise ratios manageable. Reserve error level for actual failures requiring investigation. Use info level for normal operations that help understand system behavior. Avoid overusing warning level, which loses meaning when applied too broadly.

Monitoring should focus on trends rather than just instantaneous values. Gradual increases in error rates or response times often indicate developing problems before they become critical failures. Alert configuration should consider baseline behavior and flag deviations rather than simple threshold crossings.

Establishing observability early in deployment ensures that baseline data exists when problems occur. Trying to instrument a system during an incident rarely produces useful results.

For additional context on operating General Bots at scale, the Scaling and Load Balancing chapter explains how observability integrates with clustered deployments. The Infrastructure Design chapter provides the full architectural overview showing how observability fits into the complete system. The Monitoring Dashboard section describes the built-in monitoring interface available through the administrative UI.

Monitoring Setup

Autonomous Task AI

The Machine Does the Work


Overview

Autonomous Tasks let you describe what you want and the system builds it. No coding required - just describe your application in plain language.

You say:

“Create a CRM for my cellphone store”

You get:

  • Working HTMX application at /apps/cellphone-crm
  • Database tables: customers, products, sales, repairs
  • Forms, lists, search, filters - all functional
  • Direct connection to botserver API

Architecture

┌─────────────────────────────────────────────────────────────────┐
│                         Your App                                 │
│                                                                  │
│   ┌──────────┐    ┌──────────┐    ┌──────────┐                 │
│   │  Forms   │    │  Lists   │    │  Actions │                 │
│   └────┬─────┘    └────┬─────┘    └────┬─────┘                 │
│        │               │               │                        │
│        └───────────────┼───────────────┘                        │
│                        │ HTMX                                   │
└────────────────────────┼────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────────────┐
│                      botserver API                               │
│                                                                  │
│   /api/db/*          /api/drive/*         /api/llm/*           │
│   CRUD operations    File storage         AI features           │
│                                                                  │
└─────────────────────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────────────┐
│              PostgreSQL + MinIO + LLM                           │
│              (user_data virtual table)                          │
└─────────────────────────────────────────────────────────────────┘

Key insight: Apps talk directly to botserver. No middleware, no generated backend code - just HTMX calling the API.


The user_data Virtual Table

All app data lives in one flexible table system:

App: cellphone-crm
Table: customers
     │
     ▼
Namespace: cellphone-crm.customers
     │
     ▼
Storage: user_data table with proper indexing

Your app calls /api/db/customers and botserver handles the rest.

Benefits

  • No migrations - Tables created on demand
  • Isolation - Each app’s data is separate
  • Flexibility - Add fields anytime
  • Security - Per-app access control

How It Works

1. Describe

Tell the system what you want:

"Create a CRM for my cellphone store with:
- Customer tracking (name, phone, email)
- Product inventory with stock levels
- Sales linked to customers
- Repair status board"

2. Plan

System creates execution steps:

Step 1: Create tables (customers, products, sales, repairs)
Step 2: Generate HTMX application
Step 3: Add search and filters
Step 4: Configure repair workflow

3. Execute

Each step runs and shows progress:

[████████████████░░░░] 75%
Step 3 of 4: Adding search...

4. Deliver

Your app is ready:

✅ Application: /apps/cellphone-crm
✅ Tables: customers, products, sales, repairs
✅ Features: CRUD, search, status board

Generated App Structure

.gbdrive/apps/cellphone-crm/
├── index.html          # HTMX application
├── _assets/
│   ├── htmx.min.js     # HTMX library
│   ├── app.js          # Helpers
│   └── styles.css      # Styling
└── schema.json         # Table definitions

HTMX Patterns

List with Auto-Refresh

<div id="customers"
     hx-get="/api/db/customers"
     hx-trigger="load, every 30s"
     hx-swap="innerHTML">
    Loading...
</div>

Create Form

<form hx-post="/api/db/customers"
      hx-target="#customers"
      hx-swap="afterbegin">
    <input name="name" required>
    <input name="phone">
    <button type="submit">Add</button>
</form>

Search

<input type="search"
       hx-get="/api/db/customers"
       hx-trigger="keyup changed delay:300ms"
       hx-target="#customers"
       placeholder="Search...">

Delete

<button hx-delete="/api/db/customers/${id}"
        hx-target="closest tr"
        hx-confirm="Delete?">
    🗑️
</button>

API Mapping

| HTMX | Endpoint | Action |
|------|----------|--------|
| hx-get | /api/db/customers | List |
| hx-get | /api/db/customers/123 | Get one |
| hx-post | /api/db/customers | Create |
| hx-put | /api/db/customers/123 | Update |
| hx-delete | /api/db/customers/123 | Delete |

Query Parameters

?q=john              # Search
?status=active       # Filter
?sort=created_at     # Sort
?order=desc          # Direction
?limit=20&offset=40  # Pagination
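
These parameters compose with the endpoints above. As a sketch in the platform's BASIC dialect, using the documented GET keyword:

' Fetch the second page of matching customers (endpoint from the table above)
result = GET "/api/db/customers?q=john&status=active&limit=20&offset=20"
TALK result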

Task Steps Storage

Every task stores its steps for:

  • Continuation - Resume if interrupted
  • Progress - Know exactly where you are
  • Debugging - See what happened

{
  "task_id": "abc123",
  "steps": [
    {"order": 1, "name": "Create tables", "status": "completed"},
    {"order": 2, "name": "Generate UI", "status": "running", "progress": 60},
    {"order": 3, "name": "Add search", "status": "pending"}
  ]
}

Execution Modes

| Mode | Behavior |
|------|----------|
| Automatic | Runs without stopping |
| Supervised | Pauses before each step |
| Dry Run | Shows what would happen |

Dev Chat Widget

Test your app without leaving the page:

  1. Add ?dev=1 to URL or run on localhost
  2. Click the floating chat icon (or Ctrl+Shift+D)
  3. Talk to modify your app in real-time

<script src="/_assets/dev-chat.js"></script>

The dev chat uses the same user_data system for history storage.


Example: Cellphone Store CRM

Request:

“CRM for cellphone store with customers, products, sales, and repair tracking”

Result:

| Table | Fields |
|-------|--------|
| customers | id, name, phone, email, notes |
| products | id, name, brand, model, price, stock |
| sales | id, customer_id, product_id, quantity, total |
| repairs | id, customer_id, device, status, price |

Features:

  • Customer list with search
  • Product inventory with stock alerts
  • Sales entry form
  • Repair status board (Kanban)

Access: /apps/cellphone-crm


Best Practices

Be Specific

✅ Good:

“CRM for cellphone store with customer tracking, sales, and repair status workflow”

❌ Vague:

“Make an app”

Include Workflows

✅ Good:

“Repair status: received → diagnosing → repairing → ready → delivered”

❌ Missing:

“Track repairs”

Mention Relationships

✅ Good:

“Sales linked to customers and products”

❌ Unclear:

“Sales tracking”


See Also

The gbapp Philosophy: Let Machines Do Machine Work

Core Principle: Automation First

In 2025, the gbapp philosophy is simple and powerful: if a machine can do the work, let it do the work. This principle guides every decision about how to build and extend General Bots. Rather than writing code manually, you describe what you need and let AI handle the implementation details.

The Hierarchy of Development

The development approach in General Bots follows a clear hierarchy based on what percentage of work falls into each category.

LLM First (90% of cases)

The vast majority of work should be handled by letting AI write the code for you. Instead of implementing complex logic yourself, describe what you want in natural language and let the LLM generate the solution.

' Don't write complex logic - describe what you want
result = LLM "Generate a function that validates email addresses and returns true/false: " + email

BASIC for Flow Control (9% of cases)

BASIC serves as the orchestration layer that connects AI calls together. Think of it as glue code that manages the flow between different operations. The logic itself lives in LLM calls while BASIC handles sequencing and data flow.

' BASIC is just glue between AI calls
data = GET "api/data"
processed = LLM "Process this: " + data
SET "results", processed

Rust for Core Only (1% of cases)

Writing Rust code should be reserved for rare situations where you are contributing new keywords to the core platform, building fundamental infrastructure that many bots will use, or optimizing critical performance paths where every millisecond matters. Most developers will never need to write Rust because BASIC and LLM calls handle nearly every use case.

What gbapp Really Is

Understanding what gbapp is and is not helps clarify the development model.

The gbapp concept is not about external plugin packages that you download separately. It is not about separate npm modules or package managers. It is not a way to bypass BASIC and write custom code. It is not about runtime extensions that modify behavior dynamically.

Instead, gbapp represents virtual crates inside the src/ directory that are Rust modules compiling together into a single binary. The concept serves as a bridge between older plugin-based thinking and the modern integrated approach. It provides a familiar mental model for developers who want to contribute to the platform. Most importantly, gbapp embodies a mindset of coding through automation rather than manual implementation.

Real-World Examples

The contrast between traditional development and the General Bots approach becomes clear through examples.

Traditional Approach

In the old way of thinking, you might write hundreds of lines of custom Node.js, Python, or C# code for data validation. A function like validateComplexBusinessRules would contain extensive logic handling edge cases, format checking, and business rule verification. This code requires maintenance, testing, and documentation.

The General Bots Approach

With the automation-first philosophy, the same task takes three lines. You fetch your business rules from a file, ask the LLM to validate data against those rules, and handle the result. The AI understands the rules and applies them correctly without you implementing the validation logic.

' 3 lines - let AI handle complexity
rules = GET "business-rules.txt"
validation = LLM "Validate this data against these rules and answer APPROVED or REJECTED with a reason: " + data + " Rules: " + rules
IF validation CONTAINS "APPROVED" THEN TALK "Approved" ELSE TALK "Rejected: " + validation

The Multi-SDK Reality

You do not need separate SDKs or plugins for different services. Everything integrates through BASIC combined with LLM calls.

Integrating Any API

When you need to work with an external API, you do not need to find and install an SDK. Just fetch the data and let the LLM interpret and format it.

' No SDK needed - just describe what you want
data = GET "https://server/data"
answer = LLM "Do a good report from this json: " + data
TALK answer

Working with Any Database

Database operations do not require an ORM or query builder. The AI understands SQL and can generate queries from natural language descriptions.

' No ORM needed - AI understands SQL
results = FIND "users", "all users who logged in today"

Processing Any Format

You do not need parser libraries for different file formats. The LLM can transform data between formats based on your description.

' No parser library needed
xml_data = GET "complex.xml"
json = LLM "Convert this XML to JSON: " + xml_data
SET BOT MEMORY "processed_data", json

When to Write Code

Understanding when each approach applies helps you work efficiently.

Use LLM When

LLM calls are appropriate for processing unstructured data, implementing business logic, transforming between formats, making decisions, generating content, and analyzing patterns. This covers roughly ninety percent of everything you might want to do.

Use BASIC When

BASIC code handles orchestrating AI calls in sequence, simple flow control with conditionals and loops, managing state and variables, and connecting different systems together. Think of BASIC as the glue that holds everything together.

Use Rust When

Rust development is only necessary when building new keywords that will become part of the core platform, creating a new gbapp module in the src/ directory, performing system-level optimization for critical paths, or contributing new features that will benefit all users. Almost no one needs to write Rust for their bots.

The gbapp Mindset

Shifting your thinking is the most important part of adopting this philosophy.

Stop thinking about how to code a solution, what library you need to import, or how to extend the system with plugins. Start thinking about how to describe what you want to AI, what the simplest BASIC flow looks like, and how your patterns could help everyone using the platform.

Data Enrichment Example

Consider a data enrichment task that pulls information about companies from their websites.

The traditional approach requires over a thousand lines of code spread across multiple npm packages. You need complex error handling for network requests, HTML parsing for different website structures, and a maintenance nightmare as websites change their formats.

The General Bots approach handles the same task in a few lines. You find companies that need enrichment, loop through them, fetch each website, ask the LLM to extract company information, and save the results. The AI handles all the complexity of parsing different website formats.

items = FIND "companies", "needs_enrichment=true"
FOR EACH item IN items
    website = WEBSITE OF item.company
    page = GET website
    enriched = LLM "Extract company info from: " + page
    SET "companies", "id=" + item.id, "data=" + enriched
NEXT

Report Generation Example

Generating reports traditionally requires a custom reporting engine, template systems, complex formatting logic, and PDF libraries. That infrastructure takes significant development and ongoing maintenance.

With General Bots, you find the relevant data, ask the LLM to create an executive summary, and generate a site with the results. Three lines replace an entire reporting infrastructure.

data = FIND "sales", "month=current"
report = LLM "Create executive summary from: " + data
CREATE SITE "report", "template", report

The Ultimate Test

Before writing any code, ask yourself three questions in order. First, can the LLM do this? The answer is usually yes. Second, can BASIC orchestrate it? Almost always yes. Third, do you really need Rust? Almost never.

Only proceed to writing custom code if you have genuinely exhausted the first two options. The LLM and BASIC combination handles far more than most developers initially expect.

Benefits of This Approach

For Developers

This approach enables development that is roughly one hundred times faster than traditional coding. You have no dependency management headaches and no version conflicts between packages. The maintenance burden drops dramatically because there is no custom code to maintain. You can focus on business logic and what you want to accomplish rather than implementation details.

For Organizations

Organizations benefit from reduced complexity in their bot deployments. Maintenance costs drop because there is less custom code to support. Iterations happen faster since changes involve modifying descriptions rather than rewriting code. There is no vendor lock-in to specific libraries or frameworks. Anyone in the organization can contribute because they do not need traditional programming skills.

For the Community

Shared improvements benefit everyone using the platform. There is no fragmentation into incompatible plugin ecosystems. Users experience consistency across different bots and deployments. The community advances collectively rather than each organization maintaining separate extensions.

The Future is Already Here

In 2025, this approach is not aspirational but reality. Applications built entirely with BASIC and LLM calls run in production today. Most use cases require zero custom code. AI handles complexity better than hand-written algorithms in many domains. Machines do machine work while humans focus on human work like understanding requirements and making decisions.

Migration Path

From Extensions to Virtual Crates

If you have existing plugin-style extensions, the migration path consolidates them into the main source tree. An old extension might have been a separate folder with hundreds of lines of JavaScript, a package.json, and complex logic. The new approach places a small Rust module in src/ that registers BASIC keywords, while the actual logic moves to a few lines of BASIC in your .gbdialog folder that leverage LLM calls.

From Code to Descriptions

Migration from traditional code involves converting algorithms into natural language descriptions. Instead of writing the logic to process data, you describe what processing you need and let the LLM implement it.
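
A minimal sketch of this conversion, using the documented GET, LLM, and SET BOT MEMORY keywords (the file name and prompt are illustrative):

' Describe the processing instead of implementing it
raw = GET "orders.csv"
clean = LLM "Deduplicate these rows and normalize all dates to ISO 8601: " + raw
SET BOT MEMORY "clean_orders", clean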

From Libraries to LLM

Instead of importing twenty npm packages for various functionality, you make single LLM calls with descriptions of what you need. The AI has knowledge of countless libraries and formats built into its training.

Development Guidelines

Follow these practices to work effectively with the automation-first philosophy. Describe problems to the LLM in clear, specific terms. Use BASIC as minimal glue between AI operations. Contribute keywords to the core when you discover patterns that would benefit everyone. Share your patterns with the community so others can learn. Think automation-first for every task you encounter.

Avoid common mistakes that fight against this philosophy. Do not write complex algorithms when a description would suffice. Do not build separate plugins that fragment the ecosystem. Do not create custom frameworks that add unnecessary complexity. Do not maintain separate codebases when everything should be in one place. Do not fight the machine by insisting on manual implementation.

The Virtual Crate Architecture

Each gbapp is now a module in the src/ directory. The structure maps conceptually familiar package names to Rust modules. The core gbapp lives in src/core/. The BASIC interpreter is src/basic/. Channel adapters are in src/channels/. Your contribution would go in src/your_feature/. This elegant mapping preserves the conceptual model of separate packages while leveraging Rust’s module system and compiling everything into a single optimized binary.

Conclusion

The gbapp concept in 2025 has evolved from external packages to virtual crates. These Rust modules inside src/ compile into a single, optimized binary while preserving the familiar mental model of separate functional packages.

The philosophy remains constant: machines are better at machine work. Your job is to describe what you want, not implement how to do it. The combination of BASIC for orchestration and LLM for logic eliminates the need for traditional programming in almost all cases.

Examples Repository

The /templates/ directory contains real-world examples of applications built entirely with BASIC and LLM calls. A CRM system requires about fifty lines of BASIC. Email automation needs around thirty lines. Data pipelines work in twenty lines. Report generators take about fifteen lines. Each of these would have required thousands of lines of traditional code, demonstrating the dramatic productivity improvement this philosophy enables.

.gbai Architecture

A bot is just a folder. The .gbai extension marks a directory as a botserver package containing everything needed to run a conversational AI bot - scripts, documents, configuration, and themes.

The Dead Simple Structure

my-bot.gbai/                    # This folder = your entire bot
  my-bot.gbdialog/           # BASIC conversation scripts
  my-bot.gbkb/               # Documents for Q&A
  my-bot.gbot/               # Configuration
  my-bot.gbtheme/            # Optional UI customization

That’s it. No manifests, no build files, no dependencies. Copy the folder to deploy.

Visual Architecture

Two diagrams accompany this section in the rendered book: the overall architecture and the package structure.

How Bootstrap Finds Bots

At startup, botserver scans templates/ for any folder ending in .gbai:

templates/
  default.gbai/       → Creates bot at /default
  support.gbai/       → Creates bot at /support  
  sales.gbai/         → Creates bot at /sales

Each .gbai becomes a URL endpoint automatically. Zero configuration.

What Goes Where

.gbdialog/ - Your Bot’s Brain

BASIC scripts that control conversation flow:

my-bot.gbdialog/
  start.bas           # Optional - needed to activate tools/KB
  auth.bas            # Login flow
  tools/              # Callable functions
    book-meeting.bas
    check-status.bas
  handlers/           # Event responses
    on-email.bas

Example start.bas (optional, but required for tools/KB):

USE KB "policies"
USE TOOL "book-meeting"
USE TOOL "check-status"
TALK "Hi! I'm your assistant with tools and knowledge ready."

Note: If you don’t need tools or knowledge bases, start.bas is optional. The LLM will handle basic conversations without it.

.gbkb/ - Your Bot’s Knowledge

Documents organized by topic:

my-bot.gbkb/
  policies/           # HR documents
    vacation.pdf
    handbook.docx
  products/           # Product info
    catalog.pdf
    pricing.xlsx
  support/            # Help docs
    faq.md

Each folder becomes a searchable collection. Drop files in, bot learns automatically.

.gbot/ - Your Bot’s Settings

Single config.csv file with key-value pairs:

llm-model,your-preferred-model
temperature,0.7
max-tokens,2000
welcome-message,Hello! How can I help?
session-timeout,1800

No complex JSON or YAML. Just simple CSV that opens in Excel.

.gbtheme/ - Your Bot’s Look (Optional)

Custom web interface styling:

my-bot.gbtheme/
  styles.css          # Custom CSS
  logo.png           # Brand assets
  templates/         # HTML overrides
    chat.html

If missing, uses default theme. Most bots don’t need this.

Real Example: Support Bot

Here’s a complete customer support bot:

support.gbai/
  support.gbdialog/
    start.bas         # Optional, but needed for tools/KB
    tools/
      create-ticket.bas
      check-status.bas
  support.gbkb/
    faqs/
      common-questions.pdf
    guides/
      troubleshooting.docx
  support.gbot/
    config.csv

start.bas (activates tools and knowledge bases):

USE KB "faqs"
USE KB "guides"
USE TOOL "create-ticket"
USE TOOL "check-status"

TALK "Support bot ready. How can I help?"

create-ticket.bas:

PARAM issue, priority
DESCRIPTION "Creates support ticket"

ticket_id = GENERATE_ID()
SAVE "tickets.csv", ticket_id, issue, priority, NOW()
TALK "Ticket #" + ticket_id + " created"

config.csv:

llm-model,your-preferred-model
bot-name,TechSupport
greeting,Welcome to support!

Deployment = Copy Folder

Local Development

cp -r my-bot.gbai/ templates/
./botserver restart
# Visit http://localhost:9000/my-bot

Production Server

scp -r my-bot.gbai/ server:~/botserver/templates/
ssh server "cd botserver && ./botserver restart"

LXC Container

lxc file push my-bot.gbai/ container/app/templates/

No build step. No compilation. Just copy files.

Multi-Bot Hosting

One botserver runs multiple bots:

templates/
  support.gbai/       # support.example.com
  sales.gbai/         # sales.example.com
  internal.gbai/      # internal.example.com
  public.gbai/        # www.example.com

Each bot:

  • Gets own URL endpoint
  • Has isolated sessions
  • Runs independently
  • Shares infrastructure

Naming Conventions

Required

  • Folder must end with .gbai
  • Subfolders must match: botname.gbdialog, botname.gbkb, etc.
  • start.bas is optional, but required if you want to use tools or knowledge bases (must USE TOOL and USE KB to activate them)

Recommended

  • Use lowercase with hyphens: customer-service.gbai
  • Group related bots: support-tier1.gbai, support-tier2.gbai
  • Version in folder name if needed: chatbot-v2.gbai

Bootstrap Process

When botserver starts, it runs the template deployment flow (diagrammed in the rendered book), which takes about 5-10 seconds per bot.

UI Architecture

The web interface uses HTMX with server-side rendering - minimal client-side code:

  • Askama templates for HTML generation
  • HTMX for dynamic updates without JavaScript
  • No webpack, no npm build
  • Edit and refresh to see changes
  • Zero compilation time

Package Size Limits

Default limits (configurable in config.csv):

| Setting | Default | config.csv key |
|---------|---------|----------------|
| Total package | 100MB | package-max-size |
| Single document | 10MB | user-file-limit |
| Number of files | 1000 | user-file-count |
| Script size | 1MB | script-max-size |
| Collection count | 50 | kb-max-collections |

Example override in your bot’s config.csv:

name,value
package-max-size,209715200
user-file-limit,52428800

Troubleshooting

Bot not appearing?

  • Check folder ends with .gbai
  • Verify subfolders match bot name
  • If using tools/KB, ensure start.bas exists with USE TOOL/USE KB commands

Documents not searchable?

  • Ensure files are in .gbkb/ subfolder
  • Check file format is supported
  • Wait 30 seconds for indexing

Scripts not running?

  • Validate BASIC syntax
  • Check file has .bas extension
  • Review logs for errors

Best Practices

Do’s

  • Keep packages under 50MB
  • Organize knowledge by topic
  • Use clear folder names
  • Test locally first

Don’ts

  • Don’t nest .gbai folders
  • Don’t mix test/prod in same folder
  • Don’t hardcode absolute paths
  • Don’t store secrets in scripts

Summary

The .gbai architecture keeps bot development simple. No complex frameworks, no build systems, no deployment pipelines. Just organize your files in folders, and botserver handles the rest. Focus on content and conversation, not configuration.

Next: Learn about .gbdialog Dialogs for writing conversation scripts.

.gbdialog Dialogs

The .gbdialog package contains BASIC scripts that define conversation flows, tool integrations, and bot behavior.

⚠️ System Limits & Safety

All .gbdialog scripts run in a sandboxed environment with enforced limits to prevent abuse:

| Limit | Value | Description |
|-------|-------|-------------|
| Loop Iterations | 100,000 | Maximum iterations per loop (WHILE/FOR) |
| Script Timeout | 300 seconds | Maximum execution time |
| String Length | 10 MB | Maximum string size |
| Array Length | 1,000,000 | Maximum array elements |
| File Size | 100 MB | Maximum file size for GET/SAVE |
| API Calls | 1,000/minute | Rate limit per user |

Important:

  • Loops automatically terminate if they exceed the iteration limit
  • Scripts that run too long are forcefully stopped
  • Excessive API calls return HTTP 429 (Too Many Requests)
  • File operations are restricted to the bot’s .gbdrive scope

See System Limits for complete documentation.

What is .gbdialog?

.gbdialog files are written in a specialized BASIC dialect that controls:

  • Tool execution and integrations
  • LLM prompting and context
  • Knowledge base activation
  • Session and memory management
  • External API calls

Modern Approach: Let the LLM Work

Minimal BASIC Philosophy

Instead of complex logic, use the LLM’s natural understanding:

' Example from announcements.gbai/update-summary.bas
' Generate summaries from documents
text = GET "announcements.gbkb/news/news.pdf"
resume = LLM "In a few words, summarize this: " + text
SET BOT MEMORY "resume", resume

' Example from law.gbai/case.bas
' Load context and let LLM answer questions
text = GET "case-" + cod + ".pdf"
text = "Based on this document, answer the person's questions:\n\n" + text
SET CONTEXT text
TALK "Case loaded. You can ask me anything about the case."

Key Components

1. LLM Integration

' LLM is for background processing only - generates content once for all users
' Example: Generate a summary that all users will see
text = GET "document.pdf"
summary = LLM "Summarize this document: " + text
SET BOT MEMORY "daily_summary", summary

' For interactive conversations, use SET CONTEXT and TALK
SET CONTEXT "user_type" AS "premium customer"
TALK "How can I help you today?"

2. Tool Execution

' Define tools with parameters
PARAM name AS string LIKE "John Smith" DESCRIPTION "Customer name"
PARAM email AS string LIKE "john@example.com" DESCRIPTION "Email"

' LLM automatically knows when to call this
SAVE "customers.csv", name, email
TALK "Registration complete!"

3. Knowledge Base Usage

See Knowledge Base documentation for details.

' Activate knowledge base collections
USE KB "products"
USE KB "policies"

' The system AI searches these automatically during conversations
' No LLM command needed - just TALK to the user
TALK "What product information can I help you with?"

Script Structure

Entry Point: start.bas (Optional)

The start.bas file in the .gbdialog folder is optional, but required if you want to activate tools or knowledge bases:

' Optional start script - needed only to activate tools/KB
USE KB "company_docs"
USE TOOL "book-meeting"
USE TOOL "check-status"
TALK "Welcome! How can I assist you today?"

When you need start.bas:

  • To activate knowledge bases with USE KB
  • To activate tools with USE TOOL
  • To set initial context or configuration

When you don’t need start.bas:

  • For simple conversational bots
  • When the LLM can handle everything without tools/KB
  • For basic Q&A without document search

Tool Definitions

Create separate .bas files for each tool. See KB and Tools for more information:

' enrollment.bas - The LLM knows when to use this
PARAM student_name AS string
PARAM course AS string
DESCRIPTION "Enrolls a student in a course"

SAVE "enrollments.csv", student_name, course, NOW()
TALK "Enrolled successfully!"

Best Practices

1. Minimal Logic

' Good - Let system AI handle the conversation naturally
TALK "How can I help you?"
' System AI understands context and responds appropriately

' Avoid - Don't micromanage the flow
' IF user_says_this THEN do_that...

2. Clear Tool Descriptions

DESCRIPTION "This tool books appointments for customers"
' The LLM uses this description to know when to call the tool

3. Context Over Conditions

' Provide context, not rules
SET CONTEXT "business_hours" AS "9AM-5PM weekdays"
TALK "When would you like to schedule?"
' System AI naturally understands to mention hours when relevant

4. Trust the System AI

' The system AI handles conversations naturally
TALK "Hello! I'm here to help."
' System handles greetings, questions, complaints naturally

Common Patterns

Document Summarization - Background Processing (from announcements.gbai)

' Schedule automatic updates - runs in background
SET SCHEDULE "59 * * * *"

' Fetch and summarize documents ONCE for all users
text = GET "announcements.gbkb/news/news.pdf"
resume = LLM "In a few words, summarize this: " + text
SET BOT MEMORY "resume", resume  ' Stored for all users

Interactive Case Analysis - User Conversations (from law.gbai)

' Ask for case number - interactive with user
TALK "What is the case number?"
HEAR cod

' Load case document
text = GET "case-" + cod + ".pdf"

IF text THEN 
    ' Set context for system AI to use in conversation
    text = "Based on this document, answer the person's questions:\n\n" + text
    SET CONTEXT text 
    TALK "Case loaded. Ask me anything about it."
ELSE
    TALK "Case not found, please try again."
END IF

Tool Definition Pattern

' Tool parameters (auto-discovered by LLM)
PARAM name AS string
PARAM email AS string
DESCRIPTION "Enrollment tool"

' Tool logic (called when LLM decides)
SAVE "enrollments.csv", name, email
TALK "Successfully enrolled " + name

Knowledge Base Activation Pattern

USE KB "products"
USE KB "reviews"
USE KB "specifications"

' System AI searches these collections automatically during conversation
TALK "What would you like to know about our products?"

Advanced Features

Memory Management

See Storage documentation for persistent data options.

SET BOT MEMORY "company_policy", policy_text
' Available across all sessions

retrieved = GET BOT MEMORY "company_policy"

External APIs

See External APIs chapter for integration patterns.

result = GET "https://api.example.com/data"
' For background processing only
summary = LLM "Summarize this data: " + result
SET BOT MEMORY "api_summary", summary

Suggestions

See UI Interface for UI integration.

ADD SUGGESTION "Schedule Meeting" AS "schedule"
ADD SUGGESTION "View Products" AS "products"
' UI shows these as quick actions

Error Handling

The system handles errors gracefully:

  • Syntax errors caught at compile time
  • Runtime errors logged but don’t crash
  • LLM provides fallback responses
  • Timeouts prevent infinite operations
  • Loop limit exceeded: Script terminates with “Maximum iterations exceeded” error
  • Rate limit exceeded: Returns HTTP 429 with retry_after_secs value
  • File too large: Operation fails with “Limit exceeded for file_size” error

Script Execution

Scripts run in a sandboxed environment with:

  • Access to session state
  • LLM generation capabilities
  • Knowledge base search
  • Tool execution rights
  • External API access (configured)
  • Enforced resource limits (see System Limits above)
  • No direct filesystem access - only .gbdrive via keywords
  • Rate limiting - excessive requests return 429

Migration from Traditional Bots

Old Way (Complex Logic)

' DON'T DO THIS - 1990s style
' IF INSTR(user_input, "order") > 0 THEN
'   IF INSTR(user_input, "status") > 0 THEN
'     TALK "Checking order status..."
'   ELSE IF INSTR(user_input, "new") > 0 THEN
'     TALK "Creating new order..."
'   END IF
' END IF

New Way (System AI Intelligence)

' DO THIS - Let system AI handle conversation naturally
TALK "How can I help you with your order?"
' System AI understands context and intent automatically

The key is to trust the system AI and write less code for more intelligent behavior.

Important Distinction

  • LLM Command: For background/batch processing, generates content ONCE, stored in BOT MEMORY for all users
  • Interactive Conversations: Use HEAR/TALK/SET CONTEXT, system AI handles the natural conversation flow

See Also

.gbkb Knowledge Base

The .gbkb package contains your bot’s domain knowledge - documents that the AI uses to answer questions accurately about your specific organization, products, or services.

What It Does

When you place documents in a .gbkb folder, the system automatically:

  1. Extracts text from your files (PDF, DOCX, TXT, MD, HTML, CSV)
  2. Creates searchable indexes using vector embeddings
  3. Enables semantic search so users can ask questions naturally

This means your bot answers based on YOUR documents, not just general AI knowledge.

Folder Structure

mybot.gbai/
└── mybot.gbkb/
    ├── policies/           ← Collection: "policies"
    │   ├── vacation.pdf
    │   └── handbook.docx
    ├── products/           ← Collection: "products"
    │   ├── catalog.pdf
    │   └── specs.xlsx
    └── support/            ← Collection: "support"
        └── faq.md

Each subfolder becomes a collection you can activate independently.

Using in BASIC Scripts

' Activate collections for this conversation
USE KB "policies"
USE KB "products"

' Now the AI automatically searches these when answering
TALK "How can I help you today?"

' Later, clear when done
CLEAR KB "policies"

Supported File Types

| Format | Extensions |
|--------|------------|
| PDF | .pdf |
| Word | .docx, .doc |
| Text | .txt, .md |
| Web | .html |
| Data | .csv, .json |

Key Points

  • Automatic indexing - Just drop files in folders
  • Semantic search - Users don’t need exact keywords
  • Multiple collections - Organize by topic, activate as needed
  • No code required - The AI handles search automatically

Learn More

.gbot Bot Configuration

The .gbot package serves as the configuration center for your bot, containing the settings that define how the bot behaves, which AI models it uses, and how it interacts with users. This chapter explains the configuration system and guides you through the available options.

Understanding Bot Configuration

Every bot in General Bots requires configuration to operate effectively. The .gbot folder within your bot package holds these settings, primarily through a config.csv file that uses simple key-value pairs. This approach makes configuration accessible to anyone comfortable with spreadsheet applications while remaining powerful enough for complex deployments.

The configuration system influences several aspects of bot behavior. Bot identity settings control how the bot presents itself to users. LLM configuration determines which language model powers the bot’s intelligence and how it generates responses. Context management settings affect how the bot maintains conversation history and retrieves relevant information. Integration parameters connect the bot to external services and APIs.

The config.csv File

Configuration lives in a straightforward CSV format with two columns: key and value. This design choice prioritizes accessibility—you can edit configuration in any text editor or spreadsheet application without learning complex syntax. Each row represents a single setting, making it easy to scan and modify.

The file supports various data types implicitly. Text values are stored as-is, numbers are parsed when needed, and boolean values typically use “true” and “false” strings. The system handles type conversion automatically when reading configuration, so you rarely need to worry about explicit typing.
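
A minimal config.csv in this format might look like the sketch below. Note that this chapter names settings with underscores, while examples elsewhere in this book use hyphenated keys such as bot-name; treat the exact key spelling as deployment-specific:

bot_name,Legal Document Assistant
bot_description,Answers questions about contracts and filings
temperature,0.7
max_tokens,2000
memory_enabled,true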

Bot Identity Configuration

Identity settings establish how your bot presents itself during conversations. The bot_name parameter provides the display name users see when interacting with the bot. A descriptive bot_description helps users understand the bot’s purpose and capabilities. Version tracking through the version parameter supports deployment management and debugging.

These identity settings matter because they shape user expectations. A bot named “Legal Document Assistant” with an appropriate description sets different expectations than a generic “Helper Bot.” Clear identity configuration improves user experience by establishing context before conversations begin.

Language Model Settings

LLM configuration represents perhaps the most important settings in your bot. The llm_provider parameter specifies which AI service powers your bot, supporting options like OpenAI, Azure OpenAI, or local model servers. The llm_model parameter identifies the specific model to use, such as GPT-5, Claude Sonnet 4.5, or a local GGUF model.

Response characteristics are controlled through several parameters. The temperature setting affects response creativity, with lower values producing more focused and deterministic outputs while higher values allow more varied and creative responses. The max_tokens parameter limits response length, preventing runaway generation and managing costs for cloud-based providers.

The system_prompt parameter provides instructions that shape the bot’s personality and behavior throughout conversations. This prompt is prepended to every interaction, giving the model consistent guidance about how to respond, what tone to use, and what boundaries to respect.

Context Management

Context settings control how the bot maintains awareness of conversation history and relevant information. The context_window parameter determines how many previous messages remain visible to the model during each interaction. Larger windows provide better continuity but consume more tokens.

The context_provider setting influences how context is assembled and presented to the model. Different providers may apply various strategies for selecting and formatting context, optimizing for different use cases.

Memory functionality, controlled by the memory_enabled setting, allows bots to retain information across sessions. When enabled, bots can remember user preferences, previous interactions, and other persistent data that improves personalization.

Configuration Loading and Precedence

The system assembles configuration from multiple sources, applying them in a specific order that allows flexible overrides. Default values provide baseline behavior when no explicit configuration exists. Settings in your .gbot/config.csv file override these defaults for your specific bot.

Environment variables can override config.csv settings, useful for deployment scenarios where configuration varies between environments. Database configuration provides another override layer, supporting runtime configuration changes that persist across restarts. Finally, runtime API calls can temporarily adjust settings without permanent changes.

This precedence system enables sophisticated deployment patterns. Development environments might use local configuration files while production deployments pull settings from environment variables or databases. The same bot package can behave differently across environments without modification.

Dynamic Configuration with Bot Memory

Beyond static configuration, bots can store and retrieve dynamic settings using bot memory. The SET BOT MEMORY keyword stores values that persist across all sessions, effectively creating runtime configuration that can be modified through bot scripts.

This capability supports scenarios where configuration needs to adapt based on usage patterns, administrative decisions, or external inputs. A bot might store preferred response styles, accumulated statistics, or cached data that influences its behavior.
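
A short sketch using the documented SET BOT MEMORY and GET BOT MEMORY keywords (the memory key is illustrative):

' Store a runtime preference once, visible to all sessions
SET BOT MEMORY "response_style", "formal and concise"

' Any later script can read it back and act on it
style = GET BOT MEMORY "response_style"
TALK "Current style: " + style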

Best Practices

Effective configuration follows several principles. Keep identity settings clear and accurate—users trust bots more when their purpose is evident. Choose LLM settings that balance capability with cost and latency requirements. Set appropriate context windows that provide continuity without excessive token consumption.

Document non-obvious configuration choices, either in comments within config.csv or in accompanying documentation. This practice helps future maintainers understand why settings were chosen and whether they should be adjusted.

Test configuration changes in development environments before deploying to production. Some settings interact in non-obvious ways, and testing catches issues before they affect users.

Summary

The .gbot configuration system provides comprehensive control over bot behavior through accessible CSV files augmented by environment variables, database settings, and runtime adjustments. Understanding these configuration options and their precedence helps you build bots that behave predictably across different deployment scenarios while remaining adaptable to changing requirements.

.gbtheme UI Theming

The .gbtheme package provides visual customization for your bot’s web interface through straightforward CSS-based theming. This approach prioritizes simplicity—you create CSS files that override default styles, without needing complex build tools, template engines, or JavaScript frameworks.

The Philosophy of Simple Theming

Many theming systems require elaborate toolchains, preprocessors, and build processes that create barriers for non-developers who want to customize their bot’s appearance. General Bots takes a different approach by using plain CSS files that any web designer can create and modify.

This simplicity doesn’t sacrifice capability. CSS custom properties (variables) provide the flexibility to change colors, typography, spacing, and other visual characteristics throughout the interface by modifying a few central values. The bot’s default interface handles all the complex layout and functionality concerns, leaving themes to focus purely on appearance.

Theme Structure

A theme consists of one or more CSS files placed in the .gbtheme folder within your bot package. The simplest theme might be a single default.css file containing variable overrides. More complex setups might include multiple theme files for different contexts—a dark theme for evening use, a high-contrast theme for accessibility, or seasonal themes for special occasions.

The system automatically loads the default theme on startup, and scripts can switch between available themes at runtime based on user preferences, time of day, or any other logic you implement.

CSS Variables and Customization

The bot interface defines a set of CSS custom properties that control fundamental visual characteristics. By overriding these properties in your theme file, you can transform the interface’s appearance with minimal code.

The primary-color variable establishes your main brand color, used for headers, buttons, and other prominent elements. The secondary-color provides accent coloring for highlights and interactive elements. Background and text-color control the basic page appearance and readability.

Typography settings including font-family let you match your organization’s brand standards. Structural properties like border-radius affect the overall feel—sharp corners suggest professionalism while rounded corners feel friendlier. Spacing controls help maintain consistent visual rhythm throughout the interface.

These variables cascade through the interface components, meaning a single change propagates everywhere that property is used. This approach makes comprehensive theming achievable with just a handful of variable overrides.
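
A minimal default.css might therefore consist of nothing but variable overrides. The property names below follow the variables described above, but their exact spelling (including the custom-property prefixes) is an assumption to verify against the interface's stylesheet:

/* .gbtheme/default.css - minimal sketch, property names assumed */
:root {
  --primary-color: #0a5cad;
  --secondary-color: #f2a900;
  --background: #ffffff;
  --text-color: #1a1a1a;
  --font-family: "Inter", sans-serif;
  --border-radius: 6px;
}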

Creating Effective Themes

Building a theme starts with understanding your visual goals. Corporate deployments often need to match existing brand guidelines, requiring specific colors, fonts, and visual treatments. Consumer-facing bots might prioritize approachability and visual appeal. Internal tools might emphasize clarity and efficiency over aesthetics.

A minimal theme might override only the primary and secondary colors to match brand standards while accepting defaults for everything else. This approach gets results quickly with minimal effort. As needs grow, you can progressively add more customization.

When creating dark themes, remember to adjust not just the background color but also text colors, borders, shadows, and any other elements that assume a light background. Contrast matters for readability—test your themes with actual content to ensure text remains legible.

Accessibility considerations should inform theme design. Ensure sufficient contrast ratios between text and backgrounds, avoid relying solely on color to convey information, and test with various visual impairments in mind.

Dynamic Theme Switching

Bots can change themes at runtime through the CHANGE THEME keyword in BASIC scripts. This capability enables several useful patterns.

User preference systems let visitors choose their preferred theme, with the selection stored in user memory for future visits. Time-based switching can apply dark themes during evening hours automatically. Contextual theming might use different visual treatments for different conversation modes or topics.

Theme switching happens instantly without page reloads, providing smooth transitions that maintain conversation flow.
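
A sketch of time-based switching in BASIC might look like the following. CHANGE THEME is the documented keyword; the HOUR helper and the theme file names are illustrative assumptions:

' Apply a dark theme during evening hours (HOUR is an assumed helper)
hour = HOUR(NOW())

IF hour >= 18 OR hour < 6 THEN
    CHANGE THEME "dark"
ELSE
    CHANGE THEME "default"
END IF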

Configuration Integration

Theme settings can also be specified in the bot’s config.csv file, providing default values that themes can override. The theme parameter specifies which theme file to load by default. The theme-color1 and theme-color2 parameters provide primary and secondary colors that the interface uses when no theme file specifies otherwise.

These configuration values serve as fallbacks—CSS files in the .gbtheme folder take precedence when they define the same properties. This layering allows simple color customization through configuration while supporting full CSS theming for more sophisticated needs.
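
For instance, a config.csv might provide these fallbacks; the parameter names are the ones described above, and the color values reuse examples that appear in the template chapters later in this book:

name,value
theme,default.css
theme-color1,#1565C0
theme-color2,#E3F2FD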

No Build Process Required

Unlike many modern web development workflows, .gbtheme requires no build tools, preprocessors, or compilation steps. You write CSS files, place them in the appropriate folder, and they take effect. Changes appear immediately through hot reloading, making iterative design work fast and responsive.

This simplicity means designers without development environment setup can contribute themes. Anyone who can write CSS can customize the interface, lowering barriers to visual customization.

Migrating from Complex Systems

Organizations moving from platforms with complex theming systems can extract their essential visual parameters and recreate them as CSS variable overrides. The process typically involves identifying brand colors and typography from the existing theme, mapping those values to General Bots CSS variables, testing the result against the interface, and iteratively refining until the appearance matches expectations.

Much of the complexity in traditional theming systems exists to handle layout and functionality concerns that General Bots manages through its default interface. By focusing themes purely on visual styling, the migration process becomes much simpler.

Best Practices

Effective theming follows several principles. Keep theme files focused and minimal—override only what you need to change rather than redefining everything. Start with a single default.css file and add complexity only as requirements demand.

Test themes across different devices and screen sizes to ensure they work well everywhere. Pay attention to interactive states like hover, focus, and active to ensure the interface remains usable and visually coherent.

Document theme choices, especially when values differ from brand guidelines for technical reasons. Future maintainers will appreciate understanding why specific decisions were made.

Maintain consistency within themes—if you override one color, consider whether related elements need adjustment to maintain visual harmony.

Summary

The .gbtheme system demonstrates that powerful customization doesn’t require complex tooling. Through CSS variables and standard stylesheets, you can transform the bot interface’s appearance while the platform handles the underlying complexity. This approach respects the skills of designers and developers alike, enabling visual customization without artificial barriers.

.gbdrive File Storage

The .gbdrive system provides centralized file storage for all bot packages, leveraging S3-compatible object storage to deliver reliable, scalable storage infrastructure. This chapter explains how file storage works, how files are organized, and how to interact with stored content.

Understanding File Storage in General Bots

Every bot requires storage for its various components—scripts, documents, configuration files, user uploads, and generated content. Rather than managing files across disparate locations, General Bots consolidates storage through the .gbdrive system, which provides a consistent interface regardless of the underlying storage backend.

The storage system builds on S3-compatible object storage, meaning it works with self-hosted solutions like MinIO as well as cloud providers like AWS S3, Backblaze B2, or DigitalOcean Spaces. This flexibility allows deployments to choose storage solutions that match their requirements for cost, performance, and data residency.

Beyond simple file storage, the system provides versioning capabilities, access control, automatic synchronization, and integration with other bot components like knowledge bases and themes.

Storage Organization

Files are organized using a bucket-per-bot structure that keeps each bot’s content isolated and manageable. Within a bot’s storage bucket, the familiar package structure appears: .gbdialog for scripts, .gbkb for knowledge base collections, .gbot for configuration, and .gbtheme for interface customization.

Additionally, each bot has space for user-uploaded files, generated content, and other runtime data. This organization mirrors the logical structure you work with during development, making it intuitive to understand where files reside and how they relate to bot functionality.

The system maintains this structure automatically when bots are deployed or updated, ensuring that the storage state reflects the current bot configuration without manual intervention.

.gbusers - Per-User Storage

The .gbusers folder within .gbdrive provides isolated storage space for each user interacting with the bot. This enables personalized document storage, user-specific settings, and application data that persists across sessions.

User Folder Structure

User folders are identified by the user’s email address or phone number:

mybot.gbai/
  mybot.gbdrive/
    users/
      john@example.com/           # User identified by email
        papers/
          current/                # Active/working documents
            untitled-1.md
            meeting-notes.md
          named/                  # Saved/named documents
            quarterly-report/
              document.md
              attachments/
            project-proposal/
              document.md
        uploads/                  # User file uploads
        exports/                  # Generated exports (PDF, DOCX, etc.)
        settings/                 # User preferences
          preferences.json
      +5511999887766/             # User identified by phone number
        papers/
          current/
          named/
        uploads/

User Identifier Format

Users are identified by their primary contact method:

  • Email: john@example.com, maria@company.com.br
  • Phone: +5511999887766, +1234567890 (E.164 format)

The identifier is sanitized for filesystem compatibility while remaining human-readable.

Paper Document Storage

The Paper application stores user documents in the papers/ directory:

  • papers/current/: Working documents that are actively being edited. These may be auto-saved drafts or recently accessed files.
  • papers/named/: Documents that have been explicitly saved with a name. Each named document gets its own folder to support attachments and metadata.

Example document structure:

papers/
  current/
    untitled-1.md           # Auto-saved draft
    untitled-2.md           # Another working document
  named/
    meeting-notes-2024/
      document.md           # The main document content
      metadata.json         # Title, created_at, updated_at, etc.
      attachments/          # Embedded images or files
        image-001.png
    research-paper/
      document.md
      metadata.json

Accessing User Storage from BASIC

BASIC scripts can access user storage using the USER DRIVE keyword:

' Read a user's document
content = READ USER DRIVE "papers/current/notes.md"

' Write to user's storage
SAVE USER DRIVE "papers/named/report/document.md", report_content

' List user's papers
papers = LIST USER DRIVE "papers/named/"

' Delete a user document
DELETE USER DRIVE "papers/current/draft.md"

User Storage API

The REST API provides endpoints for user storage operations:

GET  /api/drive/user/list?path=papers/current/
POST /api/drive/user/read
     { "path": "papers/named/report/document.md" }
POST /api/drive/user/write
     { "path": "papers/current/notes.md", "content": "..." }
POST /api/drive/user/delete
     { "path": "papers/current/draft.md" }

All user storage API calls require authentication and automatically scope operations to the authenticated user’s folder.
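
As a sketch, a document write might look like the following shell call; the Bearer token variable and the local port are assumptions based on the platform's JWT authentication and the default address used elsewhere in this book:

# Hypothetical request; assumes a valid JWT in $TOKEN
curl -X POST http://localhost:9000/api/drive/user/write \
     -H "Authorization: Bearer $TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"path": "papers/current/notes.md", "content": "Draft notes..."}'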

Storage Quotas

Each user has configurable storage limits:

| Setting | Default | Description |
|---|---|---|
| user-storage-quota | 100MB | Maximum total storage per user |
| user-file-limit | 5MB | Maximum single file size |
| user-file-count | 500 | Maximum number of files |

Configure in config.csv:

user-storage-quota,104857600
user-file-limit,5242880
user-file-count,500

Working with Files

File operations in General Bots happen through several interfaces depending on your needs. The BASIC scripting language provides keywords for reading file content directly into scripts, enabling bots to process documents, load data, or access configuration dynamically.

Files can also be managed through the administrative API for bulk operations, migrations, or integration with external systems. The web interface provides user-facing upload and download capabilities where appropriate.

When files change in storage, the system detects modifications and triggers appropriate responses. Script changes cause recompilation, document changes trigger knowledge base reindexing, and configuration changes reload bot settings. This hot-reloading capability accelerates development and enables runtime updates without service interruption.

Integration with Bot Components

The storage system integrates deeply with other bot components, serving as the foundation for several capabilities.

Knowledge bases draw their source documents from storage, with the indexing system monitoring for changes and updating embeddings accordingly. When you add a document to a .gbkb folder, it automatically becomes part of the bot’s searchable knowledge.

Theme assets including CSS files and images are served from storage, with appropriate caching to ensure good performance. Changes to theme files take effect quickly without requiring restarts.

Tool scripts in .gbdialog folders are loaded from storage, parsed, and made available for execution. The compilation system tracks dependencies and rebuilds as needed when source files change.

Paper Application Integration

The Paper document editor automatically saves to the user’s .gbusers folder:

  1. Auto-save: Every 30 seconds, working documents are saved to papers/current/
  2. Explicit save: When users click “Save”, documents move to papers/named/{document-name}/
  3. Export: Generated exports (PDF, DOCX) are saved to exports/ and offered for download
  4. AI-generated content: AI responses can be inserted into documents and saved automatically

Access Control

Different files require different access levels, and the storage system enforces appropriate controls:

  • Public files: Accessible without authentication, suitable for shared resources
  • Authenticated access: Requires valid user credentials, protects user-specific content
  • User-scoped access: Users can only access their own .gbusers folder content
  • Bot-internal files: Accessible only to the bot system itself
  • Administrative files: Require elevated privileges to access or modify

User storage in .gbusers is strictly isolated—users cannot access other users’ folders through any API or BASIC keyword.

Storage Backend Options

The storage system supports multiple backends to accommodate different deployment scenarios. The default configuration uses self-hosted S3-compatible object storage, providing full control over where data resides. Any S3-compatible service works as an alternative, including major cloud providers.

For development and testing, local filesystem storage offers simplicity and easy inspection of files. Production deployments might use hybrid configurations with multiple backends providing redundancy or geographic distribution.

Backend selection happens through configuration, and the rest of the system interacts with storage through a consistent interface regardless of which backend is active. This abstraction allows deployments to change storage strategies without modifying bot code.
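
As an illustration only, such a configuration might resemble the sketch below; these parameter names are hypothetical placeholders for this example, not documented settings:

# Hypothetical parameter names, for illustration only
name,value
storage-backend,s3
storage-endpoint,https://minio.example.internal
storage-bucket,mybot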

Directory Structure Reference

Complete .gbdrive structure with all components:

mybot.gbai/
  mybot.gbdrive/
    dialogs/              # Compiled dialog scripts cache
    kb/                   # Knowledge base index data
    cache/                # Temporary cache files
    exports/              # Bot-level exports
    uploads/              # Bot-level uploads
    users/                # Per-user storage (.gbusers)
      user@email.com/
        papers/
          current/        # Working documents
          named/          # Saved documents
        uploads/          # User uploads
        exports/          # User exports
        settings/         # User preferences
      +1234567890/
        papers/
        uploads/
        exports/
        settings/

Summary

The .gbdrive storage system provides the foundation for all file-based operations in General Bots. Through S3-compatible object storage, organized bucket structures, automatic synchronization, and deep integration with other components, it delivers reliable file management that supports both development workflows and production operation.

The .gbusers folder structure enables personalized storage for each user, supporting applications like Paper that require persistent document storage. By organizing user data under their email or phone identifier, the system maintains clear separation while enabling powerful per-user features.

Understanding how storage works helps you organize bot content effectively and leverage the automatic capabilities the system provides.

Bot Templates

botserver includes pre-built bot templates for various use cases. Each template is a complete .gbai package ready to deploy.


Complete Template List (Flat Reference)

| # | Template | Category | Folder | Key Features |
|---|---|---|---|---|
| 1 | Default | Core | default.gbai | Minimal starter bot |
| 2 | Template | Core | template.gbai | Reference implementation |
| 3 | Announcements | Communications | announcements.gbai | Company news, multiple KB |
| 4 | AI Search | Search | ai-search.gbai | QR codes, document search |
| 5 | API Client | Integration | api-client.gbai | REST API patterns |
| 6 | Backup | Administration | backup.gbai | Server backup scripts |
| 7 | BI | Analytics | bi.gbai | Dashboards, role separation |
| 8 | Broadcast | Communications | broadcast.gbai | Mass messaging |
| 9 | Crawler | Search | crawler.gbai | Web indexing |
| 10 | CRM | Sales | crm.gbai | Customer management |
| 11 | Education | Education | edu.gbai | Course management |
| 12 | ERP | Operations | erp.gbai | Process automation |
| 13 | Law | Legal | law.gbai | Document templates |
| 14 | LLM Server | AI | llm-server.gbai | Model hosting |
| 15 | LLM Tools | AI | llm-tools.gbai | Prompt engineering |
| 16 | Marketing | Marketing | marketing.gbai | Campaign tools |
| 17 | Public APIs | Integration | public-apis.gbai | Weather, news APIs |
| 18 | Reminder | Productivity | reminder.gbai | Task reminders |
| 19 | Store | E-commerce | store.gbai | Product catalog |
| 20 | Talk to Data | Analytics | talk-to-data.gbai | Natural language SQL |
| 21 | WhatsApp | Messaging | whatsapp.gbai | WhatsApp Business |
| 22 | Office | Productivity | office.gbai | Document processing |
| 23 | Employee Management | HR | hr/employees.gbai | Employee CRUD |
| 24 | IT Helpdesk | IT | it/helpdesk.gbai | Ticket management |
| 25 | Sales Pipeline | CRM | crm/sales-pipeline.gbai | Deal tracking |
| 26 | Contact Directory | CRM | crm/contacts.gbai | Contact management |

Templates by Category

Core Templates

| Template | Folder | Purpose |
|---|---|---|
| Default | default.gbai | Minimal starter bot for learning |
| Template | template.gbai | Complete example structure |

HR & People

| Template | Folder | Key Files |
|---|---|---|
| Employee Management | hr/employees.gbai | start.bas, add-employee.bas, search-employee.bas |
| Leave Management | hr/leave.gbai | start.bas, request-leave.bas, approve-leave.bas |
| Recruitment | hr/recruitment.gbai | start.bas, post-job.bas, add-applicant.bas |

IT & Support

| Template | Folder | Key Files |
|---|---|---|
| IT Helpdesk | it/helpdesk.gbai | start.bas, create-ticket.bas, update-ticket.bas |
| Asset Tracking | it/assets.gbai | start.bas, add-asset.bas, checkout-asset.bas |

CRM & Sales

| Template | Folder | Key Files |
|---|---|---|
| CRM | crm.gbai | lead-management.bas, opportunity-management.bas |
| Sales Pipeline | crm/sales-pipeline.gbai | start.bas, create-deal.bas, update-stage.bas |
| Contact Directory | crm/contacts.gbai | start.bas, add-contact.bas, search-contact.bas |

Finance & Accounting

| Template | Folder | Key Files |
|---|---|---|
| Invoicing | finance/invoicing.gbai | start.bas, create-invoice.bas, send-reminder.bas |
| Expense Tracker | finance/expenses.gbai | start.bas, submit-expense.bas, approve-expense.bas |

Operations

| Template | Folder | Key Files |
|---|---|---|
| ERP | erp.gbai | Process automation, integrations |
| Warehouse | operations/warehouse.gbai | start.bas, receive-stock.bas, ship-order.bas |

Template Structure

All templates follow this standard directory layout:

template-name.gbai/
  template-name.gbdialog/    # BASIC dialog scripts
    start.bas                # Entry point (required)
    *.bas                    # Tool scripts (auto-discovered)
    *-jobs.bas               # Scheduled jobs
  template-name.gbkb/        # Knowledge base collections
    collection1/             # Documents for USE KB "collection1"
  template-name.gbdrive/     # File storage (not KB)
    uploads/                 # User uploaded files
    exports/                 # Generated files
  template-name.gbot/        # Configuration
    config.csv               # Bot parameters
  template-name.gbtheme/     # UI theme (optional)
    default.css              # Theme CSS

Quick Start Guide

1. Choose a Template

Select based on your needs:

  • Simple chat: Use default.gbai
  • Business app: Choose crm.gbai, bi.gbai, or erp.gbai
  • AI features: Pick ai-search.gbai or llm-tools.gbai
  • Communication: Select broadcast.gbai or whatsapp.gbai

2. Deploy the Template

# Templates are auto-deployed during bootstrap
# Access at: http://localhost:9000/template-name

3. Customize Configuration

Edit template-name.gbot/config.csv:

name,value
bot-name,My Custom Bot
welcome-message,Hello! How can I help?
llm-model,model.gguf
temperature,0.7

4. Add Knowledge Base

Place documents in .gbkb folders:

  • Each folder becomes a collection
  • Use USE KB "folder-name" in scripts
  • Documents are automatically indexed

5. Create Tools

Add .bas files to .gbdialog:

  • Each file becomes a tool
  • Auto-discovered by the system
  • Called automatically by LLM when needed

Required Files for Each Template

start.bas (Required)

' Template Name - Start Script

' Setup Tools
ADD TOOL "tool-name-1"
ADD TOOL "tool-name-2"

' Setup Knowledge Base
USE KB "template-name.gbkb"

' Set Context
SET CONTEXT "context name" AS "You are a [role]. You help with [tasks]."

' Setup Suggestions
CLEAR SUGGESTIONS
ADD SUGGESTION "action1" AS "Display text 1"
ADD SUGGESTION "action2" AS "Display text 2"

' Welcome Message
BEGIN TALK
    **Template Title**
    
    Welcome message here.
    
    **What I can help with:**
    • Feature 1
    • Feature 2
END TALK

BEGIN SYSTEM PROMPT
    Detailed instructions for the AI...
END SYSTEM PROMPT

Tool File Template

PARAM paramname AS STRING LIKE "example" DESCRIPTION "What this parameter is"
PARAM optionalparam AS STRING LIKE "default" DESCRIPTION "Optional parameter"

DESCRIPTION "What this tool does. Called when user wants to [action]."

' Business logic
let result = "processed"

' Save data (field names = variable names)
SAVE "table.csv", paramname, optionalparam, result

' Store in memory
SET BOT MEMORY "last_item", result

' Response
TALK "✅ Action completed successfully!"

config.csv Template

name,value
episodic-memory-history,2
episodic-memory-threshold,4
theme-color1,#1565C0
theme-color2,#E3F2FD
theme-logo,https://pragmatismo.com.br/icons/general-bots.svg
theme-title,Template Name - General Bots

Syntax Rules for Templates

DO ✅

' Variable names (no underscores in names)
let ticketnumber = "TKT001"
let useremail = "user@example.com"

' SAVE with field names = variable names
SAVE "table.csv", ticketnumber, useremail, status

' Keywords with spaces
SET BOT MEMORY "last_ticket", ticketnumber
SET CONTEXT "name" AS "description"
ADD SUGGESTION "key" AS "Display text"
CLEAR SUGGESTIONS
USE KB "myknowledge"
USE TOOL "mytool"

' GET BOT MEMORY as function
let lastticket = GET BOT MEMORY("last_ticket")

DON’T ❌

' NO: Complex object operations
SET object.field = value  ' WRONG
SAVE "table", object.id, object  ' WRONG

Creating Custom Templates

To create your own template:

  1. Copy template.gbai as starting point
  2. Define clear purpose - one template, one job
  3. Structure folders properly:
    • .gbdialog for scripts
    • .gbkb for knowledge collections
    • .gbdrive for general files
    • .gbot for configuration
  4. Include examples - sample data and dialogs
  5. Test thoroughly - verify all features

Best Practices

Template Selection

  1. Start small: Begin with default.gbai
  2. Match use case: Choose aligned templates
  3. Combine features: Mix templates as needed
  4. Keep originals: Copy before modifying

Customization Strategy

Minimal BASIC Approach

Instead of complex dialog flows, use simple LLM calls:

' Let system AI handle conversations naturally
TALK "How can I assist you?"
' System AI understands and responds appropriately

Tool Creation

Only create .bas files for specific actions:

  • API calls
  • Database operations
  • File processing
  • Calculations

Knowledge Base Organization

  • One folder per topic/collection
  • Name folders clearly
  • Keep documents updated
  • Index automatically

Performance Tips

  • Remove unused template files
  • Index only necessary documents
  • Configure appropriate cache settings
  • Monitor resource usage

Support Resources

  • README files in each template folder
  • Example configurations included
  • Sample knowledge bases provided
  • Community forums for discussions

Business Intelligence Template (bi.gbai)

A General Bots template for automated business intelligence reporting and data visualization.


Overview

The BI template provides scheduled analytics reporting with automatic chart generation and delivery. It’s designed for organizations that need automated consumption reports, category analysis, and customer-specific insights.

Features

  • Scheduled Reporting - Automated report generation on configurable schedules
  • Time-Series Charts - Monthly consumption trends visualization
  • Category Analysis - Product category breakdown with donut charts
  • Per-Customer Reports - Individual customer consumption analysis
  • Multi-Channel Delivery - Send reports via chat, email, or messaging platforms

Package Structure

bi.gbai/
├── bi.gbdialog/
│   ├── bi-admin.bas      # Administrative scheduled reports
│   └── bi-user.bas       # Per-customer report generation

Scripts

| File | Description |
|---|---|
| bi-admin.bas | Scheduled job for generating platform-wide analytics reports |
| bi-user.bas | Loops through customers to generate individual consumption reports |

Configuration

Configure the template in your bot’s config.csv:

| Parameter | Description | Example |
|---|---|---|
| Schedule | Cron expression for report timing | 1 * * * * * |
| Data Source | Table/view for billing data | Orders |

Usage

Administrative Reports

The bi-admin.bas script runs on a schedule and generates:

  1. Monthly Consumption Chart - Time-series showing spending trends
  2. Product Category Breakdown - Donut chart of spending by category

SET SCHEDULE "1 * * * * *"

billing = FIND "Orders"

' Monthly consumption
data = SELECT SUM(UnitPrice * Quantity) as Value, 
       MONTH(OrderDate)+'/'+YEAR(OrderDate) 
       FROM billing 
       GROUP BY MONTH(OrderDate), YEAR(OrderDate)

img = CHART "timeseries", data
SEND FILE img, "Monthly Consumption"

Per-Customer Reports

The bi-user.bas script iterates through customers to generate personalized reports:

customers = FIND "Customers"

FOR EACH c IN customers
    data = SELECT SUM(UnitPrice * Quantity) AS Value, 
           MONTH(OrderDate)+'/'+YEAR(OrderDate) 
           FROM billing
           JOIN Customers ON billing.CustomerID = Customers.CustomerID
           WHERE Customers.CustomerID = c.CustomerID
           GROUP BY MONTH(OrderDate), YEAR(OrderDate)

    img = CHART "timeseries", data
    SEND FILE img, "Monthly Consumption"
END FOR

Chart Types

The template supports various chart types:

| Type | Use Case |
|---|---|
| timeseries | Trends over time (monthly, weekly, daily) |
| donut | Category distribution |
| bar | Comparative analysis |
| pie | Percentage breakdowns |

Data Requirements

Orders Table Schema

The template expects a billing/orders data source with:

  • OrderDate - Date of the transaction
  • UnitPrice - Price per unit
  • Quantity - Number of units
  • ProductID - Foreign key to products
  • CustomerID - Foreign key to customers

Products Table Schema

  • ProductID - Primary key
  • CategoryID - Foreign key to categories
  • ProductName - Product name

Categories Table Schema

  • CategoryID - Primary key
  • CategoryName - Category display name
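
Putting these schemas together, a category-breakdown sketch for bi-admin.bas could look like this; the JOIN form mirrors the per-customer example above and is an assumption about the query dialect:

' Product category breakdown rendered as a donut chart
billing = FIND "Orders"

data = SELECT CategoryName, SUM(UnitPrice * Quantity) AS Value
       FROM billing
       JOIN Products ON billing.ProductID = Products.ProductID
       JOIN Categories ON Products.CategoryID = Categories.CategoryID
       GROUP BY CategoryName

img = CHART "donut", data
SEND FILE img, "Product Category Breakdown"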

Example Output

Monthly Consumption Report

📊 Monthly Consumption Report
-----------------------------
Generated: 2024-01-15 08:00

[Time Series Chart Image]

Total Revenue: $125,430
Top Month: December ($18,500)
Growth Rate: +12% MoM

Category Breakdown

📊 Product Category Distribution
--------------------------------

[Donut Chart Image]

Electronics: 35%
Clothing: 28%
Home & Garden: 22%
Other: 15%

Customization

Adding New Reports

Create additional .bas files in the bi.gbdialog folder:

' sales-by-region.bas
SET SCHEDULE "0 9 * * 1"  ' Every Monday at 9 AM

data = SELECT Region, SUM(Amount) as Total 
       FROM Sales 
       GROUP BY Region

img = CHART "bar", data
SEND FILE img, "Weekly Regional Sales"

Customizing Delivery

Send reports to specific users or channels:

' Send to specific user
SEND FILE img TO "manager@company.com", "Weekly Report"

' Send to WhatsApp
SEND FILE img TO "+1234567890", "Your monthly report"

' Send to team channel
TALK TO "sales-team", img

Scheduling Options

| Schedule | Cron Expression | Description |
|---|---|---|
| Every minute | 1 * * * * * | Testing/real-time |
| Hourly | 0 0 * * * * | Frequent updates |
| Daily 8 AM | 0 0 8 * * * | Morning reports |
| Weekly Monday | 0 0 9 * * 1 | Weekly summaries |
| Monthly 1st | 0 0 8 1 * * | Monthly reports |

Integration Examples

With CRM

' Combine with CRM data
opportunities = FIND "opportunities.csv"
revenue = SELECT stage, SUM(amount) FROM opportunities GROUP BY stage

img = CHART "funnel", revenue
SEND FILE img, "Sales Pipeline"

With ERP

' Inventory analysis
inventory = FIND "inventory.csv"
low_stock = SELECT product, quantity FROM inventory WHERE quantity < reorder_level

img = CHART "bar", low_stock
SEND FILE img, "Low Stock Alert"

Best Practices

  1. Schedule appropriately - Don’t run heavy reports too frequently
  2. Filter data - Use date ranges to limit data volume
  3. Cache results - Store computed metrics for faster access
  4. Log activities - Track report generation for auditing
  5. Handle errors - Wrap queries in error handling

Troubleshooting

| Issue | Solution |
|---|---|
| Empty charts | Verify data source has records |
| Schedule not running | Check cron syntax |
| Slow reports | Add date filters, optimize queries |
| Missing data | Verify JOIN conditions |


Web Crawler Template (crawler.gbai)

A General Bots template for automated web crawling and content extraction for knowledge base population.


Overview

The Crawler template enables your bot to automatically fetch, parse, and index web content. It’s designed for building knowledge bases from websites, monitoring web pages for changes, and extracting structured data from online sources.

Features

  • Automated Web Scraping - Fetch and parse web pages automatically
  • Document Mode - Answer questions based on crawled content
  • Configurable Depth - Control how many pages to crawl
  • Content Indexing - Automatically add content to knowledge base
  • LLM Integration - Use AI to understand and summarize crawled content

Package Structure

crawler.gbai/
├── crawler.gbkb/          # Knowledge base for crawled content
│   └── docs/              # Indexed documents
└── crawler.gbot/
    └── config.csv         # Crawler configuration

Configuration

Configure the crawler in crawler.gbot/config.csv:

| Parameter | Description | Example |
|---|---|---|
| Website | Target URL to crawl | https://pragmatismo.com.br/ |
| website Max Documents | Maximum pages to crawl | 2 |
| Answer Mode | How to respond to queries | document |
| Theme Color | UI theme color | purple |
| LLM Provider | AI provider for processing | openai |

Example config.csv

name,value
Website,https://pragmatismo.com.br/
website Max Documents,2
Answer Mode,document
Theme Color,purple
LLM Provider,openai

How It Works

  1. Initialization - Bot reads the target website from configuration
  2. Crawling - Fetches pages starting from the root URL
  3. Extraction - Parses HTML and extracts meaningful text content
  4. Indexing - Stores content in the knowledge base for RAG
  5. Q&A - Users can ask questions about the crawled content

Usage

Basic Setup

  1. Copy the template to your bot’s packages directory:

cp -r templates/crawler.gbai /path/to/your/bot/packages/

  2. Edit crawler.gbot/config.csv with your target website:

name,value
Website,https://your-website.com/
website Max Documents,10
Answer Mode,document

  3. Deploy, and the bot will automatically crawl the configured site.

Querying Crawled Content

Once crawled, users can ask questions naturally:

  • “What services does the company offer?”
  • “Tell me about the pricing”
  • “Summarize the about page”
  • “What are the main features?”

Answer Modes

| Mode | Behavior |
|---|---|
| document | Answers strictly based on crawled content |
| hybrid | Combines crawled content with general knowledge |
| summary | Provides concise summaries of relevant pages |
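
To switch modes, set the Answer Mode parameter in crawler.gbot/config.csv, as in this minimal example:

name,value
Answer Mode,hybrid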

Sample Conversation

User: What does this company do?

Bot:  Based on the website content:
      
      Pragmatismo provides enterprise AI solutions 
      including:
      
      • General Bots platform
      • Custom AI development
      • Conversational AI consulting
      • Integration services
      
      📄 Source: pragmatismo.com.br/about

User: Do they have any case studies?

Bot:  Yes! According to the website, they have worked 
      with several enterprise clients:
      
      • Banking sector automation
      • Healthcare chatbots
      • E-commerce customer service
      
      📄 Source: pragmatismo.com.br/cases

User: How can I contact them?

Bot:  Contact information from the website:
      
      📧 Email: contact@pragmatismo.com.br
      📍 Location: São Paulo, Brazil
      🌐 Website: pragmatismo.com.br
      
      📄 Source: pragmatismo.com.br/contact

Advanced Configuration

Limiting Crawl Scope

Control which pages are crawled:

name,value
Website,https://example.com/docs/
website Max Documents,50
Website Include Pattern,/docs/*
Website Exclude Pattern,/docs/archive/*

Scheduling Recrawls

Set up periodic recrawling to keep content fresh:

name,value
Website Refresh Schedule,0 0 * * 0

This example recrawls every Sunday at midnight.

Authentication

For sites requiring authentication:

name,value
Website Auth Type,basic
Website Username,user
Website Password,secret

Customization

Creating Custom Crawl Logic

Create a BASIC dialog for custom crawling:

' custom-crawl.bas
urls = ["https://site1.com", "https://site2.com", "https://site3.com"]

FOR EACH url IN urls
    content = GET url
    
    IF content THEN
        SAVE "crawled_pages.csv", url, content, NOW()
        SET CONTEXT content
    END IF
NEXT

TALK "Crawled " + UBOUND(urls) + " pages successfully."

Processing Crawled Content

Use LLM to process and structure crawled data:

' process-crawled.bas
pages = FIND "crawled_pages.csv"

FOR EACH page IN pages
    summary = LLM "Summarize this content in 3 bullet points: " + page.content
    
    WITH processed
        url = page.url
        summary = summary
        processed_at = NOW()
    END WITH
    
    SAVE "processed_content.csv", processed
NEXT

Extracting Structured Data

Extract specific information from pages:

' extract-products.bas
SET CONTEXT "You are a data extraction assistant. Extract product information as JSON."

page_content = GET "https://store.example.com/products"

products = LLM "Extract all products with name, price, and description as JSON array: " + page_content

SAVE "products.json", products

Integration Examples

With Knowledge Base

' Add crawled content to KB
content = GET "https://docs.example.com/api"

IF content THEN
    USE KB "api-docs.gbkb"
    ADD TO KB content, "API Documentation"
END IF

With Notifications

' Monitor for changes
previous = GET BOT MEMORY "last_content"
current = GET "https://news.example.com"

IF current <> previous THEN
    SEND MAIL "admin@company.com", "Website Changed", "The monitored page has been updated.", []
    SET BOT MEMORY "last_content", current
END IF

Best Practices

  1. Respect robots.txt - Only crawl pages allowed by the site’s robots.txt
  2. Rate limiting - Don’t overwhelm target servers with requests
  3. Set reasonable limits - Start with low Max Documents values
  4. Monitor content quality - Review crawled content for accuracy
  5. Keep content fresh - Schedule periodic recrawls for dynamic sites
  6. Handle errors gracefully - Implement retry logic for failed requests

Troubleshooting

| Issue | Cause | Solution |
|---|---|---|
| No content indexed | Invalid URL | Verify the Website URL is accessible |
| Partial content | Max Documents too low | Increase the limit in config |
| Stale answers | Content not refreshed | Set up scheduled recrawls |
| Authentication errors | Missing credentials | Add auth settings to config |
| Timeout errors | Slow target site | Increase timeout settings |

Limitations

  • JavaScript-rendered content may not be fully captured
  • Some sites block automated crawlers
  • Large sites may take significant time to fully crawl
  • Dynamic content may require special handling

Use Cases

  • Documentation Bots - Index product docs for support
  • Competitive Intelligence - Monitor competitor websites
  • News Aggregation - Collect news from multiple sources
  • Research Assistants - Build knowledge bases from academic sources
  • FAQ Generators - Extract FAQs from help sites

See Also

  • AI Search - AI-powered document search
  • Talk to Data - Natural language data queries
  • Law - Legal document processing with similar RAG approach

Sales CRM Template (crm.gbai)

A comprehensive General Bots template for sales customer relationship management with lead tracking, opportunity management, and sales pipeline automation.


Overview

The CRM template provides a full-featured sales CRM system with conversational AI capabilities. It enables sales teams to manage leads, track opportunities through the pipeline, generate quotes, send proposals, and forecast revenue—all through natural conversation or automated workflows.

Features

  • Lead Management - Capture, qualify, convert, and nurture leads
  • Opportunity Pipeline - Track deals through customizable stages
  • Account Management - Manage customer accounts and contacts
  • Activity Tracking - Log calls, emails, meetings, and tasks
  • Quote Generation - Create and send professional quotes
  • Proposal Automation - Generate and deliver sales proposals
  • Sales Forecasting - Pipeline analysis and revenue projections
  • Email Integration - Receive and process emails automatically
  • Sentiment Analysis - AI-powered customer sentiment tracking
  • Data Enrichment - Automatic lead data enhancement

Package Structure

crm.gbai/
├── crm.gbdialog/
│   ├── lead-management.bas          # Lead lifecycle management
│   ├── opportunity-management.bas   # Opportunity pipeline
│   ├── account-management.bas       # Account/company management
│   ├── activity-tracking.bas        # Activity logging
│   ├── case-management.bas          # Support case handling
│   ├── analyze-customer-sentiment.bas # AI sentiment analysis
│   ├── data-enrichment.bas          # Lead data enhancement
│   ├── send-proposal.bas            # Proposal generation
│   ├── create-lead-from-draft.bas   # Email to lead conversion
│   ├── crm-jobs.bas                 # Scheduled background jobs
│   └── tables.bas                   # Database schema definitions
└── crm.gbot/
    └── config.csv                   # Bot configuration

Scripts

| File | Description |
|---|---|
| lead-management.bas | Complete lead lifecycle: capture, qualify, convert, follow-up, nurture |
| opportunity-management.bas | Pipeline stages, quotes, products, forecasting |
| account-management.bas | Account and contact management |
| activity-tracking.bas | Log and track all sales activities |
| case-management.bas | Customer support case handling |
| analyze-customer-sentiment.bas | AI-powered sentiment analysis |
| data-enrichment.bas | Enrich leads with external data |
| send-proposal.bas | Generate and send proposals |
| on-receive-email.bas | Process incoming emails |
| crm-jobs.bas | Scheduled automation tasks |
| tables.bas | CRM database schema |

Data Schema

Leads Table

| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| name | String | Lead name |
| email | Email | Email address |
| phone | Phone | Phone number |
| company | String | Company name |
| source | String | Lead source |
| status | String | new, qualified, hot, warm, cold, converted |
| score | Integer | Lead qualification score (0-100) |
| assigned_to | String | Sales rep ID |
| created_at | DateTime | Creation timestamp |

Opportunities Table

| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| name | String | Opportunity name |
| account_id | GUID | Related account |
| contact_id | GUID | Primary contact |
| amount | Decimal | Deal value |
| stage | String | Pipeline stage |
| probability | Integer | Win probability (0-100) |
| close_date | Date | Expected close date |
| owner_id | String | Sales rep ID |
| lead_source | String | Original lead source |

Accounts Table

| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| name | String | Company name |
| type | String | prospect, customer, partner |
| industry | String | Industry vertical |
| owner_id | String | Account owner |
| created_from_lead | GUID | Original lead ID |

Contacts Table

| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| account_id | GUID | Parent account |
| name | String | Full name |
| email | Email | Email address |
| phone | Phone | Phone number |
| title | String | Job title |
| primary_contact | Boolean | Primary contact flag |

Activities Table

| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| type | String | call, email, meeting, task |
| subject | String | Activity subject |
| lead_id | GUID | Related lead |
| opportunity_id | GUID | Related opportunity |
| created_at | DateTime | Activity timestamp |

Pipeline Stages

| Stage | Probability | Description |
|---|---|---|
| qualification | 10% | Initial qualification |
| needs_analysis | 20% | Understanding requirements |
| value_proposition | 50% | Presenting solution |
| decision_makers | 60% | Engaging decision makers |
| proposal | 75% | Proposal sent |
| negotiation | 90% | Terms negotiation |
| closed_won | 100% | Deal closed - won |
| closed_lost | 0% | Deal closed - lost |
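
A minimal stage-update sketch following this table; FIND filters and UPDATE are used this way elsewhere in this template, while the opportunity_id variable is assumed to come from the session:

' Move an opportunity to the proposal stage
opp = FIND "opportunities.csv", "id = '" + opportunity_id + "'"

IF opp THEN
    opp.stage = "proposal"
    opp.probability = 75
    UPDATE "opportunities.csv", opp
    TALK "✅ Stage updated to proposal (75%)"
END IF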

Sample Conversations

Lead Capture

User: I just got a lead from the website

Bot:  Great! I'll help you capture it. What's their name?

User: John Smith from TechCorp

Bot:  What's their email?

User: john@techcorp.com

Bot:  ✅ Lead captured: John Smith (TechCorp)
      ID: LEAD-20240115-4821
      
      Would you like to qualify this lead now?

Lead Qualification (BANT)

User: Yes, qualify the lead

Bot:  I'll ask a few questions to qualify them.
      
      What is TechCorp's annual revenue range?
      1. Under $1M
      2. $1M - $10M
      3. $10M - $50M
      4. Over $50M

User: 3

Bot:  How many employees?

User: About 200

Bot:  What's their timeline for making a decision?

User: This quarter

Bot:  Do they have budget allocated?

User: Yes

Bot:  🔥 Hot Lead! Score: 85/100
      
      This lead is a great fit. I've:
      • Notified sales@company.com
      • Created a follow-up task
      
      Would you like to convert to an opportunity?

Opportunity Management

User: Create a new opportunity for Acme Corp

Bot:  What should we call this opportunity?

User: Enterprise License Deal

Bot:  What is the estimated value of this deal?

User: $50,000

Bot:  When do you expect to close this deal?

User: End of Q2

Bot:  ✅ Opportunity created: Enterprise License Deal
      
      Account: Acme Corp
      Value: $50,000
      Stage: Qualification (10%)
      Close Date: June 30, 2024

Pipeline Update

User: Update the Acme deal to proposal stage

Bot:  Current stage: value_proposition
      
      Select new stage:
      1. Qualification (10%)
      2. Needs Analysis (20%)
      3. Value Proposition (50%)
      4. Decision Makers (60%)
      5. Proposal (75%)
      6. Negotiation (90%)
      7. Closed Won (100%)
      8. Closed Lost (0%)

User: 5

Bot:  ✅ Stage updated to proposal (75%)
      
      Would you like to generate a quote for this opportunity?

Sales Forecast

User: Show me the sales forecast

Bot:  📊 SALES FORECAST
      ════════════════════════════════════════
      
      Total Pipeline: $425,000
      Weighted Pipeline: $187,500
      
      Quarterly Forecast:
      ├── Q1: $45,000
      ├── Q2: $82,500
      ├── Q3: $35,000
      └── Q4: $25,000
      
      Top Opportunities:
      1. Enterprise License - Acme Corp ($50,000) - 75%
      2. Platform Upgrade - TechStart ($35,000) - 60%
      3. Consulting Deal - DataFlow ($28,000) - 50%
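
The weighted figure in this report multiplies each deal's amount by its stage probability. A sketch, assuming each opportunity row carries the amount and probability fields from the schema above:

' Weighted pipeline sketch using the Opportunities schema
opportunities = FIND "opportunities.csv"
total = 0
weighted = 0

FOR EACH opp IN opportunities
    total = total + opp.amount
    weighted = weighted + (opp.amount * opp.probability / 100)
NEXT

TALK "Total Pipeline: " + total
TALK "Weighted Pipeline: " + weighted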

Lead Qualification Scoring

The qualification process scores leads based on BANT criteria:

  • Budget - Revenue range (up to 30 points)
  • Authority - Employee count/company size (up to 25 points)
  • Need - Timeline urgency (up to 30 points)
  • Timeline - Budget allocation (up to 25 points)

Scoring Thresholds

| Score | Status | Action |
|---|---|---|
| ≥70 | Hot Lead | Immediate follow-up |
| 50-69 | Warm Lead | 24-hour response |
| 30-49 | Cold Lead | Nurture campaign |
| <30 | Unqualified | Archive |

Lead Nurturing

Automated nurture campaigns based on lead age:

| Days | Content |
|---|---|
| 3 | 5 Tips to Improve Your Business |
| 7 | Case Study: How We Helped Similar Companies |
| 14 | Free Consultation Offer |
| 30 | Special Limited Time Offer |
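
A nurture-job sketch matching this table; DATEDIFF is an assumed helper for computing lead age in days, and the schedule string follows the cron format used elsewhere in this book:

' Daily nurture run at 8 AM (DATEDIFF is an assumed helper)
SET SCHEDULE "0 0 8 * * *"

leads = FIND "leads.csv", "status = 'cold'"

FOR EACH lead IN leads
    age = DATEDIFF(NOW(), lead.created_at)

    IF age = 3 THEN
        SEND MAIL lead.email, "5 Tips to Improve Your Business", "Here are five tips to grow your business...", []
    END IF
NEXT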

Configuration

Configure in crm.gbot/config.csv:

| Parameter | Description | Example |
|---|---|---|
| Company Name | Your company name | Acme Sales |
| Currency | Default currency | USD |
| Tax Rate | Default tax percentage | 10 |
| Quote Validity Days | Quote expiration | 30 |
| Pipeline Stages | Custom stage definitions | (JSON) |
| Lead Sources | Available lead sources | web,referral,event |
| Admin Email | Notifications email | sales@company.com |

Scheduled Jobs

| Job | Schedule | Description |
|---|---|---|
| Lead nurturing | Daily | Send nurture emails to cold/warm leads |
| Follow-up reminders | Hourly | Alert reps of overdue follow-ups |
| Pipeline cleanup | Weekly | Archive stale opportunities |
| Forecast update | Daily | Recalculate sales forecast |
| Activity sync | Real-time | Sync emails and calendar events |

Email Integration

Receiving Emails

' on-receive-email.bas
email_from = GET "email.from"
email_subject = GET "email.subject"
email_body = GET "email.body"

' Check if from existing contact
contact = FIND "contacts.csv", "email = '" + email_from + "'"

IF contact THEN
    ' Log activity against contact
    WITH activity
        type = "email"
        subject = email_subject
        contact_id = contact.id
    END WITH
    SAVE "activities.csv", activity
ELSE
    ' Create new lead from email
    CALL "create-lead-from-draft.bas"
END IF

Sending Proposals

' send-proposal.bas
proposal = GENERATE FROM TEMPLATE "proposal_template.docx" WITH {
    "company": account.name,
    "contact": contact.name,
    "products": opportunity_products,
    "total": quote.total,
    "valid_until": quote.valid_until
}

SEND MAIL contact.email, "Proposal: " + opportunity.name, 
    "Please find attached our proposal.", [proposal]

AI Features

Customer Sentiment Analysis

' analyze-customer-sentiment.bas
SET CONTEXT "Analyze customer communication for sentiment and buying signals."

communications = FIND "activities.csv", "contact_id = '" + contact_id + "'"

analysis = LLM "Analyze these customer communications and provide:
               1. Overall sentiment (positive, neutral, negative)
               2. Buying signals detected
               3. Concerns or objections
               4. Recommended next action
               
               Communications: " + JSON(communications)

TALK analysis

Data Enrichment

' data-enrichment.bas
' Enrich lead with external data
company_info = GET "https://api.enrichment.com/company/" + lead.company

IF company_info THEN
    lead.industry = company_info.industry
    lead.employee_count = company_info.employees
    lead.revenue_range = company_info.revenue
    lead.linkedin_url = company_info.linkedin
    
    UPDATE "leads.csv", lead
END IF

Best Practices

  1. Qualify early - Use BANT scoring to prioritize leads
  2. Track everything - Log all customer interactions
  3. Follow up promptly - Hot leads within hours, warm within 24h
  4. Use automation - Let nurture campaigns work cold leads
  5. Clean pipeline - Archive stale opportunities regularly
  6. Forecast accurately - Keep close dates and probabilities updated
  7. Segment leads - Use tags and sources for better targeting

Troubleshooting

| Issue | Cause | Solution |
|---|---|---|
| Lead not found | Search criteria too strict | Use partial match |
| Stage not updating | Missing opportunity ID | Set opportunity in session |
| Quote not generating | Missing products | Add products to opportunity first |
| Email not sending | Missing contact email | Verify contact record |
| Forecast incorrect | Stale data | Update opportunity amounts |

Use Cases

  • Inside Sales - Lead qualification and opportunity management
  • Field Sales - Account management and activity tracking
  • Sales Management - Pipeline visibility and forecasting
  • Business Development - Lead generation and nurturing
  • Customer Success - Account health and expansion opportunities

Integration Points

  • Email - Inbound/outbound email tracking
  • Calendar - Meeting scheduling
  • ERP - Order and billing sync
  • Marketing Automation - Lead handoff
  • Support Ticketing - Case management


Marketing Automation Template (marketing.gbai)

A General Bots template for marketing campaign management, content creation, and multi-channel broadcast messaging.


Overview

The Marketing template provides marketing automation capabilities including campaign management, content ideation, image generation, social media posting, and WhatsApp broadcast messaging. It enables marketing teams to create, schedule, and deliver campaigns through conversational AI.

Features

  • Campaign Management - Create and organize marketing campaigns
  • Content Ideation - AI-assisted content idea generation
  • Image Generation - AI-powered marketing visuals
  • Social Media Posting - Direct posting to Instagram and other platforms
  • WhatsApp Broadcasts - Mass messaging with template support
  • Contact Segmentation - Target specific audience segments
  • Template Compliance - META-approved template validation
  • Broadcast Logging - Track delivery and engagement

Package Structure

marketing.gbai/
├── marketing.gbdialog/
│   ├── add-new-idea.bas       # Content ideation tool
│   ├── broadcast.bas          # WhatsApp broadcast messaging
│   ├── get-image.bas          # AI image generation
│   ├── post-to-instagram.bas  # Instagram posting
│   ├── poster.bas             # Marketing poster creation
│   └── campaigns/             # Campaign templates
└── marketing.gbot/
    └── config.csv             # Bot configuration

Scripts

FileDescription
add-new-idea.basGenerate and save marketing content ideas
broadcast.basSend WhatsApp broadcasts to contact lists
get-image.basGenerate marketing images with AI
post-to-instagram.basPost content to Instagram
poster.basCreate marketing posters and visuals

WhatsApp Broadcast

The broadcast.bas script enables mass WhatsApp messaging with template support:

Parameters

| Parameter | Type | Description | Example |
|---|---|---|---|
| message | STRING | Message with variables | "Olá {name}, confira nossas novidades!" |
| template_file | FILE | Header image for template | header.jpg |
| list_file | FILE | Contact list with phone numbers | contacts.xlsx |
| filter | STRING | Optional filter condition | "Perfil=VIP" |

Template Compliance

The system validates messages for META WhatsApp template approval:

report = LLM "Esta mensagem será aprovada pelo WhatsApp META como Template? 
              Responda OK se sim, ou explique o problema: " + message

IF report <> "OK" THEN
    TALK "Atenção: " + report
END IF

Broadcast Execution

IF filter THEN
    list = FIND list_file, filter
ELSE
    list = FIND list_file
END IF

index = 1
sent = 0

DO WHILE index <= UBOUND(list)
    row = list[index]
    
    SEND TEMPLATE TO row.telefone, template_file
    
    WAIT 0.1  ' Rate limiting
    
    ' Log delivery
    WITH logEntry
        timestamp = NOW()
        phone = row.telefone
        name = row.name
        status = "sent"
    END WITH
    
    SAVE "broadcast_log.csv", logEntry
    
    sent = sent + 1
    index = index + 1
LOOP

TALK "Broadcast enviado para " + sent + " contatos."

Contact List Format

The contact file should include:

| Column | Required | Description |
|---|---|---|
| telefone | Yes | Phone number with country code |
| name | No | Contact name for personalization |
| email | No | Email address |
| Perfil | No | Segment tag for filtering |

Example contacts.xlsx

telefone,name,email,Perfil
+5511999999999,João Silva,joao@email.com,VIP
+5511888888888,Maria Santos,maria@email.com,Regular
+5511777777777,Pedro Costa,pedro@email.com,VIP

Sample Conversations

Creating a Broadcast

User: Send a broadcast to VIP customers

Bot:  I'll help you create a broadcast. What's the message?

User: Olá {name}, aproveite 20% de desconto esta semana!

Bot:  Checking template compliance...
      ✅ Message approved for WhatsApp template.
      
      Do you have a header image?

User: Yes, use promo-banner.jpg

Bot:  Which contact list should I use?

User: contacts.xlsx

Bot:  📤 Broadcast Preview:
      
      Message: Olá {name}, aproveite 20% de desconto esta semana!
      Image: promo-banner.jpg
      Recipients: 127 VIP customers
      
      Send now or schedule?

User: Send now

Bot:  📤 Broadcasting...
      ✅ Broadcast complete! Sent to 127 contacts.
      
      View delivery report in Analytics.

Content Ideation

User: I need content ideas for our new product launch

Bot:  What product are you launching?

User: Wireless headphones

Bot:  What platform are you targeting?

User: Instagram

Bot:  Here are 5 content ideas for your wireless headphones launch:

      1. "Experience Sound Like Never Before 🎧"
         Key message: Premium audio quality
         CTA: Shop now, link in bio
         Hashtags: #WirelessFreedom #AudioExperience #NewLaunch
      
      2. "Cut the Cord, Not the Quality"
         Key message: Freedom without compromise
         CTA: Pre-order today
         Hashtags: #WirelessAudio #TechLife #MusicLovers
      
      3. "Your Music. Your Freedom."
         Key message: Lifestyle integration
         CTA: Discover more
         Hashtags: #OnTheGo #Wireless #Premium
      
      4. "30 Hours of Pure Sound"
         Key message: Battery life
         CTA: Learn more
         Hashtags: #AllDayBattery #WirelessHeadphones
      
      5. "Studio Sound, Everywhere"
         Key message: Professional quality
         CTA: Experience now
         Hashtags: #StudioQuality #Audiophile #TechGear
      
      Would you like me to generate images for any of these?

Image Generation

User: Generate an image for idea 1

Bot:  Generating marketing image...
      
      Prompt: "Premium wireless headphones floating in space with 
      sound waves emanating, modern minimalist style, product 
      photography, dark background with purple accent lighting"
      
      [Generated Image]
      
      ✅ Image generated! Would you like to:
      1. Post to Instagram
      2. Use in a broadcast
      3. Generate variations
      4. Download

User: Post to Instagram

Bot:  What caption should I use?

User: Experience Sound Like Never Before 🎧 #WirelessFreedom #NewLaunch

Bot:  ✅ Posted to Instagram!
      
      Post ID: 12345678
      Time: May 15, 2025 at 2:30 PM
      
      Track engagement in Analytics.

Audience Segmentation

Filter Examples

| Filter | Description |
|---|---|
| Perfil=VIP | VIP customers only |
| Perfil=Regular AND compras>5 | Regular customers with 5+ purchases |
| ultima_compra>2024-01-01 | Recent purchasers |
| cidade=São Paulo | Location-based |
| interesse=electronics | Interest-based |

Building Segments

' Create reusable segments
WITH segment
    id = "SEG-" + FORMAT(GUID())
    name = "VIP São Paulo"
    filter = "Perfil=VIP AND cidade=São Paulo"
    count = COUNT(FIND "contacts.xlsx", filter)
    created_at = NOW()
END WITH

SAVE "segments.csv", segment

Configuration

Configure in marketing.gbot/config.csv:

| Parameter | Description | Example |
|---|---|---|
| Brand Color | Primary brand color | #FF5722 |
| Logo File | Company logo path | logo.png |
| Instagram Account | Connected IG account | @mycompany |
| WhatsApp Business ID | WA Business account | 123456789 |
| Default Template | Default broadcast template | marketing_update |
| Rate Limit | Messages per second | 10 |
| Max Broadcast Size | Maximum recipients | 1000 |

Analytics & Reporting

Broadcast Analytics

' Get broadcast statistics
broadcast_id = "BROADCAST-20240115-1234"
logs = FIND "broadcast_log.csv", "broadcast_id = '" + broadcast_id + "'"

total_sent = COUNT(logs)
delivered = COUNT(FIND logs, "status = 'delivered'")
read = COUNT(FIND logs, "status = 'read'")
clicked = COUNT(FIND logs, "status = 'clicked'")

TALK "📊 Broadcast Report"
TALK "Total Sent: " + total_sent
TALK "Delivered: " + delivered + " (" + (delivered/total_sent*100) + "%)"
TALK "Read: " + read + " (" + (read/total_sent*100) + "%)"
TALK "Clicked: " + clicked + " (" + (clicked/total_sent*100) + "%)"

Customization

Adding Campaign Types

' campaign-email.bas
PARAM subject AS STRING DESCRIPTION "Email subject line"
PARAM body AS STRING DESCRIPTION "Email body content"
PARAM list_file AS FILE DESCRIPTION "Contact list"
PARAM filter AS STRING DESCRIPTION "Segment filter" OPTIONAL

DESCRIPTION "Send email marketing campaign"

IF filter THEN
    contacts = FIND list_file, filter
ELSE
    contacts = FIND list_file
END IF

FOR EACH contact IN contacts
    personalized_body = REPLACE(body, "{name}", contact.name)
    SEND MAIL contact.email, subject, personalized_body, []
    
    WITH log
        campaign_id = campaign_id
        contact_email = contact.email
        sent_at = NOW()
        status = "sent"
    END WITH
    
    SAVE "email_campaign_log.csv", log
NEXT

TALK "Email campaign sent to " + UBOUND(contacts) + " recipients."

Social Media Scheduling

' schedule-post.bas
PARAM platform AS STRING LIKE "instagram" DESCRIPTION "Social platform"
PARAM content AS STRING DESCRIPTION "Post content"
PARAM image AS FILE DESCRIPTION "Post image" OPTIONAL
PARAM schedule_time AS STRING DESCRIPTION "When to post"

DESCRIPTION "Schedule social media post"

WITH scheduled_post
    id = "POST-" + FORMAT(GUID())
    platform = platform
    content = content
    image = image
    scheduled_for = schedule_time
    status = "scheduled"
    created_at = NOW()
END WITH

SAVE "scheduled_posts.csv", scheduled_post

SET SCHEDULE schedule_time, "execute-scheduled-post.bas"

TALK "Post scheduled for " + schedule_time + " on " + platform

Best Practices

  1. Template compliance - Always validate templates before broadcast
  2. Segment wisely - Target relevant audiences to improve engagement
  3. Rate limiting - Respect platform rate limits to avoid blocks
  4. Personalization - Use variables for personalized messages
  5. A/B testing - Test different messages with small segments first
  6. Timing - Schedule broadcasts for optimal engagement times
  7. Tracking - Monitor delivery and engagement metrics
  8. Opt-out handling - Honor unsubscribe requests immediately

Troubleshooting

| Issue | Cause | Solution |
|---|---|---|
| Template rejected | Non-compliant content | Review META guidelines |
| Low delivery rate | Invalid phone numbers | Clean contact list |
| Broadcast blocked | Rate limit exceeded | Reduce sending speed |
| Images not generating | Invalid prompt | Simplify prompt text |
| Instagram post failed | Auth expired | Reconnect account |

Compliance Notes

  • Ensure recipients have opted in to receive marketing messages
  • Honor unsubscribe requests within 24 hours
  • Follow META WhatsApp Business policies
  • Comply with GDPR/LGPD data protection requirements
  • Keep records of consent for audit purposes

Use Cases

  • Product Launches - Announce new products to customers
  • Promotions - Send special offers and discounts
  • Events - Promote webinars, sales, and events
  • Newsletters - Regular customer communications
  • Re-engagement - Win back inactive customers
  • Social Media - Automated content posting

See Also

  • CRM - Customer relationship management
  • Contacts - Contact list management
  • Broadcast - General message broadcasting
  • Analytics - Marketing analytics

Chapter 3: Knowledge & AI

Quick reference for all General Bots knowledge and AI capabilities.

Feature Categories

| Category | Key Features |
|---|---|
| Core | Chat, dialogs, sessions, automation |
| AI/LLM | Model integration, knowledge base, context |
| Channels | Web, WhatsApp, Teams, Email, SMS |
| Productivity | Calendar, tasks, drive, mail, meet |
| Data | CRUD, aggregations, file operations |
| HTTP | REST, GraphQL, SOAP, webhooks |

Editions

| Edition | Use Case |
|---|---|
| Minimal | Embedded, IoT |
| Core | General business |
| Standard | Professional teams |
| Enterprise | Large organizations |

Chapter Contents

See Also

KB and TOOL System Documentation

Overview

The General Bots system provides four essential keywords for managing Knowledge Bases and Tools dynamically during conversation sessions. The USE KB keyword loads and embeds files from .gbkb folders into the vector database. The CLEAR KB keyword removes a knowledge base from the current session. The USE TOOL keyword makes a tool available for the LLM to call. The CLEAR TOOLS keyword removes all tools from the current session. Together, these keywords give you complete control over what information and capabilities your bot has access to at any moment.

Knowledge Base System

What is a KB?

A Knowledge Base is a folder containing documents (using the .gbkb folder structure) that are vectorized, embedded, and stored in a vector database. When users ask questions, the vector database retrieves relevant chunks and excerpts to inject into prompts, giving the LLM context-aware responses based on your specific documentation and data.

Folder Structure

Knowledge bases are organized within your bot’s work directory. The structure places all knowledge base folders inside a .gbkb container that shares your bot’s name. Within this container, you create separate folders for different topics or document collections. Each folder can contain PDF files, markdown documents, plain text files, Word documents, CSV files, and other supported formats.

work/
  {bot_name}/
    {bot_name}.gbkb/
      circular/
        document1.pdf
        document2.md
        document3.txt
      comunicado/
        info.docx
        data.csv
      docs/
        README.md
        guide.pdf

KB Loading Process

When you load a knowledge base, the system goes through several stages to make your documents searchable. First, the system scans the specified .gbkb folder to identify all documents. Then it processes each file by extracting text from PDFs, Word documents, text files, markdown, CSV files, and other supported formats. The extracted text is split into chunks of approximately 1000 characters with overlap between chunks to preserve context at boundaries. Each chunk is then converted into a vector representation using an embedding model. These vectors are stored in the vector database with metadata about their source, enabling fast similarity search. Once this process completes, the knowledge base is ready to answer semantic queries.

Supported File Types

The system supports a variety of document formats. PDF files receive full text extraction using the pdf-extract library. Microsoft Word documents in both DOCX and DOC formats are supported. Plain text files and markdown documents are processed directly. CSV files treat each row as a separate searchable entry. HTML files have their text content extracted while ignoring markup. JSON files are parsed and their structured data becomes searchable.

USE KB Keyword

The USE KB keyword loads a knowledge base folder into your current session. You can load multiple knowledge bases, and all of them become active simultaneously. This allows you to combine different document collections for comprehensive responses.

USE KB "circular"
' The circular KB folder is now loaded and searchable
' All documents in that folder are available for semantic queries

USE KB "comunicado"
' Now both circular and comunicado are active
' The LLM can draw from both collections when responding

CLEAR KB Keyword

The CLEAR KB keyword removes all loaded knowledge bases from the current session. This frees up memory and context space, which is particularly useful when switching between different topics or when you need to ensure the LLM only uses specific information.

CLEAR KB
' All loaded knowledge bases are removed
' Memory is freed and context space is reclaimed

Tool System

What are Tools?

Tools are callable functions that the LLM can invoke to perform specific actions beyond its training data. Tools enable your bot to query databases, call external APIs, process data, execute workflows, and integrate with external systems. When the LLM determines that a tool would help answer a user’s question, it generates a tool call with the appropriate parameters.

Tool Definition

Tools are defined in .bas files that automatically generate MCP and OpenAI-compatible tool definitions. When you create a BASIC file with PARAM declarations and a DESCRIPTION, the system compiles it into a tool specification that the LLM understands.

' weather.bas - becomes a tool automatically
PARAM location AS string
PARAM units AS string DEFAULT "celsius"
DESCRIPTION "Get current weather for a location"

' Tool implementation
weather_data = GET "https://api.weather.com/v1/current?location=" + location
SET CONTEXT "weather_data", weather_data
TALK "Here's the current weather for " + location

Tool Registration

Tools become available through two mechanisms. Auto-discovery scans all .bas files in your .gbdialog folder (except start.bas) and registers them as tools automatically. Dynamic loading uses the USE TOOL keyword to make external tools available during a session.

USE TOOL Keyword

The USE TOOL keyword makes a specific tool available for the LLM to call. You can enable multiple tools, giving your bot access to various capabilities during a conversation.

USE TOOL "weather"
' The weather tool is now available

USE TOOL "database_query"
' Database querying capability is added

USE TOOL "email_sender"
' The bot can now send emails when appropriate

CLEAR TOOLS Keyword

The CLEAR TOOLS keyword removes all tools from the current session. After clearing, the LLM can no longer call external functions and must rely solely on its training and any loaded knowledge bases.

CLEAR TOOLS
' All tools are disabled
' LLM cannot call external functions

Session Management

Context Lifecycle

Each conversation session follows a predictable lifecycle. When a session starts, the bot has a clean slate with no knowledge bases or tools loaded. During the conversation, you load resources as needed using USE KB and USE TOOL commands. The LLM actively uses these loaded resources to provide informed, capable responses. When the topic changes or resources are no longer needed, you clear them with CLEAR KB and CLEAR TOOLS. When the session ends, automatic cleanup releases all remaining resources.
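
A minimal sketch of this lifecycle in a single dialog, using only keywords introduced above (the KB and tool names are illustrative):

' Session start: clean slate, then load what this topic needs
USE KB "product_docs"
USE TOOL "ticket_system"

TALK "What do you need help with?"
question = HEAR
' ... the LLM answers using the loaded KB and tool ...

' Topic change: release resources before loading new ones
CLEAR KB
CLEAR TOOLS
USE KB "billing_docs"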

Best Practices for KB Management

Load only the knowledge bases relevant to the current conversation. Overloading context with unnecessary KBs reduces response quality and increases costs. Clear knowledge bases when switching topics to keep the context focused on what matters. Update your KB files regularly to keep the information current. Monitor token usage because vector search results add tokens to each query.

Best Practices for Tool Management

Enable only the minimum set of tools needed for the current task. Having too many tools available can confuse the LLM about which one to use. Always validate tool responses and check for errors before presenting results to users. Log tool usage for audit purposes and debugging. Consider implementing rate limits to prevent abuse in production environments.

Performance Considerations

Memory usage varies based on your configuration. Each loaded knowledge base typically uses 100-500MB of RAM depending on document count and size. Tools use minimal memory, usually less than 1MB each. Vector search operations add 10-50ms latency to responses. Clear unused resources promptly to free memory for other operations.

Token optimization is important for controlling costs. KB chunks add 500-2000 tokens per query depending on the number of relevant chunks retrieved. Each tool description uses 50-200 tokens. Clearing resources when they are no longer needed reduces token usage. Using specific KB folders rather than loading entire databases improves both performance and relevance.

Implementation Details

Vector Database

The vector database configuration uses one collection per bot instance to maintain isolation. Embeddings come from the configured embedding model: a locally hosted BGE model by default (see the Vector Collections chapter), or OpenAI’s text-embedding-ada-002, which produces 1536-dimensional vectors. Distance calculations use cosine similarity for semantic matching. The index uses HNSW (Hierarchical Navigable Small World) with M=16 and ef=100 for fast approximate nearest neighbor search.

File Processing Pipeline

When USE KB processes files, it follows a systematic pipeline. The system scans the specified directory to identify all files. Text is extracted based on each file’s type using appropriate parsers. The extracted text is cleaned and normalized to remove artifacts. Content is split into chunks of approximately 1000 characters with 200 character overlap to preserve context across boundaries. Embeddings are generated for each chunk by the configured embedding service. The vectors are stored in the vector database along with metadata about their source. Finally, the session context is updated to reflect the newly available knowledge base.

Tool Execution Engine

When USE TOOL prepares a tool for use, it parses the tool definition into a JSON schema that describes parameters and expected behavior. This schema is registered with the LLM context so the model knows the tool is available. The system listens for tool invocations in the LLM’s responses. When a tool call is detected, parameters are validated against the schema. The tool executes its logic, which might involve HTTP requests or function calls. Results return to the LLM for incorporation into the response. All executions are logged for audit purposes.

Error Handling

Common Issues

Several error conditions can occur when working with knowledge bases and tools. The KB_NOT_FOUND error indicates that the specified KB folder does not exist, so you should verify the folder name and path. A VECTORDB_ERROR suggests a connection issue with the vector database service that needs investigation. EMBEDDING_FAILED errors typically indicate problems with the embedding API, often related to API keys or rate limits. TOOL_NOT_FOUND means the specified tool is not registered, so verify the tool name matches exactly. TOOL_EXECUTION_ERROR indicates the tool failed during execution, requiring investigation of the tool endpoint or logic. MEMORY_LIMIT errors occur when too many knowledge bases are loaded simultaneously, requiring you to clear unused KBs.

Debugging Approach

Check logs for detailed information about issues. KB loading progress shows which documents are being processed. Embedding generation logs reveal any failures during vectorization. Vector search query logs help diagnose relevance problems. Tool invocation logs show parameter values and execution results. Error details provide stack traces and specific failure reasons.

Examples

Customer Support Bot

This example shows a customer support bot that loads product documentation and FAQs, enables ticket management tools, and provides informed assistance.

' Load product documentation
USE KB "product_docs"
USE KB "faqs"

' Enable support tools
USE TOOL "ticket_system"
USE TOOL "knowledge_search"

' The bot now has access to documentation and can work with tickets
TALK "How can I help you with your support needs today?"

' When the session ends, clean up
CLEAR KB
CLEAR TOOLS

Research Assistant

This example demonstrates a research assistant that can switch between different knowledge base collections depending on the research topic.

' Load research papers for current topic
USE KB "papers_2024"
USE KB "citations"

' Enable research tools
USE TOOL "arxiv_search"
USE TOOL "citation_formatter"

TALK "What research topic would you like to explore?"

' When switching to a different research area
CLEAR KB
USE KB "papers_biology"

Enterprise Integration

This example shows an enterprise bot with access to company policies and integration with internal systems like Active Directory, Jira, and Slack.

' Load company policies
USE KB "hr_policies"
USE KB "it_procedures"

' Enable enterprise integration tools
USE TOOL "active_directory"
USE TOOL "jira_integration"
USE TOOL "slack_notifier"

' The bot can now query AD, work with Jira tickets, and send Slack notifications
' Handle employee requests throughout the conversation

' Clean up at end of shift
CLEAR KB
CLEAR TOOLS

Security Considerations

KB Security

Knowledge base security involves multiple layers of protection. Access control ensures that knowledge bases require proper authorization before loading. Files are encrypted at rest to protect sensitive information. All KB access is logged for audit purposes. Per-session KB separation ensures that one user’s loaded knowledge bases cannot leak to another session.

Tool Security

Tool security protects against misuse and unauthorized access. Authentication requirements ensure tools only execute within valid sessions. Rate limiting prevents tool abuse through excessive calls. Parameter validation sanitizes all inputs before execution. Execution sandboxing isolates tool operations from the core system.

Best Practices

Follow the principle of least privilege by loading only the resources needed for the current task. Conduct regular audits to review KB and tool usage patterns. Ensure sensitive knowledge bases use encrypted storage. Rotate API keys used by tools on a regular schedule. Maintain session isolation by clearing resources between different users.

Configuration

Configuration options for knowledge bases and tools are set in your bot’s config.csv file. The vector database connection settings specify where embeddings are stored. Chunk size and overlap parameters control how documents are split. Embedding model selection determines vector quality and dimension. Tool timeout settings prevent long-running operations from blocking conversations.
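
A minimal config.csv sketch showing the embedding-related settings (these two keys appear elsewhere in this book; chunking and tool timeout key names vary by deployment and are omitted here):

name,value
embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf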

Troubleshooting

KB Issues

If a knowledge base is not loading, first verify that the folder exists at the expected path within work/{bot_name}/{bot_name}.gbkb/. Check file permissions to ensure the system can read the documents. Verify the vector database connection is healthy. Review logs for any embedding errors during processing.

If search results are poor quality, consider adjusting the chunk overlap to provide more context at boundaries. Experiment with different chunk sizes for your content type. Ensure your embedding model is appropriate for the content language. Pre-process documents to remove noise and improve text quality before indexing.

Tool Issues

If a tool is not executing, first verify that the tool registration completed successfully by checking logs. Confirm parameter validation rules match the values being passed. Test the tool endpoint directly outside of the bot to isolate the issue. Review execution logs for specific error messages.

If tools are timing out, increase the timeout setting in configuration. Check network connectivity between the bot and tool endpoints. Optimize the tool endpoint to respond faster. Consider adding retry logic for transient failures.

Migration Guide

If you are migrating from a file-based knowledge system to vector search, start by exporting your existing files into a clean directory structure. Organize the files into logical .gbkb folders based on topic or department. Run the embedding pipeline by loading each KB with USE KB. Test vector search queries to verify results match expectations. Update your bot logic to use the new KB keywords instead of file operations.

From Static to Dynamic Tools

If you have static function calls that should become dynamic tools, convert each function into a tool definition with PARAM declarations. Create a .bas file with the DESCRIPTION and parameter specifications. Implement the endpoint or handler that the tool will call. Test the tool using USE TOOL and verify it executes correctly. Remove the static function registration from your startup logic.
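
A minimal sketch of such a conversion, following the weather.bas pattern shown earlier; the file name, endpoint URL, and parameter are illustrative assumptions:

' rates.bas - a former static function, now a discoverable tool
PARAM currency AS string
DESCRIPTION "Get the latest exchange rates for a base currency"

' Tool implementation (illustrative endpoint)
rates = GET "https://api.exchangerate.host/latest?base=" + currency
SET CONTEXT "rates", rates
TALK "Here are the latest rates for " + currency

After verifying it executes with USE TOOL "rates", remove the old static registration from your startup logic.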

See Also

Documentation

The Vector Collections page explains how vector search works under the hood. The Document Indexing page covers automatic document processing in detail. The Semantic Search page describes meaning-based retrieval algorithms. The Context Compaction page explains how conversation context is managed. The Caching page covers performance optimization through semantic caching. The Chapter 6 BASIC Reference provides complete dialog scripting documentation. The Chapter 9 API and Tools reference covers tool integration in depth.

Further Reading

The Pragmatismo blog post on BASIC LLM Tools explains how to extend LLMs with custom tools. The MCP is the new API article covers modern tool integration patterns. The Beyond Chatbots post discusses using knowledge bases effectively for sophisticated applications.

Next Chapter

Continue to Chapter 4 on User Interface to learn about creating bot interfaces that present your knowledge base and tool capabilities to users effectively.

Vector Collections

This chapter explains how botserver organizes knowledge into vector collections, the searchable units that power semantic retrieval. Understanding how collections work helps you structure documents effectively and optimize the knowledge your bots can access.

Storage Breakdown

From Folders to Collections

Vector collections emerge automatically from the folder structure within your .gbkb directory. Each folder you create becomes a distinct collection, indexed separately and activated independently. This direct mapping between physical organization and logical collections makes knowledge management intuitive—organize files into folders by topic, and those folders become the collections you reference in your scripts.

When botserver encounters a .gbkb folder, it scans for documents in supported formats including PDF, DOCX, TXT, HTML, and Markdown. Each file’s content is extracted, split into manageable chunks, converted to vector embeddings, and stored in the vector database. The folder name becomes the collection identifier you use with the USE KB keyword.

This automatic process means no manual indexing configuration is required. Add files to a folder, and they become searchable. Remove files, and they disappear from search results. The system tracks file changes through hash comparisons, triggering reindexing only when content actually changes.

The Indexing Pipeline

Understanding the indexing pipeline helps diagnose issues and optimize performance. When a folder is processed, the system first detects which files are new or modified since the last indexing run. This incremental approach avoids reprocessing unchanged content.

For each file requiring processing, text extraction pulls readable content from the document regardless of its format. PDF extraction handles complex layouts, DOCX processing unwraps the underlying XML, and plain text formats are read directly. The extracted text preserves paragraph structure and meaningful breaks.

The chunking phase splits long documents into smaller pieces suitable for embedding and retrieval. Each chunk contains approximately 500 tokens, with overlap between adjacent chunks to preserve context across boundaries. This sizing balances granularity (enabling precise matches) against coherence (keeping related information together).

Embedding generation converts each text chunk into a numerical vector representation. botserver uses the BGE embedding model by default, producing 384-dimensional vectors that capture semantic meaning. These embeddings enable the similarity comparisons that power semantic search.

Finally, the vectors and their associated metadata are stored in the vector database, organized by collection. Each entry includes the embedding vector, the original text chunk, the source file path, and position information enabling reconstruction of context.

Working with Collections

Activating a collection for use in conversations requires only the USE KB statement with the collection name matching the folder. Once activated, the collection becomes part of the knowledge available when answering questions.

Multiple collections can be active simultaneously, and the system searches across all of them when looking for relevant content. This capability allows bots to draw on diverse knowledge sources. A comprehensive assistant might activate employee policies, product documentation, and procedural guides, answering questions that span any combination of these areas.

The CLEAR KB keyword deactivates collections, either removing all active collections at once or targeting specific ones by name. Clearing collections frees memory and focuses search results on remaining active knowledge. Scripts that handle diverse topics might activate and clear collections as the conversation shifts between subject areas.

Collections operate at the session level, meaning activation persists until the session ends or the collection is explicitly cleared. Users can ask follow-up questions that build on retrieved knowledge without requiring reactivation between each query.
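
A brief sketch of this session-level flow, reusing the collection names mentioned above (illustrative):

USE KB "employee_policies"
USE KB "product_documentation"
' Every question now searches both collections

CLEAR KB
USE KB "procedural_guides"
' Only the procedural guides collection is active for the new topic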

Website Indexing

Beyond static documents, collections can include content crawled from websites. The USE WEBSITE keyword registers a URL for crawling, with the retrieved content becoming searchable alongside document-based collections.

For content that changes over time, scheduled crawling keeps the collection current. A script with SET SCHEDULE can periodically re-crawl websites, ensuring that the bot’s knowledge reflects recent updates. This approach works well for documentation sites, knowledge bases, or any web content relevant to your bot’s domain.
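
For example (mirroring the snippet in the Document Indexing chapter; the URL and cron pattern are illustrative):

SET SCHEDULE "0 2 * * *"  ' Re-crawl daily at 2 AM
USE WEBSITE "https://docs.example.com"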

Website content goes through the same indexing pipeline as documents—text extraction, chunking, embedding, and storage. The resulting collection is indistinguishable in use from document-based collections.

How Search Utilizes Collections

When a user asks a question and collections are active, the search process finds relevant content automatically. The system embeds the query using the same model that indexed the documents, ensuring that queries and content exist in the same semantic space.

Vector similarity search identifies chunks whose embeddings are closest to the query embedding. The system retrieves the top matches from each active collection, then combines and ranks them by relevance. This process typically completes in milliseconds even for large collections.

The most relevant chunks become part of the context provided to the language model when generating a response. The model sees both the user’s question and the retrieved information, enabling it to produce answers grounded in your organization’s actual documentation.

This entire process happens transparently. Developers don’t write search queries or handle result sets. Users don’t know that retrieval is occurring. The system simply provides knowledgeable responses informed by the activated collections.

Embedding Configuration

The embedding model determines how meaning is captured in vectors and significantly influences search quality. botserver uses a locally-running BGE model by default, configured through the embedding URL and model path settings in config.csv.

The default model provides good general-purpose performance for English content. Organizations with specialized vocabulary or multilingual requirements might benefit from alternative models. The embedding infrastructure supports any compatible model, allowing customization for specific domains.

Changing embedding models requires reindexing existing collections since embeddings from different models aren’t comparable. Plan model changes carefully, accounting for the reprocessing time required for large document collections.

Collection Management Practices

Effective collection organization follows the principle of coherent groupings. Each folder should contain documents about a related topic area, enabling targeted activation. Overly broad collections that mix unrelated content produce noisier search results than focused collections containing cohesive material.

Clear naming conventions help scripts remain readable and maintainable. Collection names should indicate their content clearly enough that someone reading a script understands what knowledge is being activated without examining the folder contents.

Regular content maintenance keeps collections valuable. Remove outdated documents that might produce incorrect answers. Update files when information changes. Schedule website re-crawls frequently enough that cached content doesn’t become stale.

Monitoring collection usage helps identify optimization opportunities. If certain collections are rarely activated, consider whether they should exist separately or merge into related collections. If search results frequently miss relevant content, examine whether documents are organized in ways that match how users think about topics.

Performance Considerations

Collection size affects both memory usage and search performance. Larger collections require more storage for their embeddings and take longer to search, though the impact is usually modest given vector database optimizations. Very large collections might benefit from subdivision into more focused subcollections.

Active collection count influences context-building overhead. Each active collection contributes potential results that must be ranked and filtered. Activating only relevant collections for each conversation keeps search focused and efficient.

Embedding generation represents the primary indexing cost. Initial indexing of large document sets takes time proportional to total content size. Incremental updates process only changed files, making ongoing maintenance much faster than initial setup.

Caching at multiple levels improves performance for common patterns. Frequently accessed chunks remain in memory. Repeated queries benefit from result caching. The system automatically manages these caches without requiring configuration.

Summary

Vector collections bridge the gap between static documents and dynamic conversation knowledge. The automatic indexing pipeline transforms folder contents into searchable collections without requiring manual configuration. Simple activation through USE KB makes knowledge available, while the underlying vector search finds relevant content based on meaning rather than keywords. Thoughtful organization of documents into focused collections maximizes the value of this powerful capability.

Document Indexing

Documents in .gbkb folders are indexed automatically. No manual configuration required.

Automatic Triggers

Indexing occurs when:

  • Files added to .gbkb folders
  • Files modified or updated
  • USE KB called for a collection
  • USE WEBSITE registers URLs for crawling

Processing Pipeline

Document → Extract Text → Chunk → Embed → Store in Qdrant

| Stage | Description |
|---|---|
| Extract | Pull text from PDF, DOCX, HTML, MD, TXT, CSV |
| Chunk | Split into ~500 token segments with 50 token overlap |
| Embed | Generate vectors using BGE model |
| Store | Save to Qdrant with metadata |

Supported File Types

| Format | Notes |
|---|---|
| PDF | Full text extraction, OCR for scanned docs |
| DOCX | Microsoft Word documents |
| TXT/MD | Plain text and Markdown |
| HTML | Web pages (text only) |
| CSV/JSON | Structured data |

Website Indexing

Schedule regular crawls for web content:

SET SCHEDULE "0 2 * * *"  ' Daily at 2 AM
USE WEBSITE "https://docs.example.com"

Schedule Examples

| Pattern | Frequency |
|---|---|
| "0 * * * *" | Hourly |
| "*/30 * * * *" | Every 30 minutes |
| "0 0 * * 0" | Weekly (Sunday) |
| "0 0 1 * *" | Monthly (1st) |

Configuration

In config.csv:

name,value
embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf

Using Indexed Content

USE KB "documentation"
' All documents now searchable
' LLM uses this knowledge automatically

Troubleshooting

| Issue | Solution |
|---|---|
| Documents not found | Check file is in .gbkb folder, verify USE KB called |
| Slow indexing | Large PDFs take time; consider splitting documents |
| Outdated content | Set up scheduled crawls for web content |

See Also

Semantic Search

Semantic search finds relevant content by meaning, not just keywords. When a user asks “How many days off do I get?”, the system matches documents about “vacation policy” or “PTO allowance” even though the words differ.

Search Pipeline

How It Works

  1. Query embedding - Convert question to vector using same model as documents
  2. Similarity search - Find document chunks with closest embeddings (cosine distance)
  3. Result selection - Take top-k results above relevance threshold
  4. Context injection - Add retrieved text to LLM prompt

Automatic Integration

Semantic search requires no explicit coding. Just activate knowledge bases:

USE KB "policies"
USE KB "products"

' Now all user questions automatically search both collections
TALK "How can I help you?"

The system handles query embedding, vector search, ranking, and context assembly transparently.

Search Pipeline Details

| Stage | Operation | Default |
|---|---|---|
| Embedding | Convert query to vector | BGE model |
| Search | Vector similarity lookup | Qdrant |
| Distance | Cosine similarity | 0.0-1.0 |
| Top-k | Results returned | 5 |
| Threshold | Minimum relevance | 0.7 |

Multiple Collections

When multiple KBs are active, the system searches all and combines best results:

USE KB "hr-docs"      ' Active
USE KB "it-docs"      ' Active
USE KB "finance"      ' Active

' Query searches all three, returns best matches regardless of source

Use CLEAR KB to deactivate collections when switching topics.

Performance

  • Cold search: 100-200ms (first query)
  • Warm search: 20-50ms (cached embeddings)
  • Indexing: One-time cost per document

Optimizations:

  • Embedding cache for repeated queries
  • HNSW index for fast vector search
  • Only active collections consume resources

Optimizing Quality

Document factors:

  • Clear, descriptive text produces better matches
  • Use vocabulary similar to how users ask questions
  • Avoid jargon-heavy content when possible

Collection factors:

  • Focused collections (one topic) beat catch-all collections
  • Fewer active collections = less noise in results
  • Split large document sets by domain area

Troubleshooting

| Issue | Cause | Solution |
|---|---|---|
| No results | Collection not active | Call USE KB "name" |
| Wrong results | Too many collections | Clear irrelevant KBs |
| Missing matches | Document not indexed | Check file is in .gbkb folder |
| Poor relevance | Content mismatch | Review document quality |

Configuration

Semantic search uses sensible defaults. Two settings affect context:

name,value
episodic-memory-history,2      # Previous exchanges to include
episodic-memory-threshold,4      # When to compress older context

See Also

Episodic Memory

Episodic memory automatically manages conversation history to stay within LLM token limits while preserving important information through intelligent summarization. This system handles context compaction transparently, ensuring conversations remain coherent without manual intervention.

Overview

Large Language Models have fixed context windows (e.g., 8K, 32K, 128K tokens). Long conversations can exceed these limits, causing truncation or errors. Episodic memory solves this by:

  1. Monitoring conversation length
  2. Summarizing older exchanges when thresholds are reached
  3. Keeping recent messages in full detail
  4. Storing summaries as “episodic memory” for continuity

Configuration

Episodic memory is controlled by parameters in config.csv:

name,value
episodic-memory-enabled,true
episodic-memory-threshold,4
episodic-memory-history,2
episodic-memory-model,fast
episodic-memory-max-episodes,100
episodic-memory-retention-days,365
episodic-memory-auto-summarize,true

Parameter Reference

| Parameter | Default | Type | Description |
|---|---|---|---|
| episodic-memory-enabled | true | Boolean | Enable/disable episodic memory system |
| episodic-memory-threshold | 4 | Integer | Number of exchanges before compaction triggers |
| episodic-memory-history | 2 | Integer | Recent exchanges to keep in full detail |
| episodic-memory-model | fast | String | Model for generating summaries (fast, quality, or model name) |
| episodic-memory-max-episodes | 100 | Integer | Maximum episode summaries per user |
| episodic-memory-retention-days | 365 | Integer | Days to retain episode summaries |
| episodic-memory-auto-summarize | true | Boolean | Automatically summarize when threshold reached |

How It Works

Context Compaction Process

  1. Monitor: System tracks message count since last summary
  2. Trigger: When count reaches episodic-memory-threshold, compaction starts
  3. Summarize: Older messages are summarized using the configured LLM
  4. Preserve: Last episodic-memory-history exchanges remain in full
  5. Store: Summary saved with role “episodic” for future context

Example Timeline

With defaults (episodic-memory-threshold=4, episodic-memory-history=2):

| Exchange | Action | Context State |
|---|---|---|
| 1-2 | Normal | Messages 1-2 in full |
| 3-4 | Normal | Messages 1-4 in full |
| 5 | Compaction | Summary of 1-2 + Messages 3-5 in full |
| 6-7 | Normal | Summary + Messages 3-7 in full |
| 8 | Compaction | Summary of 1-5 + Messages 6-8 in full |

Automatic Behavior

The system automatically:

  1. Tracks conversation length
  2. Triggers compaction when exchanges exceed episodic-memory-threshold
  3. Summarizes older messages using the configured LLM
  4. Keeps only the last episodic-memory-history exchanges in full
  5. Stores the summary as an “episodic memory” for future context

The scheduler runs every 60 seconds, checking all active sessions and processing those that exceed the threshold.

Tuning Guidelines

High-Context Conversations

For complex discussions requiring more history:

name,value
episodic-memory-history,5
episodic-memory-threshold,10

Token-Constrained Environments

For smaller context windows or cost optimization:

name,value
episodic-memory-history,1
episodic-memory-threshold,2

Disable Compaction

Set threshold to 0 to disable automatic compaction:

name,value
episodic-memory-threshold,0

Extended Retention

For long-term memory across sessions:

name,value
episodic-memory-max-episodes,500
episodic-memory-retention-days,730

Use Case Recommendations

| Use Case | History | Threshold | Rationale |
|---|---|---|---|
| FAQ Bot | 1 | 2 | Questions are independent |
| Customer Support | 2 | 4 | Some context needed |
| Technical Discussion | 4 | 8 | Complex topics require history |
| Therapy/Coaching | 5 | 10 | Continuity is critical |
| Long-term Assistant | 3 | 6 | Balance memory and context |

Token Savings

Compaction significantly reduces token usage:

| Scenario | Without Compaction | With Compaction | Savings |
|---|---|---|---|
| 10 exchanges | ~5,000 tokens | ~2,000 tokens | 60% |
| 20 exchanges | ~10,000 tokens | ~3,000 tokens | 70% |
| 50 exchanges | ~25,000 tokens | ~5,000 tokens | 80% |

Actual savings depend on message length and summary quality.

Summary Storage

Summaries are stored with special role identifiers:

  • Role episodic or compact marks summary messages
  • Summaries include key points from compacted exchanges
  • Original messages are not deleted, just excluded from active context
  • Episodes are searchable for context retrieval across sessions

Benefits

  • Automatic management - No manual intervention needed
  • Token efficiency - Stay within model context limits
  • Context preservation - Important information kept via summaries
  • Relevant context - Recent exchanges kept in full detail
  • Cost savings - Fewer tokens = lower API costs
  • Long-term memory - Episode storage enables recall across sessions

Interaction with Caching

Episodic memory works alongside semantic caching:

  • Caching: Reuses responses for similar queries (see Semantic Caching)
  • Episodic Memory: Manages conversation length over time

Both features reduce costs and improve performance independently.

Best Practices

  1. Start with defaults - Work well for most use cases
  2. Monitor token usage - Adjust if hitting context limits
  3. Consider conversation type - Support vs complex discussion
  4. Test different values - Find optimal balance for your users
  5. Set retention appropriately - Balance memory vs privacy requirements

Troubleshooting

| Issue | Cause | Solution |
|---|---|---|
| Context too long | Threshold too high | Lower episodic-memory-threshold |
| Lost context | History too low | Increase episodic-memory-history |
| Summaries missing info | Model limitations | Use quality instead of fast |
| No compaction occurring | Threshold is 0 or disabled | Set positive threshold, enable feature |
| Old episodes not deleted | Retention too long | Lower episodic-memory-retention-days |

See Also

Caching

botserver includes automatic caching to improve response times and reduce redundant processing, including semantic caching for LLM responses using an in-memory cache component.

Caching Architecture

Features

The caching system provides exact match caching for identical prompts and semantic similarity matching to find and reuse responses for semantically similar prompts. Configurable TTL settings control how long cached responses remain valid. Caching can be enabled or disabled on a per-bot basis through configuration. Embedding-based similarity uses local embedding models for semantic matching, and comprehensive statistics and monitoring track cache hits, misses, and performance metrics.

How Caching Works

Caching in botserver is controlled by configuration parameters in config.csv. The system automatically caches LLM responses and manages conversation history.

When enabled, the semantic cache operates through a straightforward process. When a user asks a question, the system checks if a semantically similar question was asked before. If the similarity exceeds the threshold (typically 0.95), it returns the cached response. Otherwise, it generates a new response and caches it for future queries.

Configuration

Basic Cache Settings

From default.gbai/default.gbot/config.csv:

llm-cache,false              # Enable/disable LLM response caching
llm-cache-ttl,3600          # Cache time-to-live in seconds
llm-cache-semantic,true     # Use semantic similarity for cache matching
llm-cache-threshold,0.95    # Similarity threshold for cache hits

Configuration Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| llm-cache | boolean | false | Enable/disable LLM response caching |
| llm-cache-ttl | integer | 3600 | Time-to-live for cached entries (in seconds) |
| llm-cache-semantic | boolean | true | Enable semantic similarity matching |
| llm-cache-threshold | float | 0.95 | Similarity threshold for semantic matches (0.0-1.0) |

Embedding Service Configuration

For semantic similarity matching, ensure your embedding service is configured:

embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf

Conversation History Management

The system manages conversation context through these parameters:

episodic-memory-history,2    # Number of previous messages to include in context
episodic-memory-threshold,4    # Compact conversation after N exchanges

The episodic-memory-history setting keeps the last 2 exchanges in the conversation context, providing continuity without excessive token usage. The episodic-memory-threshold setting triggers summarization or removal of older messages after 4 exchanges to save tokens while preserving essential context.

Cache Storage

Architecture

The caching system uses a multi-level approach for optimal performance, combining fast in-memory access with configurable persistence options.

Cache Key Structure

The cache uses a multi-level key structure: exact matches use a hash of the full prompt, while semantic matches store embedding vectors in a semantic index for similarity comparison.

Cache Component Features

The cache component provides fast in-memory storage with sub-millisecond response times. Automatic expiration handles TTL-based cache invalidation without manual intervention. Distributed caching enables sharing the cache across multiple bot instances for consistent performance. Persistence options offer optional disk persistence for cache durability across restarts.

Example Usage

Basic Caching

' Caching happens automatically when enabled
USE KB "policies"

' First user asks: "What's the vacation policy?"
' System generates response and caches it

' Second user asks: "Tell me about vacation rules"
' System finds semantic match (>0.95 similarity) and returns cached response

Tool Response Caching

' Tool responses can also be cached
USE TOOL "weather-api"

' First request: "What's the weather in NYC?"
' Makes API call, caches response for 1 hour

' Second request within TTL: "NYC weather?"
' Returns cached response without API call

Cache Management

The cache operates automatically based on your configuration settings. Cache entries are managed through TTL expiration and memory policies without requiring manual intervention.

Best Practices

When to Enable Caching

Enable caching for FAQ bots with repetitive questions, knowledge base queries where the same information is requested frequently, API-heavy integrations where external calls are expensive, and high-traffic bots where response latency impacts user experience.

Disable caching for real-time data queries where freshness is critical, personalized responses that should vary per user, time-sensitive information that changes frequently, and development or testing environments where you need to see actual responses.

Tuning Cache Parameters

TTL settings should match your data freshness requirements. Use short TTL values around 300 seconds for news, weather, and stock prices. Medium TTL values around 3600 seconds work well for general knowledge and FAQs. Long TTL values around 86400 seconds suit static documentation and policies.

Similarity threshold affects matching precision. High thresholds of 0.95 or above provide strict matching with fewer false positives. Medium thresholds between 0.85 and 0.95 balance coverage and accuracy. Low thresholds below 0.85 enable broad matching but risk returning incorrect responses.
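
For example, a strict, short-lived cache for fast-changing content might combine a low TTL with a high threshold (a config.csv sketch using the parameters documented above; values are illustrative starting points):

llm-cache,true
llm-cache-ttl,300           # Short TTL for fast-changing content
llm-cache-semantic,true
llm-cache-threshold,0.95    # Strict matching, fewer false positives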

Memory Management

The cache component automatically manages memory through LRU (Least Recently Used) eviction policies that remove the oldest accessed entries first. Configurable memory limits prevent unbounded growth. Automatic key expiration cleans up entries that have exceeded their TTL.

Performance Impact

Typical performance improvements with caching enabled:

| Metric | Without Cache | With Cache | Improvement |
|---|---|---|---|
| Response Time | 2-5s | 50-200ms | 10-100x faster |
| API Calls | Every request | First request only | 90%+ reduction |
| Token Usage | Full context | Cached response | 95%+ reduction |
| Cost | $0.02/request | $0.001/request | 95% cost saving |

Troubleshooting

Cache Not Working

If caching isn’t working as expected, verify that the cache service is running and accessible. Confirm caching is enabled in your config with llm-cache,true. Check that the TTL hasn’t expired for entries you expect to be cached. Review the similarity threshold to ensure it isn’t set too high for your use case.

Clear Cache

Cache is managed automatically through TTL expiration and eviction policies. To clear the cache manually, restart the cache component or use the admin API endpoint /api/admin/cache/clear.

Summary

The semantic caching system in botserver provides intelligent response caching that reduces response latency by 10-100x and cuts API costs by 90% or more. Response quality is maintained through semantic matching that understands query intent rather than requiring exact matches. The system scales automatically with the cache component to handle increasing load. Configure caching based on your bot’s needs, monitor performance metrics, and tune parameters for optimal results.

AI and LLM

botserver integrates with Large Language Models to provide intelligent conversational capabilities and natural language understanding. This integration forms the core of what makes General Bots conversations feel natural and contextually aware.

Overview

The LLM integration in botserver enables sophisticated conversational experiences. Natural language conversations flow smoothly without rigid command structures. Responses are context-aware, drawing on conversation history and loaded knowledge bases. The system automatically discovers and invokes tools when they would help answer user questions. Document understanding allows bots to read and reason about uploaded files. Text generation and summarization capabilities support content creation and information distillation.

LLM Providers

OpenAI

OpenAI serves as the primary LLM provider with support for multiple model tiers. GPT-5 provides fast, cost-effective responses for straightforward conversations. GPT-5 mini delivers efficient processing for simpler queries. The o3 series offers superior reasoning for complex tasks. Custom fine-tuned models can be used when you have specialized requirements.

Configuration requires setting your API key and selecting a model:

OPENAI_API_KEY=your-api-key
LLM_MODEL=gpt-5

Local Models

For privacy-sensitive deployments or cost control, botserver supports self-hosted models. Llama.cpp compatible servers provide open-source model hosting. Custom inference endpoints allow integration with any API-compatible service. Privacy-preserving deployments keep all data on-premises without external API calls.

Configuration for local models specifies the provider type and endpoint:

LLM_PROVIDER=local
LLM_ENDPOINT=http://localhost:8081

The LLM Keyword

The LLM keyword provides direct access to language model capabilities within BASIC scripts. Usage patterns differ between background processing and interactive conversations.

Background Processing

For scheduled tasks and background jobs that do not interact directly with users, the LLM keyword generates content that can be stored for later use.

' For background/scheduled tasks only - not for interactive conversations
summary = LLM "Explain quantum computing in simple terms"
SET BOT MEMORY "quantum_explanation", summary

Document Summarization

Scheduled tasks can process documents and generate summaries available to all users.

' Scheduled task to generate summaries for all users
document = GET "knowledge/policy.pdf"
summary = LLM "Summarize this document: " + document
SET BOT MEMORY "policy_summary", summary

Context-Aware Conversations

For interactive conversations, use SET CONTEXT to provide information that the System AI incorporates automatically when responding. This approach lets the AI generate natural responses rather than scripted outputs.

' For interactive conversations - use SET CONTEXT, not LLM
TALK "What's your question?"
question = HEAR
context = GET BOT MEMORY "knowledge"
SET CONTEXT "background", context
TALK "Based on our knowledge base, here's what I can tell you..."
' System AI automatically uses the context when responding

LLM Provider Implementation

The provider architecture lives in the src/llm/ directory with a modular design. The mod.rs file defines the provider trait and factory for instantiating providers. The openai.rs file implements the OpenAI provider with all API operations. The local.rs file provides support for local model servers.

Provider Trait

All LLM providers implement a common trait ensuring consistent behavior. The generate method produces text completions from prompts. The generate_stream method returns tokens incrementally for real-time display. The get_embedding method creates vector representations for semantic search. The count_tokens method estimates token usage before making API calls.

Context Management

Context Window

Managing the limited context window requires careful attention to what information reaches the model. Automatic truncation removes older content when approaching limits. Context compaction summarizes extensive histories into shorter representations. Relevance filtering prioritizes information most likely to help with the current query. History summarization condenses long conversations into essential points.

Context Sources

The context provided to the LLM comes from multiple sources that combine to create informed responses. Conversation history provides recent messages for continuity. Knowledge base chunks supply relevant document excerpts. Bot memory contributes persistent context that applies across conversations. Tool definitions tell the model what functions it can invoke. User profile information enables personalization based on known preferences.
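
Several of these sources can also be assembled explicitly in a dialog. A sketch using keywords documented elsewhere in this book (the memory keys are illustrative):

' Combine bot memory and user facts into the conversation context
knowledge = GET BOT MEMORY "knowledge"
facts = USER FACTS()

SET CONTEXT "background", knowledge
SET CONTEXT "user_profile", facts
' The System AI also draws on history, loaded KBs, and tool definitions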

Prompt Engineering

System Prompts

System prompts establish the bot’s personality and capabilities. These are typically configured in bot memory and loaded into context at the start of conversations.

system_prompt = GET BOT MEMORY "system_prompt"
SET CONTEXT "system", system_prompt

Dynamic Prompts

Building prompts programmatically allows context to reflect current conditions. Variables set in context become available to the System AI for generating responses.

' For interactive conversations - use SET CONTEXT
SET CONTEXT "user_name", user_name
SET CONTEXT "current_date", NOW()
' System AI automatically incorporates this context

Streaming Responses

WebSocket Streaming

Real-time token streaming creates a responsive user experience. As the LLM generates tokens, each token is sent immediately via WebSocket to the connected client. The UI updates progressively as tokens arrive, showing the response as it forms. The complete response is assembled on the client side once generation finishes.

Stream Control

Several controls manage the streaming process. Users can start and stop generation as needed. Long responses can be cancelled if they are not useful. Timeout protection prevents indefinitely hanging connections. Error recovery handles network interruptions gracefully by resuming or restarting generation.

Embeddings

Vector Generation

Creating embeddings transforms text into vectors for semantic search. The embedding process converts natural language into high-dimensional numerical representations that capture meaning.

Embedding Models

Different embedding models serve different needs. OpenAI’s text-embedding-ada-002 provides high-quality embeddings through their API. Local deployments can use sentence transformers for on-premises embedding generation. Custom models can be configured when you have specialized embedding requirements.

Token Management

Token Counting

Estimating token usage before making API calls helps with cost control and context management. Token counting uses the same tokenizer as the target model to produce accurate estimates.

Token Limits

Several factors constrain token usage. Each model has specific limits on total tokens per request. Context window constraints determine how much history and knowledge base content fits. Rate limiting prevents exceeding API quotas. Cost management tracks token usage against budgets.

Error Handling

Common Errors

Several error conditions occur frequently when working with LLMs. Invalid API keys prevent authentication with the provider. Rate limit exceeded errors indicate too many requests in a time window. Context too long errors mean the prompt exceeds the model’s maximum. Model unavailable errors happen during provider outages. Network timeouts occur when connections take too long.

Fallback Strategies

Robust error handling employs multiple fallback strategies. Retry with exponential backoff handles transient failures. Switching to a backup model maintains service when the primary is unavailable. Reducing context size can resolve context length errors. Caching responses reduces API calls and provides fallback content. Returning graceful errors keeps users informed when recovery is not possible.

Performance Optimization

Caching

Response caching dramatically improves performance for repeated queries. Semantic caching identifies similar questions and returns cached responses without API calls. Cache invalidation strategies ensure responses remain fresh as knowledge bases update. Cache warming pre-generates responses for common questions during off-peak times.

Batching

Batching multiple requests improves throughput and reduces per-request overhead. Embedding generation particularly benefits from batching when processing many documents. Rate limit management becomes simpler with controlled batch submission.

Connection Pooling

Connection pooling to LLM providers reduces latency from connection establishment. Keep-alive connections persist across requests. Pool sizing balances resource usage against responsiveness.

Model Selection

Choosing the right model involves balancing several factors. Capability requirements determine the minimum model sophistication needed. Response latency requirements favor faster models for interactive use. Cost constraints may push toward more economical model tiers. Privacy requirements might mandate local models over cloud APIs.

Model Comparison

GPT-5 mini offers the fastest responses at the lowest cost, suitable for straightforward questions. Claude Sonnet 4.5 and GPT-5 provide superior reasoning for complex queries with good balance of cost and capability. Local models like Llama variants offer privacy and cost predictability with varying capability levels. Specialized models may excel at particular domains like code or medical content.

Integration with Tools

LLMs in botserver work closely with the tool system. The model receives tool definitions describing available functions. When a user request would benefit from tool use, the model generates a tool call. botserver executes the tool and returns results to the model. The model incorporates tool results into its final response.

This integration enables bots to take actions beyond conversation, such as querying databases, sending emails, or calling external APIs, all orchestrated naturally through conversation.
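
As a sketch of that loop from the scripting side, a small .bas file is all the model needs to be able to send email (the file name, parameters, and subject line are illustrative; the SEND MAIL signature follows the broadcast chapter):

' notify.bas - auto-registered as a tool the LLM can call
PARAM recipient AS string
PARAM message AS string
DESCRIPTION "Send a notification email to a recipient"

SEND MAIL recipient, "Notification", message, []
TALK "Notification sent to " + recipient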

Best Practices

Effective LLM usage follows several guidelines. Keep system prompts focused and specific rather than trying to cover every scenario. Use SET CONTEXT for interactive conversations rather than generating responses directly with LLM calls. Load relevant knowledge bases before conversations to improve response quality. Monitor token usage to manage costs. Test responses across different query types to ensure consistent quality.

Debugging and Monitoring

Debugging LLM interactions requires visibility into prompts and responses. Enable verbose logging during development to see full API exchanges. Monitor response quality metrics over time. Track token usage and costs per conversation. Review conversation logs to identify improvement opportunities.

See Also

The Context Configuration chapter explains context window management in detail. The LLM Configuration chapter covers all configuration options. The Tool Definition chapter describes creating tools the LLM can invoke. The Knowledge Base chapter explains how documents integrate with LLM context.

Hybrid RAG Search

Hybrid search combines dense (semantic) and sparse (keyword) retrieval for better search quality than either method alone.

Overview

| Method | Strengths | Weaknesses |
|---|---|---|
| Dense (Semantic) | Synonyms, meaning, paraphrasing | Rare terms, exact matches |
| Sparse (BM25) | Exact terms, product codes, names | No semantic understanding |
| Hybrid | Best of both | Slightly more computation |

How It Works

User Query
    │
    ├──────────────────┐
    ▼                  ▼
Dense Search      Sparse Search
(Weight: 0.7)     (Weight: 0.3)
    │                  │
    └────────┬─────────┘
             ▼
    Reciprocal Rank Fusion
             │
             ▼
    Optional Reranking
             │
             ▼
       Final Results

Reciprocal Rank Fusion (RRF):

RRF_score(d) = Σ 1 / (k + rank_i(d))
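
Here k is a smoothing constant (the rag-rrf-k setting, 60 by default) and rank_i(d) is document d’s rank in retriever i’s result list. For example, a document ranked 2nd by dense search and 5th by sparse search scores 1/62 + 1/65 ≈ 0.0315, beating a document that appears 1st in only one list (1/61 ≈ 0.0164). As the diagram above suggests, the configured dense and sparse weights scale each retriever’s contribution to the fused score.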

Configuration

In config.csv:

name,value
rag-hybrid-enabled,true
rag-dense-weight,0.7
rag-sparse-weight,0.3
rag-top-k,10
rag-rrf-k,60
rag-reranker-enabled,false

Weight Tuning

| Content Type | Dense | Sparse | Use Case |
|---|---|---|---|
| Balanced | 0.7 | 0.3 | General purpose |
| Semantic-Heavy | 0.9 | 0.1 | Conversational, multilingual |
| Keyword-Heavy | 0.4 | 0.6 | Technical docs, product catalogs |
| Equal | 0.5 | 0.5 | When unsure |
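
For example, a product-catalog bot would take the Keyword-Heavy row above (a config.csv sketch):

name,value
rag-dense-weight,0.4
rag-sparse-weight,0.6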

Reranking

Optional LLM-based reranking for highest quality:

name,value
rag-reranker-enabled,true
rag-reranker-model,quality
rag-reranker-top-n,20

| Aspect | Without | With Reranking |
|---|---|---|
| Latency | ~50ms | ~500ms |
| Quality | Good | Excellent |
| Cost | None | LLM API cost |

Use for: Legal, medical, financial, compliance-critical queries.

Usage

Hybrid search is automatic when enabled. No code changes needed:

USE KB "company-policies"
' Queries automatically use hybrid search

Performance

| Metric | Target |
|---|---|
| MRR (Mean Reciprocal Rank) | > 0.7 |
| Recall@10 | > 0.9 |
| Latency P95 | < 200ms |
| Cache Hit Rate | > 40% |

Caching

name,value
rag-cache-enabled,true
rag-cache-ttl,3600
rag-cache-max-size,10000

Troubleshooting

| Issue | Solution |
|---|---|
| Poor results | Adjust weights for content type |
| High latency | Reduce rag-top-k, enable caching, disable reranking |
| Missing expected results | Check document indexed, verify no filters excluding it |

See Also

Memory Management

General Bots provides a comprehensive memory system that enables persistent storage, cross-session continuity, and multi-agent data sharing. This chapter covers all memory types, their use cases, and best practices.

Overview

The memory system supports four distinct scopes:

Memory Type | Scope | Persistence | Use Case
User Memory | Per user, all bots | Permanent | Preferences, profile, facts
Bot Memory | Per bot, all users | Permanent | Bot state, counters, config
Session Memory | Per session | Session lifetime | Current conversation context
Episodic Memory | Per conversation | Permanent | Conversation summaries

User Memory

User memory follows users across all bots and sessions, enabling personalization and continuity.

Keywords

' Store user data
SET USER MEMORY "key", value

' Retrieve user data
value = GET USER MEMORY("key")

' Store a fact about the user
SET USER FACT "occupation", "software engineer"

' Get all user facts
facts = USER FACTS()

Examples

Personalized Greeting

' Check if returning user
name = GET USER MEMORY("name")

IF name = "" THEN
    TALK "Hello! What's your name?"
    HEAR name
    SET USER MEMORY "name", name
    TALK "Nice to meet you, " + name + "!"
ELSE
    TALK "Welcome back, " + name + "!"
END IF

Cross-Bot Preferences

' In any bot - store preference
SET USER MEMORY "language", "pt-BR"
SET USER MEMORY "timezone", "America/Sao_Paulo"

' In any other bot - use preference
language = GET USER MEMORY("language")
IF language = "pt-BR" THEN
    TALK "Olá! Como posso ajudar?"
ELSE
    TALK "Hello! How can I help?"
END IF

User Facts for AI Context

' Store facts about the user
SET USER FACT "company", "Acme Corp"
SET USER FACT "role", "Product Manager"
SET USER FACT "interests", "AI, automation, productivity"

' Later, use facts to personalize AI responses
facts = USER FACTS()
SET CONTEXT "user_profile" AS facts

response = LLM "Help me draft a product roadmap"
' AI now knows user's role and interests

Database Schema

User memory is stored in the user_memory table:

Column | Type | Description
id | UUID | Primary key
user_id | UUID | User identifier
key | VARCHAR(255) | Memory key
value | JSONB | Stored value (any type)
memory_type | VARCHAR(50) | preference, fact, context
ttl | TIMESTAMP | Optional expiration
created_at | TIMESTAMP | Creation time
updated_at | TIMESTAMP | Last update

Configuration

name,value
user-memory-enabled,true
user-memory-max-keys,1000
user-memory-default-ttl,0

Option | Default | Description
user-memory-enabled | true | Enable user memory
user-memory-max-keys | 1000 | Max keys per user
user-memory-default-ttl | 0 | Default TTL (0 = no expiry)

Bot Memory

Bot memory stores data at the bot level, shared across all users but isolated per bot.

Keywords

' Store bot data
SET BOT MEMORY "key", value

' Retrieve bot data
value = GET BOT MEMORY("key")

Examples

Bot Statistics

' Track bot usage
conversations = GET BOT MEMORY("total_conversations")
IF conversations = "" THEN conversations = 0  ' Default to zero on first run
conversations = conversations + 1
SET BOT MEMORY "total_conversations", conversations

PRINT "This bot has had " + conversations + " conversations"

Feature Flags

' Store feature configuration
SET BOT MEMORY "enable_voice", true
SET BOT MEMORY "max_retries", 3
SET BOT MEMORY "welcome_message", "Hello! I'm your assistant."

' Use in logic
enableVoice = GET BOT MEMORY("enable_voice")
IF enableVoice THEN
    ' Enable voice features
END IF

Cache API Results

' Cache expensive API calls
cachedRates = GET BOT MEMORY("exchange_rates")
cachedTime = GET BOT MEMORY("exchange_rates_time")

IF cachedRates = "" OR (NOW() - cachedTime) > 3600 THEN
    ' Refresh cache
    rates = GET "https://api.exchangerate.host/latest"
    SET BOT MEMORY "exchange_rates", rates
    SET BOT MEMORY "exchange_rates_time", NOW()
ELSE
    rates = cachedRates
END IF

Use Cases

Use Case | Example Key | Description
Counters | total_orders | Track bot-wide metrics
Config | max_items | Runtime configuration
Cache | api_cache_products | Cached API responses
State | last_sync_time | Operational state

Session Memory

Session memory is temporary storage for the current conversation session.

Keywords

' Store in session
SET "key", value

' Retrieve from session
value = GET "key"

' Set context for AI
SET CONTEXT "topic" AS "billing inquiry"

Examples

Conversation State

' Track conversation flow
SET "current_step", "collecting_info"
SET "collected_name", username
SET "collected_email", useremail

' Later in conversation
step = GET "current_step"
IF step = "collecting_info" THEN
    ' Continue collecting
END IF

Multi-Turn Context

' Build context through conversation
SET CONTEXT "customer_id" AS customerid
SET CONTEXT "issue_type" AS "refund"
SET CONTEXT "order_id" AS orderid

' AI has full context for responses
response = LLM "Help resolve this customer issue"

Session Lifetime

  • Created when user starts conversation
  • Persists across messages in same conversation
  • Cleared when conversation ends or times out
  • Default timeout: 30 minutes of inactivity

Episodic Memory

Episodic memory stores summaries of past conversations for long-term context.

How It Works

  1. Conversation Ends - System detects conversation completion
  2. Summary Generated - LLM creates concise summary
  3. Stored - Summary saved with metadata
  4. Retrieved - Used in future conversations for context

Example

' System automatically creates episode summaries
' Example summary stored:
' {
'   "conversation_id": "abc123",
'   "summary": "User asked about refund policy, was satisfied with explanation",
'   "topics": ["refunds", "policy"],
'   "sentiment": "positive",
'   "resolution": "resolved",
'   "created_at": "2025-01-15T10:30:00Z"
' }

' In future conversations, retrieve relevant episodes
episodes = GET USER MEMORY("recent_episodes")
SET CONTEXT "previous_interactions" AS episodes

Configuration

name,value
episodic-memory-enabled,true
episodic-memory-threshold,4
episodic-memory-history,2
episodic-memory-model,fast
episodic-memory-max-episodes,100
episodic-memory-retention-days,365
episodic-memory-auto-summarize,true

Memory Patterns

Profile Builder Pattern

Build user profile progressively through conversations.

' Check what we know
profile = GET USER MEMORY("profile")
IF profile = "" THEN
    profile = #{ }
END IF

' Fill in missing information naturally
IF profile.name = "" THEN
    ' Ask for name when appropriate
END IF

IF profile.preferences = "" THEN
    ' Learn preferences from behavior
END IF

' Update profile
SET USER MEMORY "profile", profile

Preference Learning Pattern

Learn preferences from user behavior.

' Track user choices
choice = HEAR selection
choices = GET USER MEMORY("choices_history")
IF choices = "" THEN choices = []

' Add new choice
choices = APPEND(choices, #{
    choice: choice,
    context: currentContext,
    timestamp: NOW()
})
SET USER MEMORY "choices_history", choices

' Analyze patterns periodically
IF LEN(choices) >= 10 THEN
    preferences = LLM "Analyze these choices and identify preferences: " + JSON(choices)
    SET USER MEMORY "learned_preferences", preferences
END IF

Context Handoff Pattern

Pass context between bots in multi-agent scenarios.

' Sending bot: Store context for receiving bot
handoffContext = #{
    topic: currentTopic,
    userIntent: detectedIntent,
    conversationSummary: summary,
    relevantFacts: facts
}
SET USER MEMORY "handoff_context", handoffContext

' Transfer to specialist
TRANSFER CONVERSATION TO "specialist-bot"

' Receiving bot: Retrieve context
context = GET USER MEMORY("handoff_context")
SET CONTEXT "background" AS context.conversationSummary
SET CONTEXT "intent" AS context.userIntent

' Clear handoff context after use
SET USER MEMORY "handoff_context", ""

TTL Pattern

Use time-to-live for temporary data.

' Store with expiration (implementation depends on memory type)
' For session-like data in user memory:
SET USER MEMORY "temp_auth_code", #{
    code: authCode,
    expires: NOW() + 300  ' 5 minutes
}

' Check expiration
stored = GET USER MEMORY("temp_auth_code")
IF stored <> "" AND stored.expires > NOW() THEN
    ' Valid
ELSE
    ' Expired or not found
    SET USER MEMORY "temp_auth_code", ""
END IF

Best Practices

Key Naming Conventions

' Use consistent prefixes
SET USER MEMORY "pref_language", "en"      ' Preferences
SET USER MEMORY "pref_timezone", "UTC"
SET USER MEMORY "fact_name", "John"        ' Facts
SET USER MEMORY "fact_company", "Acme"
SET USER MEMORY "ctx_last_topic", "sales"  ' Context
SET USER MEMORY "cache_products", data     ' Cached data

Don’t Store Sensitive Data

' ❌ DON'T: Store sensitive data
SET USER MEMORY "password", userPassword
SET USER MEMORY "ssn", socialSecurityNumber
SET USER MEMORY "credit_card", cardNumber

' ✅ DO: Store references only
SET USER MEMORY "payment_method_id", paymentId
SET USER MEMORY "verified", true

Handle Missing Data Gracefully

' Always check for empty/missing
name = GET USER MEMORY("name")
IF name = "" THEN
    name = "there"  ' Default value
END IF
TALK "Hello, " + name + "!"

Clean Up Old Data

' Periodic cleanup of old data
lastCleanup = GET BOT MEMORY("last_memory_cleanup")
IF lastCleanup = "" OR (NOW() - lastCleanup) > 86400 THEN
    ' Run cleanup logic
    ' Remove expired entries, old cache, etc.
    SET BOT MEMORY "last_memory_cleanup", NOW()
END IF

Troubleshooting

Memory Not Persisting

  1. Check memory type - session memory doesn’t persist
  2. Verify database connection
  3. Check for key name typos (keys are case-sensitive)
  4. Review memory limits

Cross-Bot Memory Not Sharing

  1. Ensure using USER MEMORY not BOT MEMORY
  2. Verify same user identity
  3. Check user-memory-enabled config

Memory Full Errors

  1. Clean up old/unused keys
  2. Increase user-memory-max-keys
  3. Use TTL for temporary data
  4. Consolidate related keys into objects

Conversation Management

This chapter explores how botserver manages conversations through sessions, message history, and context tracking. Understanding these mechanisms helps you build bots that maintain coherent, contextual interactions across multiple turns and sessions.

The Conversation Lifecycle

Every conversation in botserver follows a well-defined lifecycle that begins when a user first connects and continues until the session expires or ends explicitly. When a user interacts with a bot, the system creates a session that serves as the container for all conversation state, including message history, user preferences, and any variables set during the interaction.

Sessions persist across individual messages, allowing conversations to span multiple interactions. A user might ask a question, receive a response, and return hours later to continue the same conversation thread. The system maintains this continuity by storing session data in PostgreSQL for durability while caching active sessions in the cache layer for fast access.

The session contains a unique identifier, a reference to the associated user (or an anonymous identifier), the bot being interacted with, creation and expiration timestamps, and all accumulated conversation state. This comprehensive tracking enables sophisticated multi-turn interactions where the bot remembers previous exchanges and builds upon them.
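
As an illustrative sketch (field names are examples, not the exact schema), a session record carries roughly this shape:

{
    "session_id": "uuid",
    "user_id": "uuid or anonymous identifier",
    "bot_id": "uuid",
    "created_at": "2025-01-15T10:00:00Z",
    "expires_at": "2025-01-15T10:30:00Z",
    "context": { "needs_human": false, "current_step": "collecting_info" }
}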

Message History and Persistence

Every message exchanged during a conversation is recorded in the message history table, creating a permanent record of the interaction. Each entry captures the session identifier linking it to the conversation, the user and bot involved, the actual message content, an indicator of whether the message came from the user or the bot, and a precise timestamp.

The system distinguishes between several message types that serve different purposes. User messages represent input from the human participant. Bot responses contain the generated replies. System messages convey status updates or notifications. Tool outputs capture results from executed tools. This categorization helps with both display formatting and analysis.

Message history serves multiple purposes beyond simple record-keeping. The conversation context sent to the language model draws from recent history, enabling contextual responses. Analytics systems process history to understand usage patterns and conversation quality. Compliance requirements often mandate retention of interaction records, which the history system satisfies.

Context Assembly and Management

Context management represents one of the most sophisticated aspects of conversation handling. When generating a response, the system must assemble relevant information from multiple sources into a coherent context that guides the language model’s output.

The context assembly process draws from several layers. System context includes the bot’s configuration and base prompts that establish personality and capabilities. Conversation context incorporates recent message history to maintain coherence. Knowledge context adds relevant documents retrieved from active knowledge bases. User context includes preferences and state specific to the current user. Tool context describes available tools the model can invoke.

Because language models have limited context windows, the system must manage what information to include. Automatic truncation removes older messages when the context grows too large, preserving the most recent and relevant exchanges. For very long conversations, summarization compresses earlier history into concise summaries that capture essential information without consuming excessive tokens.

Scripts can manipulate context directly through dedicated keywords. Setting context adds specific information that should influence responses. Clearing context removes information that is no longer relevant. These operations give developers fine-grained control over what the model knows during generation.
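
A minimal sketch of that control flow with the SET CONTEXT keyword (overwriting a key with an empty value as the clearing step is an assumption here):

' Add information that should influence the next response
SET CONTEXT "issue_type" AS "refund"
SET CONTEXT "order_id" AS orderid

response = LLM "Suggest next steps for this customer"

' Drop context that is no longer relevant (assumed clearing approach)
SET CONTEXT "issue_type" AS ""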

Multi-Turn Interaction Patterns

Conversations rarely consist of single isolated exchanges. Users ask follow-up questions, refine requests, and reference earlier parts of the conversation. botserver’s architecture specifically supports these multi-turn patterns through careful context management and entity tracking.

When a user says “Book a meeting for tomorrow” followed by “Make it at 2 PM,” the system must understand that “it” refers to the meeting mentioned in the previous turn. This reference resolution happens automatically through the included conversation history, which gives the model the context needed to interpret pronouns and implicit references correctly.

Topic persistence allows conversations to maintain focus across multiple exchanges. If a user is discussing product returns, subsequent messages are interpreted in that context even when they don’t explicitly mention returns. The accumulated history provides the framing that makes this natural understanding possible.

Guided conversations implement multi-step flows where the bot collects information progressively. Rather than asking for all information at once, the bot might first ask for a name, then an email, then a preference. Each step builds on previous responses, with validation ensuring data quality before proceeding.
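
A minimal sketch of such a guided flow with TALK and HEAR:

' Collect one field per turn, validating before proceeding
TALK "What's your name?"
HEAR name
SET "collected_name", name

TALK "Thanks, " + name + ". What's your email?"
HEAR email
IF email CONTAINS "@" THEN
    SET "collected_email", email
ELSE
    TALK "That doesn't look like an email address. Let's try again."
END IF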

Session Recovery and Continuity

Network interruptions, browser refreshes, and other disruptions shouldn’t break conversation flow. botserver implements robust session recovery that allows users to seamlessly continue where they left off.

When a user reconnects, the session identifier validates their return. The system retrieves stored history and reconstructs the conversation context. The user can then continue as if no interruption occurred, with full access to previous exchanges and accumulated state.

Error recovery extends beyond simple disconnections. If a response generation fails, the system preserves the last known good state. Graceful degradation provides meaningful feedback to users rather than cryptic errors. Automatic retry logic handles transient failures that resolve themselves.

Anonymous and Authenticated Conversations

botserver supports both authenticated users and anonymous visitors, with different handling for each case. Understanding these distinctions helps design appropriate conversation experiences.

Anonymous sessions receive temporary identifiers that exist only for the duration of the session. Permissions are limited compared to authenticated users. Storage is typically short-term, with sessions expiring quickly after inactivity. These constraints reflect the reduced trust level for unidentified users.

When an anonymous user authenticates, their session upgrades to a full user session. Accumulated history transfers to the persistent user record. Permissions expand to match the authenticated role. This seamless upgrade path encourages users to authenticate without losing conversation progress.

Real-Time Communication

WebSocket connections provide the real-time communication channel for conversations. Unlike traditional HTTP request-response patterns, WebSockets maintain persistent bidirectional connections that enable instant message delivery in both directions.

The WebSocket protocol supports several interaction patterns beyond basic message exchange. Streaming responses allow bots to send content progressively, displaying text as it generates rather than waiting for complete responses. Typing indicators let users know the bot is processing their request. Connection status updates inform users of connectivity changes.

Messages follow a structured format with type identifiers, content payloads, and session references. The server processes incoming messages, routes them through the conversation engine, and pushes responses back through the same WebSocket connection.
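
An envelope along those lines (illustrative, not the exact wire format) might look like:

{
    "type": "user_message",
    "session_id": "abc12345",
    "content": "I need help with my order"
}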

Conversation Analytics

Understanding how conversations perform helps improve bot effectiveness. botserver tracks numerous metrics that reveal conversation patterns and quality indicators.

Quantitative metrics include message counts, conversation lengths, response times, and tool usage frequency. These numbers identify basic patterns like peak usage times and average conversation depth.

Qualitative analysis examines conversation content for sentiment, topics, intents, and entities. This deeper understanding reveals what users actually want from the bot, what frustrates them, and what succeeds.

Performance metrics specifically track system behavior, including generation latency, error rates, and resource utilization during conversation processing.

Configuration and Tuning

Several configuration parameters affect conversation behavior. Session timeout controls how long inactive sessions persist before expiring. History length limits how many messages the system retains in active memory. Context window size determines how much information reaches the language model.

Retention policies govern long-term storage of conversation data. Message retention duration sets how long history persists before archival. Archive timing determines when conversations move to compressed storage. Anonymous retention specifically addresses the shorter lifetime appropriate for unidentified users.

These settings balance resource usage against conversation quality and compliance requirements. Longer retention supports better context and audit trails but consumes more storage. Larger context windows improve response quality but increase processing costs.

Privacy and Compliance

Conversation data represents sensitive information that requires careful handling. botserver implements multiple safeguards to protect user privacy while meeting compliance requirements.

Data retention policies ensure information doesn’t persist longer than necessary. Compression and archival reduce storage costs while maintaining accessibility for compliance purposes. Clear deletion procedures support user rights to have their data removed.

Access controls limit who can view conversation history. Users see their own conversations. Administrators may have audit access where compliance requires it. Appropriate logging tracks access to sensitive data.

Summary

Conversation management in botserver provides the foundation for meaningful bot interactions. Through careful session handling, comprehensive message history, sophisticated context assembly, and robust recovery mechanisms, the system enables conversations that feel natural and maintain coherence across multiple turns, sessions, and circumstances. Understanding these capabilities helps developers build bots that engage users effectively while respecting privacy and compliance requirements.

Automation

This chapter explains how botserver enables bots to perform scheduled and event-driven tasks without requiring direct user interaction. Through automation capabilities, bots can proactively maintain content, process data, and respond to system events, transforming them from reactive assistants into active participants in organizational workflows.

Automation Fundamentals

botserver implements automation through two complementary mechanisms. The SET SCHEDULE keyword enables cron-based task scheduling, allowing scripts to execute at predetermined times. Event triggers through the ON keyword enable scripts to respond to database changes and system events. Together, these mechanisms support a wide range of automation scenarios from simple periodic tasks to complex event-driven workflows.

Understanding when to use automation is as important as understanding how. Automated tasks run without an active user session, which means they cannot directly communicate with users through TALK statements. Instead, automated scripts typically gather and process information, storing results in bot memory where users can access it during their next interaction.

Scheduling Tasks with SET SCHEDULE

The SET SCHEDULE keyword registers a script for periodic execution using standard cron expressions. When the BASIC compiler encounters this keyword, it extracts the schedule specification and creates an entry in the system_automations table. A background service monitors this table and executes scripts when their scheduled times arrive.

Cron expressions follow the standard Unix format with five fields representing minutes, hours, day of month, month, and day of week. The expression 0 9 * * * means “at minute 0 of hour 9, every day of every month, regardless of day of week”—in other words, daily at 9:00 AM. The expression */30 * * * * means “every 30 minutes” by using the step syntax. More complex patterns like 0 9 * * 1-5 specify “weekdays at 9 AM” by restricting the day of week field to Monday through Friday.

Scheduled scripts execute with full bot context and permissions, but without an associated user session. This means they can access bot memory, call external APIs, read and write files, and perform data processing. However, they cannot use TALK to send messages since there’s no user to receive them. Results should be stored in bot memory for later retrieval or sent through other channels like email.

Practical Scheduling Examples

A daily report generation script illustrates common automation patterns. The script specifies its schedule, retrieves data from the previous day, processes it using LLM analysis, and stores the result in bot memory. When users later ask about the daily report, the bot can retrieve and present this pre-computed summary without delay.
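
A sketch of that pattern, assuming SET SCHEDULE takes the cron expression as a string and that a FIND filter like the one below is available (both forms are illustrative):

' Run daily at 7 AM (assumed SET SCHEDULE syntax)
SET SCHEDULE "0 7 * * *"

' Gather and summarize yesterday's data (illustrative filter)
orders = FIND "orders", "created_at >= yesterday"
summary = LLM "Summarize yesterday's orders: " + JSON(orders)

' Store for later retrieval - TALK is unavailable without a user session
SET BOT MEMORY "daily_report", summary
SET BOT MEMORY "daily_report_time", NOW()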

Content update automation keeps information fresh without manual intervention. A news aggregation script might run every six hours, fetching latest headlines, summarizing them, and caching the result. Users interacting with the bot receive current information even if nobody has explicitly updated the content.

Maintenance tasks handle housekeeping that shouldn’t require human attention. Cleanup scripts can run during low-activity periods to archive old data, remove temporary files, or perform consistency checks. These tasks keep the system healthy without consuming resources during peak usage times.

Data synchronization scripts bridge external systems with bot knowledge. A script might periodically fetch updates from a CRM, inventory system, or other business application, ensuring the bot’s responses reflect current organizational reality.

Event-Driven Automation

The ON keyword creates triggers that fire when specific events occur rather than at scheduled times. Currently, the system supports database event triggers that respond to table modifications. When the specified event occurs, the associated code block executes.

Event triggers complement scheduled tasks by enabling immediate response to changes rather than waiting for the next scheduled run. While a scheduled task might check for new registrations hourly, an event trigger fires immediately when a registration occurs, enabling real-time automation workflows.

The system stores triggers in the same system_automations table as scheduled tasks, distinguished by their trigger kind. Each trigger specifies its target (the table or resource being monitored), parameters controlling its behavior, and an activation flag allowing temporary disabling without deletion.
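
As an illustration only (consult the keyword reference for the exact ON syntax), a trigger that reacts to table changes might look like:

' Illustrative syntax - fires when the monitored table changes
ON "registrations"
    SET BOT MEMORY "last_registration", NOW()
END ON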

The System Automations Table

The system_automations table serves as the central registry for all automation rules. Each record contains a unique identifier, the bot that owns the automation, the kind of trigger (scheduled or event-driven), the cron expression for scheduled tasks, parameters such as script names, an active flag, and a timestamp tracking the last execution.

This centralized storage allows the background scheduler to efficiently query upcoming tasks across all bots. It also enables administrative monitoring of automation activity and troubleshooting of failed executions.

Automation Lifecycle Management

Understanding how automations are created, executed, modified, and removed helps administrators manage bot deployments effectively.

During script compilation, the preprocessor detects SET SCHEDULE statements and extracts their cron expressions. The system creates or updates corresponding entries in the system_automations table. If a script previously had a schedule that was removed, the old automation entry is deleted.

When execution time arrives, the scheduler loads the bot’s context, executes the BASIC script, updates the last_triggered timestamp, and logs the execution result. Any errors during execution are captured and logged but don’t affect other scheduled tasks.

Modifying a schedule requires only changing the SET SCHEDULE line in the script. The next compilation updates the database entry automatically. This approach keeps schedule definitions with their associated code rather than requiring separate configuration management.

Deleting a bot cascades to remove all its automations, preventing orphaned schedules that would fail at execution time.

Best Practices for Automation

Effective automation requires thoughtful design decisions. Scheduling frequency should match actual needs—running a task every minute when hourly would suffice wastes resources and can mask problems. Consider what would happen if a task takes longer than its scheduling interval, as overlapping executions can cause unexpected behavior.

Error handling in automated scripts is particularly important because no user is present to observe failures. Scripts should catch exceptions, log meaningful error messages, and degrade gracefully when dependencies are unavailable. Consider storing error states in bot memory so users can be informed of issues during their next interaction.

Scripts should be tested manually before enabling scheduling. Running a script interactively verifies that it works correctly and helps identify issues that might not be apparent from logs alone.

Bot memory serves as the bridge between automated tasks and user interactions. Automated scripts store their results in bot memory, making that information available to all users. This pattern works well for information that benefits from pre-computation, like summarized reports or aggregated statistics.
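
Continuing the daily report example, the interactive side of the bridge is a plain memory read (a minimal sketch using the memory keywords):

' Interactive script: serve the pre-computed result instantly
report = GET BOT MEMORY("daily_report")
IF report = "" THEN
    TALK "Today's report isn't ready yet. Please check back soon."
ELSE
    TALK report
END IF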

External credentials should never be hardcoded in scripts. Use bot memory to store API keys and other secrets, retrieving them at runtime. This practice improves security and simplifies credential rotation.

Understanding Limitations

Several constraints affect automation design decisions. The minimum scheduling granularity is one minute, as the cron format doesn’t support sub-minute precision. Tasks requiring more frequent execution need alternative approaches.

Each scheduled execution has timeout limits to prevent runaway tasks from consuming resources indefinitely. Long-running processes should be designed to complete within these limits or broken into smaller pieces.

The system doesn’t provide automatic retry on failure. If a scheduled task fails, it simply waits for the next scheduled time. Scripts needing retry behavior must implement it internally.

Only one instance of a scheduled script runs at a time. If execution takes longer than the scheduling interval, subsequent invocations are skipped rather than queued. This prevents resource exhaustion but means some scheduled times may be missed.

There’s no dependency management between scheduled tasks. If one task must complete before another begins, scripts must coordinate through bot memory or other synchronization mechanisms.

Monitoring Automated Tasks

Observing automation behavior helps identify problems and optimize performance. Active schedules can be queried directly from the system_automations table, filtered by bot and trigger kind. The last_triggered timestamp shows when each automation last executed successfully.

Execution logging captures both successful runs and failures at appropriate log levels. Monitoring these logs reveals patterns like consistently slow executions or recurring errors that might not be apparent from individual runs.

Debug logging at lower levels captures schedule changes during compilation, helping trace unexpected automation behavior to its source. Enabling debug logging temporarily can help diagnose why a schedule isn’t executing as expected.

Debugging Automation Issues

When automated tasks don’t behave as expected, systematic investigation identifies the cause. Common issues include invalid cron expressions that never match, scripts that work interactively but fail without a user session, external resources that are unavailable when the script runs, and permission issues that only manifest in the automation context.

Verifying the cron expression syntax ensures the schedule means what you intend. Online cron expression validators can help confirm that expressions match expected execution times.

Testing scripts manually with explicit handling for the missing user session helps identify code that incorrectly assumes user context. Any TALK statements will fail in automated context, and scripts must work correctly without user input.

Checking external resource availability at scheduled times reveals dependencies that might not be available around the clock. Business APIs often have maintenance windows, and network conditions vary throughout the day.

Reviewing permissions ensures the bot has access to all resources the automated script needs. Permissions that work for interactive users might not apply to automated execution contexts.

Security Considerations

Automated tasks execute with the bot’s full permissions, making them powerful but requiring careful design. Scripts can access any data the bot can access, call any API the bot is authorized to use, and store results in any location the bot can write.

This power means automated scripts should be reviewed carefully before deployment. Malicious or buggy automation could exfiltrate data, overwhelm external services, or fill storage with garbage. Limiting automation privileges isn’t possible in the current system, so careful script review is the primary safeguard.

Rate limiting applies to automated tasks just as it does to interactive use. Aggressive scheduling that exceeds API limits will be throttled, potentially causing tasks to fail or take longer than expected.

Monitoring for runaway automation helps catch scripts that behave differently than expected. Unusual resource consumption, excessive API calls, or unexpected storage growth might indicate automation problems requiring intervention.

Summary

botserver’s automation capabilities transform bots from reactive assistants into proactive system participants. Through SET SCHEDULE and event triggers, bots can maintain fresh content, process data regularly, and respond to system events without user interaction. Understanding the automation lifecycle, limitations, and best practices enables effective use of these powerful capabilities while avoiding common pitfalls. Automation extends bot value by handling routine tasks automatically, freeing users to focus on work that requires human judgment.

Email Integration

botserver provides email integration capabilities through IMAP/SMTP protocols, allowing bots to read, send, and manage emails.

Overview

Email integration in botserver enables reading emails via IMAP, sending emails via SMTP, email account management, draft creation and management, folder organization, and email-based automation workflows.

Configuration

Environment Variables

Email configuration requires these environment variables:

# IMAP Settings
EMAIL_IMAP_SERVER=imap.gmail.com
EMAIL_IMAP_PORT=993
EMAIL_USERNAME=your-email@example.com
EMAIL_PASSWORD=your-app-password

# SMTP Settings
EMAIL_SMTP_SERVER=smtp.gmail.com
EMAIL_SMTP_PORT=587
EMAIL_FROM=your-email@example.com

Feature Flag

Email functionality requires the email feature flag during compilation:

cargo build --features email

Email Module Structure

The email module is located in src/email/ and contains mod.rs with the email service implementation, account management functionality, message handling logic, and IMAP/SMTP client implementations.

Database Schema

User Email Accounts

The user_email_accounts table stores email account configurations with encrypted password storage. Users can configure multiple accounts, each with its own IMAP and SMTP server details.

Email Drafts

The email_drafts table provides draft management including To, CC, and BCC addresses, subject and body content, attachment metadata, and auto-save support for work in progress.

Email Folders

The email_folders table handles folder organization with IMAP folder mapping, message counts, unread tracking, and hierarchical structure support for nested folders.

BASIC Keywords for Email

SEND MAIL

Send emails from BASIC scripts:

SEND MAIL "recipient@example.com", "Subject", "Email body content"

' With variables
to = "user@example.com"
subject = "Meeting Reminder"
body = "Don't forget our meeting at 2 PM"
SEND MAIL to, subject, body

Email Automation

' Check for new emails
new_emails = GET_EMAILS("INBOX", "UNSEEN")

' Process each email
FOR EACH email IN new_emails
    sender = email.from
    subject = email.subject
    body = email.body

    ' Auto-reply logic
    IF subject CONTAINS "urgent" THEN
        SEND MAIL sender, "Re: " + subject, "I'll get back to you soon."
    END IF
NEXT

Email Operations

Reading Emails

The system can connect to IMAP servers, fetch message headers, download full messages, search by various criteria, mark messages as read or unread, and move messages between folders.

Sending Emails

SMTP operations include authentication with the mail server, sending plain text and HTML emails, reply and forward functionality, and bulk sending with configurable limits. Attachment support is planned for a future release.

Security

Password Storage

Email passwords are encrypted using AES-GCM and never stored in plaintext. Passwords are decrypted only when needed for authentication and memory is cleared after use to prevent credential leakage.

Connection Security

All email connections require TLS/SSL encryption with proper certificate validation. Secure authentication methods are enforced, and plaintext transmission is never permitted.

Use Cases

Support Ticket System

' Monitor support inbox
support_emails = GET_EMAILS("support", "UNSEEN")

FOR EACH email IN support_emails
    ' Create ticket
    ticket_id = CREATE_TICKET(email.from, email.subject, email.body)

    ' Send confirmation
    response = "Ticket #" + ticket_id + " created. We'll respond within 24 hours."
    SEND MAIL email.from, "Ticket Created: " + email.subject, response
NEXT

Newsletter Distribution

' Send newsletter to subscribers
subscribers = GET "subscribers.csv"
newsletter = GET "newsletter.html"

FOR EACH subscriber IN subscribers
    SEND MAIL subscriber.email, "Monthly Newsletter", newsletter
    WAIT 1  ' Rate limiting
NEXT

Email-to-Task Conversion

' Convert emails to tasks
task_emails = GET_EMAILS("tasks", "UNSEEN")

FOR EACH email IN task_emails
    CREATE TASK email.subject, email.body, email.from
    MARK_AS_READ email
NEXT

Integration with Other Features

With Calendar

Email integrates with the calendar system for meeting invitations, event reminders, and schedule updates sent via email notifications.

With Tasks

Task integration enables task creation from emails, status updates delivered via email, and deadline reminders sent to responsible parties.

With Knowledge Base

Knowledge base integration supports email archival for compliance, searchable email history, and providing email context for bot conversations.

Limitations

Current Limitations

The current implementation does not support attachment handling, provides only basic HTML email support, lacks email templates, has limited filtering options, and does not support OAuth2 authentication, requiring app-specific passwords instead.

Rate Limiting

Provider-specific rate limits apply to all email operations. Implement delays between sends to avoid throttling, monitor for rate limit errors, and use batch operations wisely to stay within provider limits.

Email Provider Setup

Gmail Configuration

To configure Gmail, first enable 2-factor authentication on your Google account. Then generate an app-specific password for botserver to use. Enable IMAP access in Gmail settings. Use imap.gmail.com on port 993 for IMAP and smtp.gmail.com on port 587 for SMTP.

Outlook/Office 365

For Outlook or Office 365, enable IMAP in your account settings. If 2FA is enabled, generate an app password. Use outlook.office365.com on port 993 for IMAP and smtp.office365.com on port 587 for SMTP.

Custom Email Servers

For custom email servers, configure the appropriate server addresses, port numbers, security settings including TLS or SSL requirements, and the authentication method supported by your server.
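
For example, a self-hosted server might use the same variables shown earlier (all values below are placeholders):

EMAIL_IMAP_SERVER=mail.internal.example.com
EMAIL_IMAP_PORT=993
EMAIL_SMTP_SERVER=mail.internal.example.com
EMAIL_SMTP_PORT=587
EMAIL_USERNAME=bot@example.com
EMAIL_PASSWORD=app-specific-password
EMAIL_FROM=bot@example.com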

Error Handling

Connection Errors

' Handle email errors
status = SEND MAIL recipient, subject, body
IF status = "sent" THEN
    TALK "Email sent successfully"
ELSE
    TALK "Failed to send email: " + status
    ' Log error for admin
END IF

Common Issues

Common email issues include authentication failures from incorrect credentials, network timeouts when servers are slow to respond, server unavailable errors during outages, quota exceeded errors when hitting send limits, and invalid address errors for malformed recipients.

Best Practices

Use app-specific passwords rather than primary account passwords to limit security exposure. Respect provider rate limits by implementing appropriate delays between operations. Implement retry logic for transient failures to ensure delivery. Validate email addresses before sending to catch format errors early. Monitor usage by tracking sent and received counts. Encrypt sensitive data in storage and transit. Maintain an audit trail by logging all email operations.

Monitoring

Metrics to Track

Key metrics include emails sent and received, failed operations and their causes, connection failures, processing time for email operations, and queue size when batching sends.

Health Checks

Regular health checks should verify IMAP connectivity, SMTP availability, account validity and credential freshness, and folder synchronization status.

Summary

Email integration in botserver enables powerful email-based automation and communication. Through IMAP/SMTP protocols and BASIC script integration, bots can manage email workflows, automate responses, and integrate email with other bot features for comprehensive communication automation.

Transfer to Human

The TRANSFER TO HUMAN keyword enables seamless handoff from bot conversations to human attendants. This is a critical feature for hybrid support workflows where complex issues require human intervention.

Overview

When a conversation requires human attention—whether due to customer request, issue complexity, or emotional escalation—the bot can transfer the conversation to a human attendant using the TRANSFER TO HUMAN keyword.

The system sets needs_human = true in the session context, which routes all subsequent messages from that customer to human attendants instead of the bot.

How It Works

Customer Message → Check needs_human
                        ↓
        ┌───────────────┴───────────────┐
        ↓                               ↓
  needs_human=false               needs_human=true
        ↓                               ↓
   Bot Processing              Human Attendant
        ↓                               ↓
  TRANSFER TO HUMAN?            Respond via
        ↓                       Console/WhatsApp
  Set needs_human=true                ↓
        ↓                       /resolve command
  Notify Attendants                   ↓
                              needs_human=false
                                      ↓
                              Back to Bot

Configuration

Enable CRM Features

Add the following to your bot’s config.csv:

name,value

# Required: Enable CRM/Transfer functionality
crm-enabled,true

# Optional: Enable LLM-assisted attendant features
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true

The crm-enabled setting activates:

  • Transfer to human functionality
  • Attendant queue management
  • WebSocket notifications
  • LLM assist features (if configured)

Configure Attendants

Create attendant.csv in your bot’s .gbai folder:

id,name,channel,preferences,department,aliases
att-001,John Smith,all,sales,commercial,john;johnny;js
att-002,Jane Doe,web,support,customer-service,jane
att-003,Bob Wilson,whatsapp,technical,engineering,bob;bobby
att-004,Maria Santos,all,collections,finance,maria

Column | Description
id | Unique identifier for the attendant
name | Display name shown to customers
channel | Channel they handle: all, web, whatsapp, teams, etc.
preferences | Type of work they prefer
department | Department for routing
aliases | Semicolon-separated nicknames for name matching

The needs_human Flag

When TRANSFER TO HUMAN is called, the system sets needs_human = true in the session’s context data. This flag controls message routing:

needs_human Value | Behavior
false (default) | Messages go to bot for processing
true | Messages go to human attendant

Checking the Flag in BASIC

' Check if conversation needs human
IF session.needs_human THEN
    TALK "You're connected to our support team."
ELSE
    TALK "I'm your AI assistant. How can I help?"
END IF

Manual Flag Control (Advanced)

' Force transfer without using keyword
SET SESSION "needs_human", true
SET SESSION "transfer_reason", "Customer requested human"

' Return to bot mode (usually done by attendant via /resolve)
SET SESSION "needs_human", false

Basic Usage

Transfer to Any Available Attendant

' Simple transfer to next available human
result = TRANSFER TO HUMAN

TALK result.message

Transfer to Specific Person

' Transfer to a specific attendant by name
TRANSFER TO HUMAN "John Smith"

' Also works with aliases
TRANSFER TO HUMAN "johnny"

' Or by ID
TRANSFER TO HUMAN "att-001"

Transfer to Department

' Transfer to sales department
TRANSFER TO HUMAN "sales"

' Transfer with priority
result = TRANSFER TO HUMAN "support", "high"

IF result.success THEN
    TALK "You are now connected to " + result.assigned_to_name
ELSE
    TALK result.message
END IF

Transfer with Context

' Transfer with department, priority, and context
TRANSFER TO HUMAN "technical", "urgent", "Customer needs help with API integration"

Advanced Usage

Extended Transfer with Named Parameters

' Using transfer_to_human_ex for full control
params = #{
    name: "John",
    department: "support",
    priority: "high",
    reason: "Complex billing issue",
    context: "Customer has been a member since 2020, premium tier"
}

result = transfer_to_human_ex(params)

IF result.success THEN
    TALK "Transferring you to " + result.assigned_to_name
    TALK "Estimated wait time: " + result.estimated_wait_seconds + " seconds"
ELSE
    TALK "Sorry, " + result.message
END IF

Conditional Transfer

' Transfer based on conversation context
sentiment = ANALYZE SENTIMENT conversation

IF sentiment.score < -0.5 THEN
    ' Frustrated customer - high priority
    TRANSFER TO HUMAN "support", "urgent", "Customer appears frustrated"
ELSE IF topic = "billing" THEN
    TRANSFER TO HUMAN "billing"
ELSE IF topic = "technical" THEN
    TRANSFER TO HUMAN "technical"
ELSE
    TRANSFER TO HUMAN
END IF

Check Availability Before Transfer

' Check if any attendants are available
attendants = GET "/api/attendance/attendants"

available = 0
FOR EACH att IN attendants
    IF att.status = "online" THEN
        available = available + 1
    END IF
NEXT

IF available > 0 THEN
    TRANSFER TO HUMAN
ELSE
    TALK "Our team is currently unavailable. Would you like to:"
    TALK "1. Leave a message"
    TALK "2. Schedule a callback"
    TALK "3. Continue with our AI assistant"
    HEAR choice
END IF

Transfer Result

The TRANSFER TO HUMAN keyword returns a result object:

Property | Type | Description
success | Boolean | Whether the transfer was successful
status | String | Status: queued, assigned, connected, no_attendants, crm_disabled, attendant_not_found, error
queue_position | Integer | Position in queue (if queued)
assigned_to | String | Attendant ID (if assigned)
assigned_to_name | String | Attendant name (if assigned)
estimated_wait_seconds | Integer | Estimated wait time
message | String | Human-readable message

Handling Different Statuses

result = TRANSFER TO HUMAN "sales"

SELECT CASE result.status
    CASE "assigned"
        TALK "Great news! " + result.assigned_to_name + " will be with you shortly."
        
    CASE "queued"
        TALK "You are #" + result.queue_position + " in line."
        TALK "Estimated wait: " + (result.estimated_wait_seconds / 60) + " minutes."
        
    CASE "connected"
        TALK "You are now connected with " + result.assigned_to_name
        
    CASE "no_attendants"
        TALK "No attendants are currently available."
        TALK "Would you like to leave a message?"
        
    CASE "attendant_not_found"
        TALK "That person is not available. Let me find someone else."
        TRANSFER TO HUMAN
        
    CASE "crm_disabled"
        TALK "I'm sorry, human support is not configured for this bot."
        
    CASE ELSE
        TALK "Something went wrong. Please try again."
END SELECT

LLM Tool Integration

The TRANSFER TO HUMAN keyword is automatically registered as an LLM tool, allowing the AI to decide when to transfer:

Tool Schema

{
    "name": "transfer_to_human",
    "description": "Transfer the conversation to a human attendant. Use when the customer explicitly asks to speak with a person, when the issue is too complex, or when emotional support is needed.",
    "parameters": {
        "type": "object",
        "properties": {
            "name": {
                "type": "string",
                "description": "If someone wants to talk to somebody specific, provide their name or alias"
            },
            "department": {
                "type": "string",
                "description": "Department to transfer to: sales, support, technical, billing, etc."
            },
            "priority": {
                "type": "string",
                "enum": ["normal", "high", "urgent"],
                "default": "normal"
            },
            "reason": {
                "type": "string",
                "description": "Brief reason for the transfer"
            }
        }
    }
}

AI-Initiated Transfer Example

When a customer says “I want to talk to a real person,” the LLM can automatically invoke:

{
    "tool": "transfer_to_human",
    "arguments": {
        "reason": "Customer requested human assistance"
    }
}

Priority Levels

Priority | Value | Use Case
low | 0 | Non-urgent inquiries
normal | 1 | Standard requests (default)
high | 2 | Important customers, time-sensitive issues
urgent | 3 | Escalations, complaints, VIP customers

Higher priority conversations are served first in the queue.


Attendant Status

Attendants can have the following statuses:

Status | Description
online | Available and ready for conversations
busy | Currently handling conversations
away | Temporarily unavailable
offline | Not working

Only online attendants receive new conversation assignments.


Queue Status

Conversations in the queue have these statuses:

Status | Description
waiting | Waiting for an attendant
assigned | Assigned but not yet active
active | Conversation in progress
resolved | Conversation completed
abandoned | Customer left before assignment

REST API Endpoints

Queue Management

Endpoint | Method | Description
/api/attendance/queue | GET | List conversations in queue
/api/attendance/attendants | GET | List all attendants
/api/attendance/assign | POST | Assign conversation to attendant
/api/attendance/transfer | POST | Transfer between attendants
/api/attendance/resolve/:session_id | POST | Mark conversation resolved
/api/attendance/insights | GET | Get queue insights

Example: Manual Transfer via API

' Transfer using direct API call
body = #{
    session_id: session.id,
    from_attendant_id: "att-001",
    to_attendant_id: "att-002",
    reason: "Specialist needed for technical issue"
}

result = POST "/api/attendance/transfer", body

Attendant Console

When CRM is enabled, the Attendant Console becomes available at /suite/attendant/. This provides a full-featured interface for human agents:

Features

  • Queue Management: View and filter waiting conversations
  • Real-time Updates: WebSocket-powered live updates
  • AI Insights: Sentiment analysis, intent detection, suggested replies
  • Transfer: Transfer conversations between attendants
  • Customer Details: View customer history and information
  • Quick Responses: Pre-configured response templates

Accessing the Console

  1. Enable crm-enabled,true in config.csv
  2. Create attendant.csv with your team
  3. Navigate to /suite/attendant/ or click “Attendant” in the Suite menu

WhatsApp Attendant Mode

Attendants can manage conversations directly from WhatsApp using commands:

Queue Commands

Command | Description
/queue or /fila | View waiting conversations
/take or /pegar | Take next conversation
/status [online|busy|away|offline] | Set availability
/transfer @name or /transferir | Transfer to another attendant
/resolve or /resolver | Mark complete, set needs_human=false

AI Assist Commands

Command | Description
/tips or /dicas | Get AI tips for current conversation
/polish <msg> or /polir | Polish message before sending
/replies or /respostas | Get 3 smart reply suggestions
/summary or /resumo | Get conversation summary
/help or /ajuda | Show all commands

Example WhatsApp Session

Attendant: /queue
Bot: 📋 Queue (2 waiting)
     1. João Silva (whatsapp) - Status: waiting
     2. Maria Santos (web) - Status: waiting

Attendant: /take
Bot: ✅ Conversation assigned
     Customer: João Silva
     Session: abc12345

[Customer message arrives]
Customer: Preciso de ajuda com meu pedido

Attendant: /tips
Bot: 💡 Tips:
     🎯 Customer needs help with order
     ✅ Ask for order number
     📚 Check order status in system

Attendant: /polish oi joao, vou verificar seu pedido agora
Bot: ✨ Polished:
     "Olá João! Vou verificar seu pedido agora mesmo."

Attendant: Olá João! Vou verificar seu pedido agora mesmo.
[Message sent to customer]

Attendant: /resolve
Bot: ✅ Conversation resolved
     Customer returned to bot mode.

Best Practices

1. Set Clear Expectations

result = TRANSFER TO HUMAN

IF result.success AND result.status = "queued" THEN
    TALK "You're now in line to speak with a team member."
    TALK "Your position: #" + result.queue_position
    TALK "While you wait, I can still help with simple questions."
END IF

2. Provide Context to Attendants

' Build context from conversation
context = "Customer inquiry about: " + detected_topic + ". "
context = context + "Sentiment: " + sentiment + ". "
context = context + "Key entities: " + entities.join(", ")

TRANSFER TO HUMAN "support", "normal", context

3. Handle Off-Hours

' Check business hours
hour = HOUR(NOW())
day = WEEKDAY(NOW())

IF day >= 1 AND day <= 5 AND hour >= 9 AND hour < 18 THEN
    TRANSFER TO HUMAN
ELSE
    TALK "Our team is available Monday-Friday, 9 AM - 6 PM."
    TALK "Would you like to leave a message or schedule a callback?"
END IF

4. VIP Routing

' Check if customer is VIP
customer = FIND "customers", "email='" + user.email + "'"

IF customer.tier = "premium" OR customer.tier = "enterprise" THEN
    TRANSFER TO HUMAN "vip-support", "high", "Premium customer"
ELSE
    TRANSFER TO HUMAN
END IF

Troubleshooting

“CRM not enabled” Error

Add crm-enabled,true to your config.csv file.

“No attendants configured” Error

Create attendant.csv in your bot’s .gbai folder with at least one attendant.

Transfer Not Finding Attendant by Name

  • Check that the name or alias is spelled correctly
  • Verify the attendant exists in attendant.csv
  • Aliases are case-insensitive and separated by semicolons

Queue Not Updating

  • Ensure WebSocket connection is active
  • Check that the attendant status is online
  • Verify the bot has proper database permissions

Analytics & Insights

The attendance system provides analytics through the API:

Queue Insights

GET /api/attendance/insights/{session_id}

Response:
{
    "session_id": "uuid",
    "sentiment": "neutral",
    "message_count": 15,
    "suggested_reply": "How can I help?",
    "key_topics": ["billing", "refund"],
    "priority": "normal",
    "language": "pt"
}

LLM-Powered Analytics

When attendant-sentiment-analysis is enabled:

POST /api/attendance/llm/sentiment

Response:
{
    "overall": "negative",
    "score": -0.6,
    "emotions": [{"name": "frustration", "intensity": 0.8}],
    "escalation_risk": "high",
    "urgency": "high",
    "emoji": "😟"
}

Additional Troubleshooting

Customer Stuck in Human Mode

If a customer is stuck with needs_human=true after the issue is resolved:

  1. Attendant uses /resolve command
  2. Or manually via API:
POST /api/attendance/resolve/{session_id}

Messages Not Reaching Attendant

  1. Check crm-enabled,true in config.csv
  2. Verify attendant.csv exists with valid entries
  3. Ensure attendant status is online
  4. Check WebSocket connection in browser console

Attendant Commands Not Working on WhatsApp

  1. Verify phone number is in attendant.csv
  2. Phone must match exactly (with country code)
  3. Check that bot is receiving webhooks

LLM-Assisted Attendant Features

General Bots provides AI-powered assistance to human attendants during customer conversations. These features help attendants respond faster, more professionally, and with better context awareness.

Overview

When the bot transfers a conversation to a human attendant (via TRANSFER TO HUMAN), the LLM orchestrator continues working in the background to assist the human. This creates a hybrid experience where AI augments human capability rather than replacing it.

The system uses the same PROMPT.md and bot personality configured for the bot, ensuring consistency between bot responses and attendant assistance.

Features

Feature | Config Key | Description
Real-time Tips | attendant-llm-tips | Contextual tips when customer messages arrive
Message Polish | attendant-polish-message | Improve grammar and tone before sending
Smart Replies | attendant-smart-replies | Generate 3 contextual reply suggestions
Auto Summary | attendant-auto-summary | Summarize conversation when attendant joins
Sentiment Analysis | attendant-sentiment-analysis | Real-time emotional state tracking

Configuration

Add these settings to your bot’s config.csv:

name,value

# Enable all LLM assist features
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true

# Optional: Set bot personality for context
bot-system-prompt,You are a friendly customer service assistant for Acme Corp
bot-description,Premium support for enterprise customers

Selective Enablement

Enable only the features you need:

name,value
attendant-smart-replies,true
attendant-sentiment-analysis,true

Feature Details

1. Real-time Tips (attendant-llm-tips)

When a customer sends a message, the LLM analyzes it and provides actionable tips to the attendant.

Tip Types

Type | Icon | Description
intent | 🎯 | What the customer wants
action | ✅ | Suggested action to take
warning | ⚠️ | Sentiment or escalation concern
knowledge | 📚 | Relevant info to share
history | 📜 | Insight from conversation history
general | 💡 | General helpful advice

Example Tips

Customer says: “This is ridiculous! I’ve been waiting 3 days for a response!”

Tips generated:

  • ⚠️ Customer is frustrated - use empathetic language and apologize
  • 🎯 Customer has been waiting for support response
  • ✅ Acknowledge the delay and provide immediate assistance

API Usage

' Internal API - automatically called by UI
POST /api/attendance/llm/tips
{
    "session_id": "uuid",
    "customer_message": "message text",
    "history": [{"role": "customer", "content": "..."}]
}

2. Message Polish (attendant-polish-message)

Before sending, attendants can polish their message with one click. The LLM improves grammar, clarity, and tone while preserving the original meaning.

Supported Tones

  • professional (default)
  • friendly
  • empathetic
  • formal

Example

Original: “ya we can do that but u need to wait til tmrw”

Polished: “Yes, we can certainly help with that! Please allow until tomorrow for us to process your request.”

Changes: Fixed grammar, improved clarity, added professional tone

API Usage

POST /api/attendance/llm/polish
{
    "session_id": "uuid",
    "message": "original message",
    "tone": "professional"
}

Response:

{
    "success": true,
    "original": "ya we can do that...",
    "polished": "Yes, we can certainly...",
    "changes": ["Fixed grammar", "Improved tone"]
}

3. Smart Replies (attendant-smart-replies)

Generate three contextually appropriate reply suggestions based on the conversation history and bot personality.

Reply Categories

  • greeting - Opening responses
  • answer - Direct answers to questions
  • acknowledgment - Empathetic acknowledgments
  • solution - Problem-solving responses
  • follow_up - Continuation questions
  • closing - Conversation wrap-up

Example

Customer: “How do I reset my password?”

Suggested Replies:

  1. Empathetic: “I understand how frustrating it can be when you can’t access your account. I’ll help you reset your password right away.”

  2. Solution-focused: “To reset your password, please go to the login page and click ‘Forgot Password’. You’ll receive an email with reset instructions.”

  3. Follow-up: “I can help you with that! Are you trying to reset the password for your main account or a sub-account?”

API Usage

POST /api/attendance/llm/smart-replies
{
    "session_id": "uuid",
    "history": [
        {"role": "customer", "content": "How do I reset my password?"},
        {"role": "attendant", "content": "Hi! Let me help you with that."}
    ]
}

4. Auto Summary (attendant-auto-summary)

When an attendant takes a conversation, they receive an instant summary of what’s happened so far. This is especially useful for:

  • Long conversations
  • Transfers between attendants
  • Complex multi-issue discussions

Summary Contents

Field | Description
brief | One-sentence overview
key_points | Main discussion points
customer_needs | What the customer wants
unresolved_issues | Open items
sentiment_trend | Improving/stable/declining
recommended_action | What to do next
message_count | Number of messages
duration_minutes | Conversation length

Example Summary

{
    "brief": "Customer requesting refund for damaged product received yesterday",
    "key_points": [
        "Order #12345 arrived damaged",
        "Customer sent photos as proof",
        "Previous agent offered replacement"
    ],
    "customer_needs": [
        "Full refund instead of replacement",
        "Confirmation email"
    ],
    "unresolved_issues": [
        "Refund approval pending"
    ],
    "sentiment_trend": "stable",
    "recommended_action": "Escalate to supervisor for refund approval"
}

API Usage

GET /api/attendance/llm/summary/{session_id}

5. Sentiment Analysis (attendant-sentiment-analysis)

Real-time analysis of customer emotional state to help attendants respond appropriately.

Analysis Components

Component | Values | Description
overall | positive, neutral, negative | General sentiment
score | -1.0 to 1.0 | Numeric sentiment score
emotions | List | Detected emotions with intensity
escalation_risk | low, medium, high | Risk of escalation
urgency | low, normal, high, urgent | Message urgency
emoji | 😊 😐 😟 | Visual indicator

Example Analysis

Customer message: “I’ve been trying to get help for TWO WEEKS! This is absolutely unacceptable!”

{
    "overall": "negative",
    "score": -0.8,
    "emotions": [
        {"name": "frustration", "intensity": 0.9},
        {"name": "anger", "intensity": 0.7}
    ],
    "escalation_risk": "high",
    "urgency": "high",
    "emoji": "😟"
}

The UI shows a warning: ⚠️ High escalation risk detected

API Usage

POST /api/attendance/llm/sentiment
{
    "session_id": "uuid",
    "message": "customer message",
    "history": []
}

WhatsApp Attendant Commands

Attendants using WhatsApp can access LLM assist features via commands:

Queue Management

Command | Description
/queue or /fila | View waiting conversations
/take or /pegar | Take next conversation
/status [online|busy|away|offline] | Set your status
/transfer @name or /transferir | Transfer conversation
/resolve or /resolver | Mark as resolved

AI Assistance

Command | Description
/tips or /dicas | Get tips for current conversation
/polish <message> or /polir | Polish a message before sending
/replies or /respostas | Get smart reply suggestions
/summary or /resumo | Get conversation summary
/help or /ajuda | Show all commands

Example Usage

Attendant: /queue
Bot: 📋 Queue (3 waiting)
     1. John Smith (whatsapp)
        Status: waiting | ID: abc123
     2. Maria Santos (web)
        Status: waiting | ID: def456
     ...

Attendant: /take
Bot: ✅ Conversation assigned
     Customer: John Smith
     Session: abc123
     
Attendant: /tips
Bot: 💡 Tips for this conversation

     🎯 Customer is asking about billing
     ✅ Check their account status before responding
     ⚠️ Previous conversation ended unresolved
     
Attendant: /polish thank u for waiting ill check ur account now
Bot: ✨ Polished message

     "Thank you for your patience! I'll check your account right now."
     
     Changes:
     • Fixed grammar
     • Improved professionalism
     
Attendant: /resolve
Bot: ✅ Conversation resolved
     Session abc123 marked as resolved.

Web Attendant Console

The web attendant console (/suite/attendant/) integrates LLM assist features directly:

Insights Panel

The right sidebar shows:

  • Customer Sentiment - Real-time with emoji indicator
  • Detected Intent - What the customer wants
  • Conversation Summary - Auto-generated when you select a conversation

Suggested Replies

Below the insights, three AI-generated reply suggestions appear. Click any suggestion to insert it into the message input.

Polish Button

The ✨ button next to the message input polishes your text before sending.

Tips Notifications

When a customer sends a message:

  1. Tips appear in the insights panel
  2. High-priority tips show as toast notifications
  3. Warning tips (escalation risk) are highlighted

Bot Personality Integration

LLM assist uses your bot’s personality when generating suggestions. Set this in config.csv:

name,value
bot-system-prompt,You are a friendly tech support agent for CloudSoft Inc. Be helpful and patient.
bot-description,Enterprise software support

Or in your start.bas header:

REM CloudSoft Support Bot
REM Friendly, patient, and technically knowledgeable
REM Always offer to escalate complex issues

TALK "Welcome to CloudSoft Support!"

The LLM reads these comments to understand the bot’s personality and applies the same tone to:

  • Smart reply suggestions
  • Message polishing
  • Tips generation

API Reference

Endpoints

Method | Endpoint | Description
POST | /api/attendance/llm/tips | Generate tips
POST | /api/attendance/llm/polish | Polish message
POST | /api/attendance/llm/smart-replies | Generate replies
GET | /api/attendance/llm/summary/{session_id} | Get summary
POST | /api/attendance/llm/sentiment | Analyze sentiment
GET | /api/attendance/llm/config/{bot_id} | Get config

Check Configuration

GET /api/attendance/llm/config/{bot_id}

Response:
{
    "tips_enabled": true,
    "polish_enabled": true,
    "smart_replies_enabled": true,
    "auto_summary_enabled": true,
    "sentiment_enabled": true,
    "any_enabled": true
}

Fallback Behavior

When LLM is unavailable, the system provides fallback functionality:

Feature | Fallback
Tips | Keyword-based analysis (urgent, problem, question)
Polish | Returns original message
Smart Replies | Generic template replies
Summary | Basic message count and duration
Sentiment | Keyword-based positive/negative detection

Best Practices

1. Start with Smart Replies

If you’re unsure which features to enable, start with attendant-smart-replies. It provides immediate value with low overhead.

2. Enable Sentiment for High-Volume Support

For teams handling many conversations, attendant-sentiment-analysis helps prioritize frustrated customers.

3. Use Polish for Quality Consistency

Enable attendant-polish-message to ensure consistent, professional communication regardless of individual writing skills.

4. Tips for Complex Products

For products with many features or complex workflows, attendant-llm-tips helps attendants quickly understand context.

5. Summary for Shift Changes

Enable attendant-auto-summary if your team has shift changes or frequent transfers between attendants.


Troubleshooting

“Feature is disabled” Message

Add the feature to your config.csv:

attendant-smart-replies,true

Slow Response Times

LLM calls add latency. If responses are slow:

  • Use a faster LLM model
  • Enable only essential features
  • Check your llm-url configuration

Generic Suggestions

If suggestions seem generic:

  • Set bot-system-prompt in config.csv
  • Add personality comments to start.bas
  • Ensure conversation history is being passed

WhatsApp Commands Not Working

  1. Verify the attendant is registered in attendant.csv
  2. Check that the phone number matches exactly
  3. Ensure crm-enabled,true is set

See Also

Chapter 4: BASIC Scripting

Why BASIC?

In 1964, John Kemeny and Thomas Kurtz created BASIC with a revolutionary idea: programming should be for everyone. Today, General Bots brings this philosophy to the AI era.

With BASIC + LLM, you write:

TALK "What's your name?"
HEAR name
poem = LLM "Create a heartfelt poem for " + name
TALK poem

Not 50 lines of boilerplate.

The Core Keywords

Just SEVEN main keywords power everything:

Keyword | Purpose | Example
TALK | Output | TALK "Hello!"
HEAR | Input | HEAR name AS NAME
USE KB | Knowledge | USE KB "docs"
USE TOOL | Functions | USE TOOL "weather"
GET | Data | GET "api/users"
IF/THEN | Logic | IF age >= 18 THEN ...
FOR/NEXT | Loops | FOR i = 1 TO 10 ...

Your First Tool

In the LLM world, you write tools that AI can use:

' enrollment.bas - An LLM-callable tool
PARAM name AS STRING LIKE "John Smith" DESCRIPTION "Full name"
PARAM email AS STRING LIKE "john@example.com" DESCRIPTION "Email"
PARAM course AS STRING LIKE "Introduction to AI" DESCRIPTION "Course"

DESCRIPTION "Enrolls a student in a course"

SAVE "enrollments.csv", name, email, course, NOW()
TALK "Enrolled " + name + " in " + course

The LLM handles the conversation. You define the action.

Everyone Can Program

You don’t need:

  • A computer science degree
  • Years of experience
  • Understanding of algorithms

You just need:

  • An idea
  • 10 minutes to learn BASIC
  • Creativity

Getting Started

Time | Goal
Minute 1 | TALK "Hello, world!"
Minute 5 | Add HEAR and LLM
Day 1 | Production-ready bot

Documentation Guide

Document | Purpose
Basics | Core LLM-first concepts
Keywords Reference | Complete keyword list
Templates | Real-world examples
Universal Messaging | Multi-channel support

Keyword Categories

The Philosophy

BASIC in General Bots isn’t about controlling conversation flow - it’s about providing tools and context that LLMs use intelligently.

Write simple tools. Let AI handle the complexity.


“The beauty of BASIC lies not in what it can do, but in who it enables to do it.”

Dialog Basics

BASIC dialogs in General Bots are designed for the LLM era - you write tools and context setters, not complex conversation flows.

Core Concepts

Concept | Description
LLM Tools | BASIC scripts that become callable tools for the LLM
Context | SET CONTEXT provides knowledge to the LLM
Suggestions | ADD SUGGESTION guides conversations
Memory | GET/SET BOT/USER MEMORY for persistent data

LLM-First Example

' Load context from memory
resume = GET BOT MEMORY "announcements"
context = GET BOT MEMORY "company_info"

' Give LLM the context it needs
SET CONTEXT "announcements" AS resume
SET CONTEXT "company" AS context

' Guide the conversation
CLEAR SUGGESTIONS
ADD SUGGESTION "announcements" AS "Show me this week's updates"
ADD SUGGESTION "company" AS "Tell me about the company"

' Start conversation
TALK "What would you like to know?"

Creating LLM Tools

Instead of parsing user input, create tools the LLM can call:

' update-summary.bas - A tool the LLM can invoke
PARAM topic AS STRING LIKE "Q4 Results" DESCRIPTION "Topic to summarize"
PARAM length AS STRING LIKE "brief" DESCRIPTION "brief or detailed"

DESCRIPTION "Creates a summary of the requested topic"

data = GET BOT MEMORY topic
summary = LLM "Summarize this " + length + ": " + data
TALK summary

Traditional vs LLM Approach

Traditional | LLM + BASIC
Parse user input manually | LLM understands naturally
Complex IF/ELSE trees | Tools with PARAMs
Validate every field | LLM handles validation
Design conversation flows | LLM manages conversation

Tool Pattern Example

' schedule-appointment.bas
PARAM service AS STRING LIKE "consultation" DESCRIPTION "Type of appointment"
PARAM date AS DATE LIKE "tomorrow at 3pm" DESCRIPTION "Preferred date/time"

DESCRIPTION "Schedules an appointment and sends confirmation"

appointment = GET "api/appointments/available" WITH service, date
IF appointment.available THEN
  SET BOT MEMORY "last_appointment" AS appointment.id
  SEND EMAIL TO user.email WITH appointment.details
  TALK "Scheduled your " + service + " for " + date
ELSE
  alternatives = GET "api/appointments/suggest" WITH service, date
  TALK "That time isn't available. Alternatives: " + alternatives
END IF

Best Practices

Do | Don't
Write focused tools | Create complex conversation flows
Use context wisely | Micromanage the LLM
Trust the LLM | Parse user input manually
Use suggestions | Force rigid paths

See Also

API Possibilities

Your bot can connect to hundreds of external services. This chapter shows what’s possible when you integrate APIs into your bot conversations.


How API Integration Works

When users talk to your bot, it can:

  1. Fetch information from external services (weather, news, prices)
  2. Send data to other systems (CRM, email, databases)
  3. Automate workflows across multiple services

The bot decides when to call APIs based on what the user asks.


What You Can Build

Customer Service Bot

  • Look up orders from your e-commerce system
  • Check shipping status from carriers
  • Create support tickets automatically

Sales Assistant

  • Pull customer info from your CRM
  • Check inventory levels
  • Generate quotes and send emails

Personal Assistant

  • Get weather forecasts
  • Check calendar availability
  • Book appointments

Data Dashboard Bot

  • Pull metrics from analytics platforms
  • Get stock prices and crypto rates
  • Aggregate data from multiple sources

🌤️ Weather & Environment

Service | What You Can Do
OpenWeatherMap | Current weather, forecasts, alerts
AirVisual | Air quality index by location
Open-Meteo | Free weather data worldwide

Example conversation:

User: What’s the weather in São Paulo?

Bot: It’s 24°C and partly cloudy in São Paulo. Expect rain this afternoon.


💰 Finance & Payments

Service | What You Can Do
Stripe | Process payments, manage subscriptions
CoinGecko | Crypto prices and market data
Exchange Rates | Currency conversion

Example conversation:

User: How much is Bitcoin right now?

Bot: Bitcoin is currently $67,450 USD, up 2.3% today.


📰 News & Content

Service | What You Can Do
NewsAPI | Headlines from 150+ sources
Hacker News | Tech news and discussions
Reddit | Posts from any subreddit

Example conversation:

User: What’s trending in tech news?

Bot: Top stories: Apple announces new M4 chip, OpenAI releases GPT-5…


📧 Communication

Service | What You Can Do
SendGrid | Send transactional emails
Twilio | SMS messages and voice calls
Slack | Post messages to channels

Example conversation:

User: Send the report to the team

Bot: Done! I’ve emailed the Q3 report to the sales team and posted a summary in #sales-updates.


🗺️ Maps & Location

Service | What You Can Do
Google Maps | Directions, places, distances
OpenStreetMap | Free mapping data
IP Geolocation | Locate users by IP address

Example conversation:

User: How far is the airport?

Bot: Guarulhos Airport is 25km away, about 45 minutes in current traffic.


🛒 E-Commerce

Service | What You Can Do
Shopify | Manage products and orders
WooCommerce | WordPress store integration
Stripe | Payment processing

Example conversation:

User: Check my order status

Bot: Order #12345 shipped yesterday via FedEx. Expected delivery: Friday.


📅 Productivity

Service | What You Can Do
Google Calendar | Schedule events, check availability
Trello | Manage cards and boards
Notion | Create and update pages

Example conversation:

User: Schedule a meeting with John tomorrow at 2pm

Bot: Done! I’ve added “Meeting with John” to your calendar for tomorrow at 2pm.


🎵 Entertainment

Service | What You Can Do
Spotify | Search songs, manage playlists
YouTube | Search videos, get info
TMDB | Movie and TV show data

Example conversation:

User: What movies are playing this weekend?

Bot: Here’s what’s in theaters: Dune Part 3, The Avengers 6…


🏥 Health & Fitness

Service | What You Can Do
Fitbit | Activity and sleep data
Nutritionix | Food and nutrition info
Strava | Running and cycling stats

Example conversation:

User: How many calories in a Big Mac?

Bot: A Big Mac has 563 calories, 33g fat, and 25g protein.


🔐 Authentication

Service | What You Can Do
Auth0 | User login and registration
Firebase Auth | Google, Facebook, email login
Have I Been Pwned | Check for data breaches

🛠️ Developer Tools

Service | What You Can Do
GitHub | Repo info, issues, PRs
Jira | Create and manage tickets
Sentry | Error tracking and alerts

Example conversation:

User: Create a bug ticket for the login issue

Bot: Created PROJ-456: “Login button not responding on mobile” - assigned to dev team.


API Marketplaces

Find more APIs at:

  • RapidAPI - 40,000+ APIs in one place
  • APILayer - Curated collection of useful APIs
  • Public APIs - Free API directory

Things to Consider

Rate Limits

Most APIs limit how many requests you can make. Plan for this in high-traffic bots.

API Keys

Keep your API keys secure. Never expose them in client-side code.

Costs

Many APIs are free up to a limit, then charge per request. Monitor your usage.

Reliability

Have fallback responses when APIs are slow or unavailable.
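
A minimal sketch of this pattern in BASIC, assuming a hypothetical pricing endpoint and that GET returns an empty value when the call fails:

' Fallback when an external API is unavailable (endpoint is illustrative)
price = GET "api/prices/bitcoin"
IF price = "" THEN
    TALK "Price data is temporarily unavailable. Please try again in a few minutes."
ELSE
    TALK "Bitcoin is currently at " + price
END IF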


Getting Started

  1. Choose an API that matches what your bot needs to do
  2. Get API credentials (usually free to sign up)
  3. Create a tool in your .gbdialog folder that calls the API
  4. Test it by asking your bot questions that trigger the API

Your bot’s LLM automatically learns when to use each tool based on what users ask.
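
As a sketch of step 3 (the file name, endpoint, and parameter are hypothetical), a weather tool in your .gbdialog folder could look like this:

' weather.bas - an LLM-callable tool (endpoint is illustrative)
PARAM city AS STRING LIKE "São Paulo" DESCRIPTION "City to check"

DESCRIPTION "Gets the current weather for a city"

forecast = GET "api/weather?city=" + city
TALK "Current weather in " + city + ": " + forecast

When a user asks "What's the weather in São Paulo?", the LLM extracts the city parameter and invokes this tool automatically.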


See Also

Universal Messaging & Multi-Channel

botserver automatically handles conversations across different channels (Web, WhatsApp, Email, etc.) using the same BASIC scripts. Write once, deploy everywhere.

How It Works

Your BASIC scripts don’t need to know which channel they’re running on. The same TALK and HEAR commands work universally:

TALK "Hello! How can I help you?"
HEAR response
TALK "You said: " + response

This script works identically whether the user is:

  • Chatting via web browser
  • Messaging on WhatsApp
  • Sending emails
  • Using Microsoft Teams

Supported Channels

Web (Default)

The primary channel. Users access via browser at http://localhost:9000.

WhatsApp Business

Requires WhatsApp Business API configuration. Messages are automatically formatted for WhatsApp’s constraints.

Email

Bots can receive and respond to emails. Each email thread becomes a conversation session.

Microsoft Teams

Deploy bots directly to Teams channels and direct messages.

Channel Detection

botserver automatically detects the channel based on the session context. No special code needed:

' This works on ALL channels
TALK "Welcome to our service!"
TALK "What's your name?"
HEAR name
TALK "Nice to meet you, " + name

Channel-Specific Formatting

While your code stays the same, botserver automatically handles channel-specific formatting:

Web

  • Full HTML support
  • Rich formatting
  • Images and media
  • Interactive elements

WhatsApp

  • Plain text with emoji
  • Media as attachments
  • Quick reply buttons
  • 1024 character limit per message

Email

  • HTML email format
  • Subject line handling
  • Attachments
  • Proper threading

Teams

  • Adaptive cards
  • @mentions
  • Channel vs DM detection
  • Teams-specific formatting

Media Handling

Send files and media universally:

' Works on all channels that support files
SEND FILE "report.pdf"
TALK "I've sent you the report."

Each channel handles files appropriately:

  • Web: Download link
  • WhatsApp: Document attachment
  • Email: Email attachment
  • Teams: File card

Session Management

Each channel maintains its own session handling:

  • Web: Cookie-based sessions
  • WhatsApp: Phone number as session ID
  • Email: Thread ID as session
  • Teams: User/channel context

Configuration

Channel configuration is done in the bot’s config.csv:

channel-web,enabled
channel-whatsapp,enabled
channel-email,enabled
channel-teams,disabled

Best Practices

  1. Keep messages concise - Some channels have length limits
  2. Use simple formatting - Not all channels support rich text
  3. Test on target channels - Ensure your bot works well on each
  4. Handle media gracefully - Not all channels support all file types
  5. Consider response times - Email is async, chat is real-time

Channel Limitations

Channel | Message Length | Media Support | Rich Text | Real-time
Web | Unlimited | Full | Yes | Yes
WhatsApp | 1024 chars | Images, Docs | Limited | Yes
Email | Unlimited | Attachments | HTML | No
Teams | 28KB | Full | Adaptive Cards | Yes

Summary

Universal messaging means your BASIC scripts work across all channels without modification. botserver handles the complexity of channel-specific formatting and delivery, letting you focus on the conversation logic.

BASIC vs Automation Tools: A Practical Comparison

Understanding how General Bots BASIC compares to other automation platforms

Overview

General Bots BASIC provides a conversational-first approach to automation. This chapter compares BASIC with popular automation tools to help you understand when each approach works best.


Comparison Matrix

Feature | Zapier | n8n | Make | Power Automate | BASIC
Webhooks |  |  |  |  |
Scheduling |  |  |  |  | SET SCHEDULE
HTTP/REST |  |  |  |  |
GraphQL |  |  |  |  |
SOAP |  |  |  |  |
Database Native |  |  |  |  |
Conversations |  |  |  |  |
WhatsApp Native | Plugin | Plugin | Plugin | Plugin | ✅ Built-in
Telegram Native | Plugin | Plugin | Plugin |  | ✅ Built-in
Multi-Channel | Limited | Limited | Limited | Limited | ✅ Native
LLM Integration | Plugin | Plugin | Plugin | GPT-5 | ✅ Any model
Self-Hosted |  |  |  |  |
Open Source |  |  |  |  | ✅ AGPL

Key Differences

Conversation-First Design

Traditional automation tools focus on backend workflows. BASIC adds interactive conversations:
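
For example, a short sketch of an interactive approval step that a backend-only workflow cannot express (the amount and wording are illustrative):

TALK "A new expense of $450 needs approval. Approve it?"
HEAR decision AS BOOLEAN
IF decision THEN
    TALK "Approved. I'll notify the finance team."
ELSE
    TALK "Declined. I'll ask the requester for more details."
END IF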

Multi-Channel Native

The same bot works across all channels without modification:
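
For instance, a single TALK statement is all it takes:

TALK "Your order has shipped! Reply STATUS anytime for tracking updates."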

This message reaches users on WhatsApp, Telegram, Web, or any configured channel—same content, adapted formatting.

LLM Model Freedom

BASIC supports any LLM provider:

  • OpenAI (GPT-5, o3)
  • Anthropic (Claude Sonnet 4.5, Opus 4.5)
  • Local models (Llama, Mistral via llama.cpp)
  • Groq, DeepSeek, and others
  • Any OpenAI-compatible API

Configure in config.csv:

name,value
llm-url,http://localhost:8081
llm-model,model.gguf

When to Use Each Tool

Choose BASIC When You Need

  • Interactive workflows - Users participate in the process
  • Multi-channel presence - Same bot on WhatsApp, Telegram, Web
  • AI-powered conversations - Natural language understanding
  • Self-hosted deployment - Full data control
  • Open source flexibility - Modify and extend as needed

Choose Traditional Automation When You Need

  • Backend-only workflows - No user interaction required
  • Visual workflow builders - Prefer drag-and-drop interfaces
  • Existing integrations - Specific pre-built connectors
  • Team familiarity - Team already knows the tool

Migration Examples

From Zapier

Zapier workflow: Form submission → Slack notification → CRM entry → Welcome email

BASIC equivalent:
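
A minimal sketch of the same flow, assuming a WEBHOOK entry point; the Teams address, file name, and field names are illustrative, and a Teams channel stands in for the Slack notification:

WEBHOOK "form-submission"

name = body.name
email = body.email

' Notify the team
TALK TO "teams:sales@company.com", "New signup: " + name

' Record the lead (CRM entry)
SAVE "leads.csv", name, email, NOW()

' Send the welcome email
SEND MAIL email, "Welcome!", "Hi " + name + ", thanks for signing up."

result_status = "ok"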

From n8n

n8n workflow: Monitor website → Alert on error → Create ticket

BASIC equivalent runs on schedule and notifies immediately:
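
One possible sketch, assuming SET SCHEDULE accepts a cron expression; the health endpoint, phone number, and CREATE TASK usage are illustrative:

' Check the site every 5 minutes
SET SCHEDULE "*/5 * * * *"

status = GET "https://example.com/health"
IF status <> "ok" THEN
    TALK TO "whatsapp:+5511999887766", "⚠️ Website check failed: " + status
    CREATE TASK "Investigate website error"
END IF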


Complete Office Suite

BASIC provides built-in capabilities for common office tasks:

Capability | BASIC Keyword
Send email | SEND MAIL
Create draft | CREATE DRAFT
Schedule meetings | BOOK
Manage files | UPLOAD, DOWNLOAD, LIST
Create tasks | CREATE TASK
Video meetings | CREATE MEETING

Example: Daily Report Automation
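
A sketch of what this could look like, assuming SET SCHEDULE accepts a cron expression; the metrics endpoint and recipient address are illustrative:

' Every morning at 8:00, email an LLM-written summary of yesterday's metrics
SET SCHEDULE "0 8 * * *"

metrics = GET "api/reports/daily"
summary = LLM "Write a short executive summary of these metrics: " + metrics
SEND MAIL "team@company.com", "Daily Report " + FORMAT(NOW(), "YYYY-MM-DD"), summary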


Getting Started

Quick Start

  1. Download and run botserver
  2. Edit your bot’s .bas files
  3. Configure settings in config.csv
  4. Deploy to any channel

Resources

Template Variables

Templates support variable substitution using double curly braces {{variable_name}}. Variables are replaced at send time with values from the provided data object.

Built-in Variables

These variables are automatically available in all templates without explicit declaration:

Variable | Description | Example
{{recipient}} | Recipient email or phone | john@example.com
{{to}} | Alias for recipient | john@example.com
{{date}} | Current date (YYYY-MM-DD) | 2025-01-22
{{time}} | Current time (HH:MM) | 14:30
{{datetime}} | Combined date and time | 2025-01-22 14:30
{{year}} | Current year | 2025
{{month}} | Current month name | January

Custom Variables

Pass custom variables via the variables parameter in SEND TEMPLATE:

WITH vars
    .name = "John"
    .company = "Acme Corp"
    .product = "Pro Plan"
    .discount = "20%"
END WITH

SEND TEMPLATE "welcome", "email", "john@example.com", vars

The template content would reference these variables:

Hello {{name}},

Welcome to {{company}}! You've signed up for {{product}}.

As a special offer, use code WELCOME for {{discount}} off your first purchase.

Best regards,
The Team

Channel-Specific Templates

Email Templates

Email templates support automatic Subject: line extraction. Place the subject on the first line:

Subject: Welcome to {{company}}, {{name}}!

Hello {{name}},

Thank you for joining us...

The system extracts the subject line and uses the remainder as the body.

WhatsApp Templates

WhatsApp templates must be pre-approved by Meta. Use numbered placeholders as required by the WhatsApp Business API:

Hello {{1}}, your order {{2}} has shipped. Track at {{3}}

Map variables using numeric keys:

WITH vars
    .1 = customer_name
    .2 = order_id
    .3 = tracking_url
END WITH

SEND TEMPLATE "order-shipped", "whatsapp", phone, vars

SMS Templates

Keep SMS templates under 160 characters for single-segment delivery:

Hi {{name}}, your code is {{code}}. Valid for 10 minutes.

Template Examples

Welcome Email

Subject: Welcome to {{company}}!

Hi {{name}},

Thanks for signing up on {{date}}. Here's what you can do next:

1. Complete your profile
2. Explore our features
3. Join our community

Questions? Reply to this email.

Best,
{{company}} Team

Order Confirmation

Subject: Order #{{order_id}} Confirmed

Hi {{name}},

Your order has been confirmed!

Order: #{{order_id}}
Date: {{date}}
Total: {{total}}

Items:
{{items}}

Shipping to:
{{address}}

Track your order: {{tracking_url}}

Appointment Reminder

Subject: Reminder: {{appointment_type}} tomorrow

Hi {{name}},

This is a reminder of your upcoming appointment:

Date: {{appointment_date}}
Time: {{appointment_time}}
Location: {{location}}

Need to reschedule? Reply to this email or call {{phone}}.

See you soon!

Creating Templates

Via BASIC

CREATE TEMPLATE "welcome", "email", "Welcome {{name}}!", "Hello {{name}}, thank you for joining {{company}}!"

Retrieving Templates

template = GET TEMPLATE "welcome"
TALK "Template body: " + template.body

Variable Extraction

Variables are automatically extracted from template content when the template is created. The system identifies all {{variable}} patterns and stores them for validation. Built-in variables (recipient, date, time, etc.) are excluded from the extraction.

Fallback Values

Handle missing variables using NVL in your code:

WITH vars
    .name = NVL(user_name, "Friend")
    .company = NVL(user_company, "your organization")
END WITH

SEND TEMPLATE "greeting", "email", email, vars

Multi-Channel Delivery

Send the same template to multiple channels in one call:

WITH vars
    .name = "John"
    .message = "Your appointment is confirmed"
END WITH

SEND TEMPLATE "appointment-confirm", "email,sms,whatsapp", recipient, vars

Or send channel-specific versions:

SEND TEMPLATE "appointment-email", "email", email, vars
SEND TEMPLATE "appointment-sms", "sms", phone, vars

Bulk Sending

Send templates to multiple recipients:

recipients = ["a@example.com", "b@example.com", "c@example.com"]
count = SEND TEMPLATE "newsletter" TO "email" recipients, #{month: "January"}
TALK "Sent to " + count + " recipients"

Best Practices

Keep variable names simple. Use name rather than customer_first_name_from_database. Shorter names are easier to maintain.

Provide fallbacks. Always handle the case where a variable might be missing or empty.

Test templates. Verify all variables populate correctly before deploying to production.

Respect channel limits. SMS has a 160-character single-segment limit. WhatsApp templates require Meta approval.

Personalize thoughtfully. Using {{name}} improves engagement, but avoid over-personalization that feels intrusive.

Include unsubscribe options. Marketing emails should always provide an unsubscribe mechanism.

Database Storage

Templates are stored in the message_templates table:

Column | Type | Description
id | UUID | Template identifier
bot_id | UUID | Owning bot
name | TEXT | Template name
channel | TEXT | email/whatsapp/sms/telegram/push
subject | TEXT | Email subject (nullable)
body | TEXT | Template body
variables | JSONB | List of variable names
is_active | BOOL | Active status

See Also

TALK

Sends a message to the current conversation or to a specific recipient on any supported channel.

Syntax

Single Message

TALK message

TALK TO recipient, message

Multi-Line Block with Variable Substitution

BEGIN TALK
Line 1 with ${variable}
Line 2 with ${anotherVariable}
Plain text line
END TALK

Parameters

Parameter | Type | Description
message | String | The message to send
recipient | String | Channel and address in format channel:address
${variable} | Expression | Variable substitution within TALK blocks

Description

TALK is the primary keyword for sending messages in General Bots BASIC.

  • TALK message - Sends to the current conversation (web chat, WhatsApp, etc.)
  • TALK TO recipient, message - Sends to a specific recipient on any channel
  • BEGIN TALK / END TALK - Multi-line block with automatic variable substitution

BEGIN TALK / END TALK Blocks

The BEGIN TALK / END TALK block syntax allows you to write multiple messages with automatic variable substitution using ${variable} syntax.

Syntax

BEGIN TALK
Hello ${name}!
Your order ${orderId} is confirmed.
Total: ${FORMAT(total, "currency")}
Thank you for your purchase!
END TALK

Each line within the block becomes a separate TALK statement. The ${variable} syntax is automatically converted to string concatenation.

How It Works

Input:

nomeNoivo = "Carlos"
protocolo = "CAS123456"

BEGIN TALK
Solicitacao de Casamento enviada com sucesso!
PROTOCOLO: ${protocolo}
Noivo: ${nomeNoivo}
END TALK

Converted to:

TALK "Solicitacao de Casamento enviada com sucesso!"
TALK "PROTOCOLO: " + protocolo
TALK "Noivo: " + nomeNoivo

Variable Substitution Rules

  • ${variableName} - Replaced with the variable value using string concatenation
  • ${FUNCTION(args)} - Function calls are evaluated and substituted
  • Plain text without ${} is treated as a string literal
  • Special characters like $ (not followed by {) are preserved as-is

Examples

Simple Substitution

nome = "João"
idade = 30

BEGIN TALK
Olá ${nome}!
Você tem ${idade} anos.
END TALK

Equivalent to:

TALK "Olá " + nome + "!"
TALK "Você tem " + idade + " anos."

With Function Calls

total = 299.90
numero = 42

BEGIN TALK
Seu pedido: ${numero}
Total: ${FORMAT(total, "currency")}
Obrigado pela preferência!
END TALK

Mixed Content

nome = "Maria"
codigo = "PROMO2024"
desconto = 20

BEGIN TALK
🎉 Oferta Especial para ${nome}!

Use o código: ${codigo}
Desconto de ${desconto}%

Aproveite!
END TALK

Real-World Example: Wedding Confirmation

PARAM nomeNoivo AS STRING LIKE "Carlos" DESCRIPTION "Nome do noivo"
PARAM nomeNoiva AS STRING LIKE "Ana" DESCRIPTION "Nome da noiva"
PARAM protocolo AS STRING LIKE "CAS123456" DESCRIPTION "Protocolo"
PARAM dataCasamento AS DATE LIKE "2026-12-15" DESCRIPTION "Data do casamento"

casamentoId = "CAS-" + FORMAT(NOW(), "yyyyMMddHHmmss")
dataDisplay = FORMAT(dataCasamento, "dd/MM/yyyy")

BEGIN TALK
✅ Solicitação de Casamento enviada com sucesso!

Protocolo: ${protocolo}
ID: ${casamentoId}
Noivo: ${nomeNoivo}
Noiva: ${nomeNoiva}
Data: ${dataDisplay}

Status: Aguardando verificação de disponibilidade
Contato: (21) 4101-0770
END TALK

This is much cleaner than writing individual TALK statements with manual concatenation:

Old way:

TALK "Solicitacao de Casamento enviada com sucesso!"
TALK "Protocolo: " + protocolo
TALK "ID: " + casamentoId
TALK "Noivo: " + nomeNoivo
TALK "Noiva: " + nomeNoiva
TALK "Data: " + dataDisplay
TALK "Status: Aguardando verificacao de disponibilidade"
TALK "Contato: (21) 4101-0770"

Advantages

  1. Cleaner Syntax - No more repetitive TALK statements and + concatenations
  2. Easier to Read - Multi-line messages are more natural to write
  3. Less Error-Prone - Automatic substitution reduces typos in variable names
  4. Template-Like - Write messages like templates with ${variable} placeholders
  5. Perfect for TOOL Functions - Variables are automatically filled by user input

TALK - Current Conversation

Send a message to whoever is currently chatting with the bot:

TALK "Hello! How can I help you today?"

name = "João"
TALK "Welcome, " + name + "!"

total = 299.90
TALK "Your total is $" + total

TALK TO - Specific Recipients

Send messages to specific people on specific channels using the format channel:address.

Supported Channels

Channel | Format | Example
WhatsApp | whatsapp:+phone | whatsapp:+5511999887766
Teams | teams:user@domain | teams:john@company.com
Email | email:address | email:customer@example.com
Web Session | web:session_id | web:abc123
Instagram | instagram:username | instagram:@myuser

WhatsApp Examples

' Send simple message
TALK TO "whatsapp:+5511999887766", "Your order has been shipped!"

' Send with variables
phone = "+5511999887766"
customer_name = "Maria"
TALK TO "whatsapp:" + phone, "Hello " + customer_name + "! Your order is ready."

' Send formatted message (WhatsApp supports markdown-like formatting)
message = "*Order Confirmed* ✅\n\n"
message = message + "Order: #12345\n"
message = message + "Total: R$ 299,90\n\n"
message = message + "_Thank you for your purchase!_"
TALK TO "whatsapp:" + customer_phone, message

WhatsApp Message Formatting

WhatsApp supports rich text formatting:

Format | Syntax | Result
Bold | *text* | bold text
Italic | _text_ | italic text
Strikethrough | ~text~ | struck-through text
Monospace | `text` | monospaced text
Line break | \n | New line

' Example with all formatting
msg = "🎉 *PROMOTION!*\n\n"
msg = msg + "~R$ 199,90~ *R$ 149,90*\n"
msg = msg + "_Limited time offer!_\n\n"
msg = msg + "Use code: `PROMO2024`"

TALK TO "whatsapp:" + phone, msg

Microsoft Teams Examples

' Send to Teams user
TALK TO "teams:john.smith@company.com", "Meeting reminder: 3pm today"

' Send with formatting (Teams supports markdown)
msg = "**Project Update**\n\n"
msg = msg + "- Task 1: ✅ Complete\n"
msg = msg + "- Task 2: 🔄 In Progress\n"
msg = msg + "- Task 3: ⏳ Pending"

TALK TO "teams:" + manager_email, msg

Email Examples

' Simple email (uses SEND MAIL internally for full email)
TALK TO "email:customer@example.com", "Your password has been reset."

' For full email with subject, use SEND MAIL instead
SEND MAIL "customer@example.com", "Password Reset", "Your password has been reset successfully."

Complete Examples

Order Notification System

WEBHOOK "order-status"

order_id = body.order_id
customer_phone = body.phone
status = body.status

SELECT CASE status
    CASE "confirmed"
        msg = "✅ *Order Confirmed*\n\n"
        msg = msg + "Order #" + order_id + "\n"
        msg = msg + "We're preparing your order!"
        
    CASE "shipped"
        tracking = body.tracking_number
        msg = "📦 *Order Shipped*\n\n"
        msg = msg + "Order #" + order_id + "\n"
        msg = msg + "Tracking: " + tracking + "\n"
        msg = msg + "Track at: https://track.example.com/" + tracking
        
    CASE "delivered"
        msg = "🎉 *Order Delivered*\n\n"
        msg = msg + "Order #" + order_id + "\n"
        msg = msg + "Enjoy your purchase!\n\n"
        msg = msg + "_Rate your experience: reply 1-5_"
        
    CASE ELSE
        msg = "Order #" + order_id + " status: " + status
END SELECT

TALK TO "whatsapp:" + customer_phone, msg

result_status = "ok"

Support Ticket Notifications

SUB NotifyCustomer(phone, ticket_id, message)
    full_msg = "🎫 *Ticket #" + ticket_id + "*\n\n"
    full_msg = full_msg + message
    TALK TO "whatsapp:" + phone, full_msg
END SUB

SUB NotifyAgent(agent_email, ticket_id, customer_name, issue)
    msg = "New ticket assigned:\n\n"
    msg = msg + "Ticket: #" + ticket_id + "\n"
    msg = msg + "Customer: " + customer_name + "\n"
    msg = msg + "Issue: " + issue
    TALK TO "teams:" + agent_email, msg
END SUB

' Usage
CALL NotifyCustomer("+5511999887766", "TKT-001", "Your ticket has been created. We'll respond within 24 hours.")
CALL NotifyAgent("support@company.com", "TKT-001", "João Silva", "Payment issue")

Multi-Channel Broadcast

SUB Broadcast(message, channels)
    FOR EACH channel IN channels
        TALK TO channel, message
        WAIT 1  ' Rate limiting
    NEXT channel
END SUB

' Send to multiple recipients
promo = "🎉 *Flash Sale!* 50% off everything today only!"

recipients = [
    "whatsapp:+5511999887766",
    "whatsapp:+5511888776655",
    "teams:marketing@company.com"
]

CALL Broadcast(promo, recipients)

Appointment Reminders

WEBHOOK "send-reminder"

appointment_id = body.id
appointment = FIND "appointments", "id=" + appointment_id

phone = appointment.customer_phone
name = appointment.customer_name
service = appointment.service
date_time = FORMAT(appointment.datetime, "DD/MM/YYYY HH:mm")

reminder = "📅 *Appointment Reminder*\n\n"
reminder = reminder + "Hi " + name + "!\n\n"
reminder = reminder + "You have an appointment scheduled:\n\n"
reminder = reminder + "📋 " + service + "\n"
reminder = reminder + "🗓️ " + date_time + "\n\n"
reminder = reminder + "Reply *CONFIRM* to confirm or *CANCEL* to cancel."

TALK TO "whatsapp:" + phone, reminder

result_status = "ok"

Notes

  • TALK sends to the current active conversation
  • TALK TO can send to any supported channel
  • WhatsApp requires phone numbers in international format with country code
  • Teams requires valid email addresses from your organization
  • Message formatting varies by channel (WhatsApp uses different syntax than Teams)
  • Rate limiting may apply - use WAIT between bulk messages
See Also

  • SEND FILE TO - Send files to specific recipients
  • SEND MAIL - Send emails with subject and attachments
  • HEAR - Receive input from users
  • PRINT - Alias for TALK (debug output)

HEAR Keyword

The HEAR keyword pauses script execution and waits for user input. With optional type validation, it automatically verifies and normalizes input, retrying with helpful error messages when validation fails.

Basic Syntax

HEAR variable_name
HEAR variable_name AS TYPE
HEAR variable_name AS "Option1", "Option2", "Option3"

The simplest form accepts any input. Adding AS TYPE enables automatic validation with user-friendly retry prompts.

Simple HEAR

TALK "What would you like to know?"
HEAR question
TALK "You asked: " + question

The script waits for any user message and stores it in the variable.

Validated Input Types

When using HEAR AS <TYPE>, the system validates input automatically, retries up to 3 times with helpful messages, and returns normalized values.

Text Types

EMAIL validates email format and normalizes to lowercase:

TALK "What's your email address?"
HEAR email AS EMAIL
TALK "We'll send confirmation to: " + email

Accepts: User@Example.COM → Returns: user@example.com

NAME validates name format with proper capitalization:

TALK "What's your full name?"
HEAR name AS NAME

Accepts letters, spaces, hyphens, apostrophes. Auto-capitalizes: john doe → John Doe

URL validates and normalizes URLs:

TALK "Enter your website:"
HEAR website AS URL

Auto-adds https:// if protocol missing.

PASSWORD validates minimum strength:

TALK "Create a password (minimum 8 characters):"
HEAR password AS PASSWORD

Requires 8+ characters. Never echoes the actual password back.

COLOR accepts color names or hex values:

HEAR color AS COLOR

Accepts: red, #FF0000, rgb(255, 0, 0) → Returns: #FF0000

Numeric Types

INTEGER validates whole numbers:

TALK "How many items?"
HEAR quantity AS INTEGER

Removes formatting (commas, spaces). Returns numeric value.

FLOAT / DECIMAL validates decimal numbers:

TALK "Enter the temperature:"
HEAR temperature AS FLOAT

Handles both . and , as decimal separators.

MONEY / CURRENCY / AMOUNT validates monetary values:

TALK "How much to transfer?"
HEAR amount AS MONEY

Accepts: 100, 1,234.56, R$ 100,00, $100.00 → Returns: 1234.56

CREDITCARD / CARD validates card numbers with Luhn algorithm:

TALK "Enter your card number:"
HEAR card AS CREDITCARD

Returns masked format: 4111 **** **** 1111

Date and Time Types

DATE validates and parses dates:

TALK "When is your birthday?"
HEAR birthday AS DATE

Accepts: 25/12/2024, 12/25/2024, 2024-12-25, December 25, 2024, today, tomorrow, hoje, amanhã

Returns: ISO format YYYY-MM-DD

HOUR / TIME validates time input:

TALK "What time for the meeting?"
HEAR meeting_time AS HOUR

Accepts: 14:30, 2:30 PM → Returns: 14:30

Brazilian Document Types

CPF validates Brazilian individual taxpayer ID:

TALK "Enter your CPF:"
HEAR cpf AS CPF

Validates 11 digits with mod 11 check. Returns: 123.456.789-09

CNPJ validates Brazilian company taxpayer ID:

TALK "Enter your company's CNPJ:"
HEAR cnpj AS CNPJ

Validates 14 digits. Returns: 12.345.678/0001-95

Contact Types

MOBILE / PHONE validates phone numbers:

TALK "What's your phone number?"
HEAR phone AS MOBILE

Accepts 10-15 digits, auto-formats based on detected country.

ZIPCODE / CEP / POSTALCODE validates postal codes:

HEAR cep AS ZIPCODE

Supports Brazilian CEP, US ZIP, UK postcode formats.

Choice Types

Provide options directly in the HEAR statement:

TALK "Choose your fruit:"
HEAR fruit AS "Apple", "Banana", "Orange", "Mango"

Accepts exact match, case-insensitive match, numeric selection (1, 2, 3), or partial match if unique.

BOOLEAN validates yes/no responses:

TALK "Do you agree to the terms?"
HEAR agreed AS BOOLEAN
IF agreed THEN
    TALK "Thank you!"
END IF

True: yes, y, sim, ok, sure, confirm
False: no, n, não, cancel, deny

LANGUAGE validates language codes:

HEAR language AS LANGUAGE

Accepts: en, pt, English, Português → Returns: ISO 639-1 code

Media Types

IMAGE / PHOTO waits for image upload:

TALK "Send a photo of your document:"
HEAR document_photo AS IMAGE

Returns URL to uploaded image.

QRCODE waits for image and decodes QR:

TALK "Send me the QR code:"
HEAR qr_data AS QRCODE

Uses vision API to decode. Returns decoded data.

AUDIO / VOICE transcribes audio input:

TALK "Send a voice message:"
HEAR transcription AS AUDIO

Uses Whisper for transcription. Returns text.

VIDEO analyzes video content:

TALK "Send a video of the issue:"
HEAR video_description AS VIDEO

Uses vision API to describe. Returns description.

FILE / DOCUMENT waits for file upload:

TALK "Upload your contract:"
HEAR contract AS DOCUMENT

Accepts PDF, DOC, DOCX, XLS, XLSX, PPT, PPTX, TXT, CSV. Returns URL.

Authentication

LOGIN waits for OAuth completion:

TALK "Click the link to authenticate:"
HEAR user AS LOGIN

Returns user object with tokens after OAuth callback.

Complete Examples

Registration Flow

TALK "Let's create your account!"

TALK "What's your full name?"
HEAR name AS NAME

TALK "Enter your email address:"
HEAR email AS EMAIL

TALK "Enter your CPF:"
HEAR cpf AS CPF

TALK "What's your phone number?"
HEAR phone AS MOBILE

TALK "Choose a password:"
HEAR password AS PASSWORD

TALK "What's your birth date?"
HEAR birthdate AS DATE

TALK "Select your gender:"
HEAR gender AS "Male", "Female", "Other", "Prefer not to say"

SAVE "users.csv", name, email, cpf, phone, birthdate, gender, NOW()
TALK "Account created for " + name + "!"

Payment Flow

TALK "Enter the amount:"
HEAR amount AS MONEY

IF amount < 1 THEN
    TALK "Minimum payment is R$ 1.00"
    RETURN
END IF

TALK "How would you like to pay?"
HEAR method AS "Credit Card", "Debit Card", "PIX", "Boleto"

TALK "Confirm payment of R$ " + FORMAT(amount, "#,##0.00") + "?"
HEAR confirm AS BOOLEAN

IF confirm THEN
    TALK "Processing payment..."
ELSE
    TALK "Payment cancelled."
END IF

Validation Behavior

When validation fails, the system automatically prompts for correction:

User: my email
Bot: Please enter a valid email address (e.g., user@example.com)
User: test@example.com
Bot: Email confirmed!

After 3 failed attempts, execution continues with an empty value. Check for this:

HEAR email AS EMAIL
IF email = "" THEN
    TALK "Unable to validate email. Please contact support."
    RETURN
END IF

Best Practices

Always use appropriate types — automatic validation is safer than manual checking:

' Good
HEAR email AS EMAIL

' Avoid
HEAR email
IF NOT email CONTAINS "@" THEN ...

Provide context before HEAR — users should know what to enter:

TALK "Enter the transfer amount (minimum R$ 1.00):"
HEAR amount AS MONEY

Use menus for limited options:

HEAR method AS "Credit Card", "Debit Card", "PIX"

Combine with SET CONTEXT for AI-enhanced input handling:

SET CONTEXT "You are a banking assistant. Confirm amounts before processing."
HEAR amount AS MONEY

Validation Summary

Type | Example Input | Normalized Output
EMAIL | User@Example.COM | user@example.com
NAME | john DOE | John Doe
INTEGER | 1,234 | 1234
MONEY | R$ 1.234,56 | 1234.56
DATE | 25/12/2024 | 2024-12-25
HOUR | 2:30 PM | 14:30
BOOLEAN | yes / sim | true
CPF | 12345678909 | 123.456.789-09
MOBILE | 11999998888 | (11) 99999-8888
CREDITCARD | 4111111111111111 | 4111 **** **** 1111
QRCODE | [image] | decoded data
AUDIO | [audio file] | transcribed text

See Also

SET_CONTEXT Keyword

The SET_CONTEXT keyword defines the operational context for the bot’s current session.
It allows scripts to switch between different logical modes or workflows, influencing how subsequent commands are interpreted.


Syntax

SET_CONTEXT "context-name"

Parameters

  • "context-name" — A string representing the new context.
    Common examples include "sales_mode", "support_mode", or "training_mode".

Description

SET_CONTEXT updates the bot’s internal state to reflect a specific operational context.
Contexts are used to modify behavior dynamically — for example, changing which tools are active, which memory entries are prioritized, or which prompts are used for LLM responses.

When a context is set, the bot automatically adjusts its logic and available commands to match that mode.
This enables modular dialog design and flexible automation workflows.

If the context name does not exist, the system creates a new one automatically and stores it in the session cache.


Example

' Switch to sales mode
SET_CONTEXT "sales_mode"

' Perform a context-specific action
TALK "Welcome to the sales assistant. How can I help you today?"

' Later, switch to support mode
SET_CONTEXT "support_mode"
TALK "Support mode activated. Please describe your issue."

Implementation Notes

  • Implemented in Rust under src/context/mod.rs and src/context/langcache.rs.
  • The keyword interacts with the session manager and context cache to update the active context.
  • Contexts are stored in memory and optionally persisted in the cache component or a local cache file.
  • Changing context may trigger automatic loading of associated tools or memory entries.


Summary

SET_CONTEXT is a key command for managing dynamic behavior in GeneralBots.
It enables flexible, modular workflows by allowing scripts to switch between operational modes seamlessly.

GET BOT MEMORY

Retrieve persistent key-value data stored at the bot level.

Syntax

GET BOT MEMORY key

Parameters

Parameter | Type | Description
key | String | The identifier of the memory item to retrieve

Description

The GET BOT MEMORY keyword retrieves values previously stored with SET BOT MEMORY. These values are:

  • Persistent across all user sessions
  • Shared between all users of the same bot
  • Stored in the database permanently
  • Available until explicitly updated or cleared

If the key doesn’t exist, returns an empty string.

Examples

Retrieve Simple Values

welcome = GET BOT MEMORY "welcome_message"
IF welcome = "" THEN
    welcome = "Welcome to our bot!"
END IF
TALK welcome

Read Configuration

max_retries = GET BOT MEMORY "max_retries"
IF max_retries = "" THEN
    max_retries = "3"
END IF

timeout = GET BOT MEMORY "timeout_seconds"
IF timeout = "" THEN
    timeout = "30"
END IF

Retrieve and Parse JSON

user_data = GET BOT MEMORY "user_profile"
IF user_data <> "" THEN
    ' Parse JSON data
    name = JSON_GET(user_data, "name")
    level = JSON_GET(user_data, "level")
    TALK "Welcome back, " + name + "! You are level " + level
END IF

Counter Management

' Get current visitor count
count = GET BOT MEMORY "visitor_count"
IF count = "" THEN
    count = "0"
END IF
count = VAL(count) + 1
SET BOT MEMORY "visitor_count", STR(count)
TALK "Visitor #" + STR(count)

Dynamic Keys

today = FORMAT(NOW(), "YYYY-MM-DD")
daily_stats = GET BOT MEMORY "stats_" + today
IF daily_stats = "" THEN
    TALK "No statistics for today yet"
ELSE
    TALK "Today's count: " + daily_stats
END IF

Configuration with Defaults

' Function to get config with default
FUNCTION GetConfig(key, default_value)
    value = GET BOT MEMORY key
    IF value = "" THEN
        value = default_value
        SET BOT MEMORY key, default_value
    END IF
    RETURN value
END FUNCTION

' Use the function
email_server = GetConfig("email_server", "mail.example.com")
email_port = GetConfig("email_port", "587")

Return Value

Returns a string containing:

  • The stored value if the key exists
  • Empty string (“”) if the key doesn’t exist
  • Empty string if database error occurs

Performance

  • Direct database lookup with indexed keys
  • Single query execution
  • Synchronous operation (blocks until complete)
  • Cached at database level for repeated access

Best Practices

  1. Always Check for Empty Values

    value = GET BOT MEMORY "some_key"
    IF value = "" THEN
        ' Handle missing value
        value = "default"
    END IF
    
  2. Use Consistent Key Naming

    ' Good: hierarchical keys
    server = GET BOT MEMORY "config:email:server"
    port = GET BOT MEMORY "config:email:port"
    
    ' Bad: inconsistent naming
    ' srv = GET BOT MEMORY "emailSrv"
    ' p = GET BOT MEMORY "mail_port"
    
  3. Cache Frequently Used Values

    ' At start of conversation
    config_timeout = GET BOT MEMORY "timeout"
    config_retries = GET BOT MEMORY "retries"
    
    ' Use cached values throughout
    IF elapsed > VAL(config_timeout) THEN
        TALK "Request timed out"
    END IF
    
  4. Validate Retrieved Data

    max_items = GET BOT MEMORY "max_items"
    IF max_items = "" OR NOT IS_NUMERIC(max_items) THEN
        max_items = "10"
    END IF
    

Error Handling

  • Database connection failures return empty string
  • Invalid bot IDs return empty string
  • Non-existent keys return empty string
  • All errors are logged for debugging

Use Cases

Global Configuration

api_key = GET BOT MEMORY "api_key"
api_url = GET BOT MEMORY "api_url"

Feature Flags

feature_enabled = GET BOT MEMORY "feature:new_ui"
IF feature_enabled = "true" THEN
    ' Show new interface
ELSE
    ' Show old interface
END IF

Shared Counters

total_processed = GET BOT MEMORY "total_processed"
daily_limit = GET BOT MEMORY "daily_limit"
IF VAL(total_processed) >= VAL(daily_limit) THEN
    TALK "Daily limit reached"
END IF

Bot State

maintenance_mode = GET BOT MEMORY "maintenance_mode"
IF maintenance_mode = "true" THEN
    TALK "System is under maintenance. Please try again later."
    EXIT
END IF

Implementation

Located in src/basic/keywords/bot_memory.rs

The implementation:

  • Performs synchronous database query
  • Uses connection pooling for efficiency
  • Returns empty string on any error
  • Validates bot ID before querying

SET BOT MEMORY

Store persistent key-value data at the bot level that persists across all conversations.

Syntax

SET BOT MEMORY key, value

Parameters

Parameter | Type | Description
key | String | Unique identifier for the memory item
value | String | Value to store (can be any string data)

Description

The SET BOT MEMORY keyword stores data that is:

  • Persistent across all user sessions
  • Shared between all users of the same bot
  • Stored in the database permanently
  • Available until explicitly updated or cleared

Bot memory is useful for:

  • Configuration settings
  • Global counters and statistics
  • Shared state between users
  • Bot-wide preferences
  • Cached data that applies to all conversations

Examples

Store Simple Values

SET BOT MEMORY "welcome_message", "Hello! Welcome to our service."
SET BOT MEMORY "support_email", "support@example.com"
SET BOT MEMORY "business_hours", "9 AM - 5 PM EST"

Store Counters

current_count = GET BOT MEMORY "visitor_count"
IF current_count = "" THEN
    current_count = "0"
END IF
new_count = VAL(current_count) + 1
SET BOT MEMORY "visitor_count", STR(new_count)
TALK "You are visitor number " + STR(new_count)

Store JSON Data

user_data = '{"name": "John", "level": 5, "points": 1200}'
SET BOT MEMORY "user_profile", user_data

Dynamic Keys

today = FORMAT(NOW(), "YYYY-MM-DD")
daily_key = "stats_" + today
SET BOT MEMORY daily_key, "25"

Configuration Management

' Store bot configuration
SET BOT MEMORY "max_retries", "3"
SET BOT MEMORY "timeout_seconds", "30"
SET BOT MEMORY "api_version", "v2"

' Later, read configuration
max_retries = GET BOT MEMORY "max_retries"
timeout = GET BOT MEMORY "timeout_seconds"

Database Storage

Bot memories are stored in the bot_memories table:

  • id: UUID primary key
  • bot_id: Reference to the bot
  • key: Memory key (indexed for fast lookup)
  • value: Memory value (text)
  • created_at: Timestamp of creation
  • updated_at: Timestamp of last update

Performance Considerations

  • Keys are indexed for fast retrieval
  • Values are stored as text (no size limit in PostgreSQL)
  • Updates are asynchronous to avoid blocking
  • Consider using structured keys for organization

Best Practices

  1. Use Descriptive Keys: Make keys self-documenting

    SET BOT MEMORY "config:email:smtp_server", "mail.example.com"
    SET BOT MEMORY "stats:daily:2024-01-15", "150"
    
  2. Handle Missing Values: Always check if memory exists

    value = GET BOT MEMORY "some_key"
    IF value = "" THEN
        ' Initialize with default
        SET BOT MEMORY "some_key", "default_value"
        value = "default_value"
    END IF
    
  3. Avoid Sensitive Data: Don’t store passwords or tokens

    ' BAD: Don't do this
    ' SET BOT MEMORY "admin_password", "secret123"
    
    ' GOOD: Store non-sensitive config
    SET BOT MEMORY "admin_email", "admin@example.com"
    
  4. Structure Complex Data: Use JSON for complex structures

    settings = '{"theme": "dark", "language": "en", "notifications": true}'
    SET BOT MEMORY "user_preferences", settings
    
  5. Clean Up Old Data: Remove unused memories periodically

    ' Remove old daily stats
    old_date = FORMAT(DATE_ADD(NOW(), -30, "days"), "YYYY-MM-DD")
    SET BOT MEMORY "stats_" + old_date, ""
    

Differences from User Memory

Aspect | Bot Memory | User Memory
Scope | All users of the bot | Single user
Persistence | Permanent | Session or permanent
Use Case | Global settings | Personal data
Access | Any conversation | User's conversations only

Error Handling

  • If database connection fails, operation is logged but doesn’t crash
  • Invalid bot IDs are logged as errors
  • Duplicate keys update existing values
  • Empty keys are not allowed

Implementation

Located in src/basic/keywords/bot_memory.rs

The implementation:

  • Uses async database operations
  • Handles updates atomically with transactions
  • Validates bot ID format
  • Logs all operations for debugging

GET USER MEMORY

Retrieves data stored at the user level, accessible across sessions and bots. This is the companion to SET USER MEMORY for reading persistent user data.

Syntax

value = GET USER MEMORY("key")

Parameters

Parameter | Type   | Description
--------- | ------ | -----------------------------------
key       | String | The identifier for the stored value

Returns

The stored value, or empty string ("") if the key doesn’t exist.

Description

GET USER MEMORY retrieves persistent data associated with a specific user. This data:

  • Persists across sessions - Available when user returns days/weeks later
  • Persists across bots - Accessible from any bot the user interacts with
  • Returns original type - Objects, arrays, strings, numbers preserved
  • Returns empty on miss - No error if key doesn’t exist

Examples

Basic Usage

' Retrieve user preferences
language = GET USER MEMORY("language")
timezone = GET USER MEMORY("timezone")
theme = GET USER MEMORY("theme")

TALK "Your settings: " + language + ", " + timezone + ", " + theme

Check If User Is Returning

' Personalized greeting based on stored name
name = GET USER MEMORY("name")

IF name = "" THEN
    TALK "Hello! I don't think we've met. What's your name?"
    HEAR name
    SET USER MEMORY "name", name
ELSE
    TALK "Welcome back, " + name + "! How can I help you today?"
END IF

Retrieve Complex Objects

' Get stored user profile
profile = GET USER MEMORY("profile")

IF profile <> "" THEN
    TALK "Hello " + profile.name + "!"
    TALK "Your plan: " + profile.plan
    TALK "Member since: " + profile.signupDate
ELSE
    TALK "Please complete your profile first."
END IF

Cross-Bot Data Access

' Support bot accessing sales data
lastPurchase = GET USER MEMORY("lastPurchase")

IF lastPurchase <> "" THEN
    TALK "I can see your recent order #" + lastPurchase.orderId
    TALK "Purchased on: " + lastPurchase.date
    TALK "Amount: $" + lastPurchase.amount
    TALK "How can I help with this order?"
ELSE
    TALK "I don't see any recent purchases. How can I help?"
END IF

Retrieve User Facts for AI Context

' Load user facts into context for personalization
occupation = GET USER MEMORY("fact_occupation")
interests = GET USER MEMORY("fact_interests")
company = GET USER MEMORY("fact_company")

IF occupation <> "" THEN
    SET CONTEXT "user_occupation" AS occupation
END IF

IF interests <> "" THEN
    SET CONTEXT "user_interests" AS interests
END IF

' Now AI responses will be personalized based on these facts

Default Values Pattern

' Get with fallback to default
language = GET USER MEMORY("language")
IF language = "" THEN
    language = "en-US"
END IF

' Or use inline default
theme = GET USER MEMORY("theme")
IF theme = "" THEN theme = "light"

TALK "Using language: " + language + ", theme: " + theme

Session Continuity

' Resume conversation from previous session
lastTopic = GET USER MEMORY("lastTopic")
lastQuestion = GET USER MEMORY("lastQuestion")

IF lastTopic <> "" THEN
    TALK "Last time we were discussing " + lastTopic
    TALK "You asked: " + lastQuestion
    TALK "Would you like to continue from there?"
    HEAR continueChoice AS BOOLEAN
    
    IF continueChoice THEN
        ' Resume previous conversation
        SET CONTEXT "topic" AS lastTopic
    END IF
END IF

Related Keywords

Keyword         | Description
--------------- | --------------------------------
SET USER MEMORY | Store user-level persistent data
GET BOT MEMORY  | Retrieve bot-level data
SET BOT MEMORY  | Store data at bot level
USER FACTS      | Get all stored user facts

Comparison: User Memory vs Bot Memory

Aspect   | User Memory               | Bot Memory
-------- | ------------------------- | --------------------------
Scope    | Per user, across all bots | Per bot, across all users
Use case | User preferences, profile | Bot state, counters
Access   | Any bot can read/write    | Only owning bot
Example  | language, name, timezone  | totalOrders, lastDeployed

Error Handling

' GET USER MEMORY never throws - returns empty on missing key
value = GET USER MEMORY("nonexistent_key")
' value = ""

' Always check for empty before using
data = GET USER MEMORY("important_data")
IF data = "" THEN
    TALK "Data not found. Please provide it."
    ' Handle missing data case
ELSE
    ' Use the data
END IF

Best Practices

  1. Always check for empty - Keys may not exist for new users
  2. Use consistent key naming - user_name vs userName vs name
  3. Document your keys - Keep track of what data you’re storing
  4. Handle missing gracefully - New users won’t have stored data
  5. Don’t assume structure - Stored objects might have missing fields

See Also

SET USER MEMORY

Persists data at the user level, accessible across sessions and bots. Unlike SET BOT MEMORY which stores data per-bot, user memory follows the user wherever they go.

Syntax

SET USER MEMORY "key", value

Parameters

Parameter | Type   | Description
--------- | ------ | --------------------------------------------
key       | String | Unique identifier for the stored value
value     | Any    | The value to store (string, number, object)

Description

SET USER MEMORY stores persistent data associated with a specific user. This data:

  • Persists across sessions - Available when user returns days/weeks later
  • Persists across bots - Accessible from any bot the user interacts with
  • Survives restarts - Stored in the database, not just memory
  • Supports TTL - Optional time-to-live for automatic expiration

This is ideal for user preferences, profile data, and cross-bot personalization.

Examples

Basic Usage

' Store user preferences
SET USER MEMORY "language", "pt-BR"
SET USER MEMORY "timezone", "America/Sao_Paulo"
SET USER MEMORY "theme", "dark"

TALK "Preferences saved!"

Store Complex Objects

' Store user profile
profile = #{ 
    name: username,
    email: useremail,
    plan: "premium",
    signupDate: NOW()
}
SET USER MEMORY "profile", profile

TALK "Profile updated successfully!"

Cross-Bot Data Sharing

' In sales-bot: Store purchase history
purchase = #{
    orderId: orderid,
    amount: total,
    date: NOW()
}
SET USER MEMORY "lastPurchase", purchase

' In support-bot: Access the same data
lastPurchase = GET USER MEMORY("lastPurchase")
TALK "I see your last order was #" + lastPurchase.orderId

User Preferences for Personalization

' Check if returning user
name = GET USER MEMORY("name")

IF name = "" THEN
    TALK "Welcome! What's your name?"
    HEAR name
    SET USER MEMORY "name", name
    TALK "Nice to meet you, " + name + "!"
ELSE
    TALK "Welcome back, " + name + "!"
END IF

Store User Facts

' Store facts about the user for AI context
SET USER MEMORY "fact_occupation", "software engineer"
SET USER MEMORY "fact_interests", "AI, automation, productivity"
SET USER MEMORY "fact_company", "Acme Corp"

' These can be used to personalize AI responses

Related Keywords

Keyword         | Description
--------------- | ----------------------------------
GET USER MEMORY | Retrieve user-level persisted data
SET BOT MEMORY  | Store data at bot level
GET BOT MEMORY  | Retrieve bot-level data
USER FACTS      | Get all stored user facts

Database Storage

User memory is stored in the user_memory table with the following structure:

Column      | Description
----------- | ------------------------------------------------
user_id     | The user's unique identifier
key         | The memory key
value       | JSON-encoded value
memory_type | Type classification (preference, fact, context)
ttl         | Optional expiration timestamp
created_at  | When the memory was created
updated_at  | Last modification time

Config.csv Options

name,value
user-memory-enabled,true
user-memory-max-keys,1000
user-memory-default-ttl,0

Option                  | Default | Description
----------------------- | ------- | --------------------------------------
user-memory-enabled     | true    | Enable/disable user memory
user-memory-max-keys    | 1000    | Maximum keys per user
user-memory-default-ttl | 0       | Default TTL in seconds (0 = no expiry)
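
For example, expiring user memories after 30 days by default means a TTL of 30 × 86,400 = 2,592,000 seconds:

name,value
user-memory-default-ttl,2592000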

Best Practices

  1. Use descriptive keys - user_language not lang
  2. Prefix related keys - pref_theme, pref_language, fact_name
  3. Don’t store sensitive data - No passwords or tokens
  4. Consider TTL for temporary data - Session-specific data should expire
  5. Keep values reasonable size - Don’t store large files or blobs

See Also

REMEMBER / RECALL Keywords

The REMEMBER and RECALL keywords provide a powerful time-based memory system for storing and retrieving data associated with users. Unlike standard memory operations, REMEMBER supports automatic expiration of stored values.

Syntax

REMEMBER

REMEMBER key, value, duration

RECALL

result = RECALL key

Parameters

REMEMBER Parameters

Parameter | Type   | Description
--------- | ------ | ----------------------------------------------------------
key       | String | Unique identifier for the memory entry
value     | Any    | Data to store (string, number, boolean, array, or object)
duration  | String | How long to remember the value

Duration Formats

Format       | Example      | Description
------------ | ------------ | --------------------------
N seconds    | "30 seconds" | Expires after N seconds
N minutes    | "5 minutes"  | Expires after N minutes
N hours      | "2 hours"    | Expires after N hours
N days       | "7 days"     | Expires after N days
N weeks      | "2 weeks"    | Expires after N weeks
N months     | "3 months"   | Expires after ~N×30 days
N years      | "1 year"     | Expires after ~N×365 days
forever      | "forever"    | Never expires
permanent    | "permanent"  | Never expires (alias)
Plain number | "30"         | Interpreted as days

Examples

Basic Usage

' Remember user's preferred language for 30 days
REMEMBER "preferred_language", "Spanish", "30 days"

' Later, recall the preference
language = RECALL "preferred_language"
TALK "Your language preference is: " + language

Session-Based Memory

' Remember a temporary verification code for 5 minutes
code = RANDOM(100000, 999999)
REMEMBER "verification_code", code, "5 minutes"
TALK "Your verification code is: " + code

' Verify the code later
HEAR user_code
stored_code = RECALL "verification_code"

IF user_code = stored_code THEN
    TALK "Code verified successfully!"
ELSE
    TALK "Invalid or expired code."
END IF

Storing Complex Data

' Store user preferences as an array
preferences = ["dark_mode", "notifications_on", "english"]
REMEMBER "user_preferences", preferences, "1 year"

' Store a shopping cart temporarily
cart = ["item1", "item2", "item3"]
REMEMBER "shopping_cart", cart, "2 hours"

Permanent Storage

' Store important user information permanently
REMEMBER "account_created", NOW(), "forever"
REMEMBER "user_tier", "premium", "permanent"

Promotional Campaigns

' Track if user has seen a promotional message
has_seen = RECALL "promo_summer_2024"

IF has_seen = null THEN
    TALK "🎉 Special summer offer: 20% off all products!"
    REMEMBER "promo_summer_2024", true, "30 days"
END IF

Rate Limiting

' Simple rate limiting for API calls
call_count = RECALL "api_calls_today"

IF call_count = null THEN
    call_count = 0
END IF

IF call_count >= 100 THEN
    TALK "You've reached your daily API limit. Please try again tomorrow."
ELSE
    call_count = call_count + 1
    REMEMBER "api_calls_today", call_count, "24 hours"
    ' Process the API call
END IF

How It Works

  1. Storage: Data is stored in the bot_memories database table with:

    • User ID and Bot ID association
    • JSON-serialized value
    • Creation timestamp
    • Optional expiration timestamp
  2. Retrieval: When RECALL is called:

    • System checks if the key exists for the user/bot combination
    • Verifies the entry hasn’t expired
    • Returns the value or null if not found/expired
  3. Automatic Cleanup: Expired entries are not returned and can be periodically cleaned up by maintenance tasks.
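
A small sketch of the expiration behavior; this assumes WAIT accepts a number of seconds (its exact syntax is documented separately), and uses a deliberately short duration for demonstration:

REMEMBER "otp", "123456", "2 seconds"

' Wait past the expiration window
WAIT 2

value = RECALL "otp"
IF value = null THEN
    TALK "The code has expired."
END IF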

Database Schema

The REMEMBER keyword uses the following database structure:

CREATE TABLE bot_memories (
    id TEXT PRIMARY KEY,
    user_id TEXT NOT NULL,
    bot_id TEXT NOT NULL,
    session_id TEXT,
    key TEXT NOT NULL,
    value JSONB NOT NULL,
    created_at TEXT NOT NULL,
    expires_at TEXT,
    UNIQUE(user_id, bot_id, key)
);
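
Given that schema, a RECALL lookup conceptually reduces to a query along these lines (a sketch of the idea, not the exact implementation):

-- Return the value only if it exists and has not expired
SELECT value FROM bot_memories
WHERE user_id = $1 AND bot_id = $2 AND key = $3
  AND (expires_at IS NULL OR expires_at > $4);  -- $4 = current time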

Comparison with Other Memory Keywords

Keyword            | Scope           | Persistence  | Expiration
------------------ | --------------- | ------------ | ----------
SET USER MEMORY    | User            | Permanent    | No
SET BOT MEMORY     | Bot (all users) | Permanent    | No
REMEMBER           | User            | Configurable | Yes
REMEMBER USER FACT | User            | Permanent    | No

Best Practices

  1. Use descriptive keys: Choose meaningful key names like "last_login" instead of "ll".

  2. Set appropriate durations: Match the duration to your use case:

    • Session data: minutes to hours
    • Preferences: weeks to months
    • Important data: forever
  3. Handle null values: Always check if RECALL returns null:

    value = RECALL "some_key"
    IF value = null THEN
        ' Handle missing/expired data
    END IF
    
  4. Avoid storing sensitive data: Don’t store passwords, API keys, or other secrets.

Error Handling

' REMEMBER returns a confirmation message on success
result = REMEMBER "key", "value", "1 day"
' result = "Remembered 'key' for 1 day"

' RECALL returns null if key doesn't exist or has expired
value = RECALL "nonexistent_key"
' value = null

See Also

BOOK / BOOK_MEETING / CHECK_AVAILABILITY Keywords

The BOOK family of keywords provides calendar and scheduling functionality, allowing bots to create appointments, schedule meetings with attendees, and check availability.

Keywords Overview

Keyword            | Purpose
------------------ | -------------------------------------------
BOOK               | Create a simple calendar appointment
BOOK_MEETING       | Schedule a meeting with multiple attendees
CHECK_AVAILABILITY | Find available time slots

BOOK

Creates a calendar appointment for the current user.

Syntax

result = BOOK title, description, start_time, duration_minutes, location

Parameters

Parameter        | Type    | Description
---------------- | ------- | -----------------------------------------------
title            | String  | Title/subject of the appointment
description      | String  | Detailed description of the appointment
start_time       | String  | When the appointment starts (see Time Formats)
duration_minutes | Integer | Duration in minutes (default: 30)
location         | String  | Location or meeting room

Example

' Book a dentist appointment
result = BOOK "Dentist Appointment", "Annual checkup", "2024-03-15 14:00", 60, "123 Medical Center"
TALK "Your appointment has been booked: " + result

' Book a quick meeting
result = BOOK "Team Sync", "Weekly standup", "tomorrow 10:00", 30, "Conference Room A"

BOOK_MEETING

Schedules a meeting with multiple attendees, sending calendar invites.

Syntax

result = BOOK_MEETING meeting_details, attendees

Parameters

Parameter       | Type        | Description
--------------- | ----------- | ---------------------------------
meeting_details | JSON String | Meeting configuration object
attendees       | Array       | List of attendee email addresses

Meeting Details Object

{
    "title": "Meeting Title",
    "description": "Meeting description",
    "start_time": "2024-03-15 14:00",
    "duration": 60,
    "location": "Conference Room B",
    "reminder_minutes": 15,
    "recurrence": "weekly"
}

Example

' Schedule a team meeting
meeting = '{
    "title": "Sprint Planning",
    "description": "Plan next sprint tasks and priorities",
    "start_time": "Monday 09:00",
    "duration": 90,
    "location": "Main Conference Room",
    "reminder_minutes": 30
}'

attendees = ["alice@company.com", "bob@company.com", "carol@company.com"]

result = BOOK_MEETING meeting, attendees
TALK "Meeting scheduled with " + LEN(attendees) + " attendees"

CHECK_AVAILABILITY

Finds available time slots for a given date and duration.

Syntax

available_slots = CHECK_AVAILABILITY date, duration_minutes

Parameters

Parameter        | Type    | Description
---------------- | ------- | -----------------------------------
date             | String  | The date to check availability
duration_minutes | Integer | Required duration for the meeting

Example

' Check availability for a 1-hour meeting tomorrow
slots = CHECK_AVAILABILITY "tomorrow", 60

TALK "Available time slots:"
FOR EACH slot IN slots
    TALK "  - " + slot
NEXT

Time Formats

The BOOK keywords support flexible time formats:

Absolute Formats

Format            | Example
----------------- | ----------------------
ISO 8601          | "2024-03-15T14:00:00"
Date + Time       | "2024-03-15 14:00"
Date + Time (12h) | "2024-03-15 2:00 PM"

Relative Formats

Format       | Example
------------ | ----------------------
Day name     | "Monday 10:00"
Relative day | "tomorrow 14:00"
Next week    | "next Tuesday 09:00"

Complete Example: Appointment Scheduling Bot

' appointment-bot.bas
' A complete appointment scheduling workflow

TALK "Welcome to our scheduling assistant!"
TALK "What type of appointment would you like to book?"

HEAR appointment_type

SWITCH appointment_type
    CASE "consultation"
        duration = 60
        description = "Initial consultation meeting"
    CASE "follow-up"
        duration = 30
        description = "Follow-up discussion"
    CASE "review"
        duration = 45
        description = "Project review session"
    DEFAULT
        duration = 30
        description = appointment_type
END SWITCH

TALK "When would you like to schedule this?"
HEAR preferred_date

' Check available slots
slots = CHECK_AVAILABILITY preferred_date, duration

IF LEN(slots) = 0 THEN
    TALK "Sorry, no availability on that date. Please try another day."
ELSE
    TALK "Available times:"
    index = 1
    FOR EACH slot IN slots
        TALK index + ". " + slot
        index = index + 1
    NEXT
    
    TALK "Which time slot would you prefer? (enter number)"
    HEAR choice
    
    selected_time = slots[choice - 1]
    
    TALK "Where would you like the meeting to take place?"
    HEAR location
    
    ' Book the appointment
    result = BOOK appointment_type, description, selected_time, duration, location
    
    TALK "✅ Your appointment has been booked!"
    TALK "Details: " + result
END IF

Meeting with Recurrence

' Schedule a recurring weekly meeting
meeting = '{
    "title": "Weekly Team Standup",
    "description": "Daily sync on project progress",
    "start_time": "Monday 09:00",
    "duration": 15,
    "location": "Virtual - Teams",
    "reminder_minutes": 5,
    "recurrence": {
        "frequency": "weekly",
        "interval": 1,
        "count": 12,
        "by_day": ["MO", "WE", "FR"]
    }
}'

attendees = ["team@company.com"]
result = BOOK_MEETING meeting, attendees

Event Status

Calendar events can have the following statuses:

Status    | Description
--------- | ---------------------------------
Confirmed | Event is confirmed and scheduled
Tentative | Event is tentatively scheduled
Cancelled | Event has been cancelled

Calendar Event Structure

When an event is created, it contains:

{
    "id": "uuid",
    "title": "Meeting Title",
    "description": "Description",
    "start_time": "2024-03-15T14:00:00Z",
    "end_time": "2024-03-15T15:00:00Z",
    "location": "Conference Room",
    "organizer": "user@example.com",
    "attendees": ["attendee1@example.com"],
    "reminder_minutes": 15,
    "recurrence_rule": null,
    "status": "Confirmed",
    "created_at": "2024-03-10T10:00:00Z",
    "updated_at": "2024-03-10T10:00:00Z"
}

Configuration

To enable calendar functionality, configure the following in config.csv:

Key                       | Description
------------------------- | --------------------------------------------
calendar-provider         | Calendar service (google, outlook, caldav)
calendar-client-id        | OAuth client ID
calendar-client-secret    | OAuth client secret
calendar-default-reminder | Default reminder time in minutes
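
A sample config.csv snippet using these keys (all values are placeholders):

name,value
calendar-provider,google
calendar-client-id,your-oauth-client-id
calendar-client-secret,your-oauth-client-secret
calendar-default-reminder,15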

Error Handling

' Handle booking errors gracefully
ON ERROR GOTO handle_error

result = BOOK "Meeting", "Description", "invalid-date", 30, "Location"
TALK "Booked: " + result
END

handle_error:
    TALK "Sorry, I couldn't book that appointment. Please check the date and time format."
    TALK "Error: " + ERROR_MESSAGE
END

Best Practices

  1. Always check availability first: Before booking, use CHECK_AVAILABILITY to ensure the time slot is free.

  2. Use descriptive titles: Make appointment titles clear and searchable.

  3. Set appropriate reminders: Configure reminder times based on appointment importance.

  4. Handle time zones: Be explicit about time zones when scheduling across regions.

  5. Validate inputs: Check user-provided dates and times before attempting to book.

See Also

  • SET SCHEDULE - Schedule recurring bot tasks
  • WAIT - Pause execution for a duration
  • SEND MAIL - Send meeting confirmations via email

WEATHER / FORECAST Keywords

Get weather information for any location using OpenWeatherMap API.

WEATHER

result = WEATHER "London"
TALK result

Returns current conditions: temperature, humidity, wind, visibility.

FORECAST

result = FORECAST "Paris", 5
TALK result

Returns multi-day forecast with high/low temps and rain chance.

Configuration

Add to config.csv:

weather-api-key,your-openweathermap-api-key

Get a free API key at openweathermap.org.

See Also

ADD BOT Keywords

Dynamically add bots to a session with specific triggers, tools, or schedules.

Keywords

Keyword                   | Purpose
------------------------- | ------------------------------
ADD BOT ... WITH TRIGGER  | Add bot activated by keyword
ADD BOT ... WITH TOOLS    | Add bot with specific tools
ADD BOT ... WITH SCHEDULE | Add bot on a schedule
REMOVE BOT                | Remove bot from session

ADD BOT WITH TRIGGER

ADD BOT "sales-bot" WITH TRIGGER "pricing"

When the user mentions “pricing”, sales-bot activates.

ADD BOT WITH TOOLS

ADD BOT "data-bot" WITH TOOLS "database,spreadsheet,charts"

ADD BOT WITH SCHEDULE

ADD BOT "report-bot" WITH SCHEDULE "0 9 * * MON"

Adds a bot that runs every Monday at 9 AM (cron format).

REMOVE BOT

REMOVE BOT "sales-bot"

Example: Multi-Bot Setup

' Set up specialized bots for different topics
ADD BOT "orders-bot" WITH TRIGGER "order status, shipping, delivery"
ADD BOT "support-bot" WITH TRIGGER "help, problem, issue, broken"
ADD BOT "sales-bot" WITH TRIGGER "pricing, quote, purchase"

TALK "I've set up our specialist team. Just ask about orders, support, or sales!"

See Also

USE MODEL

Dynamically switches the LLM model used for AI operations within a script. Enables model routing based on task requirements, cost optimization, or performance needs.

Syntax

USE MODEL "modelname"
USE MODEL "auto"

Parameters

Parameter | Type   | Description
--------- | ------ | -----------------------------------------------------------
modelname | String | Name of the model to use, or “auto” for automatic routing

Description

USE MODEL allows scripts to dynamically select which language model to use for subsequent AI operations. This is essential for:

  • Cost optimization - Use smaller/cheaper models for simple tasks
  • Quality control - Use powerful models for complex reasoning
  • Speed optimization - Use fast models for real-time responses
  • Specialized tasks - Use code-specific models for programming

When set to "auto", the system automatically routes queries to the most appropriate model based on task complexity, latency requirements, and cost considerations.

Examples

Basic Model Selection

' Use a fast model for simple queries
USE MODEL "fast"
response = LLM "What time is it in New York?"
TALK response

' Switch to quality model for complex analysis
USE MODEL "quality"
analysis = LLM "Analyze the market trends for Q4 and provide recommendations"
TALK analysis

Automatic Model Routing

' Let the system choose the best model
USE MODEL "auto"

' Simple query -> routes to fast model
greeting = LLM "Say hello"

' Complex query -> routes to quality model  
report = LLM "Generate a detailed financial analysis with projections"

Code Generation

' Use code-specialized model
USE MODEL "code"

code = LLM "Write a Python function to calculate fibonacci numbers"
TALK code

Cost-Aware Processing

' Process bulk items with cheap model
USE MODEL "fast"
FOR EACH item IN items
    summary = LLM "Summarize in one sentence: " + item.text
    item.summary = summary
NEXT item

' Final review with quality model
USE MODEL "quality"
review = LLM "Review these summaries for accuracy: " + summaries

Model Fallback Pattern

' Try preferred model first
USE MODEL "claude-sonnet-4.5"
ON ERROR GOTO fallback
response = LLM prompt
GOTO done

fallback:
' Fall back to local model if API fails
USE MODEL "local"
response = LLM prompt

done:
TALK response

Model Routing Strategies

The system supports several routing strategies configured in config.csv:

Strategy      | Description
------------- | ------------------------------------------
manual        | Explicit model selection only
auto          | Automatic routing based on query analysis
load-balanced | Distribute across models for throughput
fallback      | Try models in order until one succeeds

Built-in Model Aliases

Alias   | Description            | Use Case
------- | ---------------------- | -------------------------------
fast    | Optimized for speed    | Simple queries, real-time chat
quality | Optimized for accuracy | Complex reasoning, analysis
code    | Code-specialized model | Programming tasks
local   | Local GGUF model       | Offline/private operation
auto    | System-selected        | Let routing decide

Config.csv Options

name,value
model-routing-strategy,auto
model-default,fast
model-fast,DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_M.gguf
model-quality,claude-sonnet-4.5
model-code,codellama-7b.gguf
model-fallback-enabled,true
model-fallback-order,quality,fast,local

Option                 | Default            | Description
---------------------- | ------------------ | ---------------------------------
model-routing-strategy | auto               | Routing strategy to use
model-default          | fast               | Default model when not specified
model-fast             | (configured)       | Model for fast/simple tasks
model-quality          | (configured)       | Model for quality/complex tasks
model-code             | (configured)       | Model for code generation
model-fallback-enabled | true               | Enable automatic fallback
model-fallback-order   | quality,fast,local | Order to try on failure

Auto-Routing Criteria

When USE MODEL "auto" is active, the system considers:

  1. Query complexity - Token count, reasoning required
  2. Task type - Code, analysis, chat, translation
  3. Latency requirements - Real-time vs batch
  4. Cost budget - Per-query and daily limits
  5. Model availability - Health checks, rate limits

Related Keywords

Keyword             | Description
------------------- | -------------------------
LLM                 | Query the language model
SET CONTEXT         | Add context for LLM
BEGIN SYSTEM PROMPT | Define AI persona

Performance Considerations

  • Model switching has minimal overhead
  • Auto-routing adds ~10ms for classification
  • Consider batching similar queries under one model
  • Local models avoid network latency

Best Practices

  1. Start with auto - Let the system optimize, then tune
  2. Batch by model - Group similar tasks to reduce switching
  3. Monitor costs - Track per-model usage in analytics
  4. Test fallbacks - Ensure graceful degradation
  5. Profile your queries - Understand which need quality vs speed

See Also

DELEGATE TO BOT

Delegates a task or message to another bot in a multi-agent system. This enables agent-to-agent communication using the A2A (Agent-to-Agent) protocol.

Syntax

DELEGATE "message" TO BOT "botname"
DELEGATE "message" TO BOT "botname" TIMEOUT seconds
result = DELEGATE "message" TO BOT "botname"

Parameters

Parameter | Type   | Description
--------- | ------ | -----------------------------------------------
message   | String | The task or message to send to the target bot
botname   | String | Name of the target bot to delegate to
seconds   | Number | Optional timeout in seconds (default: 30)

Description

DELEGATE TO BOT sends a message or task to another bot and optionally waits for a response. This is the core keyword for multi-agent orchestration, enabling:

  • Task specialization - Route tasks to specialized bots
  • Agent collaboration - Multiple bots working together
  • Workload distribution - Spread tasks across agents
  • Expert consultation - Query domain-specific bots

The delegation uses the A2A (Agent-to-Agent) protocol which handles:

  • Message routing between agents
  • Correlation IDs for request/response matching
  • Timeout handling
  • Error propagation

Examples

Basic Delegation

' Delegate a translation task to a specialized bot
DELEGATE "Translate 'Hello World' to Portuguese" TO BOT "translator-bot"
TALK "Translation request sent!"

Get Response from Delegated Bot

' Ask the finance bot for a calculation
result = DELEGATE "Calculate ROI for investment of $10000 with 12% annual return over 5 years" TO BOT "finance-bot"
TALK "The finance expert says: " + result

Delegation with Timeout

' Long-running task with extended timeout
result = DELEGATE "Analyze this quarterly report and provide insights" TO BOT "analyst-bot" TIMEOUT 120
TALK result

Multi-Bot Workflow

' Customer support escalation workflow
issue = "Customer reports billing discrepancy"

' First, check with billing bot
billingInfo = DELEGATE "Check account status for customer " + customerid TO BOT "billing-bot" TIMEOUT 30

IF INSTR(billingInfo, "discrepancy") > 0 THEN
    ' Escalate to senior support
    resolution = DELEGATE "Priority: " + issue + " Details: " + billingInfo TO BOT "senior-support-bot" TIMEOUT 60
    TALK "A senior agent is handling your case: " + resolution
ELSE
    TALK "Your account looks fine: " + billingInfo
END IF

Parallel Expert Consultation

' Get opinions from multiple specialized bots
question = "What's the best approach for this investment portfolio?"

' Delegate to multiple experts
stockAnalysis = DELEGATE question TO BOT "stock-analyst"
bondAnalysis = DELEGATE question TO BOT "bond-analyst"
riskAssessment = DELEGATE question TO BOT "risk-assessor"

' Combine insights
BEGIN TALK
**Investment Analysis Summary**

📈 **Stock Analysis:** {stockAnalysis}

📊 **Bond Analysis:** {bondAnalysis}

⚠️ **Risk Assessment:** {riskAssessment}
END TALK

Conditional Routing

' Route to appropriate specialist based on query type
HEAR userquery

' Use LLM to classify the query
category = LLM "Classify this query into one of: billing, technical, sales, general. Query: " + userquery

SWITCH category
    CASE "billing"
        response = DELEGATE userquery TO BOT "billing-bot"
    CASE "technical"
        response = DELEGATE userquery TO BOT "tech-support-bot"
    CASE "sales"
        response = DELEGATE userquery TO BOT "sales-bot"
    CASE ELSE
        response = DELEGATE userquery TO BOT "general-assistant"
END SWITCH

TALK response

Chain of Delegation

' Research assistant that coordinates multiple bots
topic = "renewable energy trends 2025"

' Step 1: Gather data
rawData = DELEGATE "Search for recent data on " + topic TO BOT "research-bot" TIMEOUT 60

' Step 2: Analyze data
analysis = DELEGATE "Analyze this data and identify key trends: " + rawData TO BOT "analyst-bot" TIMEOUT 45

' Step 3: Generate report
report = DELEGATE "Create an executive summary from this analysis: " + analysis TO BOT "writer-bot" TIMEOUT 30

TALK report

A2A Protocol Details

When you use DELEGATE TO BOT, the system creates an A2A message with:

Field          | Description
-------------- | ------------------------------
from_agent     | The current bot's identifier
to_agent       | The target bot name
message_type   | Delegate for task delegation
payload        | The message content
correlation_id | Unique ID to match response
timestamp      | When the message was sent
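
Putting those fields together, a delegation message would look roughly like this (a hypothetical payload for illustration; the real wire format may differ):

{
    "from_agent": "support-bot",
    "to_agent": "billing-bot",
    "message_type": "Delegate",
    "payload": "Check account status for customer 12345",
    "correlation_id": "uuid",
    "timestamp": "2024-03-15T14:00:00Z"
}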

Error Handling

' Handle delegation failures gracefully
ON ERROR RESUME NEXT

result = DELEGATE "Process payment" TO BOT "payment-bot" TIMEOUT 30

IF ERROR THEN
    TALK "I'm having trouble reaching our payment system. Please try again in a moment."
    ' Log the error
    PRINT "Delegation failed: " + ERROR_MESSAGE
ELSE
    TALK result
END IF

Related Keywords

Keyword               | Description
--------------------- | --------------------------------------
ADD BOT               | Add a bot to the current session
BROADCAST TO BOTS     | Send message to all bots
TRANSFER CONVERSATION | Hand off conversation to another bot

Config.csv Options

name,value
a2a-enabled,true
a2a-timeout,30
a2a-max-hops,5
a2a-retry-count,3

Option          | Default | Description
--------------- | ------- | -------------------------------------
a2a-enabled     | true    | Enable agent-to-agent communication
a2a-timeout     | 30      | Default timeout in seconds
a2a-max-hops    | 5       | Maximum delegation chain depth
a2a-retry-count | 3       | Number of retry attempts on failure

Best Practices

  1. Set appropriate timeouts - Long tasks need longer timeouts
  2. Handle failures gracefully - Always have a fallback
  3. Avoid circular delegation - Bot A → Bot B → Bot A
  4. Keep delegation chains short - Max 3-4 hops recommended
  5. Log delegations - Helps with debugging multi-agent flows
  6. Use descriptive bot names - billing-bot not bot2

Limitations

  • Maximum message size: 1MB
  • Maximum timeout: 300 seconds (5 minutes)
  • Maximum concurrent delegations: 10 per session
  • Target bot must be registered and active

See Also

RUN PYTHON / RUN JAVASCRIPT / RUN BASH

Executes code in a sandboxed environment. Enables safe execution of dynamic code for data processing, calculations, and automation tasks.

Syntax

result = RUN PYTHON "code"
result = RUN JAVASCRIPT "code"
result = RUN BASH "code"
result = RUN PYTHON WITH FILE "script.py"
result = RUN JAVASCRIPT WITH FILE "script.js"
result = RUN BASH WITH FILE "script.sh"

Parameters

Parameter | Type   | Description
--------- | ------ | ----------------------------------------------
code      | String | Inline code to execute
filepath  | String | Path to script file (with WITH FILE variant)

Returns

The output (stdout) from the executed code as a string.

Description

The RUN keywords execute code in isolated, sandboxed environments. This provides:

  • Security - Code runs in isolated containers (LXC, Docker, or Firecracker)
  • Flexibility - Use the right language for the task
  • Safety - Resource limits prevent runaway processes
  • Integration - Pass data between BASIC and other languages

The sandbox prevents:

  • File system access outside designated areas
  • Network access (unless explicitly enabled)
  • System calls and privilege escalation
  • Excessive CPU or memory usage

Examples

Basic Python Execution

' Simple calculation
result = RUN PYTHON "print(2 + 2)"
TALK "2 + 2 = " + result

' Data processing
code = "
import json
data = [1, 2, 3, 4, 5]
print(json.dumps({'sum': sum(data), 'avg': sum(data)/len(data)}))
"
stats = RUN PYTHON code
TALK "Statistics: " + stats

JavaScript for JSON Processing

' Parse and transform JSON
jsonData = GET "https://api.example.com/data"
code = "
const data = JSON.parse('" + jsonData + "');
const transformed = data.items.map(i => ({
    id: i.id,
    name: i.name.toUpperCase()
}));
console.log(JSON.stringify(transformed));
"
result = RUN JAVASCRIPT code
TALK result

Bash for System Tasks

' List files and get disk usage
result = RUN BASH "ls -la /data && df -h"
TALK "System info:\n" + result

Run Script from File

' Execute a Python script from .gbdrive
result = RUN PYTHON WITH FILE "scripts/analyze_data.py"
TALK "Analysis complete: " + result

' Run a bash script
output = RUN BASH WITH FILE "scripts/backup.sh"
PRINT "Backup output: " + output

Data Pipeline

' Fetch data, process with Python, store result
rawData = GET "https://api.example.com/sales"

pythonCode = "
import json
import statistics

data = json.loads('''" + rawData + "''')
sales = [item['amount'] for item in data]

result = {
    'total': sum(sales),
    'average': statistics.mean(sales),
    'median': statistics.median(sales),
    'std_dev': statistics.stdev(sales) if len(sales) > 1 else 0
}
print(json.dumps(result))
"

analysis = RUN PYTHON pythonCode
SAVE "sales_analysis.csv", analysis
TALK "Sales analysis saved!"

Machine Learning Inference

' Run ML model for prediction
inputData = #{ features: [1.5, 2.3, 4.1, 0.8] }

code = "
import json
import pickle

# Load pre-trained model (stored in sandbox)
with open('/data/model.pkl', 'rb') as f:
    model = pickle.load(f)

input_data = " + JSON(inputData) + "
prediction = model.predict([input_data['features']])[0]
print(json.dumps({'prediction': float(prediction)}))
"

result = RUN PYTHON code
prediction = JSON_PARSE(result)
TALK "Predicted value: " + prediction.prediction

Image Processing

' Process an uploaded image
imagePath = UPLOAD userImage, "uploads/"

code = "
from PIL import Image
import json

img = Image.open('/data/" + imagePath + "')
width, height = img.size
format = img.format

# Resize if too large
if width > 1920:
    ratio = 1920 / width
    new_size = (1920, int(height * ratio))
    img = img.resize(new_size)
    img.save('/data/resized_" + imagePath + "')

print(json.dumps({
    'original_size': [width, height],
    'format': format,
    'resized': width > 1920
}))
"

result = RUN PYTHON code
TALK "Image processed: " + result

Multi-Language Pipeline

' Use different languages for different strengths
data = GET "https://api.example.com/raw-data"

' Step 1: Clean data with Python (pandas)
cleanCode = "
import pandas as pd
import json
df = pd.read_json('''" + data + "''')
df = df.dropna()
df = df[df['value'] > 0]
print(df.to_json(orient='records'))
"
cleanedData = RUN PYTHON cleanCode

' Step 2: Transform with JavaScript (fast JSON manipulation)
transformCode = "
const data = JSON.parse('" + cleanedData + "');
const result = data.reduce((acc, item) => {
    acc[item.category] = (acc[item.category] || 0) + item.value;
    return acc;
}, {});
console.log(JSON.stringify(result));
"
aggregated = RUN JAVASCRIPT transformCode

TALK "Results: " + aggregated

Sandbox Configuration

Runtime Options

The sandbox supports multiple isolation backends:

Runtime     | Security | Performance | Requirements
----------- | -------- | ----------- | --------------------
LXC         | High     | Excellent   | LXC installed
Docker      | High     | Good        | Docker daemon
Firecracker | Highest  | Good        | Firecracker binary
Process     | Low      | Best        | None (fallback)

Config.csv Options

name,value
sandbox-runtime,lxc
sandbox-timeout,30
sandbox-memory-mb,512
sandbox-cpu-percent,50
sandbox-network,false
sandbox-python-packages,numpy,pandas,pillow
sandbox-allowed-paths,/data,/tmp

Option                  | Default    | Description
----------------------- | ---------- | ----------------------------------
sandbox-runtime         | lxc        | Isolation backend to use
sandbox-timeout         | 30         | Maximum execution time (seconds)
sandbox-memory-mb       | 512        | Memory limit in MB
sandbox-cpu-percent     | 50         | CPU usage limit
sandbox-network         | false      | Allow network access
sandbox-python-packages | (none)     | Pre-installed Python packages
sandbox-allowed-paths   | /data,/tmp | Accessible filesystem paths

Security Considerations

What’s Blocked

  • Direct file system access outside sandbox
  • Network connections (unless sandbox-network=true)
  • System calls (fork, exec, etc.)
  • Environment variable access
  • Process spawning

What’s Allowed

  • Standard library operations
  • File I/O within /data and /tmp
  • Computation up to resource limits
  • Pre-approved packages

Input Sanitization

' IMPORTANT: Always sanitize user input before embedding in code
HEAR userInput
' Remove potential code injection
safeInput = REPLACE(userInput, "'", "\'")
safeInput = REPLACE(safeInput, '"', '\"')

code = "print('User said: " + safeInput + "')"
result = RUN PYTHON code

Error Handling

' Handle execution errors
ON ERROR RESUME NEXT

result = RUN PYTHON "
import nonexistent_module
print('hello')
"

IF ERROR THEN
    TALK "Code execution failed: " + ERROR_MESSAGE
    ' Fall back to alternative approach
ELSE
    TALK result
END IF

Resource Limits

Resource       | Default | Maximum
-------------- | ------- | --------
Execution time | 30s     | 300s
Memory         | 512 MB  | 4096 MB
CPU            | 50%     | 100%
Output size    | 1 MB    | 10 MB
File writes    | 10 MB   | 100 MB

Related Keywords

Keyword | Description
------- | ----------------------------
LLM     | AI-generated code execution
GET     | Fetch data for processing
SAVE    | Store processed results

Best Practices

  1. Keep code snippets small - Large scripts should use WITH FILE
  2. Sanitize all inputs - Never trust user data in code strings
  3. Set appropriate timeouts - Match timeout to expected execution time
  4. Use the right language - Python for data, JS for JSON, Bash for files
  5. Handle errors gracefully - Code can fail for many reasons
  6. Pre-install packages - Don’t pip install in every execution
  7. Log execution times - Monitor for performance issues

Limitations

  • No persistent state between executions
  • No GPU access (use dedicated ML endpoints instead)
  • No interactive input (stdin)
  • No graphical output (use file output instead)
  • Package installation not allowed at runtime

See Also

USE KB

Activate a knowledge base collection for semantic search.

Syntax

USE KB "collection_name"
USE KB collection_variable

Parameters

Parameter       | Type   | Description
--------------- | ------ | ------------------------------
collection_name | String | Name of folder inside .gbkb/

Description

Loads a knowledge base collection, enabling automatic semantic search for that content. Once active, the LLM searches this collection when answering questions - no explicit search code needed.

Examples

Basic Usage

USE KB "policies"
' Bot now answers questions using policy documents

Multiple Collections

USE KB "products"
USE KB "pricing"
USE KB "support"
' All three collections searchable

Conditional Loading

dept = GET user_department
IF dept = "HR" THEN
  USE KB "hr_policies"
ELSE IF dept = "IT" THEN
  USE KB "it_docs"
END IF

Dynamic Collection

topic = HEAR "What topic?"
USE KB topic

How It Works

  1. User asks question
  2. System searches active collections
  3. Top matching chunks added to LLM context
  4. LLM generates informed response

Collection Structure

bot.gbkb/
├── policies/      → USE KB "policies"
├── products/      → USE KB "products"
└── support/       → USE KB "support"

Supported File Types

PDF, DOCX, TXT, MD, HTML, CSV, JSON

Performance

  • Each collection uses ~50MB RAM when active
  • First search: 100-200ms
  • Subsequent: 20-50ms (cached)

Tip: Load only what’s needed, clear when done.

Common Patterns

Role-Based

SWITCH GET user_role
  CASE "manager"
    USE KB "management"
  CASE "developer"
    USE KB "documentation"
  CASE "customer"
    USE KB "products"
END SWITCH

With Context

USE KB "technical_docs"
SET CONTEXT "You are a technical expert" AS prompt

With Website

USE WEBSITE "https://docs.example.com"
USE KB "documentation"
' Fresh web content now searchable

Error Handling

TRY
  USE KB user_requested_kb
CATCH
  TALK "That knowledge base doesn't exist"
END TRY

See Also

THINK KB

Perform explicit knowledge base reasoning with structured results.

Syntax

results = THINK KB "query_text"
results = THINK KB query_variable

Parameters

Parameter      | Type     | Description
-------------- | -------- | -----------------------------------------
query_text     | String   | The question or search query to execute
query_variable | Variable | Variable containing the search query

Description

Unlike automatic KB search (USE KB), THINK KB provides explicit control over knowledge base queries with structured results for analysis and decision-making.

Return Structure

{
  "results": [
    {
      "content": "Relevant text content",
      "source": "document.pdf", 
      "kb_name": "knowledge_base_name",
      "relevance": 0.85,
      "tokens": 150
    }
  ],
  "summary": "Brief summary of findings",
  "confidence": 0.78,
  "total_results": 5,
  "sources": ["doc1.pdf", "doc2.md"],
  "query": "original search query",
  "kb_count": 2
}

Examples

Basic Usage

USE KB "policies"
results = THINK KB "What is the remote work policy?"

TALK results.summary
PRINT "Confidence: " + results.confidence

FOR i = 0 TO results.results.length - 1
  result = results.results[i]
  PRINT "Source: " + result.source
  PRINT "Content: " + result.content
NEXT i

Decision Making with Confidence

USE KB "technical_docs"
results = THINK KB "How to fix database errors?"

IF results.confidence > 0.8 THEN
  TALK "I found reliable information: " + results.summary
  top_result = results.results[0]
  TALK "From: " + top_result.source
  TALK top_result.content
ELSE IF results.confidence > 0.5 THEN
  TALK "Found some information, but not completely certain"
ELSE
  TALK "Couldn't find reliable information. Consult additional resources."
END IF

Multi-Stage Reasoning

USE KB "research_papers"

' Stage 1: General search
general = THINK KB "machine learning applications"

' Stage 2: Specific search based on findings
IF general.confidence > 0.6 THEN
  specific_query = "deep learning " + general.results[0].content.substring(0, 50)
  specific = THINK KB specific_query
  
  TALK "Overview: " + general.summary
  TALK "Details: " + specific.summary
END IF

Source Filtering

results = THINK KB "contract clauses"

pdf_results = []
FOR i = 0 TO results.results.length - 1
  result = results.results[i]
  IF result.source CONTAINS ".pdf" THEN
    pdf_results.push(result)
  END IF
NEXT i

TALK "Found " + pdf_results.length + " PDF results"

Key Differences from USE KB

Feature    | USE KB (Automatic)          | THINK KB (Explicit)
---------- | --------------------------- | --------------------------------
Trigger    | Automatic on user questions | Explicit keyword execution
Control    | Behind-the-scenes           | Full programmatic control
Results    | Injected into LLM context   | Structured data for processing
Confidence | Not exposed                 | Explicit confidence scoring
Filtering  | Not available               | Full result filtering

Best Practices

  1. Activate KBs First: Use USE KB to activate knowledge bases
  2. Check Confidence: Use thresholds for decision making
  3. Handle Empty Results: Check total_results before accessing array
  4. Filter by Relevance: Consider filtering results below 0.5 relevance
  5. Cache Results: Store in variables for multiple uses

Error Handling

TRY
  results = THINK KB user_query
  IF results.total_results = 0 THEN
    TALK "No information found for: " + user_query
  END IF
CATCH error
  TALK "Search failed: " + error.message
END TRY

Performance

  • Search Time: 100-500ms depending on KB size
  • Memory: Results cached for session
  • Token Limits: Respects 2000 token default limit
  • Concurrent: Searches all active KBs in parallel

See Also

GET Keyword

The GET keyword retrieves content from a specified source — either a remote URL or a local file stored in the bot’s configured storage system.
It is used to fetch data dynamically during script execution.


Syntax

variable = GET "source"
GET "source" INTO variable

Parameters

  • "source" — The location of the content to retrieve.
    This can be:
    • An HTTP/HTTPS URL (e.g., "https://api.example.com/data")
    • A relative path to a file stored in the bot’s drive bucket or local storage.
  • variable — The variable that will receive the fetched content.

Description

GET performs a read operation from the specified source.
If the source is a URL, the bot sends an HTTP GET request and retrieves the response body.
If the source is a file path, the bot reads the file content directly from its configured storage (e.g., drive component or local filesystem).

The command automatically handles text extraction from PDF and DOCX files, converting them to plain UTF‑8 text.
If the request fails or the file cannot be found, an error message is returned.

This keyword is essential for integrating external APIs, reading stored documents, and dynamically loading data into scripts.


Example

' Fetch data from a remote API
GET "https://api.example.com/users" INTO RESPONSE
PRINT RESPONSE

' Read a local file from the bot’s storage
GET "reports/summary.txt" INTO CONTENT
TALK CONTENT

Implementation Notes

  • Implemented in Rust under src/file/mod.rs and src/web_automation/crawler.rs.
  • Uses the reqwest library for HTTP requests with timeout and error handling.
  • Automatically detects file type and performs extraction for supported formats (PDF, DOCX, TXT).
  • Validates paths to prevent directory traversal or unsafe access.
  • Runs in a separate thread to avoid blocking the main engine.

Related Keywords

  • FIND — Searches for data within the current context.
  • FORMAT — Formats retrieved data for display.
  • PRINT — Outputs data to the console or chat.

Summary

GET is a versatile keyword for retrieving external or stored content.
It enables bots to access APIs, read documents, and integrate dynamic data sources seamlessly within BASIC scripts.

SET

Assign values to variables in BASIC dialogs.

Syntax

SET variable = value

or simply:

variable = value

Parameters

Parameter | Type       | Description
--------- | ---------- | ----------------------------------------------------------
variable  | Identifier | Variable name to assign to
value     | Any        | Value to assign (string, number, boolean, array, object)

Description

The SET keyword assigns values to variables within BASIC dialog scripts. Variables are dynamically typed and can hold any type of value. The SET keyword is optional - you can use direct assignment with =.

Variables are scoped to the current dialog execution and persist throughout the conversation session until explicitly changed or the session ends.

Examples

Basic Assignment

SET name = "John Doe"
SET age = 25
SET is_premium = true
SET score = 98.5

Direct Assignment (without SET)

name = "Jane Smith"
count = 0
message = "Welcome!"

Array Assignment

SET colors = ["red", "green", "blue"]
SET numbers = [1, 2, 3, 4, 5]
SET mixed = ["text", 123, true]

Object/Map Assignment

SET user = {
    "name": "Alice",
    "email": "alice@example.com",
    "age": 30,
    "active": true
}

Dynamic Values

SET current_time = NOW()
SET user_input = HEAR "What's your name?"
SET calculated = price * quantity * tax_rate
SET formatted = FORMAT("Hello, {0}!", username)

Variable Types

BASIC supports these variable types:

  • String: Text values
  • Number: Integers and decimals
  • Boolean: true/false
  • Array: Ordered lists
  • Object: Key-value maps
  • Null: Empty/undefined

Variable Naming

Valid variable names:

  • Start with letter or underscore
  • Contain letters, numbers, underscores
  • Case-sensitive
  • No reserved keywords

Examples:

SET userName = "John"
SET user_name = "John"
SET _private = true
SET value123 = 456
SET firstName = "Jane"

Invalid names:

' These will cause errors
SET 123name = "error"      ' Starts with number
SET user-name = "error"    ' Contains hyphen
SET if = "error"           ' Reserved keyword

Variable Scope

Session Variables

Regular variables exist for the session:

SET session_data = "persists during conversation"

Global Variables

Use special prefixes for broader scope:

SET $global_var = "accessible across dialogs"
SET @bot_var = "bot-level variable"

Temporary Variables

SET _temp = "temporary use"
' Prefix with underscore for temporary/internal use

Type Conversion

Variables automatically convert types when needed:

SET text = "123"
SET number = text + 0       ' Converts to number: 123
SET back_to_text = number + ""  ' Converts to string: "123"
SET boolean = number > 100  ' Converts to boolean: true

Operations on Variables

String Operations

SET full_name = first_name + " " + last_name
SET uppercase = UPPER(name)
SET length = LEN(message)
SET substring = MID(text, 1, 5)

Numeric Operations

SET sum = a + b
SET difference = a - b
SET product = a * b
SET quotient = a / b
SET remainder = a MOD b
SET power = a ^ b

Array Operations

SET first = colors[0]
SET last = colors[LEN(colors) - 1]
colors[1] = "yellow"  ' Modify array element
SET combined = array1 + array2  ' Concatenate

Object/Map Operations

SET email = user["email"]
SET age = user.age
user["status"] = "active"
user.last_login = NOW()

Conditional Assignment

SET status = IF(score >= 70, "pass", "fail")
SET discount = IF(is_member, 0.2, 0.1)
SET greeting = IF(hour < 12, "Good morning", "Good afternoon")

Common Patterns

Counter Variables

SET counter = 0
FOR i = 1 TO 10
    counter = counter + 1
NEXT

Flag Variables

SET is_complete = false
' ... process ...
SET is_complete = true

Accumulator Variables

SET total = 0
FOR EACH item IN cart
    total = total + item.price
NEXT

State Variables

SET state = "initial"
' ... logic ...
SET state = "processing"
' ... more logic ...
SET state = "complete"

Best Practices

  1. Use descriptive names: customer_email instead of e
  2. Initialize variables: Set initial values before use
  3. Use consistent naming: camelCase or snake_case
  4. Avoid global pollution: Use local variables when possible
  5. Clean up large variables: Set to null when done
  6. Document complex variables: Add comments
  7. Validate before use: Check if variable exists

Error Handling

' Check if variable exists
IF EXISTS(user_data) THEN
    SET name = user_data.name
ELSE
    SET name = "Guest"
END IF

' Safe assignment with default
SET value = GET_VALUE_OR_DEFAULT(config.setting, "default")

Memory Management

' Clear large variables when done
SET big_data = LOAD_FILE("large.json")
' ... use big_data ...
SET big_data = null  ' Free memory

Related Keywords

  • GET - Retrieve data from external sources
  • HEAR - Get user input into variable
  • FORMAT - Format values for assignment
  • SET BOT MEMORY - Persistent storage

Implementation Notes

Variables are stored in the BASIC engine’s scope map and persist for the duration of the dialog execution. The SET keyword is syntactic sugar - the parser treats both SET x = y and x = y identically.

ON Keyword

Syntax

ON trigger-type OF "table-name"

Parameters

  • trigger-type – The type of database trigger to listen for. Valid values are:
    • INSERT
    • UPDATE
    • DELETE
  • "table-name" – The name of the database table to monitor.

Description

ON registers a database trigger for the current session. When the specified event occurs on the given table, the engine records the trigger in the system_automations table, linking it to the session. This enables scripts to react to data changes by executing associated actions (e.g., sending a notification, updating a variable).

The keyword performs the following steps:

  1. Validates the trigger-type and converts it to the internal TriggerKind enum.
  2. Constructs a parameter name in the form <table>_<trigger>.rhai (e.g., orders_insert.rhai).
  3. Inserts a row into system_automations with the trigger kind, target table, and parameter name.
  4. Returns the number of rows affected (normally 1 on success).

If the trigger type is invalid, the keyword raises a runtime error.

Example

ON INSERT OF "orders"
TALK "A new order was added. Processing..."

After execution, any new row inserted into the orders table will cause the session to be notified, allowing the script to handle the event.
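
The same pattern applies to the other trigger types. For example, reacting to updates:

ON UPDATE OF "inventory"
TALK "An inventory row changed. Checking stock levels..."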

Implementation Notes

  • The keyword runs synchronously but performs the database insertion on a separate thread to avoid blocking.
  • Errors during insertion are logged and returned as runtime errors.

SET SCHEDULE

Schedule a script or task to run at specified times using natural language or cron expressions.

Syntax

SET SCHEDULE expression

Parameters

Parameter  | Type   | Description
---------- | ------ | ---------------------------------------------
expression | String | Natural language schedule or cron expression

Description

The SET SCHEDULE keyword schedules the current script to run automatically at specified intervals. It supports natural language expressions that are automatically converted to cron format, making scheduling intuitive and readable.

Natural Language Patterns

Time Intervals

SET SCHEDULE "every minute"
SET SCHEDULE "every 5 minutes"
SET SCHEDULE "every 15 minutes"
SET SCHEDULE "every 30 minutes"
SET SCHEDULE "every hour"
SET SCHEDULE "every 2 hours"
SET SCHEDULE "every 6 hours"
SET SCHEDULE "every day"
SET SCHEDULE "every week"
SET SCHEDULE "every month"
SET SCHEDULE "every year"

Aliases

SET SCHEDULE "hourly"      ' Same as "every hour"
SET SCHEDULE "daily"       ' Same as "every day"
SET SCHEDULE "weekly"      ' Same as "every week"
SET SCHEDULE "monthly"     ' Same as "every month"
SET SCHEDULE "yearly"      ' Same as "every year"

Specific Times

SET SCHEDULE "at 9am"
SET SCHEDULE "at 9:30am"
SET SCHEDULE "at 2pm"
SET SCHEDULE "at 14:00"
SET SCHEDULE "at midnight"
SET SCHEDULE "at noon"

Day-Specific

SET SCHEDULE "every monday"
SET SCHEDULE "every friday"
SET SCHEDULE "every sunday"
SET SCHEDULE "every monday at 9am"
SET SCHEDULE "every friday at 5pm"

Weekdays & Weekends

SET SCHEDULE "weekdays"              ' Monday-Friday at midnight
SET SCHEDULE "every weekday"         ' Same as above
SET SCHEDULE "weekdays at 8am"       ' Monday-Friday at 8 AM
SET SCHEDULE "weekends"              ' Saturday & Sunday at midnight
SET SCHEDULE "weekends at 10am"      ' Saturday & Sunday at 10 AM

Combined Patterns

SET SCHEDULE "every day at 9am"
SET SCHEDULE "every day at 6:30pm"
SET SCHEDULE "every hour from 9 to 17"

Business Hours

SET SCHEDULE "business hours"                           ' Every hour 9-17, Mon-Fri
SET SCHEDULE "every hour during business hours"         ' Same as above
SET SCHEDULE "every 30 minutes during business hours"   ' Every 30 min, 9-17, Mon-Fri
SET SCHEDULE "every 15 minutes during business hours"

Raw Cron (Advanced)

You can still use standard cron expressions for maximum flexibility:

SET SCHEDULE "0 * * * *"       ' Every hour at minute 0
SET SCHEDULE "*/5 * * * *"     ' Every 5 minutes
SET SCHEDULE "0 9-17 * * 1-5"  ' Hourly 9AM-5PM on weekdays
SET SCHEDULE "0 0 1 * *"       ' First day of each month

Cron Expression Format (Reference)

┌───────────── minute (0-59)
│ ┌───────────── hour (0-23)
│ │ ┌───────────── day of month (1-31)
│ │ │ ┌───────────── month (1-12)
│ │ │ │ ┌───────────── day of week (0-6, Sunday=0)
│ │ │ │ │
* * * * *

Quick Reference Table

Natural Language        | Cron Equivalent | Description
----------------------- | --------------- | ---------------------
every minute            | * * * * *       | Runs every minute
every 5 minutes         | */5 * * * *     | Every 5 minutes
every hour              | 0 * * * *       | Start of every hour
hourly                  | 0 * * * *       | Same as every hour
every day               | 0 0 * * *       | Daily at midnight
daily                   | 0 0 * * *       | Same as every day
at 9am                  | 0 9 * * *       | Daily at 9 AM
at 9:30am               | 30 9 * * *      | Daily at 9:30 AM
at noon                 | 0 12 * * *      | Daily at noon
at midnight             | 0 0 * * *       | Daily at midnight
every monday            | 0 0 * * 1       | Monday at midnight
every monday at 9am     | 0 9 * * 1       | Monday at 9 AM
weekdays                | 0 0 * * 1-5     | Mon-Fri at midnight
weekdays at 8am         | 0 8 * * 1-5     | Mon-Fri at 8 AM
weekends                | 0 0 * * 0,6     | Sat-Sun at midnight
every week              | 0 0 * * 0       | Sunday at midnight
weekly                  | 0 0 * * 0       | Same as every week
every month             | 0 0 1 * *       | 1st of month
monthly                 | 0 0 1 * *       | Same as every month
business hours          | 0 9-17 * * 1-5  | Hourly 9-5 weekdays
every hour from 9 to 17 | 0 9-17 * * *    | Hourly 9 AM - 5 PM

Examples

Daily Report at 9 AM

SET SCHEDULE "every day at 9am"

data = GET "reports/daily.json"
summary = LLM "Summarize key metrics: " + data
SEND MAIL "team@company.com", "Daily Report", summary

Hourly Data Sync

SET SCHEDULE "every hour"

fresh_data = GET "https://api.example.com/data"
SET BOT MEMORY "cached_data", fresh_data
PRINT "Data refreshed at " + NOW()

Every 15 Minutes Monitoring

SET SCHEDULE "every 15 minutes"

status = GET "https://api.example.com/health"
IF status.healthy = false THEN
    SEND MAIL "ops@company.com", "Alert: Service Down", status.message
END IF

Weekly Newsletter (Monday 10 AM)

SET SCHEDULE "every monday at 10am"

subscribers = FIND "subscribers", "active=true"
content = LLM "Generate weekly newsletter with latest updates"

FOR EACH email IN subscribers
    SEND MAIL email.address, "Weekly Update", content
NEXT

Business Hours Support Check

SET SCHEDULE "every 30 minutes during business hours"

tickets = FIND "support_tickets", "status=open AND priority=high"
IF LEN(tickets) > 5 THEN
    TALK TO "support-manager", "High priority ticket queue: " + LEN(tickets) + " tickets waiting"
END IF

Weekend Backup

SET SCHEDULE "weekends at 3am"

PRINT "Starting weekend backup..."
result = POST "https://backup.service/run", { "type": "full" }
SET BOT MEMORY "last_backup", NOW()
SEND MAIL "admin@company.com", "Backup Complete", result

End of Month Report

SET SCHEDULE "monthly"

' Runs on 1st of each month at midnight
month_data = AGGREGATE "sales", "SUM(amount)", "month=" + MONTH(DATEADD("month", -1, NOW()))
report = LLM "Generate monthly sales report for: " + month_data
SEND MAIL "finance@company.com", "Monthly Sales Report", report

Best Practices

  1. Use Natural Language: Prefer readable expressions like "every day at 9am" over cron syntax

  2. Stagger Tasks: Avoid scheduling all tasks at the same time

    ' Good: Different times
    SET SCHEDULE "every day at 2am"   ' Cleanup
    SET SCHEDULE "every day at 3am"   ' Backup
    SET SCHEDULE "every day at 4am"   ' Reports
    
  3. Consider Time Zones: Schedule times are in the server’s local time

  4. Error Handling: Always include error recovery

    SET SCHEDULE "every hour"
    
    TRY
        PROCESS_DATA()
    CATCH
        PRINT "Schedule failed: " + ERROR_MESSAGE
        SEND MAIL "admin@example.com", "Schedule Error", ERROR_DETAILS
    END TRY
    
  5. Idempotency: Make scheduled tasks safe to re-run

    last_run = GET BOT MEMORY "last_process_time"
    IF DATEDIFF("minute", last_run, NOW()) > 55 THEN
        PROCESS()
        SET BOT MEMORY "last_process_time", NOW()
    END IF
    

Cancel Schedule

Schedules are automatically canceled when SET SCHEDULE is removed from the .bas file. Simply delete or comment out the line:

' SET SCHEDULE "every hour"   ' Commented out = disabled

Limitations

  • Maximum 100 scheduled tasks per bot
  • Minimum interval: 1 minute
  • Scripts timeout after 5 minutes by default
  • Time zone is the server’s local time

Monitoring

Scheduled tasks are logged automatically:

  • Execution start/end times
  • Success/failure status
  • Error messages if any
  • Performance metrics

Implementation

Located in src/basic/keywords/set_schedule.rs

The implementation:

  • Uses a fast rule-based natural language parser (no LLM required)
  • Falls back to raw cron if input is already in cron format
  • Validates expressions before saving
  • Integrates with system scheduler
  • Persists schedules in database
  • Handles concurrent execution
  • Provides retry logic for failures

CREATE TASK

Create and assign tasks within the task management system.

Syntax

CREATE TASK title, description, assignee, due_date, priority

Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| title | String | Task title/name |
| description | String | Detailed task description |
| assignee | String | Email or user ID of the assignee |
| due_date | String | Due date in format “YYYY-MM-DD” or relative like “tomorrow”, “next week” |
| priority | String | Task priority: “low”, “medium”, “high”, “urgent” |

Description

The CREATE TASK keyword creates tasks in the task engine system. It supports automatic assignment to users or groups, due date tracking with reminders, priority-based organization, calendar integration, email notifications to assignees, and progress tracking.

Examples

Basic Task Creation

CREATE TASK "Review proposal", "Review and provide feedback on Q4 proposal", "john@example.com", "2024-01-15", "high"

Task with Current User

user_email = GET "user.email"
CREATE TASK "Follow up", "Contact customer about renewal", user_email, "tomorrow", "medium"

Bulk Task Creation

team = ["alice@example.com", "bob@example.com", "carol@example.com"]
FOR EACH member IN team
    CREATE TASK "Complete training", "Finish security awareness training", member, "next week", "medium"
NEXT

Task from User Input

task_info = HEAR "What task should I create?"
CREATE TASK task_info, "User requested task", "support@example.com", "today", "high"
TALK "Task created and assigned to support team"

Return Value

The keyword returns a task object containing: task_id, a unique task identifier; status, the task state (such as “created”, “assigned”, “in_progress”, or “completed”); created_at, the creation timestamp; url, a link to the task in the web interface; and reminder_set, whether a reminder was configured.
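
For example, a minimal sketch reading the returned object (field names per the description above):

task = CREATE TASK "Review proposal", "Review and comment on Q4 proposal", "john@example.com", "2024-01-15", "high"
TALK "Created task " + task.task_id + " (status: " + task.status + ")"
TALK "Open it here: " + task.url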

Task Statuses

Tasks progress through a defined lifecycle. The created status indicates initial creation, followed by assigned when the task has been assigned to a user. Once work begins, the status changes to in_progress. If the task is waiting on a dependency, it enters the blocked state. When finished, it reaches completed, or alternatively cancelled if the task was terminated without completion.
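
A sketch that branches on the lifecycle state with SWITCH (covered later in this reference), using the status values above:

task = CREATE TASK "Deploy release", "Ship version 2.1", "dev@example.com", "tomorrow", "high"

SWITCH task.status
  CASE "created", "assigned"
    TALK "Task is queued"
  CASE "blocked"
    TALK "Task is waiting on a dependency"
  DEFAULT
    TALK "Task state: " + task.status
END SWITCH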

Integration Points

Calendar Integration

Tasks automatically appear in the assignee’s calendar when a due date is specified, calendar integration is enabled, and the user has calendar permissions.

Email Notifications

The system sends notifications for task assignment, due date reminders, status changes, and when comments are added.

Task Dependencies

Tasks can be linked together to create parent-child relationships:

parent_task = CREATE TASK "Project", "Main project", "pm@example.com", "next month", "high"
subtask = CREATE TASK "Research", "Initial research", "analyst@example.com", "next week", "medium"
LINK_TASKS parent_task.task_id, subtask.task_id

Priority Levels

| Priority | Description | SLA |
|----------|-------------|-----|
| urgent | Immediate attention required | 4 hours |
| high | Important, time-sensitive | 1 day |
| medium | Standard priority | 3 days |
| low | Non-urgent | 1 week |

Date Formats

The keyword supports multiple date formats. Absolute dates can be specified as “2024-01-15” or “01/15/2024”. Relative dates include “today”, “tomorrow”, “next week”, and “in 3 days”. Natural language formats like “Monday”, “next Friday”, and “end of month” are also supported.
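
For example, the same call with a relative date and a natural-language date (addresses are hypothetical):

CREATE TASK "Call vendor", "Confirm pricing", "ops@example.com", "in 3 days", "medium"
CREATE TASK "Submit numbers", "Monthly figures", "finance@example.com", "end of month", "high"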

Error Handling

The keyword validates that the assignee exists in the system, checks that the date is in the future, verifies the priority is valid, returns an error if task creation fails, and handles permission issues gracefully.
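
A hedged sketch wrapping creation in TRY/CATCH, the same recovery pattern shown for scheduled tasks earlier:

TRY
    task = CREATE TASK "Audit access", "Quarterly access review", "auditor@example.com", "next week", "high"
CATCH
    TALK "Could not create task: " + ERROR_MESSAGE
END TRY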

Permissions

To create tasks, the user must have task creation permission, project member status, admin privileges, or delegation rights from the assignee.

Best Practices

Use clear, action-oriented titles that describe what needs to be done. Include detailed descriptions with acceptance criteria so the assignee understands the requirements. Set realistic deadlines that can actually be achieved. Reserve high and urgent priorities for tasks that truly warrant them rather than marking everything as urgent. Verify the assignee can handle the task before assignment. Follow up periodically to check task status and provide assistance if needed.

Advanced Usage

Task Templates

template = GET_TASK_TEMPLATE("customer_onboarding")
CREATE TASK template.title, template.description, assigned_user, due_date, template.priority

Conditional Creation

IF urgency = "high" AND department = "support" THEN
    CREATE TASK "Urgent Support", issue_description, "support-lead@example.com", "today", "urgent"
ELSE
    CREATE TASK "Support Request", issue_description, "support@example.com", "tomorrow", "medium"
END IF

Task with Attachments

task = CREATE TASK "Review document", "Please review attached", reviewer, deadline, "high"
' Note: Use document sharing systems for attachments

Related Keywords

The BOOK keyword schedules meetings instead of tasks. Use SET SCHEDULE to create recurring tasks. The SEND MAIL keyword sends task notifications, and ADD MEMBER adds users to task groups.

Database Tables

Tasks are stored across several database tables. The tasks table holds main task records. User assignments are tracked in task_assignments. Discussions happen in task_comments. Related files are referenced in task_attachments. The task_history table records status changes over time.

Implementation

The CREATE TASK keyword is implemented in src/basic/keywords/create_task.rs. It integrates with the task engine module for task management, the calendar engine for scheduling, the email module for notifications, and the storage module for attachments.

FOR EACH Keyword

Syntax

FOR EACH $var IN $collection
    // block of statements
NEXT $var

Parameters

  • $var – Identifier that will hold each element of the collection during iteration.
  • $collection – An array or iterable expression whose items will be traversed.

Description

FOR EACH iterates over every element of the supplied collection, assigning the current element to the loop variable $var for the duration of the block. The block is executed once per element. After the loop finishes, execution continues after the matching NEXT $var statement.

If the collection is not an array, the keyword raises a runtime error indicating the expected type.

Example

SET numbers = [1, 2, 3, 4, 5]
FOR EACH n IN numbers
    TALK "Number: " + n
NEXT n
TALK "All numbers processed."

The script outputs each number in the list sequentially and then prints a final message.

Control Flow

  • EXIT FOR can be used inside the block to break out of the loop early (see the sketch below).
  • Nested FOR EACH loops are supported; each must have a distinct loop variable.
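
A minimal sketch using EXIT FOR to stop at the first match (orders and target_id are hypothetical variables):

FOR EACH order IN orders
    IF order.id = target_id THEN
        TALK "Found order " + order.id
        EXIT FOR      ' Stop scanning once the target is found
    END IF
NEXT order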

Implementation Notes

  • The keyword evaluates the collection expression once before entering the loop.
  • The loop variable is scoped to the block; it does not affect variables outside the loop (illustrated below).
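
A small sketch illustrating the scoping rule:

n = 99
FOR EACH n IN [1, 2, 3]
    TALK "Inside: " + n
NEXT n
TALK "Outside: " + n    ' Still 99 - the loop variable did not leak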

SWITCH

The SWITCH statement provides multi-way branching based on a value, allowing clean handling of multiple conditions without nested IF statements.

Syntax

SWITCH expression
  CASE value1
    ' statements for value1
  CASE value2
    ' statements for value2
  CASE value3, value4
    ' statements for value3 or value4
  DEFAULT
    ' statements if no case matches
END SWITCH

Parameters

| Element | Description |
|---------|-------------|
| expression | The value to evaluate |
| CASE value | A specific value to match |
| CASE value1, value2 | Multiple values for the same case |
| DEFAULT | Optional fallback when no case matches |

Description

SWITCH evaluates an expression once and compares it against multiple CASE values. When a match is found, the corresponding statements execute. Unlike some languages, General Bots BASIC does not require explicit BREAK statements - execution automatically stops after the matched case.

If no case matches and a DEFAULT block exists, those statements execute. If no case matches and there’s no DEFAULT, execution continues after END SWITCH.

Examples

Role-Based Knowledge Base Selection

role = GET role

SWITCH role
  CASE "manager"
    USE KB "management"
    USE KB "reports"
  CASE "developer"
    USE KB "documentation"
    USE KB "apis"
  CASE "customer"
    USE KB "products"
    USE KB "support"
  DEFAULT
    USE KB "general"
END SWITCH

Menu Selection

TALK "Select an option:"
TALK "1. Check balance"
TALK "2. Transfer funds"
TALK "3. View history"
TALK "4. Exit"

choice = HEAR "Enter your choice:"

SWITCH choice
  CASE "1"
    balance = GET BOT MEMORY "balance"
    TALK "Your balance is: $" + balance
  CASE "2"
    TALK "Transfer initiated..."
    ' Transfer logic here
  CASE "3"
    history = FIND "recent transactions"
    TALK history
  CASE "4"
    TALK "Goodbye!"
  DEFAULT
    TALK "Invalid option. Please choose 1-4."
END SWITCH

Multiple Values Per Case

day = GET day_of_week

SWITCH day
  CASE "monday", "tuesday", "wednesday", "thursday", "friday"
    TALK "It's a weekday. Office hours: 9am-5pm"
  CASE "saturday", "sunday"
    TALK "It's the weekend. We're closed."
  DEFAULT
    TALK "Unknown day"
END SWITCH

Language Selection

lang = GET user_language

SWITCH lang
  CASE "en"
    TALK "Hello! How can I help you today?"
  CASE "es"
    TALK "¡Hola! ¿Cómo puedo ayudarte hoy?"
  CASE "pt"
    TALK "Olá! Como posso ajudá-lo hoje?"
  CASE "fr"
    TALK "Bonjour! Comment puis-je vous aider?"
  DEFAULT
    TALK "Hello! How can I help you today?"
END SWITCH

Department Routing

department = HEAR "Which department? (sales, support, billing)"

SWITCH department
  CASE "sales"
    SET CONTEXT "You are a sales assistant. Focus on products and pricing."
    USE KB "products"
    USE KB "pricing"
  CASE "support"
    SET CONTEXT "You are a technical support agent. Help resolve issues."
    USE KB "troubleshooting"
    USE KB "faq"
  CASE "billing"
    SET CONTEXT "You are a billing specialist. Handle payment questions."
    USE KB "invoices"
    USE KB "payment_methods"
  DEFAULT
    TALK "I'll connect you with general assistance."
    USE KB "general"
END SWITCH

Status Code Handling

status = GET api_response_status

SWITCH status
  CASE "200"
    TALK "Request successful!"
  CASE "400"
    TALK "Bad request. Please check your input."
  CASE "401", "403"
    TALK "Authentication error. Please log in again."
  CASE "404"
    TALK "Resource not found."
  CASE "500", "502", "503"
    TALK "Server error. Please try again later."
  DEFAULT
    TALK "Unexpected status: " + status
END SWITCH

Numeric Ranges (Using Categories)

score = GET test_score
grade = ""

' Convert score to grade category
IF score >= 90 THEN
    grade = "A"
ELSE IF score >= 80 THEN
    grade = "B"
ELSE IF score >= 70 THEN
    grade = "C"
ELSE IF score >= 60 THEN
    grade = "D"
ELSE
    grade = "F"
END IF

SWITCH grade
  CASE "A"
    TALK "Excellent work!"
    SET BOT MEMORY "achievement", "honor_roll"
  CASE "B"
    TALK "Good job!"
  CASE "C"
    TALK "Satisfactory performance."
  CASE "D"
    TALK "You passed, but could improve."
  CASE "F"
    TALK "Please see a tutor for help."
END SWITCH

Comparison with IF-ELSE

Using IF-ELSE (Verbose)

IF color = "red" THEN
    TALK "Stop"
ELSE IF color = "yellow" THEN
    TALK "Caution"
ELSE IF color = "green" THEN
    TALK "Go"
ELSE
    TALK "Unknown signal"
END IF

Using SWITCH (Cleaner)

SWITCH color
  CASE "red"
    TALK "Stop"
  CASE "yellow"
    TALK "Caution"
  CASE "green"
    TALK "Go"
  DEFAULT
    TALK "Unknown signal"
END SWITCH

Notes

  • No fall-through: Each CASE is isolated; no BREAK needed
  • Case sensitivity: String comparisons are case-sensitive (see the normalization sketch below)
  • Expression evaluated once: The switch expression is evaluated only once
  • DEFAULT is optional: Without DEFAULT, unmatched values skip the block
  • Multiple values: Use commas to match multiple values in one CASE
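
Because matching is case-sensitive, a common pattern is to normalize input before the SWITCH; a sketch using LOWER:

answer = HEAR "Continue? (yes/no)"

SWITCH LOWER(answer)
  CASE "yes"
    TALK "Continuing..."
  CASE "no"
    TALK "Stopped."
  DEFAULT
    TALK "Please answer yes or no."
END SWITCH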

Best Practices

  1. Always include DEFAULT for robust error handling
  2. Use meaningful case values that are self-documenting
  3. Order cases logically - most common first or alphabetically
  4. Keep case blocks concise - extract complex logic to separate scripts

SAVE

Saves data to a database table using upsert (insert or update) semantics.

Syntax

Form 1: Save with object (classic)

SAVE "table", id, data

Form 2: Save with variables (direct)

SAVE "table", id, field1, field2, field3, ...

The variable names are used as column names automatically.

Parameters

Form 1 (with object)

| Parameter | Type | Description |
|-----------|------|-------------|
| table | String | The name of the database table |
| id | String/Number | The unique identifier for the record |
| data | Object | A map/object containing field names and values |

Form 2 (with variables)

| Parameter | Type | Description |
|-----------|------|-------------|
| table | String | The name of the database table |
| id | String/Number | The unique identifier for the record |
| field1, field2, … | Any | Variable references (names become column names) |

Description

SAVE performs an upsert operation:

  • If a record with the given id exists, it updates the record
  • If no record exists, it inserts a new one

The id parameter maps to the id column in the table.

Form 1 vs Form 2

Form 1 (with object) is useful when you need custom column names or complex data structures:

data = #{
    "customer_name": "João Silva",
    "email": "joao@example.com"
}
SAVE "customers", "CUST-001", data

Form 2 (with variables) is simpler - variable names become column names:

customerName = "João Silva"
email = "joao@example.com"
phone = "+5511999887766"
SAVE "customers", "CUST-001", customerName, email, phone
' Creates columns: customerName, email, phone

This eliminates the need for WITH blocks when variable names match your desired column names.

Perfect for TOOL Functions

This is especially useful for TOOL functions where variables are automatically filled by user input and can be saved directly without needing WITH blocks:

' TOOL function parameters - automatically filled by LLM
PARAM nome AS STRING LIKE "João Silva" DESCRIPTION "Nome completo"
PARAM email AS EMAIL LIKE "joao@example.com" DESCRIPTION "Email"
PARAM telefone AS STRING LIKE "(21) 98888-8888" DESCRIPTION "Telefone"

' Generate unique ID
customerId = "CUST-" + FORMAT(NOW(), "yyyyMMddHHmmss")

' Save directly - variable names become column names automatically!
' No need for WITH block - just pass the variables directly
SAVE "customers", customerId, nome, email, telefone

RETURN customerId

In TOOL functions, the parameters (variables like nome, email, telefone) are automatically extracted from user input by the LLM. The direct SAVE syntax allows you to persist these variables immediately without manual object construction.

Examples

Basic Save with Object (Form 1)

' Create data object using Rhai map syntax
data = #{
    "customer_name": "João Silva",
    "email": "joao@example.com",
    "phone": "+5511999887766",
    "status": "active"
}

SAVE "customers", "CUST-001", data

Save with Variables - No WITH Block Needed (Form 2)

' Variable names become column names automatically
casamentoId = "CAS-20250117-1234"
protocolo = "CAS123456"
nomeNoivo = "Carlos Eduardo"
nomeNoiva = "Juliana Cristina"
telefoneNoivo = "(21) 98888-8888"
telefoneNoiva = "(21) 97777-7777"
emailNoivo = "carlos@example.com"
emailNoiva = "juliana@example.com"
tipoCasamento = "RELIGIOSO_COM_EFEITO_CIVIL"
dataPreferencial = "2026-12-15"
horarioPreferencial = "16:00"

' Save directly without WITH block
SAVE "casamentos", casamentoId, protocolo, nomeNoivo, nomeNoiva, telefoneNoivo, telefoneNoiva, emailNoivo, emailNoiva, tipoCasamento, dataPreferencial, horarioPreferencial

Save Order Data (Direct Syntax - No Object)

order_id = "ORD-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
customer_id = "CUST-001"
customer_name = "João Silva"
total = 150.50
status = "pending"

' Save directly - variable names become columns
SAVE "orders", order_id, customer_id, customer_name, total, status

TALK "Order " + order_id + " saved successfully!"

Save Event Registration

' Event registration form data
eventId = "EVT-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
nome = "Maria Santos"
email = "maria@example.com"
telefone = "(11) 91234-5678"
dataEvento = "2025-03-15"
quantidadePessoas = 3
observacoes = "Precisa de cadeira de rodas"

' Direct save - no WITH block needed
SAVE "eventos", eventId, nome, email, telefone, dataEvento, quantidadePessoas, observacoes

TALK "Inscrição confirmada! ID: " + eventId

Update Existing Record

' If order exists, this updates it; otherwise creates it
order_id = "ORD-20250117-0001"
status = "shipped"
shipped_at = NOW()
tracking_number = "TRACK123456"

' Use object for updates to specific columns
update_data = #{
    "status": status,
    "shipped_at": shipped_at,
    "tracking_number": tracking_number
}

SAVE "orders", order_id, update_data

With WhatsApp Notification

WEBHOOK "new-customer"

customer_id = "CUST-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
phone = body.phone
name = body.name
source = "webhook"

' Direct save with variables
SAVE "customers", customer_id, phone, name, source

' Notify via WhatsApp
TALK TO "whatsapp:" + phone, "Welcome " + name + "! Your account has been created."

result_status = "ok"
result_customer_id = customer_id

Building Data Dynamically

' Start with empty map and add fields
data = #{}
data.name = customer_name
data.email = customer_email
data.phone = customer_phone
data.registered_at = NOW()

IF has_referral THEN
    data.referral_code = referral_code
    data.discount = 10
END IF

SAVE "customers", customer_id, data
WEBHOOK "create-order"

' Save order
order_id = body.order_id
customer_id = body.customer_id
total = body.total
status = "pending"

SAVE "orders", order_id, customer_id, total, status

' Save each line item
FOR EACH item IN body.items
    line_id = order_id + "-" + item.sku
    line_data = #{
        "order_id": order_id,
        "sku": item.sku,
        "quantity": item.quantity,
        "price": item.price
    }
    SAVE "order_items", line_id, line_data
NEXT item

' Notify customer
TALK TO "whatsapp:" + body.customer_phone, "Order #" + order_id + " confirmed!"

result_status = "ok"

Comparison: WITH Block vs Direct Syntax

Old way (WITH block):

WITH casamento
    id = casamentoId
    protocolo = protocolo
    noivo = nomeNoivo
    noiva = nomeNoiva
END WITH
SAVE "casamentos", casamento

New way (direct):

' Variable names become column names automatically
SAVE "casamentos", casamentoId, protocolo, nomeNoivo, nomeNoiva

The direct syntax is cleaner and avoids the intermediate object creation. Use it when your variable names match your desired column names.

Return Value

Returns an object with:

  • command: “save”
  • table: The table name
  • id: The record ID
  • rows_affected: Number of rows affected (1 for insert/update)
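
A quick sketch capturing the return object (reusing the data map from the Form 1 example):

result = SAVE "customers", "CUST-001", data
TALK "Saved " + result.id + " into " + result.table + " (" + result.rows_affected + " row affected)"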

Notes

  • Table must exist in the database
  • The id column is used as the primary key for conflict detection
  • All string values are automatically sanitized to prevent SQL injection
  • Column names are validated to prevent injection

Comparison with INSERT and UPDATE

| Keyword | Behavior |
|---------|----------|
| SAVE | Upsert - inserts if new, updates if exists |
| INSERT | Always creates new record (may fail if ID exists) |
| UPDATE | Only updates existing records (no-op if not found) |

' SAVE is preferred for most cases
SAVE "customers", id, data      ' Insert or update

' Use INSERT when you need a new record guaranteed
INSERT "logs", log_entry        ' Always creates new

' Use UPDATE for targeted updates
UPDATE "orders", "status=pending", update_data   ' Update matching rows

INSERT

The INSERT keyword adds new records to database tables, enabling bots to store data collected from conversations and integrations.


Syntax

INSERT INTO "table_name" WITH field1 = value1, field2 = value2
result = INSERT INTO "table_name" WITH field1 = value1, field2 = value2
INSERT INTO "table_name" ON connection WITH field1 = value1

Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| table_name | String | Name of the target database table |
| WITH | Clause | Field-value pairs for the new record |
| ON connection | String | Optional named database connection |
Description

INSERT creates a new record in a database table. The WITH clause specifies the field names and values for the new row. The keyword returns the newly created record, including any auto-generated fields like id.

Use cases include:

  • Storing user information collected during conversations
  • Logging interactions and events
  • Creating orders, tickets, or other business records
  • Saving form submissions

Examples

Basic Insert

' Insert a new customer record
INSERT INTO "customers" WITH
    name = "John Doe",
    email = "john@example.com",
    phone = "+1-555-0100"

TALK "Customer record created!"

Insert with Return Value

' Insert and capture the new record
result = INSERT INTO "customers" WITH
    name = customer_name,
    email = customer_email,
    created_at = NOW()

TALK "Customer created with ID: " + result.id

Insert from Conversation

' Collect data from user and insert
TALK "What is your name?"
HEAR user_name

TALK "What is your email?"
HEAR user_email

TALK "What is your phone number?"
HEAR user_phone

result = INSERT INTO "contacts" WITH
    name = user_name,
    email = user_email,
    phone = user_phone,
    source = "chatbot",
    created_at = NOW()

TALK "Thanks " + user_name + "! Your contact ID is " + result.id

Insert Order

' Create a new order
result = INSERT INTO "orders" WITH
    customer_id = user.id,
    product_id = selected_product.id,
    quantity = order_quantity,
    total = selected_product.price * order_quantity,
    status = "pending",
    created_at = NOW()

TALK "Order #" + result.id + " created for $" + result.total

Insert with Foreign Key

' Insert related records
customer = INSERT INTO "customers" WITH
    name = customer_name,
    email = customer_email

address = INSERT INTO "addresses" WITH
    customer_id = customer.id,
    street = street_address,
    city = city_name,
    postal_code = zip_code,
    country = "US"

TALK "Customer and address saved!"

Insert to Named Connection

' Insert to a specific database
INSERT INTO "audit_log" ON "analytics_db" WITH
    event = "user_signup",
    user_id = user.id,
    timestamp = NOW(),
    ip_address = session.ip

Batch Insert

' Insert multiple records from a data source
new_contacts = READ "imports/contacts.csv" AS TABLE

inserted_count = 0

FOR EACH contact IN new_contacts
    INSERT INTO "contacts" WITH
        name = contact.name,
        email = contact.email,
        phone = contact.phone,
        imported_at = NOW()
    
    inserted_count = inserted_count + 1
NEXT

TALK "Imported " + inserted_count + " contacts"

Common Use Cases

Log User Interaction

' Log every conversation for analytics
INSERT INTO "conversation_logs" WITH
    user_id = user.id,
    session_id = session.id,
    message = user_message,
    response = bot_response,
    timestamp = NOW()

Create Support Ticket

' Create a support ticket from conversation
result = INSERT INTO "tickets" WITH
    customer_id = user.id,
    subject = ticket_subject,
    description = ticket_description,
    priority = "medium",
    status = "open",
    created_at = NOW()

TALK "Ticket #" + result.id + " created. Our team will respond within 24 hours."

Save Form Submission

' Save a lead form submission
result = INSERT INTO "leads" WITH
    first_name = form.first_name,
    last_name = form.last_name,
    email = form.email,
    company = form.company,
    interest = form.product_interest,
    source = "website_chatbot",
    created_at = NOW()

' Notify sales team
SEND MAIL "sales@company.com", "New Lead: " + form.first_name, "A new lead has been captured via chatbot."

Record Event

' Record a business event
INSERT INTO "events" WITH
    event_type = "purchase",
    user_id = user.id,
    data = '{"product_id": "' + product_id + '", "amount": ' + amount + '}',
    occurred_at = NOW()

Error Handling

ON ERROR RESUME NEXT

result = INSERT INTO "customers" WITH
    name = customer_name,
    email = customer_email

IF ERROR THEN
    PRINT "Insert failed: " + ERROR_MESSAGE
    
    IF INSTR(ERROR_MESSAGE, "duplicate") > 0 THEN
        TALK "This email is already registered."
    ELSE IF INSTR(ERROR_MESSAGE, "constraint") > 0 THEN
        TALK "Please provide all required information."
    ELSE
        TALK "Sorry, I couldn't save your information. Please try again."
    END IF
ELSE
    TALK "Information saved successfully!"
END IF

Common Errors

| Error | Cause | Solution |
|-------|-------|----------|
| DUPLICATE_KEY | Unique constraint violated | Check for existing record first |
| NOT_NULL_VIOLATION | Required field missing | Include all required fields |
| FOREIGN_KEY_VIOLATION | Referenced record doesn’t exist | Verify foreign key values |
| CHECK_VIOLATION | Value fails check constraint | Validate data before insert |
| TABLE_NOT_FOUND | Table doesn’t exist | Verify table name |

Validation Before Insert

' Validate data before inserting
IF LEN(email) < 5 OR INSTR(email, "@") = 0 THEN
    TALK "Please provide a valid email address."
ELSE IF LEN(name) < 2 THEN
    TALK "Please provide your full name."
ELSE
    result = INSERT INTO "contacts" WITH
        name = name,
        email = email,
        created_at = NOW()
    
    TALK "Contact saved!"
END IF

INSERT vs MERGE

| Keyword | Purpose | Use When |
|---------|---------|----------|
| INSERT | Create new record | Adding new data |
| MERGE | Insert or update | Record may already exist |

' INSERT - Always creates new record (may fail if duplicate)
INSERT INTO "users" WITH email = "john@example.com", name = "John"

' MERGE - Creates or updates based on key
MERGE INTO "users" ON email = "john@example.com" WITH
    email = "john@example.com",
    name = "John Updated"

Configuration

Database connection is configured in config.csv:

name,value
database-provider,postgres
database-pool-size,10
database-timeout,30

Database credentials are stored in Vault, not in config files.


Implementation Notes

  • Implemented in Rust under src/database/operations.rs
  • Uses parameterized queries to prevent SQL injection
  • Auto-generates id if not specified (serial/UUID)
  • Timestamps can be set with NOW() function
  • Returns the complete inserted record including defaults

See Also

  • UPDATE — Modify existing records
  • DELETE — Remove records
  • MERGE — Insert or update (upsert)
  • FIND — Query records
  • TABLE — Create tables

Summary

INSERT creates new records in database tables. Use it to store user data, log events, create orders, and save form submissions. Always validate data before inserting and handle potential errors like duplicates and constraint violations. For cases where a record may already exist, consider using MERGE instead.

UPDATE

The UPDATE keyword modifies existing records in database tables, enabling bots to change stored data based on conditions.


Syntax

UPDATE "table_name" SET field1 = value1 WHERE condition
UPDATE "table_name" SET field1 = value1, field2 = value2 WHERE condition
UPDATE "table_name" ON connection SET field1 = value1 WHERE condition

Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| table_name | String | Name of the target database table |
| SET | Clause | Field-value pairs to update |
| WHERE | Clause | Condition to select records to update |
| ON connection | String | Optional named database connection |

Description

UPDATE modifies existing records in a database table that match the specified WHERE condition. The SET clause specifies which fields to change and their new values. Without a WHERE clause, every record in the table would be updated, so a WHERE clause is required by default for safety.

Use cases include:

  • Updating user profiles
  • Changing order status
  • Recording timestamps for actions
  • Incrementing counters
  • Marking items as read/processed

Examples

Basic Update

' Update a customer's email
UPDATE "customers" SET email = "new.email@example.com" WHERE id = 123

TALK "Email updated successfully!"

Update Multiple Fields

' Update multiple fields at once
UPDATE "orders" SET
    status = "shipped",
    shipped_at = NOW(),
    tracking_number = tracking_id
WHERE id = order_id

TALK "Order #" + order_id + " marked as shipped"

Update with Variable Values

' Update from conversation data
TALK "What is your new phone number?"
HEAR new_phone

UPDATE "customers" SET phone = new_phone WHERE id = user.id

TALK "Your phone number has been updated to " + new_phone

Increment Counter

' Increment a counter field
UPDATE "products" SET view_count = view_count + 1 WHERE id = product_id

Update Based on Condition

' Mark old sessions as expired
UPDATE "sessions" SET
    status = "expired",
    expired_at = NOW()
WHERE last_activity < DATEADD(NOW(), -30, "minute")

TALK "Inactive sessions have been expired"

Update with Named Connection

' Update on specific database
UPDATE "audit_log" ON "analytics_db" SET
    reviewed = true,
    reviewed_by = admin.id
WHERE id = log_entry_id

Common Use Cases

Update User Profile

' User wants to update their profile
TALK "What would you like to update? (name, email, phone)"
HEAR field_to_update

TALK "What is the new value?"
HEAR new_value

SWITCH field_to_update
    CASE "name"
        UPDATE "users" SET name = new_value WHERE id = user.id
    CASE "email"
        UPDATE "users" SET email = new_value WHERE id = user.id
    CASE "phone"
        UPDATE "users" SET phone = new_value WHERE id = user.id
    DEFAULT
        TALK "Unknown field. Please choose name, email, or phone."
END SWITCH

TALK "Your " + field_to_update + " has been updated!"

Change Order Status

' Update order through its lifecycle
UPDATE "orders" SET
    status = "processing",
    processed_at = NOW()
WHERE id = order_id AND status = "pending"

TALK "Order is now being processed"

Mark as Read

' Mark notification as read
UPDATE "notifications" SET
    read = true,
    read_at = NOW()
WHERE user_id = user.id AND id = notification_id

TALK "Notification marked as read"

Record Last Activity

' Update last activity timestamp
UPDATE "users" SET last_active = NOW() WHERE id = user.id

Soft Delete

' Soft delete (mark as deleted without removing)
UPDATE "records" SET
    deleted = true,
    deleted_at = NOW(),
    deleted_by = user.id
WHERE id = record_id

TALK "Record archived"

Batch Update

' Update multiple records matching condition
UPDATE "subscriptions" SET
    status = "active",
    renewed_at = NOW()
WHERE expires_at > NOW() AND auto_renew = true

TALK "Active subscriptions renewed"

Error Handling

ON ERROR RESUME NEXT

UPDATE "customers" SET email = new_email WHERE id = customer_id

IF ERROR THEN
    PRINT "Update failed: " + ERROR_MESSAGE
    
    IF INSTR(ERROR_MESSAGE, "duplicate") > 0 THEN
        TALK "This email is already in use by another account."
    ELSE IF INSTR(ERROR_MESSAGE, "constraint") > 0 THEN
        TALK "The value you entered is not valid."
    ELSE
        TALK "Sorry, I couldn't update your information. Please try again."
    END IF
ELSE
    TALK "Information updated successfully!"
END IF

Common Errors

| Error | Cause | Solution |
|-------|-------|----------|
| DUPLICATE_KEY | Unique constraint violated | Value already exists |
| CHECK_VIOLATION | Value fails check constraint | Validate before update |
| NOT_NULL_VIOLATION | Setting required field to null | Provide a value |
| NO_ROWS_AFFECTED | WHERE matched no records | Verify condition |

Safety Considerations

Always Use WHERE Clause

' DANGEROUS - updates ALL records!
' UPDATE "users" SET status = "inactive"

' SAFE - updates only matching records
UPDATE "users" SET status = "inactive" WHERE last_login < "2024-01-01"

Verify Before Update

' Check record exists before updating
record = FIND "orders" WHERE id = order_id

IF record THEN
    UPDATE "orders" SET status = "cancelled" WHERE id = order_id
    TALK "Order cancelled"
ELSE
    TALK "Order not found"
END IF

Limit Scope

' Update only records the user owns
UPDATE "documents" SET
    title = new_title
WHERE id = document_id AND owner_id = user.id

UPDATE vs MERGE

| Keyword | Purpose | Use When |
|---------|---------|----------|
| UPDATE | Modify existing records | Record definitely exists |
| MERGE | Insert or update | Record may or may not exist |

' UPDATE - Only modifies if exists
UPDATE "users" SET name = "John" WHERE email = "john@example.com"

' MERGE - Creates if not exists, updates if exists
MERGE INTO "users" ON email = "john@example.com" WITH
    email = "john@example.com",
    name = "John"

Configuration

Database connection is configured in config.csv:

name,value
database-provider,postgres
database-pool-size,10
database-timeout,30

Database credentials are stored in Vault, not in config files.


Implementation Notes

  • Implemented in Rust under src/database/operations.rs
  • Uses parameterized queries to prevent SQL injection
  • Returns number of affected rows
  • WHERE clause is required by default for safety
  • Supports all comparison operators (=, <, >, <=, >=, <>)
  • Supports AND/OR in WHERE conditions

See Also

  • INSERT — Add new records
  • DELETE — Remove records
  • MERGE — Insert or update (upsert)
  • FIND — Query records
  • TABLE — Create tables

Summary

UPDATE modifies existing database records that match a WHERE condition. Use it to change user data, update statuses, record timestamps, and modify stored information. Always include a WHERE clause to avoid accidentally updating all records. For cases where you’re unsure if a record exists, consider using MERGE instead.

DELETE

The DELETE keyword is a unified command that automatically detects context and handles HTTP requests, database operations, and file deletions through a single interface.


Syntax

' HTTP DELETE - auto-detected by URL
DELETE "https://api.example.com/resource/123"

' Database DELETE - table with filter
DELETE "table_name", "filter_condition"

' File DELETE - path without URL
DELETE "path/to/file.txt"

Parameters

| Context | Parameter 1 | Parameter 2 | Description |
|---------|-------------|-------------|-------------|
| HTTP | URL (string) | - | DELETE request to the URL |
| Database | Table name | Filter condition | Delete matching records |
| File | File path | - | Delete the file |

Description

DELETE is a smart, unified keyword that detects what you want to delete based on the arguments:

  1. HTTP DELETE: If the first argument starts with http:// or https://, sends an HTTP DELETE request
  2. Database DELETE: If two arguments are provided (table, filter), performs SQL DELETE
  3. File DELETE: Otherwise, treats the argument as a file path

This eliminates the need for separate DELETE HTTP, DELETE FILE commands - just use DELETE.


Examples

HTTP DELETE

' Delete a resource via REST API
DELETE "https://api.example.com/users/123"

TALK "User deleted from API"
' Delete with authentication (set headers first)
SET HEADER "Authorization", "Bearer " + api_token
DELETE "https://api.example.com/posts/" + post_id
CLEAR HEADERS

TALK "Post deleted"

Database DELETE

' Delete by ID
DELETE "customers", "id = 123"

TALK "Customer deleted"
' Delete with variable
DELETE "orders", "id = " + order_id + " AND user_id = " + user.id

TALK "Order cancelled"
' Delete with multiple conditions
DELETE "sessions", "user_id = " + user.id + " AND status = 'expired'"

TALK "Expired sessions cleared"
' Delete old records
DELETE "logs", "created_at < '2024-01-01'"

TALK "Old logs purged"

File DELETE

' Delete a file
DELETE "temp/report.pdf"

TALK "File deleted"
' Delete uploaded file
DELETE "uploads/" + filename

TALK "Upload removed"

Common Use Cases

REST API Resource Deletion

' Delete item from external service
TALK "Removing item from inventory system..."

SET HEADER "Authorization", "Bearer " + inventory_api_key
SET HEADER "Content-Type", "application/json"

result = DELETE "https://inventory.example.com/api/items/" + item_id

CLEAR HEADERS

IF result THEN
    TALK "Item removed from inventory"
ELSE
    TALK "Failed to remove item"
END IF

User Account Deletion

' Complete account deletion flow
TALK "Are you sure you want to delete your account? Type 'DELETE' to confirm."
HEAR confirmation

IF confirmation = "DELETE" THEN
    ' Delete related records first
    DELETE "orders", "customer_id = " + user.id
    DELETE "addresses", "customer_id = " + user.id
    DELETE "preferences", "user_id = " + user.id
    
    ' Delete the user
    DELETE "users", "id = " + user.id
    
    TALK "Your account has been deleted."
ELSE
    TALK "Account deletion cancelled."
END IF

Cleanup Temporary Files

' Clean up temp files after processing
temp_files = ["temp/doc1.pdf", "temp/doc2.pdf", "temp/merged.pdf"]

FOR EACH f IN temp_files
    DELETE f
NEXT

TALK "Temporary files cleaned up"

Cancel Order via API

' Cancel order in external system
order_api_url = "https://orders.example.com/api/orders/" + order_id

SET HEADER "Authorization", "Bearer " + api_key
DELETE order_api_url
CLEAR HEADERS

' Also remove from local database
DELETE "local_orders", "external_id = '" + order_id + "'"

TALK "Order cancelled"

Remove Expired Data

' Scheduled cleanup task
' Delete expired tokens
DELETE "tokens", "expires_at < NOW()"

' Delete old notifications
DELETE "notifications", "read = true AND created_at < DATEADD(NOW(), -90, 'day')"

' Delete abandoned carts
DELETE "carts", "updated_at < DATEADD(NOW(), -7, 'day') AND checkout_completed = false"

TALK "Cleanup complete"

Error Handling

ON ERROR RESUME NEXT

DELETE "orders", "id = " + order_id

IF ERROR THEN
    error_msg = ERROR_MESSAGE
    
    IF INSTR(error_msg, "foreign key") > 0 THEN
        TALK "Cannot delete: this record is referenced by other data."
    ELSE IF INSTR(error_msg, "not found") > 0 THEN
        TALK "Record not found."
    ELSE IF INSTR(error_msg, "permission") > 0 THEN
        TALK "You don't have permission to delete this."
    ELSE
        TALK "Delete failed: " + error_msg
    END IF
ELSE
    TALK "Deleted successfully!"
END IF

ON ERROR GOTO 0

Common Errors

| Error | Cause | Solution |
|-------|-------|----------|
| FOREIGN_KEY_VIOLATION | Database record referenced elsewhere | Delete child records first |
| FILE_NOT_FOUND | File doesn’t exist | Check file path |
| HTTP 404 | API resource not found | Verify URL and resource ID |
| HTTP 401/403 | Authentication failed | Check API credentials |
| PERMISSION_DENIED | Insufficient privileges | Check permissions |

Context Detection

The DELETE keyword automatically detects context:

| Argument Pattern | Detected As |
|------------------|-------------|
| "https://..." or "http://..." | HTTP DELETE |
| Two arguments: "table", "filter" | Database DELETE |
| Single argument without URL prefix | File DELETE |

' HTTP - starts with http/https
DELETE "https://api.example.com/resource/1"

' Database - two arguments
DELETE "users", "id = 123"

' File - single argument, no URL prefix
DELETE "temp/file.txt"

Safety Considerations

Always Use Filters for Database

' DANGEROUS - would delete all records!
' DELETE "users", ""

' SAFE - specific condition
DELETE "users", "id = " + user_id

Verify Before Deleting

' Check record exists and belongs to user
record = FIND "documents", "id = " + doc_id + " AND owner_id = " + user.id

IF record THEN
    DELETE "documents", "id = " + doc_id
    TALK "Document deleted"
ELSE
    TALK "Document not found or access denied"
END IF

Confirm Destructive Actions

TALK "Delete " + item_name + "? This cannot be undone. Type 'yes' to confirm."
HEAR confirmation

IF LOWER(confirmation) = "yes" THEN
    DELETE "items", "id = " + item_id
    TALK "Deleted"
ELSE
    TALK "Cancelled"
END IF

Consider Soft Delete

' Instead of permanent delete, mark as deleted
UPDATE "records", #{ "deleted": true, "deleted_at": NOW() }, "id = " + record_id

TALK "Record archived (can be restored)"

Return Values

| Context | Returns |
|---------|---------|
| HTTP | Response body as string |
| Database | Number of deleted rows |
| File | true on success, error message on failure |
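
A sketch capturing each return type (item_id is a hypothetical variable):

rows = DELETE "sessions", "status = 'expired'"
TALK "Removed " + rows + " expired sessions"

response = DELETE "https://api.example.com/items/" + item_id
TALK "API responded: " + response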

Configuration

No specific configuration required. Uses:

  • HTTP: Standard HTTP client
  • Database: Connection from config.csv
  • Files: Bot’s .gbdrive storage

Implementation Notes

  • Implemented in data_operations.rs
  • Auto-detects URL vs table vs file
  • HTTP DELETE supports custom headers via SET HEADER
  • Database DELETE uses parameterized queries (SQL injection safe)
  • File DELETE works within bot’s storage sandbox

See Also

  • INSERT — Add new records
  • UPDATE — Modify existing records
  • FIND — Query records
  • POST — HTTP POST requests
  • PUT — HTTP PUT requests
  • READ — Read file contents
  • WRITE — Write file contents

Summary

DELETE is a unified keyword that intelligently handles HTTP API deletions, database record removal, and file deletion through a single interface. It auto-detects context based on arguments: URLs trigger HTTP DELETE, table+filter triggers database DELETE, and paths trigger file DELETE. Always use filters for database operations, verify ownership before deleting user data, and confirm destructive actions. For recoverable deletions, consider soft delete instead.

FIND

Search and retrieve data from database tables using filter criteria.

Syntax

result = FIND "table_name", "filter_criteria"

Parameters

  • table_name - The name of the database table to search
  • filter_criteria - Filter expression in the format “field=value”

Description

FIND searches database tables for records matching specified criteria. It returns an array of matching records that can be iterated over using FOR EACH loops.

Examples

' Find records with specific action
items = FIND "gb.rob", "ACTION=EMUL"

FOR EACH item IN items
    TALK "Found: " + item.company
NEXT

Single Field Filter

' Find pending orders
orders = FIND "orders", "status=pending"

FOR EACH order IN orders
    TALK "Order #" + order.id + " is pending"
NEXT

Working with Results

' Find and process customer records
customers = FIND "customers", "city=Seattle"

FOR EACH customer IN customers
    TALK customer.name + " from " + customer.address
    
    ' Access fields with dot notation
    email = customer.email
    phone = customer.phone
    
    ' Update related data
    SET "contacts", "id=" + customer.id, "last_contacted=" + NOW()
NEXT

Return Value

FIND returns an array of records from the specified table. Each record is an object with fields accessible via dot notation.

  • Returns empty array if no matches found
  • Returns array of matching records if successful
  • Each record contains all columns from the table

Field Access

Access fields in returned records using dot notation:

items = FIND "products", "category=electronics"

FOR EACH item IN items
    ' Access fields directly
    TALK item.name
    TALK item.price
    TALK item.description
    
    ' Use null coalescing for optional fields
    website = item.website ?? ""
    
    ' Check field existence
    IF item.discount != "" THEN
        TALK "On sale: " + item.discount + "% off"
    END IF
NEXT

Common Patterns

Process All Matching Records

tasks = FIND "tasks", "status=open"

FOR EACH task IN tasks
    ' Process each task
    TALK "Processing task: " + task.title
    
    ' Update task status
    SET "tasks", "id=" + task.id, "status=in_progress"
NEXT

Check If Records Exist

users = FIND "users", "email=john@example.com"

IF LENGTH(users) > 0 THEN
    TALK "User exists"
ELSE
    TALK "User not found"
END IF

Data Enrichment

companies = FIND "companies", "needs_update=true"

FOR EACH company IN companies
    ' Get additional data
    website = company.website ?? ""
    
    IF website == "" THEN
        ' Look up website
        website = WEBSITE OF company.name
        
        ' Update record
        SET "companies", "id=" + company.id, "website=" + website
    END IF
    
    ' Fetch and process website data
    page = GET website
    ' Process page content...
NEXT

Batch Processing with Delays

emails = FIND "email_queue", "sent=false"

FOR EACH email IN emails
    ' Send email
    SEND MAIL email.to, email.subject, email.body
    
    ' Mark as sent
    SET "email_queue", "id=" + email.id, "sent=true"
    
    ' Rate limiting
    WAIT 1000
NEXT

Filter Expressions

The filter parameter uses simple equality expressions:

  • "field=value" - Match exact value
  • Multiple conditions must be handled in BASIC code after retrieval

' Get all records then filter in BASIC
all_orders = FIND "orders", "status=active"

FOR EACH order IN all_orders
    ' Additional filtering in code
    IF order.amount > 1000 AND order.priority == "high" THEN
        ' Process high-value orders
        TALK "Priority order: " + order.id
    END IF
NEXT

Working with Different Data Types

products = FIND "products", "active=true"

FOR EACH product IN products
    ' String fields
    name = product.name
    
    ' Numeric fields
    price = product.price
    quantity = product.quantity
    
    ' Date fields
    created = product.created_at
    
    ' Boolean-like fields (stored as strings)
    IF product.featured == "true" THEN
        TALK "Featured: " + name
    END IF
NEXT

Error Handling

' Handle potential errors
items = FIND "inventory", "warehouse=main"

IF items == null THEN
    TALK "Error accessing inventory data"
ELSE IF LENGTH(items) == 0 THEN
    TALK "No items found in main warehouse"
ELSE
    TALK "Found " + LENGTH(items) + " items"
    ' Process items...
END IF

Performance Considerations

  1. Limit Results: The system automatically limits to 10 results for safety
  2. Use Specific Filters: More specific filters reduce processing time
  3. Avoid Full Table Scans: Always provide a filter criterion
  4. Process in Batches: For large datasets, process in chunks

' Process records in batches
batch = FIND "large_table", "processed=false"

count = 0
FOR EACH record IN batch
    ' Process record
    SET "large_table", "id=" + record.id, "processed=true"
    
    count = count + 1
    IF count >= 10 THEN
        EXIT FOR  ' Process max 10 at a time
    END IF
NEXT

Integration with Other Keywords

With SET for Updates

users = FIND "users", "newsletter=true"

FOR EACH user IN users
    ' Update last_notified field
    SET "users", "id=" + user.id, "last_notified=" + NOW()
NEXT

With LLM for Processing

articles = FIND "articles", "needs_summary=true"

FOR EACH article IN articles
    summary = LLM "Summarize: " + article.content
    SET "articles", "id=" + article.id, "summary=" + summary
NEXT

With CREATE SITE

companies = FIND "companies", "needs_site=true"

FOR EACH company IN companies
    alias = LLM "Create URL alias for: " + company.name
    CREATE SITE alias, "template", "Create site for " + company.name
    SET "companies", "id=" + company.id, "site_url=" + alias
NEXT

Limitations

  • Maximum 10 records returned per query (system limit)
  • Filter supports simple equality only
  • Complex queries require post-processing in BASIC
  • Table must exist in the database
  • User must have read permissions on the table

Best Practices

  • Always check results - Verify FIND returned data before processing
  • Use specific filters - Reduce result set size with precise criteria
  • Handle empty results - Check LENGTH before iterating
  • Update as you go - Mark records as processed to avoid reprocessing

  • Don’t assume order - Results may not be sorted
  • Don’t ignore limits - Remember the 10-record limit
  • Don’t use without filter - Always provide filter criteria

See Also

  • SET - Update database records
  • GET - Retrieve single values
  • FOR EACH - Iterate over results
  • LLM - Process found data with AI

FILTER

Selects elements from an array that match a specified condition.

Syntax

result = FILTER(array, condition)

Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| array | Array | The source array to filter |
| condition | String | Expression that evaluates to true/false for each element |

Description

FILTER creates a new array containing only the elements from the input array that satisfy the given condition. The condition is evaluated for each element, and only elements where the condition is true are included in the result.

Examples

Filter by Field Value

orders = FIND "orders", "year=2025"
large_orders = FILTER(orders, "item.total > 1000")

TALK "Found " + LEN(large_orders) + " orders over $1000"

Filter by String Match

contacts = FIND "contacts", "active=true"
gmail_users = FILTER(contacts, "INSTR(item.email, 'gmail.com') > 0")

FOR EACH contact IN gmail_users
    TALK contact.name + " - " + contact.email
NEXT

Filter by Status

tasks = FIND "tasks", "assigned_to=me"
pending = FILTER(tasks, "item.status = 'pending'")
completed = FILTER(tasks, "item.status = 'completed'")

TALK "Pending: " + LEN(pending) + ", Completed: " + LEN(completed)

Filter Numbers

scores = [85, 92, 67, 78, 95, 88, 72]
passing = FILTER(scores, "item >= 70")
honors = FILTER(scores, "item >= 90")

TALK "Passing: " + LEN(passing) + ", Honors: " + LEN(honors)

Complex Conditions

products = FIND "products", "category=electronics"
featured = FILTER(products, "item.in_stock = true AND item.rating >= 4.0")

TALK "Featured products:"
FOR EACH product IN featured
    TALK "- " + product.name + " (★" + product.rating + ")"
NEXT

Return Value

Returns a new array containing only elements where the condition evaluated to true.

  • Original array is not modified
  • Returns empty array if no elements match
  • Preserves order of matching elements
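
For instance, a sketch that guards against an empty result (assuming a customers array with a tier field):

vip = FILTER(customers, "item.tier = 'vip'")
IF LEN(vip) = 0 THEN
    TALK "No VIP customers found"
ELSE
    TALK "VIP customers: " + LEN(vip)
END IF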

Condition Operators

| Operator | Description | Example |
|----------|-------------|---------|
| = | Equals | "item.status = 'active'" |
| != | Not equals | "item.type != 'archived'" |
| > | Greater than | "item.amount > 100" |
| >= | Greater or equal | "item.score >= 70" |
| < | Less than | "item.quantity < 10" |
| <= | Less or equal | "item.age <= 30" |
| AND | Logical and | "item.active = true AND item.verified = true" |
| OR | Logical or | "item.priority = 'high' OR item.urgent = true" |

Common Patterns

Filter then Count

users = FIND "users", "registered=true"
premium = FILTER(users, "item.plan = 'premium'")
TALK "Premium users: " + LEN(premium)

Filter then Map

orders = FIND "orders", "status=shipped"
recent = FILTER(orders, "item.ship_date > DATEADD('day', -7, NOW())")
tracking = MAP(recent, "tracking_number")

Chain Multiple Filters

products = FIND "products", "active=true"
in_stock = FILTER(products, "item.quantity > 0")
on_sale = FILTER(in_stock, "item.discount > 0")
featured = FILTER(on_sale, "item.rating >= 4.5")

See Also

  • FIND - Retrieve data from database
  • MAP - Transform filtered results
  • FOR EACH - Iterate over filtered array
  • AGGREGATE - Calculate summary from filtered data

MAP

Transforms each element of an array by applying a function or expression.

Syntax

result = MAP(array, expression)
result = MAP(array, field)

Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| array | Array | The source array to transform |
| expression | String | Expression to apply to each element, or field name to extract |

Description

MAP creates a new array by applying a transformation to each element of the input array. This is useful for extracting specific fields from objects, formatting data, or performing calculations on each item.

Examples

Extract Field from Objects

users = FIND "users", "status=active"
names = MAP(users, "name")

TALK "Active users: " + JOIN(names, ", ")

Transform Values

prices = [100, 200, 300, 400]
with_tax = MAP(prices, "item * 1.1")

FOR EACH price IN with_tax
    TALK "Price with tax: $" + price
NEXT

Format Data

orders = FIND "orders", "date=today"
summaries = MAP(orders, "'Order #' + item.id + ': $' + item.total")

FOR EACH summary IN summaries
    TALK summary
NEXT

Extract Nested Properties

contacts = FIND "contacts", "company=Acme"
emails = MAP(contacts, "email")

email_list = JOIN(emails, "; ")
TALK "Emails: " + email_list

Uppercase Names

products = ["widget", "gadget", "gizmo"]
upper_products = MAP(products, "UPPER(item)")

TALK JOIN(upper_products, ", ")
' Output: "WIDGET, GADGET, GIZMO"

Return Value

Returns a new array with the same length as the input, containing transformed values.

  • Original array is not modified
  • Null values in the source are preserved as null
  • If transformation fails for an element, that element becomes null
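
Because failed transformations become null, a sketch that pairs MAP with FILTER to drop them (users is a hypothetical array of records):

emails = MAP(users, "email")
valid_emails = FILTER(emails, "item != null")
TALK "Usable addresses: " + LEN(valid_emails)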

Common Patterns

Extract IDs for API Calls

records = FIND "items", "sync=pending"
ids = MAP(records, "id")
' Use ids for batch API operations

Create Display Labels

products = FIND "products", "in_stock=true"
labels = MAP(products, "item.name + ' ($' + item.price + ')'")

Calculate Derived Values

line_items = FIND "cart_items", "cart_id=123"
totals = MAP(line_items, "item.quantity * item.unit_price")

See Also

  • FILTER - Filter array elements
  • FOR EACH - Iterate with more control
  • JOIN - Combine mapped results into string
  • AGGREGATE - Calculate summary from mapped values

AGGREGATE

The AGGREGATE keyword performs calculations on collections of data, computing sums, counts, averages, and other statistical operations.


Syntax

result = AGGREGATE collection SUM field
result = AGGREGATE collection COUNT
result = AGGREGATE collection AVERAGE field
result = AGGREGATE collection MIN field
result = AGGREGATE collection MAX field
result = AGGREGATE "table_name" SUM field WHERE condition

Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| collection | Array/String | Data array or table name |
| SUM | Operation | Calculate total of numeric field |
| COUNT | Operation | Count number of items |
| AVERAGE | Operation | Calculate arithmetic mean |
| MIN | Operation | Find minimum value |
| MAX | Operation | Find maximum value |
| field | String | Field name to aggregate |
| WHERE | Clause | Optional filter condition |

Description

AGGREGATE performs mathematical and statistical calculations on data collections. It can work with in-memory arrays or query database tables directly. Use it to compute totals, counts, averages, and find extreme values.

Use cases include:

  • Calculating order totals
  • Counting records
  • Computing averages for reports
  • Finding highest/lowest values
  • Summarizing data for dashboards

Examples

Sum Values

' Calculate total sales
orders = FIND "orders" WHERE status = "completed"
total_sales = AGGREGATE orders SUM amount

TALK "Total sales: $" + FORMAT(total_sales, "#,##0.00")

Count Records

' Count active users
active_count = AGGREGATE "users" COUNT WHERE status = "active"

TALK "We have " + active_count + " active users"

Calculate Average

' Calculate average order value
avg_order = AGGREGATE "orders" AVERAGE amount WHERE created_at > "2025-01-01"

TALK "Average order value: $" + FORMAT(avg_order, "#,##0.00")

Find Minimum and Maximum

' Find price range
products = FIND "products" WHERE category = "electronics"

min_price = AGGREGATE products MIN price
max_price = AGGREGATE products MAX price

TALK "Prices range from $" + min_price + " to $" + max_price

Multiple Aggregations

' Calculate multiple statistics
orders = FIND "orders" WHERE customer_id = user.id

total_spent = AGGREGATE orders SUM amount
order_count = AGGREGATE orders COUNT
avg_order = AGGREGATE orders AVERAGE amount
largest_order = AGGREGATE orders MAX amount

TALK "Your order summary:"
TALK "- Total orders: " + order_count
TALK "- Total spent: $" + FORMAT(total_spent, "#,##0.00")
TALK "- Average order: $" + FORMAT(avg_order, "#,##0.00")
TALK "- Largest order: $" + FORMAT(largest_order, "#,##0.00")

Common Use Cases

Sales Dashboard

' Calculate sales metrics
today = FORMAT(NOW(), "YYYY-MM-DD")
this_month = FORMAT(NOW(), "YYYY-MM") + "-01"

today_sales = AGGREGATE "orders" SUM amount WHERE DATE(created_at) = today
month_sales = AGGREGATE "orders" SUM amount WHERE created_at >= this_month
today_count = AGGREGATE "orders" COUNT WHERE DATE(created_at) = today
month_count = AGGREGATE "orders" COUNT WHERE created_at >= this_month

TALK "📊 Sales Dashboard"
TALK "Today: $" + FORMAT(today_sales, "#,##0.00") + " (" + today_count + " orders)"
TALK "This month: $" + FORMAT(month_sales, "#,##0.00") + " (" + month_count + " orders)"

Inventory Summary

' Calculate inventory metrics
total_items = AGGREGATE "products" COUNT
total_value = AGGREGATE "products" SUM (price * stock)
low_stock = AGGREGATE "products" COUNT WHERE stock < 10
out_of_stock = AGGREGATE "products" COUNT WHERE stock = 0

TALK "Inventory Summary:"
TALK "- Total products: " + total_items
TALK "- Total value: $" + FORMAT(total_value, "#,##0.00")
TALK "- Low stock items: " + low_stock
TALK "- Out of stock: " + out_of_stock

Customer Metrics

' Calculate customer statistics
this_month = FORMAT(NOW(), "YYYY-MM") + "-01"
total_customers = AGGREGATE "customers" COUNT
new_this_month = AGGREGATE "customers" COUNT WHERE created_at >= this_month
avg_lifetime_value = AGGREGATE "customers" AVERAGE lifetime_value
avg_lifetime_value = AGGREGATE "customers" AVERAGE lifetime_value

TALK "Customer Metrics:"
TALK "- Total customers: " + total_customers
TALK "- New this month: " + new_this_month
TALK "- Avg lifetime value: $" + FORMAT(avg_lifetime_value, "#,##0.00")

Rating Analysis

' Analyze product ratings
reviews = FIND "reviews" WHERE product_id = product.id

avg_rating = AGGREGATE reviews AVERAGE rating
review_count = AGGREGATE reviews COUNT
five_stars = AGGREGATE reviews COUNT WHERE rating = 5

TALK "Product rating: " + FORMAT(avg_rating, "#.#") + " stars"
TALK "Based on " + review_count + " reviews"
TALK five_stars + " customers gave 5 stars"

Aggregate from Array

' Aggregate in-memory data
prices = [29.99, 49.99, 19.99, 99.99, 39.99]

total = AGGREGATE prices SUM
count = AGGREGATE prices COUNT
average = AGGREGATE prices AVERAGE
minimum = AGGREGATE prices MIN
maximum = AGGREGATE prices MAX

TALK "Sum: $" + FORMAT(total, "#,##0.00")
TALK "Count: " + count
TALK "Average: $" + FORMAT(average, "#,##0.00")
TALK "Range: $" + minimum + " - $" + maximum

Aggregate with Expressions

' Calculate computed values
total_revenue = AGGREGATE "order_items" SUM (quantity * unit_price)
total_discount = AGGREGATE "order_items" SUM (quantity * unit_price * discount_percent / 100)
net_revenue = total_revenue - total_discount

TALK "Gross revenue: $" + FORMAT(total_revenue, "#,##0.00")
TALK "Discounts: $" + FORMAT(total_discount, "#,##0.00")
TALK "Net revenue: $" + FORMAT(net_revenue, "#,##0.00")

Conditional Aggregation

' Aggregate with different conditions
pending_total = AGGREGATE "orders" SUM amount WHERE status = "pending"
shipped_total = AGGREGATE "orders" SUM amount WHERE status = "shipped"
delivered_total = AGGREGATE "orders" SUM amount WHERE status = "delivered"

TALK "Order totals by status:"
TALK "- Pending: $" + FORMAT(pending_total, "#,##0.00")
TALK "- Shipped: $" + FORMAT(shipped_total, "#,##0.00")
TALK "- Delivered: $" + FORMAT(delivered_total, "#,##0.00")

Error Handling

ON ERROR RESUME NEXT

total = AGGREGATE "orders" SUM amount WHERE customer_id = user.id

IF ERROR THEN
    PRINT "Aggregation failed: " + ERROR_MESSAGE
    TALK "Sorry, I couldn't calculate your totals."
ELSE IF total = 0 THEN
    TALK "You haven't placed any orders yet."
ELSE
    TALK "Your total purchases: $" + FORMAT(total, "#,##0.00")
END IF

Common Errors

Error | Cause | Solution
INVALID_FIELD | Field doesn't exist | Check field name spelling
TYPE_ERROR | Non-numeric field for SUM/AVG | Use numeric fields only
EMPTY_COLLECTION | No data to aggregate | Handle zero/null results
TABLE_NOT_FOUND | Table doesn't exist | Verify table name

Null Handling

' AGGREGATE ignores NULL values by default
avg_rating = AGGREGATE "products" AVERAGE rating
' NULL ratings are not included in the average

' Count non-null values
rated_count = AGGREGATE "products" COUNT WHERE rating IS NOT NULL
total_count = AGGREGATE "products" COUNT

TALK rated_count + " of " + total_count + " products have ratings"

Performance Tips

  1. Use WHERE clauses — Filter before aggregating for better performance
  2. Index aggregate fields — Ensure database indexes on frequently aggregated columns
  3. Limit data scope — Aggregate only the date range or subset needed
  4. Cache results — Store aggregated values for expensive calculations (see the caching sketch below)

' Efficient: Filter first
total = AGGREGATE "orders" SUM amount WHERE date > "2025-01-01"

' Less efficient: Aggregate all, then filter
' all_orders = FIND "orders"
' recent = FILTER all_orders WHERE date > "2025-01-01"
' total = AGGREGATE recent SUM amount
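
Building on tip 4 above, here is a minimal caching sketch using the bot memory keywords that appear later in this manual; the cache key name is an illustrative assumption:

' Reuse an expensive aggregate instead of recomputing it on every run
cached_total = GET BOT MEMORY "monthly_revenue"

IF cached_total = "" THEN
    ' Not cached yet - compute and store for later calls
    this_month = FORMAT(NOW(), "YYYY-MM") + "-01"
    cached_total = AGGREGATE "orders" SUM amount WHERE created_at >= this_month
    SET BOT MEMORY "monthly_revenue", cached_total
END IF

TALK "Revenue this month: $" + FORMAT(cached_total, "#,##0.00")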

Configuration

Database connection is configured in config.csv:

name,value
database-provider,postgres
database-pool-size,10
database-timeout,30

Database credentials are stored in Vault, not in config files.
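
For example, credentials can be written with the Vault CLI, following the same pattern shown for API keys later in this manual; the gbo/database path and field names here are illustrative assumptions:

vault kv put gbo/database/default username="db-user" password="db-password"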


Implementation Notes

  • Implemented in Rust under src/database/aggregate.rs
  • Uses SQL aggregate functions when querying tables
  • Handles NULL values according to SQL standards
  • Supports expressions in aggregate calculations
  • Returns 0 for COUNT on empty sets, NULL for SUM/AVG/MIN/MAX
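
Since SUM, AVERAGE, MIN, and MAX return NULL on empty sets, one simple guard is to check COUNT before formatting — a minimal sketch:

' Avoid formatting a NULL aggregate from an empty result set
order_count = AGGREGATE "orders" COUNT WHERE customer_id = user.id

IF order_count = 0 THEN
    TALK "You haven't placed any orders yet."
ELSE
    total = AGGREGATE "orders" SUM amount WHERE customer_id = user.id
    TALK "Total: $" + FORMAT(total, "#,##0.00")
END IF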

See Also

  • FIND — Query data before aggregating
  • GROUP BY — Group data before aggregating
  • FILTER — Filter in-memory collections
  • MAP — Transform data before aggregating

Summary

AGGREGATE calculates sums, counts, averages, and min/max values from data collections. Use it for dashboards, reports, and any situation where you need to summarize data. It works with both database tables (using SQL) and in-memory arrays. Always handle empty results and use WHERE clauses to improve performance on large datasets.

POST

The POST keyword sends HTTP POST requests to external APIs and web services, enabling bots to create resources, submit data, and integrate with third-party systems.


Syntax

result = POST url, data
result = POST url, data, content_type
POST url, param1, param2, param3, ...

Parameters

Parameter | Type | Description
url | String | The target URL endpoint
data | String/Object | Request body (JSON string or object)
content_type | String | Optional content type (default: application/json)
param1, param2, ... | Any | Positional parameters for form-style requests

Description

POST sends data to a specified URL using the HTTP POST method. This is the primary keyword for:

  • Creating new resources in REST APIs
  • Submitting form data
  • Triggering webhooks
  • Sending notifications to external services
  • Integrating with third-party platforms

The response is returned as a parsed JSON object when possible, or as a string for other content types.


Examples

Basic JSON POST

' Create a new user via API
data = '{"name": "John Doe", "email": "john@example.com"}'
result = POST "https://api.example.com/users", data

TALK "User created with ID: " + result.id

Using WITH Syntax

' Create order using WITH keyword
result = POST "https://api.store.com/orders" WITH
    customer_id = "cust-123",
    items = ["item-1", "item-2"],
    total = 99.99

TALK "Order " + result.order_id + " placed successfully!"

Form-Style Parameters

' Submit with positional parameters
POST "https://warehouse.internal/api/orders", order_id, items, shipping_address, "express"

With Custom Headers

' Set authorization header first
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "X-Request-ID", request_id

result = POST "https://api.service.com/data", payload

' Clear headers after request
SET HEADER "Authorization", ""

Webhook Integration

' Send Slack notification
POST "https://hooks.slack.com/services/xxx/yyy/zzz" WITH
    channel = "#alerts",
    text = "New order received: " + order_id,
    username = "Order Bot"

Creating Records

' Create a support ticket
result = POST "https://helpdesk.example.com/api/tickets" WITH
    title = "Customer inquiry",
    description = user_message,
    priority = "medium",
    customer_email = customer.email

IF result.id THEN
    TALK "Ticket #" + result.id + " created. Our team will respond within 24 hours."
ELSE
    TALK "Sorry, I couldn't create the ticket. Please try again."
END IF

Handling Responses

Check Response Status

result = POST "https://api.example.com/resource", data

IF result.error THEN
    TALK "Error: " + result.error.message
ELSE IF result.id THEN
    TALK "Success! Created resource: " + result.id
END IF

Parse Nested Response

result = POST "https://api.payment.com/charge", payment_data

IF result.status = "succeeded" THEN
    TALK "Payment of $" + result.amount + " processed!"
    TALK "Transaction ID: " + result.transaction_id
ELSE
    TALK "Payment failed: " + result.failure_reason
END IF

Common Use Cases

Send Email via API

POST "https://api.mailservice.com/send" WITH
    to = customer_email,
    subject = "Order Confirmation",
    body = "Thank you for your order #" + order_id

Create Calendar Event

result = POST "https://calendar.api.com/events" WITH
    title = "Meeting with " + contact_name,
    start = meeting_time,
    duration = 60,
    attendees = [contact_email]

TALK "Meeting scheduled! Calendar invite sent."

Log Analytics Event

' Track user action
POST "https://analytics.example.com/track" WITH
    event = "purchase_completed",
    user_id = user.id,
    order_value = total,
    items_count = LEN(cart)

CRM Integration

' Create lead in CRM
result = POST "https://crm.example.com/api/leads" WITH
    first_name = first_name,
    last_name = last_name,
    email = email,
    phone = phone,
    source = "chatbot",
    notes = "Initial inquiry: " + user_query

SET USER MEMORY "crm_lead_id", result.id

Error Handling

ON ERROR RESUME NEXT

result = POST "https://api.example.com/resource", data

IF ERROR THEN
    PRINT "POST failed: " + ERROR_MESSAGE
    ' Try backup endpoint
    result = POST "https://backup-api.example.com/resource", data
END IF

IF result.error THEN
    TALK "The service returned an error. Please try again later."
ELSE
    TALK "Request successful!"
END IF

Content Types

Content Type | Use Case
application/json | Default, most REST APIs
application/x-www-form-urlencoded | HTML form submissions
multipart/form-data | File uploads (use UPLOAD instead)
text/xml | SOAP services (use SOAP instead)

' Explicit content type
result = POST "https://legacy.api.com/submit", form_data, "application/x-www-form-urlencoded"

Configuration

Timeouts

Configure request timeout in config.csv:

name,value
http-timeout,30
http-retry-count,3
http-retry-delay,1000

Base URL

Set a base URL for all HTTP requests:

name,value
http-base-url,https://api.mycompany.com

Then use relative paths:

result = POST "/users", user_data  ' Resolves to https://api.mycompany.com/users

Implementation Notes

  • Implemented in Rust under src/web_automation/http.rs
  • Uses reqwest library with async runtime
  • Automatically serializes objects to JSON
  • Handles redirects (up to 10 hops)
  • Validates SSL certificates by default
  • Supports gzip/deflate response compression

See Also

  • GET — Retrieve data from URLs
  • PUT — Update existing resources
  • PATCH — Partial resource updates
  • DELETE HTTP — Remove resources
  • SET HEADER — Set request headers
  • GRAPHQL — GraphQL queries and mutations

Summary

POST is essential for integrating bots with external services. Use it to create resources, submit data, trigger webhooks, and connect to any REST API. Combined with SET HEADER for authentication, it enables powerful integrations with CRMs, payment systems, notification services, and more.

GRAPHQL

The GRAPHQL keyword executes GraphQL queries and mutations against external APIs, enabling bots to interact with modern GraphQL-based services.


Syntax

result = GRAPHQL url, query
result = GRAPHQL url, query WITH variables

Parameters

Parameter | Type | Description
url | String | The GraphQL endpoint URL
query | String | The GraphQL query or mutation
WITH | Clause | Optional variables for the query

Description

GRAPHQL sends queries and mutations to GraphQL APIs. GraphQL allows you to request exactly the data you need in a single request, making it efficient for complex data fetching. The keyword handles query formatting, variable substitution, and response parsing.

Use cases include:

  • Fetching specific fields from APIs
  • Creating, updating, or deleting data via mutations
  • Querying nested relationships in one request
  • Interacting with modern API platforms

Examples

Basic Query

' Simple query without variables
query = '
    query {
        users {
            id
            name
            email
        }
    }
'

result = GRAPHQL "https://api.example.com/graphql", query

FOR EACH user IN result.data.users
    TALK user.name + ": " + user.email
NEXT

Query with Variables

' Query with variables
query = '
    query GetUser($id: ID!) {
        user(id: $id) {
            id
            name
            email
            orders {
                id
                total
                status
            }
        }
    }
'

result = GRAPHQL "https://api.example.com/graphql", query WITH id = user_id

TALK "User: " + result.data.user.name
TALK "Orders: " + LEN(result.data.user.orders)

Mutation

' Create a new record
mutation = '
    mutation CreateUser($name: String!, $email: String!) {
        createUser(input: {name: $name, email: $email}) {
            id
            name
            email
            createdAt
        }
    }
'

result = GRAPHQL "https://api.example.com/graphql", mutation WITH
    name = user_name,
    email = user_email

TALK "User created with ID: " + result.data.createUser.id

With Authentication

' Set authorization header for GraphQL
SET HEADER "Authorization", "Bearer " + api_token

query = '
    query {
        me {
            id
            name
            role
        }
    }
'

result = GRAPHQL "https://api.example.com/graphql", query

SET HEADER "Authorization", ""

TALK "Logged in as: " + result.data.me.name

Common Use Cases

Fetch User Profile

' Get detailed user profile
query = '
    query GetProfile($userId: ID!) {
        user(id: $userId) {
            id
            name
            email
            avatar
            settings {
                theme
                language
                notifications
            }
            recentActivity {
                action
                timestamp
            }
        }
    }
'

result = GRAPHQL api_url, query WITH userId = user.id

profile = result.data.user
TALK "Welcome back, " + profile.name + "!"
TALK "Theme: " + profile.settings.theme

Search Products

' Search with filters
query = '
    query SearchProducts($term: String!, $category: String, $limit: Int) {
        products(search: $term, category: $category, first: $limit) {
            edges {
                node {
                    id
                    name
                    price
                    inStock
                }
            }
            totalCount
        }
    }
'

result = GRAPHQL "https://api.store.com/graphql", query WITH
    term = search_term,
    category = selected_category,
    limit = 10

products = result.data.products.edges
TALK "Found " + result.data.products.totalCount + " products:"

FOR EACH edge IN products
    product = edge.node
    TALK "- " + product.name + ": $" + product.price
NEXT

Create Order

' Create order mutation
mutation = '
    mutation CreateOrder($input: OrderInput!) {
        createOrder(input: $input) {
            id
            orderNumber
            total
            status
            estimatedDelivery
        }
    }
'

result = GRAPHQL "https://api.store.com/graphql", mutation WITH
    input = '{"customerId": "' + customer_id + '", "items": ' + cart_items + '}'

order = result.data.createOrder
TALK "Order #" + order.orderNumber + " placed!"
TALK "Total: $" + order.total
TALK "Estimated delivery: " + order.estimatedDelivery

Update Record

' Update mutation
mutation = '
    mutation UpdateUser($id: ID!, $input: UserUpdateInput!) {
        updateUser(id: $id, input: $input) {
            id
            name
            email
            updatedAt
        }
    }
'

result = GRAPHQL api_url, mutation WITH
    id = user.id,
    input = '{"name": "' + new_name + '", "email": "' + new_email + '"}'

TALK "Profile updated!"

Delete Record

' Delete mutation
mutation = '
    mutation DeleteItem($id: ID!) {
        deleteItem(id: $id) {
            success
            message
        }
    }
'

result = GRAPHQL api_url, mutation WITH id = item_id

IF result.data.deleteItem.success THEN
    TALK "Item deleted successfully"
ELSE
    TALK "Delete failed: " + result.data.deleteItem.message
END IF

Error Handling

ON ERROR RESUME NEXT

result = GRAPHQL api_url, query WITH id = resource_id

IF ERROR THEN
    PRINT "GraphQL request failed: " + ERROR_MESSAGE
    TALK "Sorry, I couldn't fetch that data. Please try again."
ELSE IF result.errors THEN
    ' GraphQL returned errors
    FOR EACH err IN result.errors
        PRINT "GraphQL error: " + err.message
    NEXT
    TALK "The request encountered an error: " + result.errors[0].message
ELSE
    ' Success
    TALK "Data retrieved successfully!"
END IF

Common Errors

Error | Cause | Solution
VALIDATION_ERROR | Invalid query syntax | Check query format
NOT_FOUND | Resource doesn't exist | Verify ID/parameters
UNAUTHORIZED | Missing/invalid auth | Check authentication
FORBIDDEN | Insufficient permissions | Verify access rights
VARIABLE_REQUIRED | Missing required variable | Provide all variables

GraphQL vs REST

Aspect | GraphQL | REST
Data fetching | Request exact fields | Fixed response structure
Multiple resources | Single request | Multiple requests
Versioning | Evolving schema | API versions (v1, v2)
Use case | Complex nested data | Simple CRUD operations

' GraphQL - One request for nested data
query = '
    query {
        user(id: "123") {
            name
            orders {
                items {
                    product { name }
                }
            }
        }
    }
'
result = GRAPHQL url, query

' REST equivalent would need multiple calls:
' GET /users/123
' GET /users/123/orders
' GET /orders/{id}/items for each order
' GET /products/{id} for each item

Query Building Tips

Request Only What You Need

' Good - request specific fields
query = '
    query {
        user(id: "123") {
            name
            email
        }
    }
'

' Avoid - requesting everything
' query {
'     user(id: "123") {
'         id name email phone address avatar settings ...
'     }
' }

Use Fragments for Reusable Fields

query = '
    fragment UserFields on User {
        id
        name
        email
    }
    
    query {
        user(id: "123") {
            ...UserFields
        }
        users {
            ...UserFields
        }
    }
'

Configuration

Configure HTTP settings in config.csv:

name,value
http-timeout,30
http-retry-count,3

API keys are stored in Vault:

vault kv put gbo/graphql/example api_key="your-api-key"

Implementation Notes

  • Implemented in Rust under src/web_automation/graphql.rs
  • Sends POST requests with application/json content type
  • Automatically formats query and variables
  • Parses JSON response into accessible objects
  • Supports custom headers via SET HEADER
  • Handles both queries and mutations

See Also

  • POST — REST POST requests
  • GET — REST GET requests
  • SET HEADER — Set authentication headers
  • SOAP — SOAP/XML web services

Summary

GRAPHQL executes queries and mutations against GraphQL APIs. Use it when you need precise control over the data you fetch, especially for nested relationships. GraphQL is more efficient than REST for complex data needs, requiring fewer round trips. Always handle both network errors and GraphQL-specific errors in the response.

WEBHOOK

Creates an instant HTTP endpoint for your bot. With WEBHOOK, you can expose any BASIC script as an API endpoint that external systems can call - perfect for integrations, notifications, and building custom APIs with LLM-powered responses.

Why WEBHOOK?

Traditional API development requires:

  • Setting up a web framework
  • Writing routing code
  • Handling HTTP parsing
  • Deploying infrastructure

With General Bots WEBHOOK, you write one line and your endpoint is live:

WEBHOOK "my-endpoint"

That’s it. Your script is now accessible at /api/{botname}/webhook/my-endpoint.

Syntax

WEBHOOK "endpoint-name"

Parameters

Parameter | Type | Description
endpoint-name | String | Unique name for the webhook (alphanumeric, hyphens, underscores)

Request Data Available

When your webhook is called, these variables are automatically available:

Variable | Description | Example
params | Query string parameters | params.id, params.filter
body | JSON request body as object | body.customer.name
headers | HTTP headers | headers.authorization
method | HTTP method used | "POST", "GET"
path | Request path | "/webhook/my-endpoint"
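
A minimal sketch that echoes these request variables back to the caller; the endpoint name is illustrative:

' echo.bas - Inspect incoming request data
WEBHOOK "echo"

result_method = method
result_path = path
result_query_id = params.id
result_has_auth = headers.authorization <> ""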

Examples

1. Simple Status Endpoint

' status.bas - Simple health check
WEBHOOK "status"

result_status = "healthy"
result_timestamp = NOW()
result_version = "1.0.0"

Call it:

curl https://bot.example.com/api/mybot/webhook/status

Response:

{"status": "healthy", "timestamp": "2024-01-20T10:30:00Z", "version": "1.0.0"}

2. WhatsApp Order Notification

Send order confirmations directly to customers on WhatsApp:

' order-notify.bas - Notify customer via WhatsApp
WEBHOOK "order-notify"

order_id = body.order_id
customer_phone = body.customer_phone
customer_name = body.customer_name
total = body.total
items = body.items

IF order_id = "" OR customer_phone = "" THEN
    result_status = 400
    result_error = "Missing order_id or customer_phone"
    EXIT
END IF

' Build order summary
order_summary = "🛒 *Order Confirmed #" + order_id + "*\n\n"
order_summary = order_summary + "Hi " + customer_name + "!\n\n"
order_summary = order_summary + "Your order has been confirmed.\n"
order_summary = order_summary + "Total: $" + total + "\n\n"
order_summary = order_summary + "We'll notify you when it ships!"

' Send to WhatsApp using TALK TO
TALK TO "whatsapp:" + customer_phone, order_summary

' Save order to database
order_status = "confirmed"
created_at = NOW()
SAVE "orders", order_id, customer_name, customer_phone, total, order_status, created_at

result_status = "ok"
result_order_id = order_id
result_message = "Customer notified via WhatsApp"

Call it:

curl -X POST https://bot.example.com/api/mybot/webhook/order-notify \
  -H "Content-Type: application/json" \
  -d '{
    "order_id": "ORD-12345",
    "customer_phone": "+5511999887766",
    "customer_name": "João",
    "total": "299.90",
    "items": ["Widget", "Gadget"]
  }'

3. WhatsApp Document Delivery

Send invoices, reports, or documents to WhatsApp:

' send-invoice.bas - Generate and send invoice via WhatsApp
WEBHOOK "send-invoice"

order_id = body.order_id
customer_phone = body.customer_phone
customer_name = body.customer_name

IF order_id = "" OR customer_phone = "" THEN
    result_status = 400
    result_error = "Missing order_id or customer_phone"
    EXIT
END IF

' Get order data
order = FIND "orders", "order_id=" + order_id

' Generate PDF invoice
invoice_date = FORMAT(NOW(), "DD/MM/YYYY")
GENERATE PDF "templates/invoice.html", order_id, customer_name, order.total, order.items, invoice_date, "invoices/" + order_id + ".pdf"

' Send PDF to WhatsApp with caption
SEND FILE TO "whatsapp:" + customer_phone, "invoices/" + order_id + ".pdf", "📄 Invoice #" + order_id + " - Thank you for your purchase!"

' Also send a follow-up message
TALK TO "whatsapp:" + customer_phone, "If you have any questions about your order, just reply to this message! 😊"

result_status = "ok"
result_message = "Invoice sent to WhatsApp"

4. WhatsApp Support Ticket System

Create support tickets and notify via WhatsApp:

' support-ticket.bas - Create ticket and notify customer
WEBHOOK "support-ticket"

customer_phone = body.phone
customer_name = body.name
issue = body.issue
priority = body.priority

IF customer_phone = "" OR issue = "" THEN
    result_status = 400
    result_error = "Missing phone or issue description"
    EXIT
END IF

IF priority = "" THEN
    priority = "normal"
END IF

' Create ticket
ticket_id = "TKT-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
ticket_status = "open"
created_at = NOW()

SAVE "support_tickets", ticket_id, customer_name, customer_phone, issue, priority, ticket_status, created_at

' Notify customer via WhatsApp
confirmation = "🎫 *Support Ticket Created*\n\n"
confirmation = confirmation + "Ticket: #" + ticket_id + "\n"
confirmation = confirmation + "Priority: " + priority + "\n\n"
confirmation = confirmation + "We received your request:\n_" + issue + "_\n\n"
confirmation = confirmation + "Our team will respond within 24 hours."

TALK TO "whatsapp:" + customer_phone, confirmation

' Notify support team
team_msg = "🆕 New ticket #" + ticket_id + "\n"
team_msg = team_msg + "From: " + customer_name + " (" + customer_phone + ")\n"
team_msg = team_msg + "Priority: " + priority + "\n"
team_msg = team_msg + "Issue: " + issue

TALK TO "whatsapp:+5511999000001", team_msg

result_status = "ok"
result_ticket_id = ticket_id

5. AI-Powered WhatsApp Assistant

Create an API that uses AI and responds via WhatsApp:

' ai-assistant.bas - AI assistant that responds via WhatsApp
WEBHOOK "ask-ai"

question = body.question
customer_phone = body.phone
context_type = body.context

IF question = "" OR customer_phone = "" THEN
    result_status = 400
    result_error = "Missing question or phone"
    EXIT
END IF

' Load appropriate knowledge base
IF context_type = "products" THEN
    USE KB "product-catalog"
ELSE IF context_type = "support" THEN
    USE KB "support-docs"
ELSE
    USE KB "general-faq"
END IF

' Set AI context
SET CONTEXT "You are a helpful assistant. Be concise and friendly. Use emojis occasionally."

' Get AI response
answer = LLM question

' Send response via WhatsApp
TALK TO "whatsapp:" + customer_phone, answer

' Log the interaction
log_question = question
log_answer = answer
log_phone = customer_phone
log_context = context_type
log_timestamp = NOW()

INSERT "ai_conversations", log_question, log_answer, log_phone, log_context, log_timestamp

result_status = "ok"
result_answer = answer

6. WhatsApp Broadcast for Promotions

Send promotional messages to multiple customers:

' promo-broadcast.bas - Send promotions to customer list
WEBHOOK "send-promo"

promo_title = body.title
promo_message = body.message
promo_image = body.image_url
customer_segment = body.segment

IF promo_message = "" THEN
    result_status = 400
    result_error = "Missing promotion message"
    EXIT
END IF

IF customer_segment = "" THEN
    customer_segment = "all"
END IF

' Get customers for this segment
customers = FIND "customers", "segment=" + customer_segment + " AND whatsapp_optin=true"

sent_count = 0
error_count = 0

' Build promo message with formatting
full_message = "🎉 *" + promo_title + "*\n\n"
full_message = full_message + promo_message + "\n\n"
full_message = full_message + "_Reply STOP to unsubscribe_"

FOR EACH customer IN customers
    ' Send to each customer
    IF promo_image <> "" THEN
        SEND FILE TO "whatsapp:" + customer.phone, promo_image, full_message
    ELSE
        TALK TO "whatsapp:" + customer.phone, full_message
    END IF
    
    sent_count = sent_count + 1
    
    ' Rate limiting - wait between messages
    WAIT 1000
NEXT customer

' Log the campaign
campaign_id = "CAMP-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
campaign_title = promo_title
campaign_sent = sent_count
campaign_date = NOW()

INSERT "campaigns", campaign_id, campaign_title, campaign_sent, customer_segment, campaign_date

result_status = "ok"
result_campaign_id = campaign_id
result_sent = sent_count

7. Payment Notification with WhatsApp Receipt

Handle payment webhooks and notify customers:

' payment-webhook.bas - Handle payment and notify via WhatsApp
WEBHOOK "payment"

event_type = body.type
payment_id = body.data.object.id
amount = body.data.object.amount
customer_id = body.data.object.customer

SELECT CASE event_type
    CASE "payment_intent.succeeded"
        ' Get customer info
        customer = FIND "customers", "stripe_id=" + customer_id
        
        ' Update order status
        order_status = "paid"
        paid_at = NOW()
        UPDATE "orders", "payment_id=" + payment_id, order_status, paid_at
        
        ' Format amount (cents to dollars)
        amount_formatted = amount / 100
        
        ' Send WhatsApp receipt
        receipt = "✅ *Payment Received*\n\n"
        receipt = receipt + "Amount: $" + amount_formatted + "\n"
        receipt = receipt + "Payment ID: " + payment_id + "\n"
        receipt = receipt + "Date: " + FORMAT(NOW(), "DD/MM/YYYY HH:mm") + "\n\n"
        receipt = receipt + "Thank you for your purchase! 🙏"
        
        TALK TO "whatsapp:" + customer.phone, receipt
        
    CASE "payment_intent.payment_failed"
        customer = FIND "customers", "stripe_id=" + customer_id
        
        ' Notify customer of failure
        failure_msg = "⚠️ *Payment Failed*\n\n"
        failure_msg = failure_msg + "We couldn't process your payment.\n"
        failure_msg = failure_msg + "Please try again or use a different payment method.\n\n"
        failure_msg = failure_msg + "Need help? Reply to this message!"
        
        TALK TO "whatsapp:" + customer.phone, failure_msg
        
    CASE ELSE
        ' Log unhandled event
        TALK "Unhandled payment event: " + event_type
END SELECT

result_received = TRUE

8. Appointment Reminder System

Webhook to trigger appointment reminders:

' appointment-reminder.bas - Send appointment reminders via WhatsApp
WEBHOOK "send-reminder"

appointment_id = body.appointment_id
hours_before = body.hours_before

IF appointment_id = "" THEN
    result_status = 400
    result_error = "Missing appointment_id"
    EXIT
END IF

IF hours_before = "" THEN
    hours_before = 24
END IF

' Get appointment details
appointment = FIND "appointments", "id=" + appointment_id

' Format date/time nicely
appt_date = FORMAT(appointment.datetime, "dddd, MMMM DD")
appt_time = FORMAT(appointment.datetime, "HH:mm")

' Build reminder message
reminder = "📅 *Appointment Reminder*\n\n"
reminder = reminder + "Hi " + appointment.customer_name + "!\n\n"
reminder = reminder + "This is a reminder of your upcoming appointment:\n\n"
reminder = reminder + "📍 *" + appointment.service + "*\n"
reminder = reminder + "🗓️ " + appt_date + "\n"
reminder = reminder + "🕐 " + appt_time + "\n"
reminder = reminder + "📌 " + appointment.location + "\n\n"
reminder = reminder + "Reply *CONFIRM* to confirm or *CANCEL* to cancel."

' Send via WhatsApp
TALK TO "whatsapp:" + appointment.customer_phone, reminder

' Update reminder sent status
reminder_sent_at = NOW()
UPDATE "appointments", "id=" + appointment_id, reminder_sent_at

result_status = "ok"
result_message = "Reminder sent"

9. Form Submission with WhatsApp Follow-up

Handle web form submissions and follow up on WhatsApp:

' contact-form.bas - Handle contact form and follow up via WhatsApp
WEBHOOK "contact"

name = body.name
email = body.email
phone = body.phone
message = body.message
source = body.source

IF name = "" OR message = "" THEN
    result_status = 400
    result_error = "Name and message are required"
    EXIT
END IF

' Use AI to categorize and generate response
SET CONTEXT "Categorize this message as: sales, support, feedback, or other. Then write a friendly acknowledgment."

ai_prompt = "Customer: " + name + "\nMessage: " + message
ai_response = LLM ai_prompt

' Save the submission
submission_id = "SUB-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
submission_status = "new"
created_at = NOW()

SAVE "submissions", submission_id, name, email, phone, message, source, ai_response, submission_status, created_at

' If phone provided, send WhatsApp confirmation
IF phone <> "" THEN
    whatsapp_msg = "👋 Hi " + name + "!\n\n"
    whatsapp_msg = whatsapp_msg + "Thanks for reaching out! We received your message:\n\n"
    whatsapp_msg = whatsapp_msg + "_" + message + "_\n\n"
    whatsapp_msg = whatsapp_msg + "We'll get back to you soon. In the meantime, feel free to reply here if you have any questions!"
    
    TALK TO "whatsapp:" + phone, whatsapp_msg
END IF

' Send email confirmation too
IF email <> "" THEN
    SEND MAIL email, "We received your message", "Hi " + name + ",\n\nThank you for contacting us. We'll respond within 24 hours.\n\nBest regards"
END IF

result_status = "ok"
result_submission_id = submission_id

10. Multi-Channel Notification Hub

Single webhook that routes to multiple channels:

' notify.bas - Multi-channel notification hub
WEBHOOK "notify"

channel = body.channel
recipient = body.recipient
message = body.message
file_url = body.file
caption = body.caption

IF recipient = "" OR message = "" THEN
    result_status = 400
    result_error = "Missing recipient or message"
    EXIT
END IF

IF channel = "" THEN
    channel = "whatsapp"
END IF

' Route to appropriate channel
SELECT CASE channel
    CASE "whatsapp"
        IF file_url <> "" THEN
            SEND FILE TO "whatsapp:" + recipient, file_url, caption
        ELSE
            TALK TO "whatsapp:" + recipient, message
        END IF
        
    CASE "email"
        subject = body.subject
        IF subject = "" THEN
            subject = "Notification"
        END IF
        
        IF file_url <> "" THEN
            SEND MAIL recipient, subject, message, file_url
        ELSE
            SEND MAIL recipient, subject, message
        END IF
        
    CASE "teams"
        TALK TO "teams:" + recipient, message
        
    CASE "web"
        ' Send to web session
        TALK TO "web:" + recipient, message
        
    CASE ELSE
        result_status = 400
        result_error = "Unknown channel: " + channel
        EXIT
END SELECT

' Log notification
log_channel = channel
log_recipient = recipient
log_message = message
log_timestamp = NOW()

INSERT "notification_log", log_channel, log_recipient, log_message, log_timestamp

result_status = "ok"
result_channel = channel
result_delivered = TRUE

Response Handling

Control the HTTP response by setting result_ prefixed variables:

Simple Response

result_status = "ok"
result_data = my_data

Custom Status Code

result_status = 201  ' Created
result_id = new_id
result_created = TRUE

Error Response

result_status = 400
result_error = "Invalid request"
result_details = "Missing required field: phone"

WhatsApp Message Formatting

WhatsApp supports rich text formatting:

Format | Syntax | Example
Bold | *text* | *Important*
Italic | _text_ | _note_
Strikethrough | ~text~ | ~old price~
Monospace | `text` | `code`
Line break | \n | "Line 1\nLine 2"

Example with Formatting

message = "🎉 *Order Confirmed!*\n\n"
message = message + "Order: #" + order_id + "\n"
message = message + "Total: ~$" + old_price + "~ *$" + new_price + "*\n"
message = message + "_Discount applied!_"

TALK TO "whatsapp:" + phone, message

Security Best Practices

1. Validate Webhook Signatures

WEBHOOK "secure-endpoint"

signature = headers.x_webhook_signature
secret = GET BOT MEMORY "webhook_secret"

IF signature = "" THEN
    TALK "Invalid request - no signature"
    result_status = 401
    result_error = "Missing signature"
    EXIT
END IF

' Verify the signature against a value derived from the shared secret, then continue...

2. Validate Phone Numbers

phone = body.phone

' Remove non-numeric characters
clean_phone = REPLACE(phone, "+", "")
clean_phone = REPLACE(clean_phone, "-", "")
clean_phone = REPLACE(clean_phone, " ", "")

IF LEN(clean_phone) < 10 THEN
    result_status = 400
    result_error = "Invalid phone number"
    EXIT
END IF

' Add country code if missing
IF LEFT(clean_phone, 2) <> "55" THEN
    clean_phone = "55" + clean_phone
END IF

TALK TO "whatsapp:+" + clean_phone, message

3. Rate Limiting

WEBHOOK "rate-limited"

client_ip = headers.x_forwarded_for
rate_key = "rate:" + client_ip
current_count = GET BOT MEMORY rate_key

IF current_count = "" THEN
    current_count = 0
END IF

IF current_count > 100 THEN
    result_status = 429
    result_error = "Rate limit exceeded"
    result_retry_after = 60
    EXIT
END IF

SET BOT MEMORY rate_key, current_count + 1
' Process request...

Use Cases Summary

Use Case | Webhook Name | Description
Order Notifications | /order-notify | Confirm orders via WhatsApp
Invoice Delivery | /send-invoice | Send PDF invoices to WhatsApp
Support Tickets | /support-ticket | Create tickets, notify via WhatsApp
AI Assistant | /ask-ai | LLM answers sent to WhatsApp
Promotions | /send-promo | Broadcast promos to customers
Payment Alerts | /payment | Payment receipts via WhatsApp
Reminders | /send-reminder | Appointment reminders
Contact Forms | /contact | Form follow-up on WhatsApp
Multi-Channel | /notify | Route to any channel

Technical Notes

  • Webhooks register during script compilation
  • Stored in system_automations table with kind = Webhook
  • Endpoint names must be unique per bot
  • Request timeout: 30 seconds (keep processing fast)
  • Maximum request body: 10MB
  • HTTPS required in production

See Also

  • TALK TO - Send messages to specific recipients
  • SEND FILE TO - Send files to recipients
  • SET SCHEDULE - Time-based automation
  • ON - Database trigger events
  • LLM - Language model queries
  • USE KB - Knowledge base integration

PLAY

Open a content projector/player to display various media types including videos, images, documents, and presentations.

Syntax

' Basic playback
PLAY file_or_url

' With options
PLAY file_or_url WITH OPTIONS options_string

Parameters

Parameter | Type | Required | Description
file_or_url | String | Yes | Path to file or URL to display
options_string | String | No | Comma-separated playback options

Supported Options

Option | Description
autoplay | Start playback automatically
loop | Loop content continuously
fullscreen | Open in fullscreen mode
muted | Start with audio muted
controls | Show playback controls
nocontrols | Hide playback controls

Supported Content Types

Video

Extension | Format
.mp4 | MPEG-4 Video
.webm | WebM Video
.ogg | Ogg Video
.mov | QuickTime
.avi | AVI Video
.mkv | Matroska
.m4v | M4V Video

Audio

Extension | Format
.mp3 | MP3 Audio
.wav | WAV Audio
.flac | FLAC Audio
.aac | AAC Audio
.m4a | M4A Audio
.ogg | Ogg Audio

Images

Extension | Format
.jpg .jpeg | JPEG Image
.png | PNG Image
.gif | GIF (animated)
.webp | WebP Image
.svg | SVG Vector
.bmp | Bitmap

Documents

Extension | Format
.pdf | PDF Document
.docx .doc | Word Document
.pptx .ppt | PowerPoint
.xlsx .xls | Excel Spreadsheet
.odt | OpenDocument Text
.odp | OpenDocument Presentation

Code

Extension | Language
.rs | Rust
.py | Python
.js .ts | JavaScript/TypeScript
.java | Java
.go | Go
.rb | Ruby
.md | Markdown
.html | HTML

Examples

Play a Video

' Play a video file
PLAY "training-video.mp4"

' Play with autoplay and loop
PLAY "background.mp4" WITH OPTIONS "autoplay,loop,muted"

' Play from URL
PLAY "https://example.com/videos/demo.mp4"

Display an Image

' Show an image
PLAY "product-photo.jpg"

' Show image fullscreen
PLAY "banner.png" WITH OPTIONS "fullscreen"

Show a Presentation

' Display PowerPoint presentation
PLAY "quarterly-report.pptx"

' Fullscreen presentation mode
PLAY "sales-deck.pptx" WITH OPTIONS "fullscreen"

Display a Document

' Show PDF document
PLAY "contract.pdf"

' Show Word document
PLAY "proposal.docx"

Interactive Training Module

TALK "Welcome to the training module!"
TALK "Let's start with an introduction video."

PLAY "intro-video.mp4" WITH OPTIONS "controls"

HEAR ready AS TEXT "Type 'continue' when you're ready to proceed:"

IF LOWER(ready) = "continue" THEN
    TALK "Great! Now let's review the key concepts."
    PLAY "concepts-slides.pptx"
    
    HEAR understood AS TEXT "Did you understand the concepts? (yes/no)"
    
    IF LOWER(understood) = "yes" THEN
        TALK "Excellent! Here's your certificate."
        PLAY "certificate.pdf"
    ELSE
        TALK "Let's review the material again."
        PLAY "concepts-detailed.mp4"
    END IF
END IF

Product Showcase

' Show product images in sequence
products = FIND "products", "featured=true"

FOR EACH product IN products
    TALK "Now showing: " + product.name
    PLAY product.image_path
    WAIT 3000  ' Wait 3 seconds between images
NEXT

Code Review

' Display code for review
TALK "Let's review the implementation:"
PLAY "src/main.rs"

HEAR feedback AS TEXT "Any comments on this code?"
INSERT "code_reviews", file_path, feedback, NOW()

Audio Playback

' Play audio message
TALK "Here's a voice message from your team:"
PLAY "team-message.mp3" WITH OPTIONS "controls"

' Play background music
PLAY "ambient.mp3" WITH OPTIONS "autoplay,loop,muted"

Dynamic Content Display

' Display content based on file type
HEAR file_name AS TEXT "Enter the file name to display:"

' RIGHT(...,4) yields ".mp4"/".pdf" for 3-letter extensions and "webm" (no dot) for ".webm"
file_ext = LOWER(RIGHT(file_name, 4))

IF file_ext = ".mp4" OR file_ext = "webm" THEN
    PLAY file_name WITH OPTIONS "controls,autoplay"
ELSE IF file_ext = ".pdf" THEN
    PLAY file_name
ELSE IF file_ext = ".jpg" OR file_ext = ".png" THEN
    PLAY file_name WITH OPTIONS "fullscreen"
ELSE
    TALK "Unsupported file type"
END IF

Embedded Video from URL

' Play YouTube video (via embed URL)
PLAY "https://www.youtube.com/embed/dQw4w9WgXcQ"

' Play Vimeo video
PLAY "https://player.vimeo.com/video/123456789"

Onboarding Flow

' Multi-step onboarding with media
TALK "Welcome to our platform! Let's get you started."

' Step 1: Welcome video
TALK "First, watch this quick introduction:"
PLAY "onboarding/welcome.mp4" WITH OPTIONS "controls"

HEAR step1_done AS TEXT "Press Enter when done..."

' Step 2: Feature overview
TALK "Here's an overview of our key features:"
PLAY "onboarding/features.pptx"

HEAR step2_done AS TEXT "Press Enter when done..."

' Step 3: Quick start guide
TALK "Finally, here's your quick start guide:"
PLAY "onboarding/quickstart.pdf"

TALK "You're all set! 🎉"

Error Handling

' Check if file exists before playing
file_path = "presentation.pptx"

IF FILE_EXISTS(file_path) THEN
    PLAY file_path
ELSE
    TALK "Sorry, the file could not be found."
    TALK "Please check the file path and try again."
END IF

Player Behavior

Web Interface

When used in the web interface, PLAY opens a modal overlay with:

  • Appropriate player for the content type
  • Close button to dismiss
  • Optional playback controls
  • Fullscreen toggle

WhatsApp/Messaging Channels

On messaging channels, PLAY sends the file directly:

  • Videos/images: Sent as media messages
  • Documents: Sent as file attachments
  • URLs: Sent as links with preview

Desktop Application

In the desktop app, PLAY uses the native media player or viewer appropriate for the content type.

File Locations

Files can be referenced from:

Location | Example
Bot's .gbdrive | documents/report.pdf
User's folder | users/john@email.com/uploads/photo.jpg
Absolute URL | https://cdn.example.com/video.mp4
Relative path | ./assets/logo.png
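
For instance, the same keyword works across all of these locations; the paths below mirror the table's examples:

' From the bot's drive
PLAY "documents/report.pdf"

' From a user's upload folder
PLAY "users/john@email.com/uploads/photo.jpg"

' From an absolute URL
PLAY "https://cdn.example.com/video.mp4"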

Limitations

  • Maximum file size depends on channel (WhatsApp: 16MB for media, 100MB for documents)
  • Some formats may require conversion for web playback
  • Streaming large files requires adequate bandwidth
  • Protected/DRM content is not supported

See Also

  • SEND FILE - Send files as attachments
  • TALK - Display text messages
  • UPLOAD - Upload files to storage
  • DOWNLOAD - Download files from URLs

Implementation

The PLAY keyword is implemented in src/basic/keywords/play.rs with content type detection and appropriate player selection for each media format.

SEND MAIL

Send email messages.

Syntax

Single Line

SEND MAIL to, subject, body
SEND MAIL to, subject, body USING "account@example.com"

Multi-Line Block with Variable Substitution

BEGIN MAIL recipient
Subject: Email subject here

Dear ${customerName},

Your order ${orderId} is ready.

Thank you!
END MAIL

Parameters

Parameter | Type | Description
to | String | Recipient email address(es), comma-separated for multiple
subject | String | Email subject line
body | String | Email body (plain text or HTML)
account | String | (Optional) Connected account to send through
${variable} | Expression | Variable substitution within MAIL blocks

Description

The SEND MAIL keyword sends emails using either:

  1. Default SMTP - Configuration from config.csv
  2. Connected Account - Send through Gmail, Outlook, etc. configured in Sources app

BEGIN MAIL / END MAIL Blocks

The BEGIN MAIL / END MAIL block syntax allows you to write elegant multi-line emails with automatic variable substitution using ${variable} syntax.

Syntax

BEGIN MAIL recipient
Subject: Email subject ${variable}

Dear ${customerName},

Your order ${orderId} has been shipped.

Tracking: ${trackingNumber}

Best regards,
The Team
END MAIL

How It Works

  1. First line after BEGIN MAIL — contains the email recipient
  2. Line starting with Subject: — the email subject line (supports ${variable})
  3. Blank line after subject — separates the subject from the body
  4. Body lines — email content with automatic ${variable} substitution
  5. Each line is converted to string concatenation with proper newline handling

Input:

nome = "João"
pedido = "12345"

BEGIN MAIL "cliente@example.com"
Subject: Confirmação do Pedido ${pedido}

Olá ${nome},

Seu pedido foi confirmado!

Atenciosamente,
Equipe de Vendas
END MAIL

Converted to:

SEND MAIL "cliente@example.com", "Confirmação do Pedido 12345", "Olá " + nome + ",\n\nSeu pedido foi confirmado!\n\nAtenciosamente,\nEquipe de Vendas"

Variable Substitution Rules

  • ${variableName} - Replaced with the variable value
  • ${FUNCTION(args)} - Function calls are evaluated and substituted
  • Plain text without ${} is treated as a string literal
  • Special characters like $ (not followed by {) are preserved
  • Newlines are preserved as \n in the final email body

Examples

Simple Email

email = "customer@example.com"
nome = "Maria"

BEGIN MAIL email
Subject: Bem-vindo ao nosso serviço!

Olá ${nome},

Obrigado por se cadastrar!

Atenciosamente,
Equipe
END MAIL

With Function Calls

BEGIN MAIL "cliente@empresa.com"
Subject: Pedido ${pedidoId} - Confirmação

Prezado ${nomeCliente},

Confirmamos seu pedido #${pedidoId} no valor de ${FORMAT(total, "currency")}.

Entrega prevista para: ${FORMAT(dataEntrega, "dd/MM/yyyy")}

Atenciosamente,
Departamento de Vendas
END MAIL

HTML Email

BEGIN MAIL "cliente@exemplo.com"
Subject: Seu pedido foi enviado!

<h1>Confirmação de Pedido</h1>

<p>Olá ${nome},</p>
<p>Seu pedido <strong>${pedidoId}</strong> foi enviado com sucesso!</p>

<p>Valor: <em>${FORMAT(valor, "currency")}</em></p>

<p>Atenciosamente,<br>Loja Virtual</p>
END MAIL

Real-World Example: Wedding Confirmation

PARAM nomeNoivo AS STRING LIKE "Carlos" DESCRIPTION "Nome do noivo"
PARAM nomeNoiva AS STRING LIKE "Ana" DESCRIPTION "Nome da noiva"
PARAM emailNoivo AS EMAIL LIKE "noivo@example.com" DESCRIPTION "Email do noivo"
PARAM emailNoiva AS EMAIL LIKE "noiva@example.com" DESCRIPTION "Email da noiva"
PARAM protocolo AS STRING LIKE "CAS123456" DESCRIPTION "Protocolo"

casamentoId = "CAS-" + FORMAT(NOW(), "yyyyMMddHHmmss")
tipoTexto = "Religioso Simples"

BEGIN MAIL emailNoivo
Subject: Confirmação de Casamento - Protocolo ${protocolo}

Queridos ${nomeNoivo} e ${nomeNoiva},

Parabéns pelo compromisso de amor que estão assumindo! Recebemos a solicitação de casamento no Santuário Cristo Redentor.

DADOS DA SOLICITAÇÃO:
Protocolo: ${protocolo}
ID: ${casamentoId}
Noivo: ${nomeNoivo}
Noiva: ${nomeNoiva}
Tipo: ${tipoTexto}

Nossa equipe verificará a disponibilidade e enviará todas as instruções necessárias em breve.

Que Deus abençoe a união de vocês!

Atenciosamente,
Secretaria do Santuário Cristo Redentor
Tel: (21) 4101-0770 | WhatsApp: (21) 99566-5883
END MAIL

Multiple Recipients

Send the same email to multiple people:

BEGIN MAIL "team1@company.com"
Subject: Meeting Reminder

Team meeting tomorrow at 3 PM.
END MAIL

BEGIN MAIL "team2@company.com"
Subject: Meeting Reminder

Team meeting tomorrow at 3 PM.
END MAIL

Or use comma-separated recipients:

recipients = "john@company.com, jane@company.com, bob@company.com"
SEND MAIL recipients, "Meeting Update", "Meeting rescheduled to 4 PM"

Advantages

  1. Cleaner Syntax - No more repetitive string concatenation for email body
  2. Easier to Read - Multi-line emails are natural to write and maintain
  3. Template-Like - Write emails like templates with ${variable} placeholders
  4. Automatic Newlines - Blank lines in the block become \n in the email
  5. Perfect for TOOL Functions - Variables are automatically filled by user input

Configuration

Default SMTP in config.csv:

name,value
email-from,noreply@example.com
email-server,smtp.example.com
email-port,587
email-user,smtp-user@example.com
email-pass,smtp-password

Examples

SEND MAIL "user@example.com", "Welcome!", "Thank you for signing up."

recipients = "john@example.com, jane@example.com"
SEND MAIL recipients, "Team Update", "Meeting tomorrow at 3 PM"

body = "<h1>Welcome!</h1><p>Thank you for joining us.</p>"
SEND MAIL "user@example.com", "Getting Started", body

USING Clause

Send through a connected account configured in Suite → Sources → Accounts:

SEND MAIL "customer@example.com", "Subject", body USING "support@company.com"

The email appears from that account’s address with proper authentication.

SEND MAIL "customer@example.com", "Ticket Update", "Your ticket has been resolved." USING "support@company.com"

Delivery Status

status = SEND MAIL "user@example.com", "Test", "Message"
IF status = "sent" THEN
    TALK "Email delivered successfully"
END IF

Best Practices

  1. Use connected accounts for better deliverability
  2. Validate email addresses before sending
  3. Implement delays for bulk emails (see the sketch below)
  4. Handle failures gracefully
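
A minimal bulk-send sketch combining these practices; the table name, filter, and newsletter_body variable are illustrative assumptions:

' Send a newsletter with basic validation, pacing, and failure handling
subscribers = FIND "subscribers", "opt_in=true"

FOR EACH subscriber IN subscribers
    IF subscriber.email <> "" THEN
        status = SEND MAIL subscriber.email, "Monthly Newsletter", newsletter_body
        IF status <> "sent" THEN
            PRINT "Delivery failed for " + subscriber.email
        END IF
        WAIT 500  ' Pace bulk sends to protect deliverability
    END IF
NEXT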

Troubleshooting

Issue | Cause | Solution
Auth failed | Invalid credentials | Check config.csv or re-authenticate account
Not sending | Firewall blocking | Verify port 587/465 is open
Going to spam | No domain auth | Configure SPF/DKIM
Account not found | Not configured | Add account in Suite → Sources

See Also

  • SEND SMS — Send text messages
  • WEBHOOK — Trigger emails from HTTP endpoints

Implementation

Located in src/basic/keywords/send_mail.rs

SEND SMS

Send SMS text messages to phone numbers using various providers with optional priority levels.

Syntax

' Basic SMS sending (default priority: normal)
SEND SMS phone, message

' With priority level
SEND SMS phone, message, priority

' With specific provider
SEND SMS phone, message, provider

' With provider AND priority (full syntax)
SEND SMS phone, message, provider, priority

Parameters

Parameter | Type | Required | Description
phone | String | Yes | Recipient phone number (E.164 format recommended)
message | String | Yes | The text message to send (max 160 chars for single SMS)
priority | String | No | Priority level: low, normal, high, urgent
provider | String | No | SMS provider: twilio, aws_sns, vonage, messagebird, or custom

Priority Levels

Priority | Description | Provider Behavior
low | Non-urgent, promotional messages | Standard delivery
normal | Default priority | Standard delivery
high | Important messages | Transactional routing (AWS SNS), priority prefix
urgent | Critical/time-sensitive | Flash message (Vonage), [URGENT] prefix (Twilio)

Return Value

Returns a map object with the following properties:

Property | Type | Description
success | Boolean | true if SMS was sent successfully
message_id | String | Provider's message ID for tracking
provider | String | The provider used to send the message
to | String | Normalized recipient phone number
priority | String | The priority level used
error | String | Error message (only present if success is false)

Configuration

Configure SMS provider credentials in config.csv:

key,value
sms-provider,twilio
sms-default-priority,normal
twilio-account-sid,YOUR_ACCOUNT_SID
twilio-auth-token,YOUR_AUTH_TOKEN
twilio-from-number,+15551234567

Provider-Specific Configuration

Twilio:

sms-provider,twilio
twilio-account-sid,ACxxxxx
twilio-auth-token,your_token
twilio-from-number,+15551234567

AWS SNS:

sms-provider,aws_sns
aws-access-key,AKIAXXXXXXXX
aws-secret-key,your_secret
aws-region,us-east-1

Vonage (Nexmo):

sms-provider,vonage
vonage-api-key,your_api_key
vonage-api-secret,your_secret
vonage-from-number,+15551234567

MessageBird:

sms-provider,messagebird
messagebird-access-key,your_access_key
messagebird-originator,YourBrand

Examples

Basic SMS

HEAR phone AS TEXT "Enter phone number:"
SEND SMS phone, "Hello from General Bots!"
TALK "SMS sent successfully!"

SMS with Priority

' Send urgent notification
result = SEND SMS "+15551234567", "Server is DOWN! Immediate action required.", "urgent"

IF result.success THEN
    TALK "Urgent alert sent with ID: " + result.message_id
ELSE
    TALK "Failed to send alert: " + result.error
END IF

Order Confirmation (Normal Priority)

' Send order confirmation via SMS
order_id = "ORD-2025-001"
phone = customer.phone

message = "Your order " + order_id + " has been confirmed. "
message = message + "Estimated delivery: 2-3 business days."

result = SEND SMS phone, message, "normal"

IF result.success THEN
    TALK "Confirmation SMS sent to " + phone
ELSE
    TALK "Failed to send SMS. We'll email you instead."
    SEND MAIL customer.email, "Order Confirmation", message, []
END IF

Two-Factor Authentication (High Priority)

' Generate and send OTP with high priority for faster delivery
otp = RANDOM(100000, 999999)
REMEMBER "otp_" + user.id, otp, "5 minutes"

message = "Your verification code is: " + otp + ". Valid for 5 minutes."
result = SEND SMS user.phone, message, "high"

IF NOT result.success THEN
    TALK "Failed to send verification code. Please try again."
    RETURN
END IF

HEAR entered_code AS TEXT "Enter the code sent to your phone:"

stored_otp = RECALL "otp_" + user.id

IF entered_code = stored_otp THEN
    TALK "✅ Phone verified successfully!"
    SET USER MEMORY "phone_verified", true
ELSE
    TALK "❌ Invalid code. Please try again."
END IF

Emergency Alert (Urgent Priority)

' Send emergency notification to multiple recipients
alert_message = "⚠️ ALERT: System maintenance in 30 minutes. Save your work."

contacts = FIND "emergency_contacts", "notify=true"

sent_count = 0
failed_count = 0

FOR EACH contact IN contacts
    result = SEND SMS contact.phone, alert_message, "urgent"
    
    IF result.success THEN
        sent_count = sent_count + 1
    ELSE
        failed_count = failed_count + 1
        PRINT "Failed to send to " + contact.phone + ": " + result.error
    END IF
    
    WAIT 100  ' Small delay between messages
NEXT

TALK "Emergency alert sent to " + sent_count + " contacts (" + failed_count + " failed)"

Using Specific Provider with Priority

' Use AWS SNS for high-priority transactional messages
result = SEND SMS "+15551234567", "Your appointment is in 1 hour!", "aws_sns", "high"

IF result.success THEN
    TALK "Reminder sent via " + result.provider + " with " + result.priority + " priority"
END IF

Priority-Based Routing

' Route messages based on urgency
SUB send_notification(phone, message, urgency)
    SELECT CASE urgency
        CASE "critical"
            ' Use multiple channels for critical messages
            result = SEND SMS phone, message, "urgent"
            SEND MAIL user.email, "CRITICAL: " + message, message, []
            
        CASE "important"
            result = SEND SMS phone, message, "high"
            
        CASE "info"
            result = SEND SMS phone, message, "low"
            
        CASE ELSE
            result = SEND SMS phone, message, "normal"
    END SELECT
    
    RETURN result
END SUB

' Usage
send_notification(customer.phone, "Your package has been delivered!", "important")

Appointment Reminder with Priority

' Send appointment reminder based on time until appointment
hours_until = DATEDIFF(appointment.datetime, NOW(), "hour")

IF hours_until <= 1 THEN
    ' Urgent - appointment is very soon
    priority = "urgent"
    message = "⏰ REMINDER: Your appointment is in " + hours_until + " hour(s)!"
ELSE IF hours_until <= 4 THEN
    ' High priority - same day
    priority = "high"
    message = "Reminder: Your appointment is today at " + FORMAT(appointment.datetime, "h:mm A")
ELSE
    ' Normal priority - advance reminder
    priority = "normal"
    message = "Reminder: You have an appointment on " + FORMAT(appointment.datetime, "MMMM D")
END IF

result = SEND SMS patient.phone, message, priority

IF result.success THEN
    UPDATE "appointments", appointment.id, "reminder_sent", true
END IF

Bulk SMS with Priority Levels

' Send promotional messages with low priority (cost-effective)
customers = FIND "customers.csv", "marketing_opt_in = true"

FOR EACH customer IN customers
    message = "Hi " + customer.first_name + "! Check out our weekend sale - 20% off!"
    
    ' Use low priority for promotional bulk messages
    result = SEND SMS customer.phone, message, "low"
    
    IF result.success THEN
        INSERT "sms_log", customer.phone, message, result.message_id, NOW()
    END IF
    
    WAIT 500  ' Rate limiting for bulk sends
NEXT

TALK "Campaign completed!"

Multi-Channel Fallback with Priority

' Try SMS first, fall back to other channels
SUB notify_user(user, message, priority)
    ' Try SMS first
    result = SEND SMS user.phone, message, priority
    
    IF result.success THEN
        RETURN "sms"
    END IF
    
    ' SMS failed, try WhatsApp
    wa_result = SEND WHATSAPP user.phone, message
    
    IF wa_result.success THEN
        RETURN "whatsapp"
    END IF
    
    ' Fall back to email
    SEND MAIL user.email, "Notification", message, []
    RETURN "email"
    
END SUB

' Usage
channel_used = notify_user(customer, "Your order has shipped!", "high")
TALK "Notification sent via " + channel_used

Phone Number Formats

The keyword accepts various phone number formats and normalizes them:

Format | Example | Result
E.164 | +14155551234 | +14155551234
National (US) | (415) 555-1234 | +14155551234
Digits only (10) | 4155551234 | +14155551234
Digits only (11) | 14155551234 | +14155551234

Best Practice: Always use E.164 format (+ followed by country code and number).
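
Because the recipient is normalized, the to property of the result reflects the E.164 form regardless of the input format — a small sketch:

' All of these inputs deliver to the same normalized number
result = SEND SMS "(415) 555-1234", "Format test"
TALK "Delivered to: " + result.to  ' +14155551234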

Message Length

Type | Characters | Notes
Single SMS | 160 | Standard ASCII
Unicode SMS | 70 | Emojis, non-Latin scripts
Concatenated | 153 × segments | Long messages split

Note: High and urgent priority messages may have prefixes added (e.g., [URGENT]), which reduces available characters.

' Check message length before sending
IF LEN(message) > 140 AND priority = "urgent" THEN
    TALK "Warning: Urgent prefix may cause message to split"
END IF

SEND SMS phone, message, priority

Priority Behavior by Provider

| Provider | Low | Normal | High | Urgent |
|---|---|---|---|---|
| Twilio | Standard | Standard | [HIGH] prefix | [URGENT] prefix |
| AWS SNS | Promotional | Promotional | Transactional | Transactional |
| Vonage | Standard | Standard | Standard | Flash message (class 0) |
| MessageBird | Standard | Standard | Class 1 | Flash message (class 0) |

Error Handling

' Handle SMS errors gracefully
result = SEND SMS phone, message, "high"

IF NOT result.success THEN
    ' Log the failure
    INSERT "sms_failures", phone, message, result.error, NOW()
    
    ' Check error type and respond
    IF result.error LIKE "*INVALID_PHONE*" THEN
        TALK "The phone number appears to be invalid."
    ELSE IF result.error LIKE "*INSUFFICIENT_FUNDS*" THEN
        TALK "SMS service temporarily unavailable."
        ' Alert admin
        SEND MAIL admin.email, "SMS Balance Low", "Please top up SMS credits", []
    ELSE
        TALK "Could not send SMS: " + result.error
    END IF
    
    ' Fallback to email if available
    IF user.email <> "" THEN
        SEND MAIL user.email, "Notification", message, []
    END IF
END IF

Cost Considerations

SMS messages incur costs per message sent. Consider:

  • Use low priority for promotional/non-urgent messages (may use cheaper routes)
  • Use high/urgent only when delivery speed is critical
  • Use SEND WHATSAPP for free messaging when possible
  • Batch non-urgent messages to optimize costs
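
A sketch of matching priority to intent (assuming your provider maps low priority to a cheaper promotional route, as in the table above):

' Promotional content: low priority may take a cheaper route
SEND SMS customer.phone, "Weekend sale - 20% off storewide!", "low"

' Time-critical content: urgent priority favors speed over cost
SEND SMS customer.phone, "Your verification code is 482913", "urgent"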

Compliance

When sending SMS messages, ensure compliance with:

  • TCPA (US) - Require consent before sending
  • GDPR (EU) - Document consent and provide opt-out
  • LGPD (Brazil) - Similar consent requirements

' Check opt-in before sending
IF GET USER MEMORY "sms_opt_in" = true THEN
    SEND SMS phone, message, priority
ELSE
    TALK "User has not opted in to SMS notifications"
END IF

See Also

  • SEND WHATSAPP — Free messaging alternative used for delivery fallback
  • SEND MAIL — Email notifications and alerts

Implementation

The SEND SMS keyword is implemented in src/basic/keywords/sms.rs with support for multiple providers through a unified interface. Priority levels are mapped to provider-specific features for optimal delivery.

READ

The READ keyword loads content from files stored in the bot’s drive storage, enabling bots to access documents, data files, and other stored resources.


Syntax

content = READ "filename"
content = READ "path/to/filename"
data = READ "filename.csv" AS TABLE
lines = READ "filename.txt" AS LINES

Parameters

| Parameter | Type | Description |
|---|---|---|
| filename | String | Path to the file in the bot’s storage |
| AS TABLE | Flag | Parse CSV/Excel files as structured data |
| AS LINES | Flag | Return content as array of lines |

Description

READ retrieves file content from the bot’s configured storage (drive bucket). It supports:

  • Text files (.txt, .md, .json, .xml, .csv)
  • Documents (.pdf, .docx) — automatically extracts text
  • Spreadsheets (.xlsx, .csv) — can parse as structured data
  • Binary files — returned as base64 encoded string

The file path is relative to the bot’s storage root. Use forward slashes for subdirectories.
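
For instance, a binary file comes back as a base64 string (a minimal sketch; the image path is hypothetical):

' Binary files are returned base64-encoded
logo_data = READ "assets/logo.png"
TALK "Logo loaded (" + LEN(logo_data) + " base64 characters)"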


Examples

Basic File Read

' Read a text file
content = READ "welcome-message.txt"
TALK content

Read from Subdirectory

' Read file from nested folder
template = READ "templates/email/welcome.html"

Read JSON Data

' Read and parse JSON configuration
config_text = READ "config.json"
config = JSON_PARSE(config_text)

TALK "Current theme: " + config.theme

Read CSV as Table

' Load CSV data as structured table
products = READ "inventory/products.csv" AS TABLE

FOR EACH product IN products
    TALK product.name + ": $" + product.price
NEXT

Read as Lines

' Read file as array of lines
faq_lines = READ "faq.txt" AS LINES

TALK "We have " + LEN(faq_lines) + " FAQ entries"

FOR EACH line IN faq_lines
    IF INSTR(line, user_question) > 0 THEN
        TALK "Found relevant FAQ: " + line
    END IF
NEXT

Read PDF Document

' Extract text from PDF
contract_text = READ "documents/contract.pdf"
TALK "Contract length: " + LEN(contract_text) + " characters"

' Use LLM to analyze
summary = LLM "Summarize the key points of this contract:\n\n" + contract_text
TALK summary

Read Excel Spreadsheet

' Load Excel data
sales_data = READ "reports/sales-q1.xlsx" AS TABLE

total = 0
FOR EACH row IN sales_data
    total = total + row.amount
NEXT

TALK "Total Q1 sales: $" + FORMAT(total, "#,##0.00")

Working with Different File Types

Text Files

' Plain text - returned as string
notes = READ "notes.txt"
readme = READ "README.md"

JSON Files

' JSON - returned as string, use JSON_PARSE for object
json_text = READ "data.json"
data = JSON_PARSE(json_text)

CSV Files

' CSV as string
csv_raw = READ "data.csv"

' CSV as structured table (recommended)
csv_data = READ "data.csv" AS TABLE
first_row = csv_data[0]

Documents

' PDF - text extracted automatically
pdf_content = READ "report.pdf"

' Word documents - text extracted automatically
doc_content = READ "proposal.docx"

Error Handling

ON ERROR RESUME NEXT

content = READ "optional-file.txt"

IF ERROR THEN
    PRINT "File not found, using default"
    content = "Default content"
END IF

Check File Exists

' List directory to check if file exists
files = LIST "documents/"

found = false
FOR EACH file IN files
    IF file.name = "report.pdf" THEN
        found = true
        EXIT FOR
    END IF
NEXT

IF found THEN
    content = READ "documents/report.pdf"
ELSE
    TALK "Report not found. Would you like me to generate one?"
END IF

Common Use Cases

Load Email Template

' Read HTML template and fill variables
template = READ "templates/order-confirmation.html"

' Replace placeholders
email_body = REPLACE(template, "{{customer_name}}", customer.name)
email_body = REPLACE(email_body, "{{order_id}}", order.id)
email_body = REPLACE(email_body, "{{total}}", FORMAT(order.total, "$#,##0.00"))

SEND MAIL customer.email, "Order Confirmation", email_body

Process Data File

' Read customer list and send personalized messages
customers = READ "campaigns/target-customers.csv" AS TABLE

FOR EACH customer IN customers
    IF customer.opted_in = "yes" THEN
        message = "Hi " + customer.first_name + ", check out our new products!"
        SEND SMS customer.phone, message
    END IF
NEXT

TALK "Campaign sent to " + LEN(customers) + " customers"

Load Bot Configuration

' Read bot settings from file
settings_text = READ "bot-settings.json"
settings = JSON_PARSE(settings_text)

' Apply settings
SET BOT MEMORY "greeting", settings.greeting
SET BOT MEMORY "language", settings.language
SET BOT MEMORY "max_retries", settings.max_retries

Knowledge Base Lookup

' Read FAQ document for quick lookups
faq_content = READ "knowledge/faq.md"

' Search for relevant section
IF INSTR(user_question, "return") > 0 THEN
    ' Extract return policy section
    start_pos = INSTR(faq_content, "## Return Policy")
    end_pos = INSTR(faq_content, "##", start_pos + 1)
    policy = MID(faq_content, start_pos, end_pos - start_pos)
    TALK policy
END IF

File Path Rules

| Path | Description |
|---|---|
| file.txt | Root of bot’s storage |
| folder/file.txt | Subdirectory |
| folder/sub/file.txt | Nested subdirectory |
| ../file.txt | Not allowed — no parent traversal |
| /absolute/path | Not allowed — paths are always relative |

Configuration

Configure storage settings in config.csv:

name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
drive-read-timeout,30

Implementation Notes

  • Implemented in Rust under src/file/mod.rs
  • Automatically detects file encoding (UTF-8, UTF-16, etc.)
  • PDF extraction uses pdf-extract crate
  • DOCX extraction parses XML content
  • Maximum file size: 50MB (configurable)
  • Files are cached in memory for repeated reads

See Also

  • WRITE — Save content to files
  • LIST — List files in a directory
  • DOWNLOAD — Download files from URLs
  • UPLOAD — Upload files to storage
  • DELETE FILE — Remove files
  • GET — Read from URLs or files

Summary

READ is the primary keyword for accessing stored files. It handles text extraction from various document formats, supports structured data parsing for CSV/Excel files, and integrates seamlessly with the bot’s storage system. Use it to load templates, process data files, access configuration, and work with uploaded documents.

WRITE

The WRITE keyword saves content to files in the bot’s drive storage, enabling bots to create documents, export data, and persist information.


Syntax

WRITE content TO "filename"
WRITE data TO "filename.csv" AS TABLE
WRITE lines TO "filename.txt" AS LINES
WRITE content TO "filename" APPEND

Parameters

| Parameter | Type | Description |
|---|---|---|
| content | String | The content to write to the file |
| filename | String | Path to the file in the bot’s storage |
| AS TABLE | Flag | Write structured data as CSV format |
| AS LINES | Flag | Write array as separate lines |
| APPEND | Flag | Add to existing file instead of overwriting |

Description

WRITE saves content to the bot’s configured storage (drive bucket). It supports:

  • Text files (.txt, .md, .json, .xml, .csv)
  • Creating new files or overwriting existing ones
  • Appending to existing files
  • Writing structured data as CSV
  • Automatic directory creation

The file path is relative to the bot’s storage root. Use forward slashes for subdirectories.


Examples

Basic File Write

' Write a simple text file
message = "Welcome to our service!"
WRITE message TO "welcome.txt"

TALK "File saved successfully!"

Write to Subdirectory

' Write file to nested folder (directories created automatically)
report = "Monthly Report\n\nSales: $10,000\nExpenses: $3,000"
WRITE report TO "reports/2025/january.md"

Write JSON Data

' Create JSON configuration file
config_json = '{"theme": "dark", "language": "en", "notifications": true}'
WRITE config_json TO "settings.json"

Write CSV as Table

' Export data as CSV - use FIND to get data from database
orders = FIND "orders" WHERE status = "completed" LIMIT 100

WRITE orders TO "exports/orders.csv" AS TABLE
TALK "Exported " + LEN(orders) + " orders to CSV"

Write Lines

' Write array as separate lines
log_entries = [
    "2025-01-15 10:00 - User logged in",
    "2025-01-15 10:05 - Order placed",
    "2025-01-15 10:10 - Payment processed"
]

WRITE log_entries TO "logs/activity.log" AS LINES

Append to File

' Add entry to existing log file
new_entry = FORMAT(NOW(), "YYYY-MM-DD HH:mm") + " - " + event_description + "\n"
WRITE new_entry TO "logs/events.log" APPEND

Common Use Cases

Generate Report

' Create a formatted report
report = "# Sales Report\n\n"
report = report + "**Date:** " + FORMAT(NOW(), "MMMM DD, YYYY") + "\n\n"
report = report + "## Summary\n\n"
report = report + "- Total Sales: $" + FORMAT(total_sales, "#,##0.00") + "\n"
report = report + "- Orders: " + order_count + "\n"
report = report + "- Average Order: $" + FORMAT(total_sales / order_count, "#,##0.00") + "\n"

filename = "reports/sales-" + FORMAT(NOW(), "YYYYMMDD") + ".md"
WRITE report TO filename

TALK "Report saved to " + filename

Export Customer Data

' Export customer list to CSV
customers = FIND "customers" WHERE status = "active"

WRITE customers TO "exports/active-customers.csv" AS TABLE

' Email the export
SEND MAIL "manager@company.com", "Customer Export", "See attached file", "exports/active-customers.csv"

Save Meeting Notes

' Save notes from a conversation
notes = "# Meeting Notes\n\n"
notes = notes + "**Date:** " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "\n"
notes = notes + "**Participants:** " + participants + "\n\n"
notes = notes + "## Discussion\n\n"
notes = notes + meeting_content + "\n\n"
notes = notes + "## Action Items\n\n"
notes = notes + action_items

filename = "meetings/" + FORMAT(NOW(), "YYYYMMDD") + "-" + meeting_topic + ".md"
WRITE notes TO filename

TALK "Meeting notes saved!"

Create Backup

' Backup current data
data = GET BOT MEMORY "important_data"
backup_name = "backups/data-" + FORMAT(NOW(), "YYYYMMDD-HHmmss") + ".json"
WRITE JSON_STRINGIFY(data) TO backup_name

TALK "Backup created: " + backup_name

Build Log File

' Append to daily log
log_line = FORMAT(NOW(), "HH:mm:ss") + " | " + user_id + " | " + action + " | " + details
log_file = "logs/" + FORMAT(NOW(), "YYYY-MM-DD") + ".log"

WRITE log_line + "\n" TO log_file APPEND

Generate HTML Page

' Create a simple HTML report
html = "<!DOCTYPE html>\n"
html = html + "<html><head><title>Report</title></head>\n"
html = html + "<body>\n"
html = html + "<h1>Daily Summary</h1>\n"
html = html + "<p>Generated: " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "</p>\n"
html = html + "<ul>\n"

FOR EACH item IN summary_items
    html = html + "<li>" + item + "</li>\n"
NEXT

html = html + "</ul>\n"
html = html + "</body></html>"

WRITE html TO "reports/daily-summary.html"

Writing Different Formats

Plain Text

WRITE "Hello, World!" TO "greeting.txt"

Markdown

doc = "# Title\n\n## Section 1\n\nContent here.\n"
WRITE doc TO "document.md"

JSON

json_text = '{"name": "Test", "value": 123}'
WRITE json_text TO "data.json"

CSV (Manual)

csv = "name,email,phone\n"
csv = csv + "Alice,alice@example.com,555-0100\n"
csv = csv + "Bob,bob@example.com,555-0101\n"
WRITE csv TO "contacts.csv"

CSV (From Table)

' Write query results as CSV
data = FIND "contacts" WHERE active = true
WRITE data TO "contacts.csv" AS TABLE

Error Handling

ON ERROR RESUME NEXT

WRITE content TO "protected/file.txt"

IF ERROR THEN
    PRINT "Write failed: " + ERROR_MESSAGE
    TALK "Sorry, I couldn't save the file. Please try again."
ELSE
    TALK "File saved successfully!"
END IF

File Path Rules

| Path | Description |
|---|---|
| file.txt | Root of bot’s storage |
| folder/file.txt | Subdirectory (created if needed) |
| folder/sub/file.txt | Nested subdirectory |
| ../file.txt | Not allowed — no parent traversal |
| /absolute/path | Not allowed — paths are always relative |

Overwrite vs Append

| Mode | Behavior |
|---|---|
| Default | Overwrites existing file completely |
| APPEND | Adds content to end of existing file |

' Overwrite (default)
WRITE "New content" TO "file.txt"

' Append
WRITE "Additional content\n" TO "file.txt" APPEND

Configuration

Configure storage settings in config.csv:

name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
drive-write-timeout,60
drive-max-file-size,52428800

Implementation Notes

  • Implemented in Rust under src/file/mod.rs
  • Automatically creates parent directories
  • Uses UTF-8 encoding for text files
  • Maximum file size: 50MB (configurable)
  • Atomic writes to prevent corruption
  • Returns confirmation on success

See Also

  • READ — Load content from files
  • LIST — List files in a directory
  • DELETE FILE — Remove files
  • COPY — Copy files
  • MOVE — Move or rename files
  • UPLOAD — Upload files to storage

Summary

WRITE is the primary keyword for creating and saving files. Use it to generate reports, export data, create backups, build logs, and persist any content. Combined with AS TABLE for CSV exports and APPEND for log files, it provides flexible file creation capabilities for any bot workflow.

UPLOAD

The UPLOAD keyword transfers files from external URLs or local paths to the bot’s drive storage, enabling bots to collect documents, images, and other files from users or external sources.


Syntax

result = UPLOAD url
result = UPLOAD url TO "destination"
result = UPLOAD url TO "destination" AS "filename"
UPLOAD file_data TO "destination"

Parameters

| Parameter | Type | Description |
|---|---|---|
| url | String | Source URL to download and upload |
| destination | String | Target folder in bot’s storage |
| filename | String | Custom filename (optional) |
| file_data | Binary | File data from user input or API response |

Description

UPLOAD retrieves a file from a URL or accepts file data and stores it in the bot’s configured storage (drive bucket). It supports:

  • Downloading files from external URLs
  • Accepting file uploads from chat users
  • Storing API response attachments
  • Organizing files into folders
  • Automatic filename detection or custom naming

The destination path is relative to the bot’s storage root. Directories are created automatically if they don’t exist.


Examples

Basic URL Upload

' Download and store a file from URL
result = UPLOAD "https://example.com/report.pdf"
TALK "File saved as: " + result.filename

Upload to Specific Folder

' Upload to a specific directory
result = UPLOAD "https://cdn.example.com/image.png" TO "images/products"
TALK "Image stored at: " + result.path

Upload with Custom Filename

' Upload with a custom name
result = UPLOAD "https://api.example.com/export/data" TO "exports" AS "monthly-report.xlsx"
TALK "Report saved as: " + result.filename

Handle User File Upload

' When user sends a file via WhatsApp/chat
TALK "Please send me the document you'd like to upload."
HEAR user_file

IF user_file.type = "file" THEN
    result = UPLOAD user_file TO "user-uploads/" + user.id
    TALK "Got it! I've saved your file: " + result.filename
ELSE
    TALK "That doesn't look like a file. Please try again."
END IF

Upload from API Response

' Download attachment from external API
invoice_url = GET "https://api.billing.com/invoices/" + invoice_id + "/pdf"
result = UPLOAD invoice_url.download_url TO "invoices/" + customer_id

TALK "Invoice downloaded and saved!"
SEND MAIL customer_email, "Your Invoice", "Please find your invoice attached.", [result.path]

Return Value

UPLOAD returns an object with:

| Property | Description |
|---|---|
| result.path | Full path in storage |
| result.filename | Name of the saved file |
| result.size | File size in bytes |
| result.type | MIME type of the file |
| result.url | Internal URL to access the file |
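
A short sketch using these properties (the source URL is hypothetical):

result = UPLOAD "https://example.com/brochure.pdf" TO "marketing"

' Report what was stored using the returned metadata
TALK "Saved " + result.filename + " (" + FORMAT(result.size / 1024, "#,##0") + " KB, " + result.type + ")"
TALK "Stored at: " + result.path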

Common Use Cases

Collect User Documents

' Document collection flow
TALK "I need a few documents to process your application."

TALK "First, please upload your ID document."
HEAR id_doc
id_result = UPLOAD id_doc TO "applications/" + application_id + "/documents" AS "id-document"

TALK "Great! Now please upload proof of address."
HEAR address_doc
address_result = UPLOAD address_doc TO "applications/" + application_id + "/documents" AS "proof-of-address"

TALK "Thank you! I've received:"
TALK "✓ ID Document: " + id_result.filename
TALK "✓ Proof of Address: " + address_result.filename

Archive External Content

' Download and archive web content
urls = [
    "https://example.com/report-2024.pdf",
    "https://example.com/report-2025.pdf"
]

FOR EACH url IN urls
    result = UPLOAD url TO "archive/reports"
    TALK "Archived: " + result.filename
NEXT

TALK "All reports archived successfully!"

Profile Photo Upload

TALK "Would you like to update your profile photo? Send me an image."
HEAR photo

IF photo.type = "image" THEN
    result = UPLOAD photo TO "profiles" AS user.id + "-avatar"
    SET USER MEMORY "avatar_url", result.url
    TALK "Profile photo updated! Looking good! 📸"
ELSE
    TALK "Please send an image file."
END IF

Backup External Data

' Backup data from external service
backup_url = "https://api.service.com/export?format=json&date=" + FORMAT(NOW(), "YYYY-MM-DD")
SET HEADER "Authorization", "Bearer " + api_token

result = UPLOAD backup_url TO "backups" AS "backup-" + FORMAT(NOW(), "YYYYMMDD") + ".json"

TALK "Backup complete: " + FORMAT(result.size / 1024, "#,##0") + " KB"

Receipt Collection

' Expense report receipt upload
TALK "Please upload your receipt for the expense."
HEAR receipt

result = UPLOAD receipt TO "expenses/" + expense_id + "/receipts"

' Update expense record
UPDATE "expenses" SET receipt_path = result.path WHERE id = expense_id

TALK "Receipt attached to expense #" + expense_id

Supported File Types

| Category | Extensions |
|---|---|
| Documents | .pdf, .docx, .doc, .txt, .md, .rtf |
| Spreadsheets | .xlsx, .xls, .csv |
| Images | .jpg, .jpeg, .png, .gif, .webp, .svg |
| Archives | .zip, .tar, .gz, .rar |
| Audio | .mp3, .wav, .ogg, .m4a |
| Video | .mp4, .mov, .avi, .webm |
| Data | .json, .xml, .yaml |

Error Handling

ON ERROR RESUME NEXT

result = UPLOAD "https://example.com/large-file.zip" TO "downloads"

IF ERROR THEN
    PRINT "Upload failed: " + ERROR_MESSAGE
    TALK "Sorry, I couldn't download that file. The server might be unavailable."
ELSE IF result.size > 50000000 THEN
    TALK "Warning: This is a large file (" + FORMAT(result.size / 1048576, "#,##0") + " MB)"
ELSE
    TALK "File uploaded successfully!"
END IF

Validate File Type

HEAR user_file

allowed_types = ["application/pdf", "image/jpeg", "image/png"]

IF NOT CONTAINS(allowed_types, user_file.mime_type) THEN
    TALK "Sorry, I only accept PDF and image files."
ELSE
    result = UPLOAD user_file TO "uploads"
    TALK "File accepted!"
END IF

Size Limits

| Limit | Default | Configurable |
|---|---|---|
| Maximum file size | 50 MB | Yes |
| Maximum files per folder | 10,000 | Yes |
| Total storage per bot | 10 GB | Yes |

Configuration

Configure upload settings in config.csv:

name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
upload-max-size,52428800
upload-allowed-types,pdf,docx,xlsx,jpg,png
upload-timeout,120

Security Considerations

  • Files are scanned for malware before storage
  • Executable files (.exe, .sh, .bat) are blocked by default
  • File paths are sanitized to prevent directory traversal
  • Original filenames are preserved but sanitized
  • Large files are chunked for reliable upload

Implementation Notes

  • Implemented in Rust under src/file/mod.rs
  • Uses streaming upload for large files
  • Supports resume for interrupted uploads
  • Automatic retry on network failures (up to 3 attempts)
  • Progress tracking available for large files
  • Deduplication based on content hash (optional)

See Also

  • DOWNLOAD — Download files to user
  • READ — Read file contents
  • WRITE — Write content to files
  • LIST — List files in storage
  • DELETE FILE — Remove files
  • COPY — Copy files within storage

Summary

UPLOAD is essential for collecting files from users and external sources. Use it to accept document uploads, archive web content, collect receipts and photos, and store API response attachments. Combined with folder organization and custom naming, it provides flexible file collection for any bot workflow.

DOWNLOAD

The DOWNLOAD keyword retrieves files from the bot’s storage and sends them to users or saves them to external locations, enabling bots to share documents, export data, and deliver files through chat channels.


Syntax

DOWNLOAD "filename"
DOWNLOAD "filename" TO user
DOWNLOAD "filename" AS "display_name"
url = DOWNLOAD "filename" AS LINK

Parameters

| Parameter | Type | Description |
|---|---|---|
| filename | String | Path to the file in the bot’s storage |
| TO user | Flag | Send file to specific user (default: current user) |
| AS "name" | String | Custom display name for the file |
| AS LINK | Flag | Return a download URL instead of sending file |

Description

DOWNLOAD retrieves a file from the bot’s configured storage (drive bucket) and delivers it to the user through their chat channel. It supports:

  • Sending files directly in chat (WhatsApp, Telegram, web, etc.)
  • Custom display names for downloaded files
  • Generating shareable download links
  • Sending files to specific users
  • Automatic MIME type detection

The file path is relative to the bot’s storage root. Use forward slashes for subdirectories.


Examples

Basic File Download

' Send a file to the current user
DOWNLOAD "documents/user-guide.pdf"
TALK "Here's the user guide you requested!"

Download with Custom Name

' Send file with a friendly display name
DOWNLOAD "reports/rpt-2025-01.pdf" AS "January 2025 Report.pdf"

Download as Link

' Get a shareable URL instead of sending directly
link = DOWNLOAD "exports/data.xlsx" AS LINK
TALK "Download your data here: " + link

' Link expires after 24 hours by default

Send to Specific User

' Send file to a different user
DOWNLOAD "contracts/agreement.pdf" TO manager_email
TALK "I've sent the contract to your manager for review."

Download After Processing

' Generate a report and send it
report_content = "# Sales Report\n\n" + sales_data
WRITE report_content TO "temp/report.md"

' Convert to PDF (if configured)
GENERATE PDF "temp/report.md" TO "temp/report.pdf"

DOWNLOAD "temp/report.pdf" AS "Sales Report.pdf"
TALK "Here's your sales report!"

Common Use Cases

Send Invoice

' Lookup and send customer invoice
invoice_path = "invoices/" + customer_id + "/" + invoice_id + ".pdf"

DOWNLOAD invoice_path AS "Invoice-" + invoice_id + ".pdf"
TALK "Here's your invoice. Let me know if you have any questions!"

Export Data

' Export user's data to file and send
user_data = FIND "orders" WHERE customer_id = user.id
WRITE user_data TO "exports/user-" + user.id + "-orders.csv" AS TABLE

DOWNLOAD "exports/user-" + user.id + "-orders.csv" AS "My Orders.csv"
TALK "Here's a complete export of your order history."

Share Meeting Notes

' Send meeting notes from earlier session
meeting_date = FORMAT(NOW(), "YYYY-MM-DD")
notes_file = "meetings/" + meeting_date + "-notes.md"

IF FILE_EXISTS(notes_file) THEN
    DOWNLOAD notes_file AS "Meeting Notes - " + meeting_date + ".md"
    TALK "Here are the notes from today's meeting!"
ELSE
    TALK "I don't have any meeting notes for today."
END IF

Provide Template

' Send a template file for user to fill out
TALK "I'll send you the application form. Please fill it out and send it back."
DOWNLOAD "templates/application-form.docx" AS "Application Form.docx"

Generate and Share Report

' Create report on demand
TALK "Generating your monthly report..."

' Build report content
report = "# Monthly Summary\n\n"
report = report + "**Period:** " + month_name + " " + year + "\n\n"
report = report + "## Key Metrics\n\n"
report = report + "- Revenue: $" + FORMAT(revenue, "#,##0.00") + "\n"
report = report + "- Orders: " + order_count + "\n"
report = report + "- New Customers: " + new_customers + "\n"

' Save and send
filename = "reports/monthly-" + FORMAT(NOW(), "YYYYMM") + ".md"
WRITE report TO filename
DOWNLOAD filename AS "Monthly Report - " + month_name + ".md"

Send Multiple Files

' Send several related files
files = ["contract.pdf", "terms.pdf", "schedule.pdf"]

TALK "I'm sending you the complete documentation package:"

FOR EACH file IN files
    DOWNLOAD "documents/" + file
    WAIT 1  ' Brief pause between files
NEXT

TALK "All documents sent! Please review and let me know if you have questions."

Return Values

Direct Download (default)

Returns a confirmation object:

| Property | Description |
|---|---|
| result.sent | Boolean indicating success |
| result.filename | Name of file sent |
| result.size | File size in bytes |
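
A minimal sketch reading the confirmation (assuming the result is assignable, as with other keywords):

result = DOWNLOAD "docs/manual.pdf"

IF result.sent THEN
    TALK "Sent " + result.filename + " (" + result.size + " bytes)"
ELSE
    TALK "The file could not be delivered."
END IF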

Link Download (AS LINK)

Returns a URL string:

link = DOWNLOAD "file.pdf" AS LINK
' Returns: "https://storage.example.com/download/abc123?expires=..."

Channel-Specific Behavior

| Channel | Behavior |
|---|---|
| WhatsApp | Sends as document attachment |
| Telegram | Sends as document or media based on type |
| Web Chat | Triggers browser download |
| Email | Attaches to email message |
| SMS | Sends download link (files not supported) |

File Type Handling

| File Type | Display |
|---|---|
| PDF | Document with preview |
| Images | Inline image display |
| Audio | Audio player |
| Video | Video player |
| Other | Generic document icon |

' Images display inline in most channels
DOWNLOAD "photos/product.jpg"

' PDFs show with document preview
DOWNLOAD "docs/manual.pdf"

Error Handling

ON ERROR RESUME NEXT

DOWNLOAD "reports/missing-file.pdf"

IF ERROR THEN
    PRINT "Download failed: " + ERROR_MESSAGE
    TALK "Sorry, I couldn't find that file. It may have been moved or deleted."
END IF

Check File Exists First

files = LIST "invoices/" + customer_id + "/"

found = false
FOR EACH file IN files
    IF file.name = invoice_id + ".pdf" THEN
        found = true
        EXIT FOR
    END IF
NEXT

IF found THEN
    DOWNLOAD "invoices/" + customer_id + "/" + invoice_id + ".pdf"
ELSE
    TALK "Invoice not found. Please check the invoice number."
END IF

Download Link Configuration

When using AS LINK, you can configure link behavior:

' Default link (expires in 24 hours)
link = DOWNLOAD "file.pdf" AS LINK

' Custom expiration (in config.csv)
' download-link-expiry,3600  (1 hour)

Size Limits

| Limit | Default | Notes |
|---|---|---|
| WhatsApp | 100 MB | Documents, 16 MB for media |
| Telegram | 50 MB | Standard, 2 GB for premium |
| Web Chat | No limit | Browser handles download |
| Email | 25 MB | Typical email limit |

' For large files, use link instead
file_info = LIST "exports/large-file.zip"

IF file_info[0].size > 50000000 THEN
    link = DOWNLOAD "exports/large-file.zip" AS LINK
    TALK "This file is large. Download it here: " + link
ELSE
    DOWNLOAD "exports/large-file.zip"
END IF

Configuration

Configure download settings in config.csv:

name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
download-link-expiry,86400
download-link-base-url,https://files.mybot.com
download-max-size,104857600

Implementation Notes

  • Implemented in Rust under src/file/mod.rs
  • Uses streaming for large file transfers
  • Automatic MIME type detection
  • Supports range requests for resumable downloads
  • Files are served through secure signed URLs
  • Access logging for audit trails

See Also

  • UPLOAD — Upload files to storage
  • READ — Read file contents
  • WRITE — Write content to files
  • LIST — List files in storage
  • GENERATE PDF — Create PDF documents

Summary

DOWNLOAD is essential for delivering files to users through chat. Use it to send invoices, share reports, provide templates, and export data. Combined with AS LINK for large files and custom display names, it provides flexible file delivery for any bot workflow.

GENERATE PDF

The GENERATE PDF keyword creates PDF documents from HTML templates or Markdown content, enabling bots to produce professional reports, invoices, certificates, and other documents.

Note: This keyword uses spaces, not underscores. Write GENERATE PDF not GENERATE_PDF.


Syntax

result = GENERATE PDF template, data, "output.pdf"

Parameters

| Parameter | Type | Description |
|---|---|---|
| template | String | Path to HTML template or Markdown file |
| data | Object | Template variables to substitute |
| output | String | Output path for the generated PDF |

Description

GENERATE PDF renders an HTML or Markdown template into a PDF document, substituting placeholders with provided values. The generated PDF is stored in the bot’s drive storage and can be downloaded, emailed, or processed further.

Use cases include:

  • Generating invoices and receipts
  • Creating reports and summaries
  • Producing certificates and credentials
  • Building contracts and agreements
  • Creating personalized documents

Examples

Basic PDF Generation

' Generate PDF from template with data
data = #{
    "title": "Invoice",
    "date": FORMAT(NOW(), "MMMM DD, YYYY")
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/inv-001.pdf"

TALK "Invoice generated!"

With Template Variables

' Generate PDF with data substitution
data = #{
    "customer_name": customer.name,
    "customer_email": customer.email,
    "invoice_number": invoice_id,
    "date": FORMAT(NOW(), "MMMM DD, YYYY"),
    "items": order_items,
    "subtotal": order_subtotal,
    "tax": order_tax,
    "total": order_total
}

result = GENERATE PDF "templates/invoice.html", data, "invoices/inv-" + invoice_id + ".pdf"

TALK "Invoice #" + invoice_id + " generated!"

Generate and Download

' Create PDF and send to user
data = #{
    "title": "Monthly Report",
    "period": FORMAT(NOW(), "MMMM YYYY"),
    "data": report_data
}

result = GENERATE PDF "templates/report.html", data, "temp/report.pdf"

DOWNLOAD result.url AS "Monthly Report.pdf"
TALK "Here's your report!"

Generate and Email

' Create PDF and email it
data = #{
    "party_a": company_name,
    "party_b": customer_name,
    "effective_date": FORMAT(NOW(), "MMMM DD, YYYY"),
    "terms": contract_terms
}

result = GENERATE PDF "templates/contract.html", data, "contracts/" + contract_id + ".pdf"

SEND MAIL customer_email, "Your Contract", 
    "Please find attached your contract for review.",
    [result.localName]

TALK "Contract sent to " + customer_email

Template Format

HTML Template

<!DOCTYPE html>
<html>
<head>
    <style>
        body { font-family: Arial, sans-serif; }
        .header { text-align: center; margin-bottom: 20px; }
        .invoice-number { color: #666; }
        table { width: 100%; border-collapse: collapse; }
        th, td { border: 1px solid #ddd; padding: 8px; }
        .total { font-weight: bold; font-size: 1.2em; }
    </style>
</head>
<body>
    <div class="header">
        <h1>INVOICE</h1>
        <p class="invoice-number">{{invoice_number}}</p>
    </div>
    
    <p><strong>Date:</strong> {{date}}</p>
    <p><strong>Customer:</strong> {{customer_name}}</p>
    
    <table>
        <tr>
            <th>Item</th>
            <th>Quantity</th>
            <th>Price</th>
        </tr>
        {{#each items}}
        <tr>
            <td>{{this.name}}</td>
            <td>{{this.quantity}}</td>
            <td>${{this.price}}</td>
        </tr>
        {{/each}}
    </table>
    
    <p class="total">Total: ${{total}}</p>
</body>
</html>

Markdown Template

# {{title}}

**Date:** {{date}}
**Prepared for:** {{customer_name}}

## Summary

{{summary}}

## Details

{{#each items}}
- **{{this.name}}:** {{this.description}}
{{/each}}

---
Generated by General Bots

Template Placeholders

| Syntax | Description |
|---|---|
| {{variable}} | Simple variable substitution |
| {{#each items}}...{{/each}} | Loop over array |
| {{#if condition}}...{{/if}} | Conditional rendering |
| {{#unless condition}}...{{/unless}} | Negative conditional |
| {{this.property}} | Access property in loop |
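
For example, a template fragment combining conditionals (the discount and paid variables are hypothetical):

{{#if discount}}
<p>Discount applied: {{discount}}%</p>
{{/if}}
{{#unless paid}}
<p><strong>Payment pending.</strong></p>
{{/unless}}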

Common Use Cases

Invoice Generation

' Generate a complete invoice
items = FIND "order_items" WHERE order_id = order.id

data = #{
    "invoice_number": "INV-" + FORMAT(order.id, "00000"),
    "date": FORMAT(NOW(), "MMMM DD, YYYY"),
    "due_date": FORMAT(DATEADD(NOW(), 30, "day"), "MMMM DD, YYYY"),
    "customer_name": customer.name,
    "customer_address": customer.address,
    "items": items,
    "subtotal": FORMAT(order.subtotal, "#,##0.00"),
    "tax": FORMAT(order.tax, "#,##0.00"),
    "total": FORMAT(order.total, "#,##0.00")
}

result = GENERATE PDF "templates/invoice.html", data, "invoices/" + order.id + ".pdf"

TALK "Invoice generated: " + result.localName

Certificate Generation

' Generate completion certificate
data = #{
    "recipient_name": user.name,
    "course_name": course.title,
    "completion_date": FORMAT(NOW(), "MMMM DD, YYYY"),
    "certificate_id": GUID(),
    "instructor_name": course.instructor
}

result = GENERATE PDF "templates/certificate.html", data, "certificates/" + user.id + "-" + course.id + ".pdf"

DOWNLOAD result.url AS "Certificate - " + course.title + ".pdf"
TALK "Congratulations! Here's your certificate!"

Report Generation

' Generate monthly sales report
sales_data = FIND "sales" WHERE 
    date >= DATEADD(NOW(), -30, "day")

summary = AGGREGATE sales_data SUM amount
count = AGGREGATE sales_data COUNT

data = #{
    "title": "Monthly Sales Report",
    "period": FORMAT(NOW(), "MMMM YYYY"),
    "total_sales": FORMAT(summary, "$#,##0.00"),
    "transaction_count": count,
    "sales_data": sales_data,
    "generated_at": FORMAT(NOW(), "YYYY-MM-DD HH:mm")
}

result = GENERATE PDF "templates/sales-report.html", data, "reports/sales-" + FORMAT(NOW(), "YYYYMM") + ".pdf"

TALK "Sales report generated!"

Contract Generation

' Generate service agreement
data = #{
    "contract_number": contract_id,
    "client_name": client.name,
    "client_company": client.company,
    "service_description": selected_service.description,
    "monthly_fee": FORMAT(selected_service.price, "$#,##0.00"),
    "start_date": FORMAT(start_date, "MMMM DD, YYYY"),
    "term_months": contract_term,
    "end_date": FORMAT(DATEADD(start_date, contract_term, "month"), "MMMM DD, YYYY")
}

result = GENERATE PDF "templates/service-agreement.html", data, "contracts/sa-" + contract_id + ".pdf"

TALK "Service agreement ready for signature!"

Return Value

Returns an object with generation details:

| Property | Description |
|---|---|
| result.url | Full URL to the generated PDF (S3/MinIO path) |
| result.localName | Local filename of the generated PDF |
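
A short sketch using both properties, mirroring the email example above (the template and output paths are hypothetical):

result = GENERATE PDF "templates/receipt.html", data, "receipts/rcpt-001.pdf"

' Share the stored URL in chat, or attach the local file to an email
TALK "Receipt ready: " + result.url
SEND MAIL customer.email, "Your Receipt", "Please find your receipt attached.", [result.localName]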

Error Handling

ON ERROR RESUME NEXT

data = #{
    "customer_name": customer_name,
    "total": order_total
}

result = GENERATE PDF "templates/invoice.html", data, "invoices/test.pdf"

IF ERROR THEN
    TALK "PDF generation failed: " + ERROR MESSAGE
    
    IF INSTR(ERROR MESSAGE, "template") > 0 THEN
        TALK "Template file not found."
    ELSE IF INSTR(ERROR MESSAGE, "storage") > 0 THEN
        TALK "Not enough storage space."
    ELSE
        TALK "Sorry, I couldn't generate the document. Please try again."
    END IF
ELSE
    TALK "PDF generated successfully!"
END IF

ON ERROR GOTO 0

Common Errors

| Error | Cause | Solution |
|---|---|---|
| TEMPLATE_NOT_FOUND | Template file doesn’t exist | Verify template path |
| INVALID_TEMPLATE | Template has syntax errors | Check template format |
| MISSING_VARIABLE | Required placeholder not provided | Include all variables |
| STORAGE_FULL | Insufficient space | Clean up storage |
| RENDER_ERROR | HTML/CSS rendering issue | Simplify template |

Styling Tips

Supported CSS

  • Basic typography (fonts, sizes, colors)
  • Box model (margins, padding, borders)
  • Tables and layouts
  • Page breaks (page-break-before, page-break-after)
  • Print media queries (@media print)

Page Setup

<style>
    @page {
        size: A4;
        margin: 2cm;
    }
    
    .page-break {
        page-break-after: always;
    }
    
    @media print {
        .no-print { display: none; }
    }
</style>

Configuration

No specific configuration required. Uses bot’s standard drive settings from config.csv:

name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot

Implementation Notes

  • Implemented in Rust under src/file/pdf.rs
  • Uses headless browser rendering for HTML
  • Supports embedded images (base64 or relative paths)
  • Handles Unicode and special characters
  • Maximum PDF size: 50 MB
  • Template caching for performance

See Also

  • MERGE PDF — Combine multiple PDFs
  • FILL — Fill templates with data (alternative approach)
  • READ — Read template content
  • DOWNLOAD — Send PDF to user
  • SEND MAIL — Email PDF as attachment
  • WRITE — Create template dynamically

Summary

GENERATE PDF creates professional PDF documents from HTML or Markdown templates with variable substitution. Use it for invoices, reports, certificates, contracts, and any document that needs a polished format. Templates support loops, conditionals, and styling for flexible document generation. Combine with DOWNLOAD to deliver PDFs to users or SEND MAIL to email them as attachments.

Syntax reminder: Always use GENERATE PDF (with space), not GENERATE_PDF.

Start Template

The start template is the simplest possible bot - a greeting flow that demonstrates the core interaction pattern of BASIC: greeting users and responding to their input.

Topic: Basic Greeting & Help Flow

This template is perfect for:

  • Learning BASIC fundamentals
  • Simple FAQ bots
  • Quick demos
  • Starting point for more complex bots

The Code

REM Basic greeting and help flow
SET user_name = "Guest"

TALK "Hello, " + user_name + "! How can I help you today?"
HEAR user_input

IF user_input = "help" THEN
    TALK "Sure, I can assist with account info, orders, or support."
ELSE
    TALK "Sorry, I didn't understand. Type 'help' for options."
END IF

Sample Dialogs

These conversations show how the start template works in real-world scenarios.

Dialog 1: User Asks for Help

Dialog 2: Unknown Input

Dialog 3: Personalized Greeting (Enhanced Version)

When you add user detection, the experience improves:

Keywords Used

| Keyword | Purpose |
|---|---|
| SET | Assign a value to a variable |
| TALK | Send a message to the user |
| HEAR | Wait for and capture user input |
| IF/ELSE | Conditional branching based on input |

How It Works

  1. Variable Setup: SET creates a variable to hold the user’s name
  2. Greeting: TALK sends the welcome message
  3. Input Capture: HEAR waits for user response
  4. Response Logic: IF/ELSE determines what to say back

Enhanced Version

Here’s the same template enhanced with LLM for natural understanding:

REM Smart greeting flow with LLM
SET user_name = "Guest"

TALK "Hello, " + user_name + "! How can I help you today?"
HEAR user_input

' Let LLM understand intent
intent = LLM "Classify this user message into one category: help, account, orders, support, other. Message: " + user_input

SWITCH intent
    CASE "help"
        TALK "I can assist with account info, orders, or support."
    CASE "account"
        TALK "Let me pull up your account information..."
    CASE "orders"
        TALK "I'll check on your recent orders..."
    CASE "support"
        TALK "Connecting you with our support team..."
    DEFAULT
        response = LLM "Respond helpfully to: " + user_input
        TALK response
END SWITCH

Customization Ideas

Add User Detection

' Get user info if available
user_name = GET BOT MEMORY "user_" + user_id + "_name"
IF user_name = "" THEN
    TALK "Hi there! What's your name?"
    HEAR user_name
    SET BOT MEMORY "user_" + user_id + "_name", user_name
END IF

TALK "Welcome back, " + user_name + "!"

Add Quick Reply Buttons

ADD SUGGESTION "Account Info"
ADD SUGGESTION "My Orders"
ADD SUGGESTION "Get Support"
TALK "What would you like help with?"
HEAR choice

Add Time-Based Greeting

hour = HOUR(NOW())
IF hour < 12 THEN
    greeting = "Good morning"
ELSE IF hour < 18 THEN
    greeting = "Good afternoon"
ELSE
    greeting = "Good evening"
END IF

TALK greeting + ", " + user_name + "!"

Default Template

The default template is the starter bot that comes with General Bots, providing essential utility tools like weather forecasts, email sending, SMS messaging, calculations, and translations.

Topic: Starter Bot with Essential Tools

This template is perfect for:

  • Quick start with General Bots
  • Basic utility functions
  • Learning BASIC syntax
  • Foundation for custom bots

Available Tools

The default template includes these ready-to-use tools:

| Tool | File | Description |
|---|---|---|
| Weather | weather.bas | Get weather forecasts for any city |
| Send Email | send-email.bas | Send emails to any address |
| Send SMS | send-sms.bas | Send text messages to mobile phones |
| Calculate | calculate.bas | Perform mathematical calculations |
| Translate | translate.bas | Translate text between languages |

The Code: weather.bas

PARAM location AS STRING LIKE "New York" DESCRIPTION "City or location to get weather forecast"

DESCRIPTION "Get current weather forecast for any city or location"

' Default coordinates (New York) used when the city is not matched below
lat = 40.7128
lon = -74.0060

location_lower = LCASE(location)

IF INSTR(location_lower, "new york") > 0 THEN
    lat = 40.7128
    lon = -74.0060
ELSE IF INSTR(location_lower, "london") > 0 THEN
    lat = 51.5074
    lon = -0.1278
ELSE IF INSTR(location_lower, "tokyo") > 0 THEN
    lat = 35.6762
    lon = 139.6503
ELSE IF INSTR(location_lower, "sao paulo") > 0 THEN
    lat = -23.5505
    lon = -46.6333
END IF

weather_url = "https://api.open-meteo.com/v1/forecast?latitude=" + lat + "&longitude=" + lon + "&current_weather=true"

weather_data = GET weather_url

IF weather_data.current_weather THEN
    current = weather_data.current_weather
    
    code = current.weathercode
    condition = "Clear"
    icon = "☀️"
    
    IF code = 0 THEN
        condition = "Clear sky"
        icon = "☀️"
    ELSE IF code >= 1 AND code <= 3 THEN
        condition = "Partly cloudy"
        icon = "⛅"
    ELSE IF code >= 51 AND code <= 67 THEN
        condition = "Rainy"
        icon = "🌧️"
    ELSE IF code >= 95 AND code <= 99 THEN
        condition = "Thunderstorm"
        icon = "⛈️"
    END IF
    
    TALK icon + " Weather for " + location + ":"
    TALK "Temperature: " + current.temperature + "°C"
    TALK "Condition: " + condition
    TALK "Wind: " + current.windspeed + " km/h"
ELSE
    TALK "Could not fetch weather for: " + location
END IF

Sample Dialogs

These conversations show how the default template works in real-world scenarios.

Dialog 1: Weather Forecast

Dialog 2: Send Email

Dialog 3: Translation

Dialog 4: Calculation

Template Structure

default.gbai/
├── default.gbdialog/
│   ├── calculate.bas      # Math calculations
│   ├── send-email.bas     # Email sending
│   ├── send-sms.bas       # SMS messaging
│   ├── translate.bas      # Text translation
│   └── weather.bas        # Weather forecasts
└── default.gbot/
    └── config.csv         # Bot configuration

Keywords Used

| Keyword | Purpose |
|---|---|
| PARAM | Define tool parameters |
| DESCRIPTION | Tool description for AI |
| GET | HTTP GET request |
| TALK | Send message to user |
| SEND MAIL | Send email |
| SEND SMS | Send text message |
| INSTR | Find substring position |
| LCASE | Convert to lowercase |

Supported Cities (Weather)

The weather tool includes coordinates for these cities:

  • New York, Los Angeles, Chicago (USA)
  • London, Paris, Berlin, Madrid (Europe)
  • Tokyo, Beijing, Singapore, Mumbai, Dubai (Asia)
  • Sydney (Australia)
  • São Paulo, Rio de Janeiro (Brazil)
  • Toronto (Canada)

Customization Ideas

Add More Cities

ELSE IF INSTR(location_lower, "amsterdam") > 0 THEN
    lat = 52.3676
    lon = 4.9041
ELSE IF INSTR(location_lower, "moscow") > 0 THEN
    lat = 55.7558
    lon = 37.6173
END IF

Add Extended Forecast

' Get 7-day forecast
weather_url = weather_url + "&daily=temperature_2m_max,temperature_2m_min&forecast_days=7"

weather_data = GET weather_url

TALK "📅 7-Day Forecast for " + location + ":"
' Arrays are zero-indexed, so loop 0..6 for a 7-day forecast
FOR i = 0 TO 6
    TALK "Day " + (i + 1) + ": " + weather_data.daily.temperature_2m_max[i] + "°C / " + weather_data.daily.temperature_2m_min[i] + "°C"
NEXT

Add Email Templates

PARAM template AS STRING LIKE "meeting-reminder" DESCRIPTION "Email template to use"

IF template = "meeting-reminder" THEN
    subject = "Meeting Reminder"
    body = "Hi {name},\n\nThis is a reminder about our upcoming meeting.\n\nBest regards"
    body = REPLACE(body, "{name}", recipient_name)
END IF

SEND MAIL recipient, subject, body

Add SMS Confirmation

PARAM phone AS PHONE DESCRIPTION "Phone number with country code"
PARAM message AS STRING DESCRIPTION "Message to send"

DESCRIPTION "Send SMS with delivery confirmation"

SEND SMS phone, message

TALK "📱 SMS sent to " + phone
TALK "Message: " + LEFT(message, 50) + "..."

' Log the message
WITH smsLog
    timestamp = NOW()
    recipient = phone
    content = message
    status = "sent"
END WITH

SAVE "sms_log.csv", smsLog

Using as a Base Template

The default template is designed to be extended. Here’s how to build on it:

1. Copy the Template

cp -r templates/default.gbai packages/my-bot.gbai

2. Add Your Tools

Create new .bas files in the .gbdialog folder for your custom functionality.

3. Add a Start Script

Create start.bas to configure your bot:

ADD TOOL "weather"
ADD TOOL "send-email"
ADD TOOL "send-sms"
ADD TOOL "calculate"
ADD TOOL "translate"

' Add your custom tools
ADD TOOL "my-custom-tool"

CLEAR SUGGESTIONS

ADD SUGGESTION "weather" AS "Check weather"
ADD SUGGESTION "email" AS "Send email"
ADD SUGGESTION "translate" AS "Translate text"

BEGIN TALK
Welcome! I can help you with weather, emails, translations, and more.
END TALK

Authentication Template

The authentication template demonstrates secure user verification flows including login, registration, password validation, and session management.

Topic: User Authentication & Security

This template is perfect for:

  • User login systems
  • Account verification
  • Password recovery flows
  • Session management
  • Two-factor authentication

The Code

REM Authentication Flow with Retry Logic

PARAM username AS string LIKE "john.doe"
DESCRIPTION "Username or email for authentication"

PARAM password AS string LIKE "********"
DESCRIPTION "User's password (masked input)"

SET max_attempts = 3
SET attempts = 0

TALK "Welcome! Please enter your username:"
HEAR username

LABEL auth_loop

TALK "Enter your password:"
HEAR password AS PASSWORD   ' Masked input

' Verify credentials
user = FIND "users", "username='" + username + "'"

IF user = NULL THEN
    TALK "Username not found. Would you like to register? (yes/no)"
    HEAR register_choice
    IF register_choice = "yes" THEN
        GOTO registration
    ELSE
        TALK "Goodbye!"
        EXIT
    END IF
END IF

IF user.password = HASH(password) THEN
    SET BOT MEMORY "authenticated_user", username
    SET BOT MEMORY "session_start", NOW()
    TALK "Welcome back, " + user.name + "! You are now logged in."
    EXIT
ELSE
    SET attempts = attempts + 1
    IF attempts >= max_attempts THEN
        TALK "Too many failed attempts. Your account is temporarily locked."
        SEND MAIL user.email, "Security Alert", "Multiple failed login attempts detected."
        EXIT
    END IF
    TALK "Incorrect password. " + (max_attempts - attempts) + " attempts remaining."
    GOTO auth_loop
END IF

LABEL registration
TALK "Let's create your account. Enter your email:"
HEAR email
TALK "Create a password (min 8 characters):"
HEAR new_password AS PASSWORD

IF LEN(new_password) < 8 THEN
    TALK "Password too short. Please try again."
    GOTO registration
END IF

INSERT "users", username, email, HASH(new_password), NOW()
TALK "Account created! You can now log in."

Sample Dialogs

These conversations show how the authentication template works in real-world scenarios.

Dialog 1: Successful Login

Dialog 2: Failed Login with Retry

Dialog 3: Account Locked

Dialog 4: New User Registration

Keywords Used

| Keyword | Purpose |
|---|---|
| PARAM | Define expected input parameters |
| SET | Assign values to variables |
| TALK | Send messages to the user |
| HEAR | Capture user input |
| HEAR AS PASSWORD | Masked password input |
| FIND | Query database for user |
| IF/ELSE | Conditional logic |
| GOTO/LABEL | Flow control for retry loop |
| HASH | Secure password hashing |
| SET BOT MEMORY | Store session data |
| SEND MAIL | Send security alerts |
| INSERT | Create new user record |
| EXIT | End the dialog |

How It Works

  1. Username Input: Collects the username first
  2. User Lookup: Checks if user exists in database
  3. Password Verification: Compares hashed password
  4. Retry Logic: Allows 3 attempts before lockout
  5. Session Creation: Stores auth state in bot memory
  6. Registration: Offers new account creation if user not found

Security Features

Password Hashing

' Never store plain text passwords!
hashed = HASH(password)
INSERT "users", username, email, hashed

Rate Limiting

IF attempts >= max_attempts THEN
    SET BOT MEMORY "locked_" + username, NOW()
    TALK "Account locked for 15 minutes."
END IF

Two-Factor Authentication

' Send OTP after password verification
otp = RANDOM(100000, 999999)
SET BOT MEMORY "otp_" + username, otp
SEND MAIL email, "Your verification code", "Code: " + otp

TALK "Enter the 6-digit code sent to your email:"
HEAR user_otp

IF user_otp = GET BOT MEMORY "otp_" + username THEN
    TALK "Two-factor authentication successful!"
ELSE
    TALK "Invalid code."
END IF

Customization Ideas

Add “Forgot Password”

TALK "Forgot your password? (yes/no)"
HEAR forgot
IF forgot = "yes" THEN
    reset_token = RANDOM_STRING(32)
    SET BOT MEMORY "reset_" + username, reset_token
    SEND MAIL user.email, "Password Reset", "Click here: /reset/" + reset_token
    TALK "Password reset link sent to your email."
END IF

Session Timeout

session_start = GET BOT MEMORY "session_start"
IF DATEDIFF("minute", session_start, NOW()) > 30 THEN
    TALK "Session expired. Please log in again."
    SET BOT MEMORY "authenticated_user", ""
END IF

Social Login

TALK "Login with: 1) Password 2) Google 3) GitHub"
HEAR login_method

SWITCH login_method
    CASE "2"
        ' Redirect to OAuth
        url = GET "auth/google/redirect"
        TALK "Click to login: " + url
    CASE "3"
        url = GET "auth/github/redirect"
        TALK "Click to login: " + url
    DEFAULT
        ' Standard password flow
END SWITCH

Enrollment Template

The enrollment template demonstrates how to build a complete data collection workflow that gathers user information step-by-step, validates inputs, confirms details, and saves the data.

Topic: User Registration & Data Collection

This template is perfect for:

  • Customer onboarding flows
  • Event registrations
  • Lead capture forms
  • Survey collection
  • Application submissions

The Code

REM Enrollment Tool Example

PARAM name AS string          LIKE "Abreu Silva"
DESCRIPTION "Required full name of the individual."

PARAM birthday AS date        LIKE "23/09/2001"
DESCRIPTION "Required birth date of the individual in DD/MM/YYYY format."

PARAM email AS string         LIKE "abreu.silva@example.com"
DESCRIPTION "Required email address for contact purposes."

PARAM personalid AS integer   LIKE "12345678900"
DESCRIPTION "Required Personal ID number of the individual (only numbers)."

PARAM address AS string       LIKE "Rua das Flores, 123 - SP"
DESCRIPTION "Required full address of the individual."

DESCRIPTION "This is the enrollment process, called when the user wants to enrol."

REM Start enrollment
TALK "Welcome to the enrollment process! Let's get you registered."

TALK "First, what is your full name?"
HEAR name

TALK "Thank you. What is your birth date? (DD/MM/YYYY)"
HEAR birthday

TALK "What is your email address?"
HEAR email

TALK "Please provide your Personal ID number (numbers only):"
HEAR personalid

TALK "Finally, what is your full address?"
HEAR address

REM Validate and confirm
TALK "Please confirm your details:"
TALK "Name: " + name
TALK "Birth Date: " + birthday
TALK "Email: " + email
TALK "Personal ID: " + personalid
TALK "Address: " + address

TALK "Are these details correct? (yes/no)"
HEAR confirmation

IF confirmation = "yes" THEN
    SAVE "enrollments.csv", name, birthday, email, personalid, address
    TALK "Thank you! Your enrollment has been successfully submitted."
ELSE
    TALK "Let's start over with the correct information."
END IF

Sample Dialogs

These conversations show how the enrollment template works in real-world scenarios.

Dialog 1: Successful Enrollment

Dialog 2: User Corrects Information

Dialog 3: LLM-Assisted Natural Input

When using the LLM, users can provide information naturally:

Keywords Used

| Keyword | Purpose |
|---|---|
| PARAM | Define expected input parameters with types and examples |
| DESCRIPTION | Provide context for LLM understanding |
| TALK | Send messages to the user |
| HEAR | Wait for and capture user input |
| IF/ELSE | Conditional logic for confirmation |
| SAVE | Persist data to CSV file |

How It Works

  1. Parameter Definition: The PARAM declarations tell the LLM what information to collect
  2. Step-by-Step Collection: Each HEAR captures one piece of data
  3. Confirmation Loop: User reviews all data before submission
  4. Data Persistence: SAVE stores the validated data

Customization Ideas

Add Validation

HEAR email
IF NOT INSTR(email, "@") THEN
    TALK "Please enter a valid email address"
    HEAR email
END IF

Add to Database Instead of CSV

INSERT "users", name, birthday, email, personalid, address

Send Confirmation Email

SEND MAIL email, "Welcome!", "Your registration is complete, " + name

Sales Pipeline Template

The sales pipeline template provides a complete CRM (Customer Relationship Management) system for managing deals, tracking opportunities through sales stages, and generating revenue forecasts.

Topic: Sales Pipeline & Deal Management

This template is perfect for:

  • Sales teams tracking deals
  • Revenue forecasting
  • Pipeline management
  • Win/loss analysis
  • Sales performance reporting

The Code

ADD TOOL "create-deal"
ADD TOOL "update-stage"
ADD TOOL "list-deals"
ADD TOOL "deal-details"
ADD TOOL "pipeline-report"
ADD TOOL "forecast-revenue"

USE KB "sales-pipeline.gbkb"

SET CONTEXT "sales pipeline" AS "You are a sales assistant helping manage the sales pipeline. Help with creating new deals, updating deal stages, viewing pipeline status, generating sales forecasts, and analyzing win/loss rates."

CLEAR SUGGESTIONS

ADD SUGGESTION "newdeal" AS "Create a new deal"
ADD SUGGESTION "pipeline" AS "Show my pipeline"
ADD SUGGESTION "update" AS "Update a deal stage"
ADD SUGGESTION "forecast" AS "View sales forecast"
ADD SUGGESTION "report" AS "Generate pipeline report"

BEGIN TALK
**Sales Pipeline Manager**

I can help you with:
• Create new deals and opportunities
• View and manage your pipeline
• Update deal stages
• Generate sales forecasts
• Pipeline analytics and reports
• Track win/loss rates

Select an option or tell me what you need.
END TALK

BEGIN SYSTEM PROMPT
You are a sales pipeline assistant.

Pipeline stages:
- Lead: Initial contact, not qualified
- Qualified: Budget, authority, need, timeline confirmed
- Proposal: Quote sent
- Negotiation: Active discussions
- Closed Won: Successfully closed
- Closed Lost: Lost or no decision

Always encourage sales reps and provide actionable insights.
Confirm changes before saving.
Use currency format for amounts.
END SYSTEM PROMPT

Sample Dialogs

These conversations show how the sales pipeline template works in real-world scenarios.

Dialog 1: Creating a New Deal

Dialog 2: Viewing Pipeline

Dialog 3: Update Deal Stage

Dialog 4: Revenue Forecast

Keywords Used

| Keyword | Purpose |
|---|---|
| ADD TOOL | Register pipeline management tools |
| USE KB | Load sales methodology knowledge base |
| SET CONTEXT | Define sales assistant behavior |
| ADD SUGGESTION | Create quick action buttons |
| BEGIN TALK | Welcome message with options |
| BEGIN SYSTEM PROMPT | Sales stage definitions and guidelines |

Pipeline Stages

| Stage | Win Probability | Description |
|---|---|---|
| new | 10% | Initial contact, not qualified |
| qualified | 30% | BANT criteria confirmed |
| proposal | 50% | Quote or proposal sent |
| negotiation | 70% | Active deal discussions |
| won | 100% | Deal successfully closed |
| lost | 0% | Deal lost or abandoned |

Note: The new unified CRM uses the crm_deals table with stages: new, qualified, proposal, negotiation, won, lost. Use department_id to filter by business unit (e.g., Comercial SP, Inside Sales, Enterprise).

Department Filtering

Filter deals by business unit using department_id:

' Filter deals by department
deals = FIND "crm_deals", "department_id = '" + departmentId + "' AND stage != 'won' AND stage != 'lost'"

' Get department stats
SELECT pd.name, COUNT(cd.id) AS total_deals, COALESCE(SUM(cd.value), 0) AS total_value
FROM people_departments pd
LEFT JOIN crm_deals cd ON cd.department_id = pd.id
WHERE pd.org_id = $1
GROUP BY pd.id, pd.name

Template Structure

sales-pipeline.gbai/
├── sales-pipeline.gbdialog/
│   ├── start.bas           # Main entry point
│   ├── create-deal.bas     # New deal creation
│   ├── update-stage.bas    # Stage progression
│   ├── list-deals.bas      # Pipeline view
│   ├── deal-details.bas    # Individual deal info
│   ├── pipeline-report.bas # Analytics reports
│   └── forecast-revenue.bas # Revenue forecasting
├── sales-pipeline.gbdrive/
│   └── templates/          # Proposal templates
├── sales-pipeline.gbkb/
│   └── sales-methodology.md # Sales best practices
└── sales-pipeline.gbot/
    └── config.csv          # Bot configuration

Create Deal Tool: create-deal.bas

Updated for CRM v2.5: Uses unified crm_deals table. Include department_id to assign to business unit.

PARAM company AS STRING LIKE "Acme Corp" DESCRIPTION "Company or account name"
PARAM value AS NUMBER LIKE 50000 DESCRIPTION "Deal value in dollars"
PARAM close_date AS DATE LIKE "2025-03-31" DESCRIPTION "Expected close date"
PARAM department_id AS STRING DESCRIPTION "Business unit ID (e.g., Comercial SP)" OPTIONAL
PARAM contact AS STRING DESCRIPTION "Primary contact name" OPTIONAL
PARAM notes AS STRING DESCRIPTION "Deal notes" OPTIONAL

DESCRIPTION "Create a new deal in the sales pipeline"

' Get department if not provided
IF NOT department_id THEN
    department_id = GET USER MEMORY("default_department")
END IF

' Generate deal ID
dealId = "DEAL-" + FORMAT(NOW(), "YYYY") + "-" + FORMAT(RANDOM(1000, 9999))

' Get sales rep info
salesRep = USERNAME
salesRepEmail = FROM

' Create deal record
WITH deal
    id = dealId
    company = company
    value = value
    expected_close = close_date
    contact_name = contact
    notes = notes
    stage = "lead"
    probability = 20
    owner = salesRep
    owner_email = salesRepEmail
    created_at = NOW()
    updated_at = NOW()
END WITH

SAVE "deals.csv", deal

' Log activity
WITH activity
    deal_id = dealId
    type = "created"
    description = "Deal created with value $" + FORMAT(value, "#,##0")
    user = salesRep
    timestamp = NOW()
END WITH

SAVE "deal_activities.csv", activity

TALK "✅ **Deal Created!**"
TALK "🏢 **Company:** " + company
TALK "💰 **Value:** $" + FORMAT(value, "#,##0")
TALK "📅 **Expected Close:** " + FORMAT(close_date, "MMMM DD, YYYY")
TALK "📊 **Stage:** Lead"
TALK "🎫 **Deal ID:** " + dealId
TALK ""
TALK "Good luck! 🍀"

RETURN dealId

Update Stage Tool: update-stage.bas

PARAM deal_id AS STRING LIKE "DEAL-2025-0142" DESCRIPTION "Deal ID or company name"
PARAM new_stage AS STRING LIKE "qualified" DESCRIPTION "New stage: lead, qualified, proposal, negotiation, closed_won, closed_lost"
PARAM reason AS STRING DESCRIPTION "Reason for stage change" OPTIONAL

DESCRIPTION "Update the stage of a deal in the pipeline"

' Find deal
deal = FIND "deals.csv", "id = '" + deal_id + "' OR LOWER(company) LIKE '%" + LOWER(deal_id) + "%'"

IF NOT deal THEN
    TALK "Deal not found. Please check the deal ID or company name."
    RETURN NULL
END IF

old_stage = deal.stage
new_stage_lower = LOWER(new_stage)

' Set probability based on stage
SELECT CASE new_stage_lower
    CASE "lead"
        probability = 20
    CASE "qualified"
        probability = 40
    CASE "proposal"
        probability = 50
    CASE "negotiation"
        probability = 80
    CASE "closed_won"
        probability = 100
    CASE "closed_lost"
        probability = 0
END SELECT

' Update deal
deal.stage = new_stage_lower
deal.probability = probability
deal.updated_at = NOW()

IF new_stage_lower = "closed_won" THEN
    deal.closed_date = NOW()
    deal.closed_value = deal.value
ELSE IF new_stage_lower = "closed_lost" THEN
    deal.closed_date = NOW()
    deal.lost_reason = reason
END IF

UPDATE "deals.csv", deal

' Log activity
WITH activity
    deal_id = deal.id
    type = "stage_change"
    description = "Stage changed: " + old_stage + " → " + new_stage_lower
    user = USERNAME
    timestamp = NOW()
END WITH

SAVE "deal_activities.csv", activity

' Format stage names
old_stage_display = PROPER(REPLACE(old_stage, "_", " "))
new_stage_display = PROPER(REPLACE(new_stage_lower, "_", " "))

TALK "✅ **Deal Updated!**"
TALK "🏢 **" + deal.company + "**"
TALK "📊 " + old_stage_display + " → **" + new_stage_display + "**"
TALK "💰 $" + FORMAT(deal.value, "#,##0")

IF new_stage_lower = "closed_won" THEN
    TALK ""
    TALK "🎉 Congratulations on closing the deal!"
ELSE IF new_stage_lower = "closed_lost" THEN
    TALK ""
    TALK "📝 Deal marked as lost. Keep pushing on the other opportunities!"
ELSE
    TALK ""
    TALK "Win probability: " + probability + "%"
END IF

RETURN deal.id

Forecast Revenue Tool: forecast-revenue.bas

PARAM period AS STRING LIKE "this quarter" DESCRIPTION "Forecast period: this month, this quarter, this year"

DESCRIPTION "Generate revenue forecast based on pipeline and probabilities"

' Determine date range
IF INSTR(LOWER(period), "month") > 0 THEN
    start_date = DATE(YEAR(NOW()), MONTH(NOW()), 1)
    end_date = DATEADD(DATEADD(start_date, 1, "month"), -1, "day")
    period_name = FORMAT(NOW(), "MMMM YYYY")
ELSE IF INSTR(LOWER(period), "quarter") > 0 THEN
    quarter = INT((MONTH(NOW()) - 1) / 3) + 1
    start_date = DATE(YEAR(NOW()), (quarter - 1) * 3 + 1, 1)
    end_date = DATEADD(DATEADD(start_date, 3, "month"), -1, "day")
    period_name = "Q" + quarter + " " + YEAR(NOW())
ELSE
    start_date = DATE(YEAR(NOW()), 1, 1)
    end_date = DATE(YEAR(NOW()), 12, 31)
    period_name = YEAR(NOW())
END IF

' Get deals closing in period
deals = FIND "deals.csv", "expected_close >= '" + FORMAT(start_date, "YYYY-MM-DD") + "' AND expected_close <= '" + FORMAT(end_date, "YYYY-MM-DD") + "' AND stage NOT IN ('closed_won', 'closed_lost')"

' Calculate forecasts by stage
weighted_total = 0
best_case = 0
committed = 0

stages = ["negotiation", "proposal", "qualified", "lead"]
stage_totals = []

FOR EACH stage IN stages
    stage_deals = FILTER(deals, "stage = '" + stage + "'")
    stage_value = 0
    stage_weighted = 0
    
    FOR EACH deal IN stage_deals
        stage_value = stage_value + deal.value
        stage_weighted = stage_weighted + (deal.value * deal.probability / 100)
    NEXT
    
    best_case = best_case + stage_value
    weighted_total = weighted_total + stage_weighted
    
    IF stage = "negotiation" THEN
        committed = committed + stage_weighted
    END IF
    
    ' Record value and weighted totals for this stage
    stage_totals[stage] = {value: stage_value, weighted: stage_weighted}
NEXT

' Get closed won in period
closed = FIND "deals.csv", "closed_date >= '" + FORMAT(start_date, "YYYY-MM-DD") + "' AND stage = 'closed_won'"
closed_value = 0
FOR EACH deal IN closed
    closed_value = closed_value + deal.closed_value
NEXT

' Get quota
quota = GET BOT MEMORY("quota_" + USERNAME)
IF NOT quota THEN quota = 200000

attainment = ((closed_value + weighted_total) / quota) * 100

TALK "📈 **" + period_name + " Revenue Forecast**"
TALK ""
TALK "**By Stage:**"
TALK "• Negotiation (80%): $" + FORMAT(stage_totals["negotiation"].weighted, "#,##0")
TALK "• Proposal (50%): $" + FORMAT(stage_totals["proposal"].weighted, "#,##0")
TALK "• Qualified (40%): $" + FORMAT(stage_totals["qualified"].weighted, "#,##0")
TALK "• Lead (20%): $" + FORMAT(stage_totals["lead"].weighted, "#,##0")
TALK ""
TALK "**Weighted Forecast:** $" + FORMAT(weighted_total, "#,##0")
TALK "**Best Case:** $" + FORMAT(best_case, "#,##0")
TALK "**Committed:** $" + FORMAT(committed, "#,##0")
TALK "**Already Closed:** $" + FORMAT(closed_value, "#,##0")
TALK ""
TALK "**Quota:** $" + FORMAT(quota, "#,##0")
TALK "**Attainment:** " + FORMAT(attainment, "#,##0") + "% (forecasted)"

IF attainment >= 100 THEN
    TALK ""
    TALK "🎯 You're on track to exceed quota!"
ELSE IF attainment >= 80 THEN
    TALK ""
    TALK "📊 You're close! Focus on advancing your top deals."
ELSE
    TALK ""
    TALK "⚠️ You need more pipeline coverage. Time to prospect!"
END IF

RETURN {weighted: weighted_total, best_case: best_case, attainment: attainment}

Customization Ideas

Add Deal Scoring

' Calculate deal score based on various factors
score = 0

' Company size score
IF deal.company_size > 1000 THEN
    score = score + 20
ELSE IF deal.company_size > 100 THEN
    score = score + 10
END IF

' Budget confirmed
IF deal.budget_confirmed THEN
    score = score + 25
END IF

' Decision maker engaged
IF deal.decision_maker THEN
    score = score + 25
END IF

' Timeline urgency
IF DATEDIFF(deal.expected_close, NOW(), "days") < 30 THEN
    score = score + 20
END IF

' Competitor involved
IF deal.competitor THEN
    score = score - 10
END IF

deal.score = score
TALK "Deal Score: " + score + "/100"

Add Activity Tracking

ADD TOOL "log-activity"

PARAM deal_id AS STRING DESCRIPTION "Deal ID"
PARAM activity_type AS STRING LIKE "call" DESCRIPTION "Type: call, email, meeting, demo, proposal"
PARAM notes AS STRING DESCRIPTION "Activity notes"

WITH activity
    deal_id = deal_id
    type = activity_type
    notes = notes
    user = USERNAME
    timestamp = NOW()
END WITH

SAVE "deal_activities.csv", activity

' Update deal's last activity date
UPDATE "deals.csv" SET last_activity = NOW() WHERE id = deal_id

TALK "✅ Activity logged for deal " + deal_id

Add Win/Loss Analysis

ADD TOOL "win-loss-report"

won = FIND "deals.csv", "stage = 'closed_won' AND closed_date >= '" + start_date + "'"
lost = FIND "deals.csv", "stage = 'closed_lost' AND closed_date >= '" + start_date + "'"

won_count = UBOUND(won)
lost_count = UBOUND(lost)

' Guard against division by zero when there are no closed deals
win_rate = 0
IF won_count + lost_count > 0 THEN
    win_rate = (won_count / (won_count + lost_count)) * 100
END IF

won_value = 0
FOR EACH deal IN won
    won_value = won_value + deal.value
NEXT

TALK "📊 **Win/Loss Analysis**"
TALK ""
TALK "**Win Rate:** " + FORMAT(win_rate, "#0.0") + "%"
TALK "**Deals Won:** " + won_count + " ($" + FORMAT(won_value, "#,##0") + ")"
TALK "**Deals Lost:** " + lost_count
TALK ""
TALK "**Top Loss Reasons:**"
' Aggregate loss reasons...

Add Email Integration

' Send proposal email from pipeline
ADD TOOL "send-proposal"

PARAM deal_id AS STRING DESCRIPTION "Deal to send proposal for"

deal = FIND "deals.csv", "id = '" + deal_id + "'"

' Generate proposal from template
proposal = FILL "proposal-template.docx", deal

' Send email
SEND MAIL deal.contact_email, "Proposal for " + deal.company, 
    "Please find attached our proposal.\n\nBest regards,\n" + USERNAME,
    proposal

' Update deal stage
deal.stage = "proposal"
deal.proposal_sent = NOW()
UPDATE "deals.csv", deal

TALK "📧 Proposal sent to " + deal.contact_email
TALK "Deal moved to Proposal stage."

Best Practices

  1. Keep Deals Updated: Update deal stages promptly for accurate forecasting
  2. Log Activities: Track all customer interactions
  3. Use BANT: Qualify deals properly before advancing
  4. Clean Pipeline: Remove stale deals regularly
  5. Review Weekly: Check pipeline health and forecasts weekly

Chapter 5: Multi-Agent Orchestration

Build complete applications through natural conversation. Describe what you want, and the system creates it automatically.

The AutoTask system uses an LLM-powered intent classifier to understand your request and route it to the appropriate handler. Whether you need a full web application, a simple reminder, or automated monitoring, you describe it in plain language.

AutoTask Architecture


Intent Types

| Type | Example | What Gets Created |
|---|---|---|
| APP_CREATE | “create app for clinic” | HTMX pages, tools, schedulers |
| TODO | “call John tomorrow” | Task saved to tasks table |
| MONITOR | “alert when IBM changes” | ON CHANGE event handler |
| ACTION | “email all customers” | Executes immediately |
| SCHEDULE | “daily 9am summary” | SET SCHEDULE automation |
| GOAL | “increase sales 20%” | Autonomous LLM loop with metrics |
| TOOL | “when I say X, do Y” | Voice/chat command |
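
For example, a SCHEDULE request such as “daily 9am summary” might be turned into a handler like this sketch (illustrative; the generated code and the exact SET SCHEDULE form may differ):

' Hypothetical handler generated for "daily 9am summary"
SET SCHEDULE "0 9 * * *"

open_deals = FIND "crm_deals", "stage != 'won' AND stage != 'lost'"
TALK "Good morning! You have " + UBOUND(open_deals) + " open deals."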

Quick Start


Architecture Overview

Bot Database Architecture

One bot equals one database. All applications within a bot share the same data tables, tools, and schedulers.


File Structure

| Path | Description |
|---|---|
| .gbdrive/apps/{name}/ | Generated web application |
| .gbdrive/apps/{name}/index.html | Main HTMX page |
| .gbdrive/apps/{name}/assets/ | CSS, images |
| .gbdialog/tables.bas | Database schema definitions |
| .gbdialog/tools/ | Voice and chat commands |
| .gbdialog/schedulers/ | Timed automations |
| .gbdialog/events/ | Event triggers (ON CHANGE, ON EMAIL) |

Creating an Application


Modifying Your Application

Use Designer to change anything about your app through conversation.


Adding Automation
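
As a sketch of what an automation request can produce, the example below pairs a scheduled task with a change monitor. The cron string for SET SCHEDULE and the ON CHANGE form are assumptions modeled on the ON ... / END ON handlers shown later in this book; the table name appointments is hypothetical.

' Nightly summary at 10pm (cron expression assumed)
SET SCHEDULE "0 22 * * *"
TALK "Your daily summary is ready."

' React when monitored data changes (ON CHANGE form assumed)
ON CHANGE "appointments"
    TALK "An appointment was updated."
END ON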


Keywords Reference

| Keyword | Purpose |
|---|---|
| TABLE | Define data structure |
| FIND | Search records |
| SAVE | Create record |
| UPDATE | Modify record |
| DELETE | Remove record |
| TALK | Send message |
| HEAR | Wait for input |
| SET SCHEDULE | Create automation |
| ON CHANGE | Monitor for changes |
| ON EMAIL | Trigger on email received |

Next Steps

Task Workflow

From request to working application, the task workflow handles every step automatically.

Task Workflow


The Three Steps

| Step | What Happens |
|---|---|
| 1. Describe | You tell the bot what you want in plain language |
| 2. Plan | System analyzes and creates an execution plan |
| 3. Execute | Steps run automatically, app is delivered |

Example Workflow


Error Handling

When something goes wrong, the system asks what to do.


Approval Requests

Some actions require confirmation before executing.


Actions Requiring Approval

| Action | Reason |
|---|---|
| Bulk email sends | Prevents accidental spam |
| Data deletion | Prevents data loss |
| External API calls | Cost and security |
| Schema changes | Database integrity |

Next Steps

App Generation

From natural language to complete application, the generation process handles database design, UI creation, and automation setup.

App Generation Flow


How It Works

You describe what you need. The system analyzes your request, identifies the domain, extracts entities, and generates everything automatically.


What Gets Created

| Component | Location | Description |
|---|---|---|
| Database tables | tables.bas | Data schema with fields and types |
| Web pages | /apps/{name}/ | HTMX-based user interface |
| Voice commands | .gbdialog/tools/ | Activated by trigger phrases |
| Automations | .gbdialog/schedulers/ | Timed tasks and reports |

Modifying Your App

Use conversation to change any aspect of your application.


Multiple Apps, Shared Data

Create different applications that share the same database tables.


Generation Process

| Phase | Description |
|---|---|
| Intent Analysis | LLM identifies domain and required entities |
| Schema Design | Tables and relationships defined |
| UI Generation | HTMX pages created for each entity |
| Tool Creation | Voice commands for common actions |
| Scheduler Setup | Automations for reports and alerts |

Next Steps

Data Model

Tables store your application data. When you describe what you need, the system creates table definitions automatically in tables.bas.

Data Model Flow


What Is a Table?

A table organizes related data into rows and columns, similar to a spreadsheet.

| id | name | phone | email |
|---|---|---|---|
| 1 | John Smith | 555-1234 | john@email.com |
| 2 | Maria Garcia | 555-5678 | maria@email.com |

Creating Tables

Describe what data you need to store.


Adding Fields

Add new fields to existing tables through conversation.


Field Types

| Type | Use For | Example |
|---|---|---|
| STRING | Short text | Names, phones |
| TEXT | Long text | Notes, descriptions |
| INTEGER | Whole numbers | Age, quantity |
| DECIMAL | Numbers with decimals | Price, weight |
| BOOLEAN | Yes/No values | Active, completed |
| DATE | Dates only | Birth date |
| DATETIME | Date and time | Appointment time |

Searching Data

Use natural language to find records.


Linking Tables

Create relationships between tables.


Shared Data

All applications within a bot share the same tables. Change data in one app, and it updates everywhere.

| Concept | Description |
|---|---|
| One bot = one database | All apps share tables |
| Schema in tables.bas | Single source of truth |
| Auto-sync | Changes deploy automatically |

TABLE Keyword

Tables are defined in .gbdialog/tables.bas using the TABLE keyword:

| Syntax | Description |
|---|---|
| TABLE name | Start table definition |
| field AS TYPE | Define a field |
| END TABLE | End table definition |
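
A hand-written definition in tables.bas might look like this sketch, which simply instantiates the syntax above (the table and field names are illustrative):

TABLE customers
    name AS STRING
    notes AS TEXT
    visits AS INTEGER
    balance AS DECIMAL
    active AS BOOLEAN
    birth_date AS DATE
    next_appointment AS DATETIME
END TABLE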

FIND Keyword

Query data using the FIND keyword:

| Syntax | Description |
|---|---|
| FIND * IN table | Get all records |
| FIND * IN table WHERE condition | Filter records |
| FIND field1, field2 IN table | Select specific fields |
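
Applied to the customers sketch above, the three forms look like this (hypothetical data, directly following the documented syntax):

' All customers
all_customers = FIND * IN customers

' Only active customers
active_customers = FIND * IN customers WHERE active = TRUE

' Just names and balances
summary = FIND name, balance IN customers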

Next Steps

Designer

Designer is an AI assistant that appears alongside your application. Use natural conversation to modify any aspect of your app without writing code.

Designer Workflow


How It Works

Designer understands the context of your application. When you make a request, it knows which app you are viewing, which page is active, and what tables exist.


What Designer Can Modify

| Category | Examples |
|---|---|
| Styles | Colors, layout, fonts, spacing |
| HTML Pages | Forms, lists, buttons, elements |
| Database | Add fields, create tables |
| Tools | Voice commands, chat triggers |
| Schedulers | Automated tasks, reports |

Editing Styles


Adding Form Fields


Creating Voice Commands


Creating Schedulers


Adding Database Fields


Context Awareness

Designer tracks your current location and correlates follow-up requests.

| Context | How It Is Used |
|---|---|
| Current app | Edits apply to that app’s files |
| Current page | Defaults edits to that file |
| Recent changes | Correlates follow-up requests |
| Available tables | Knows your data structure |
| Existing tools | Knows voice commands already created |

Handling Sensitive Information

When Designer needs API keys or credentials:


Preview Before Changes

Designer shows what will change before applying destructive operations.


Safety Features

| Feature | Description |
|---|---|
| Preview changes | Shows what will change before applying |
| Confirm destructive actions | Asks before deleting |
| Change history | All changes are logged |
| Undo support | Can undo recent changes |

Limitations

| Limitation | Reason |
|---|---|
| Cannot delete entire app | Safety protection |
| Cannot access external systems without credentials | Security |
| Cannot modify system files | Isolation |
| Cannot execute arbitrary code | Security |

Available Keywords

Designer uses GB BASIC keywords to implement changes:

| Category | Keywords |
|---|---|
| Communication | TALK, HEAR, SEND MAIL, SEND TEMPLATE |
| Data | FIND, SAVE, UPDATE, DELETE |
| Automation | SET SCHEDULE, ON CHANGE, ON EMAIL |
| AI | LLM, USE KB |
| Web | SCRAPE, GET, POST, RSS |
| Memory | SET BOT MEMORY, GET BOT MEMORY |
| Control | IF/THEN/ELSE, FOR EACH/NEXT, WHILE/WEND |

Undoing Changes


Tips for Better Results

Be Specific

| Less Effective | More Effective |
|---|---|
| “improve the form” | “add email validation to the form” |
| “change the colors” | “change the main button to green” |
| “fix this” | “increase the font size in the table” |

Combine Requests


Next Steps

Agent Workspaces

General Bots supports an advanced “Agent Mode” UI where the interaction with an AI agent shifts from a simple chat interface to a fully realized multi-panel workspace. This design empowers users to monitor complex autonomous tasks in real-time, providing deep visibility into what the AI is doing.

Architectural Overview

When an application logic branch triggers a computationally heavy or open-ended action (like APP_CREATE or deep research), the backend kicks off an Agent Executor process. This executor brings up a dedicated, highly isolated LXC container for that specific session.

The Agent Executor communicates back to the frontend in real-time. This provides a transparent view of:

  1. Thought Processes: High-level textual reasoning of the agent.
  2. Terminal Output: Verbatim standard output and standard error from the LXC container.
  3. Browser Output: Visual previews of web applications being built or research being conducted, served via localhost proxies from within the container.

The Agent UI

The main interface pivots from a standard single-panel conversation to a multi-panel grid when “Agent Mode” is toggled from the Chat interface.

This layout includes:

  • Left Sidebar: A collapsible “Agents & Workspaces” sidebar that summarizes the current state of active agents, their resource usage (quota), and provides drag-and-drop workspace organization.
  • Center Chat: A persistent interactive chat with the specific agent, allowing for ongoing refinement of the task.
  • Right Hand Split Screens:
    • Top Right: An active Browser Window. The agent can stream HTML rendering updates or host internal applications (localhost) from its LXC sandbox, exposing them visually to the user.
    • Bottom Right: A live Terminal feed streaming stdout and stderr directly from the bash environment of the underlying LXC container.

LXC Sandbox Execution

To prevent dependency collisions, protect the host operating system, and offer clean slate environments for arbitrary execution, every agent session spins up a temporary Ubuntu 22.04 LXC container.

  1. When the agent intent classifier matches a heavy task (e.g. APP_CREATE), the backend initiates the ContainerSession struct.
  2. An lxc launch command instantiates a fast, lightweight container instance.
  3. A bash shell is opened inside this container, and its I/O streams are piped back to the TaskProgressEvent broadcast channel using Tokio.
  4. The user sees the bash output instantly in the bottom-right terminal panel.
  5. On completion or failure, the container is forcibly stopped and deleted (lxc delete --force).

This isolated environment gives agents complete freedom to execute package installations (such as npm install), launch development servers, and write arbitrary code, entirely segregated from the primary BotServer.

Chapter 6: Channels & Connectivity

This chapter catalogs all external services that General Bots integrates with.

Overview

General Bots connects to external services for extended functionality. All service credentials should be stored in config.csv within the bot’s .gbot folder - never hardcoded in scripts.

Infrastructure services (database, storage, cache) are automatically managed by the Directory service (Zitadel).

Service Categories

| Category | Services | Configuration Location |
|---|---|---|
| LLM Providers | OpenAI, Groq, Anthropic, Azure OpenAI | config.csv |
| Weather | OpenWeatherMap | config.csv |
| Messaging Channels | WhatsApp, Teams, Instagram, Telegram | config.csv |
| Storage | S3-Compatible (MinIO, etc.) | Vault (automatic) |
| Directory | Zitadel | VAULT_* environment variables |
| Email | Stalwart / IMAP/SMTP | Vault (automatic) |
| Calendar | CalDAV servers | config.csv |
| Database | PostgreSQL | Vault (automatic) |
| Cache | Redis-compatible | Vault (automatic) |

Quick Reference

BASIC Keywords That Call External Services

| Keyword | Service | Config Key |
|---|---|---|
| LLM | LLM Provider | llm-provider, llm-api-key |
| WEATHER | OpenWeatherMap | weather-api-key |
| SEND MAIL | SMTP Server | Managed by Directory service |
| SEND WHATSAPP | WhatsApp Business API | whatsapp-api-key, whatsapp-phone-number-id |
| SEND TEAMS | Microsoft Teams | teams-app-id, teams-app-password |
| SEND INSTAGRAM | Instagram Graph API | instagram-access-token, instagram-page-id |
| GET (with http/https URL) | Any HTTP endpoint | N/A |
| IMAGE | BotModels (local) | botmodels-enabled, botmodels-url |
| VIDEO | BotModels (local) | botmodels-enabled, botmodels-url |
| AUDIO | BotModels (local) | botmodels-enabled, botmodels-url |
| SEE | BotModels (local) | botmodels-enabled, botmodels-url |
| FIND | Qdrant (local) | Internal service |
| USE WEBSITE | Web crawling | N/A |
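
For example, once llm-provider and llm-api-key are set, the LLM keyword routes the call through the configured provider. A minimal sketch (the exact LLM expression form is assumed):

' Ask the configured LLM provider (expression form assumed)
answer = LLM "Draft a friendly reminder about tomorrow's appointment"
TALK answer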

Service Configuration Template

Add these to your config.csv:

key,value
llm-provider,openai
llm-api-key,YOUR_API_KEY
llm-model,claude-sonnet-4.5
weather-api-key,YOUR_OPENWEATHERMAP_KEY
whatsapp-api-key,YOUR_WHATSAPP_KEY
whatsapp-phone-number-id,YOUR_PHONE_ID
whatsapp-verify-token,YOUR_WEBHOOK_VERIFY_TOKEN
teams-app-id,YOUR_TEAMS_APP_ID
teams-app-password,YOUR_TEAMS_PASSWORD
instagram-access-token,YOUR_INSTAGRAM_TOKEN
instagram-page-id,YOUR_PAGE_ID
botmodels-enabled,true
botmodels-url,http://localhost:5000

# Human Handoff / CRM Features
crm-enabled,true
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true

Auto-Managed Services

The following services are automatically configured by the Directory service (Zitadel):

| Service | What’s Managed |
|---|---|
| PostgreSQL | Connection credentials, database creation |
| S3-Compatible Storage | Access keys, bucket policies |
| Cache | Connection credentials |
| Stalwart Email | User accounts, SMTP/IMAP access |

You do not need to configure these services manually. The Directory service handles credential provisioning and rotation.

Security Notes

  1. Never hardcode credentials - Always use config.csv or GET BOT MEMORY
  2. Rotate keys regularly - Update config.csv and restart the bot
  3. Use least privilege - Only grant permissions needed by the bot
  4. Audit access - Monitor external API usage through logs
  5. Infrastructure credentials - Managed automatically by Directory service

See Also

Channel Integrations

This guide covers integrating messaging channels with General Bots, focusing on WhatsApp Business API integration using Twilio-purchased phone numbers.

Overview

General Bots supports multiple messaging channels through a unified API. This section focuses on WhatsApp Business API, the most widely used business messaging platform globally.

Supported Channels

| Channel | Status | Config Keys |
|---|---|---|
| WhatsApp | ✅ Production Ready | whatsapp-api-key, whatsapp-phone-number-id |
| Twilio SMS | ✅ Production Ready | twilio-account-sid, twilio-auth-token |
| Instagram | ✅ Production Ready | instagram-access-token, instagram-page-id |
| Microsoft Teams | ✅ Production Ready | teams-app-id, teams-app-password |

WhatsApp Business Integration

The most popular channel for business messaging. Complete integration guide: WhatsApp Quick Start

Quick Setup (5 minutes)

  1. Purchase a phone number from Twilio

    # Twilio Console > Phone Numbers > Buy a Number
    # Select: Voice capability (required for verification)
    # Example: +553322980098
    
  2. Create Meta App with WhatsApp

    # https://developers.facebook.com/apps/
    # Create App > Business > Add WhatsApp product
    
  3. Configure credentials in config.csv

    whatsapp-enabled,true
    whatsapp-api-key,EAAQdlso6aM8BOwlhc3yM6bbJkGyibQPGJd87zFDHtfaFoJDJPohMl2c5nXs4yYuuHwoXJWx0rQKo0VXgTwThPYzqLEZArOZBhCWPBUpq7YlkEJXFAgB6ZAb3eoUzZAMgNZCZA1sg11rT2G8e1ZAgzpRVRffU4jmMChc7ybcyIwbtGOPKZAXKcNoMRfUwssoLhDWr
    whatsapp-phone-number-id,1158433381968079
    whatsapp-business-account-id,390727550789228
    whatsapp-webhook-verify-token,4qIogZadggQ.BEoMeciXIdl_MlkV_1DTx8Z_i0bYPxtSJwKSbH0FKlY
    whatsapp-application-id,323250907549153
    

BASIC Keywords for WhatsApp

REM Send a message
SEND WHATSAPP TO "+5511999999999" WITH "Hello from General Bots!"

REM Handle incoming messages
ON WHATSAPP MESSAGE RECEIVED
  LET SENDER$ = GET WHATSAPP SENDER NUMBER
  LET MESSAGE$ = GET WHATSAPP MESSAGE BODY
  
  REM Echo message back
  SEND WHATSAPP TO SENDER$ WITH "You said: " + MESSAGE$
END ON

Credential Reference

| Credential | Format | Example | Purpose |
|---|---|---|---|
| Access Token | EAAQ... | EAAQdlso6aM8BOwl... | API authentication |
| Phone Number ID | 16 digits | 1158433381968079 | Message sending endpoint |
| WABA ID | 15 digits | 390727550789228 | Business account identifier |
| Verify Token | Custom string | 4qIogZadggQ.BEoMeci... | Webhook security |
| Application ID | 15 digits | 323250907549153 | App identifier |

Phone Number Verification

Twilio numbers require voice call verification (not SMS):

  1. Configure Twilio webhook to capture verification calls

    <!-- TwiML for voice handling -->
    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
      <Gather action="https://twimlets.com/voicemail?Email=your@email.com">
        <Say voice="alice">Please enter your verification code.</Say>
      </Gather>
    </Response>
    
  2. In Meta Business Suite: Select “Phone Call” verification method

  3. Enter the 6-digit code received via email

  4. Verification complete - number ready for WhatsApp

See: Webhook Configuration Guide

Advanced Configuration

Message Templates

For business-initiated messages outside the 24-hour window:

// Send template message
POST https://graph.facebook.com/v18.0/1158433381968079/messages
{
  "messaging_product": "whatsapp",
  "to": "5511999999999",
  "type": "template",
  "template": {
    "name": "hello_world",
    "language": { "code": "pt_BR" }
  }
}

Rate Limiting

WhatsApp enforces rate limits per tier:

| Tier | Messages/Day | Messages/Second |
|---|---|---|
| Tier 1 | 1,000 | 1 |
| Tier 2 | 10,000 | 5 |
| Tier 3 | 100,000 | 50 |
| Tier 4 | Unlimited | 1,000 |

Implement rate limiting in your bot:

REM Simple rate limiting
LET LAST_SENT = 0
SUB SEND WHATSAPP WITH LIMIT TO NUMBER$, MESSAGE$
  LET NOW = TIMER
  IF NOW - LAST_SENT < 1 THEN
    WAIT 1 - (NOW - LAST_SENT)
  END IF
  SEND WHATSAPP TO NUMBER$ WITH MESSAGE$
  LAST_SENT = TIMER
END SUB
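
Calling the wrapper instead of SEND WHATSAPP directly keeps sends at or below one per second. A usage sketch (GB BASIC sub-invocation syntax assumed):

REM Send through the rate-limited wrapper defined above
SEND WHATSAPP WITH LIMIT TO "+5511999999999", "Your order has shipped!"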

Webhook Security

Always verify webhook signatures:

// Node.js signature verification
const crypto = require('crypto');

function verifySignature(payload, signature, appSecret) {
  const expected = 'sha256=' + 
    crypto.createHmac('sha256', appSecret)
      .update(payload)
      .digest('hex');
  return crypto.timingSafeEqual(
    Buffer.from(signature),
    Buffer.from(expected)
  );
}

Complete Documentation

For detailed guides and examples:

Other Channels

Twilio SMS

Simple SMS integration using Twilio:

# config.csv
twilio-account-sid,ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
twilio-auth-token,your_auth_token_here
twilio-from-number,+15551234567

REM Send SMS
SEND SMS TO "+5511999999999" WITH "Hello via SMS!"

Instagram Direct Messages

Connect Instagram messaging:

# config.csv
instagram-access-token,EAAxxxx...
instagram-page-id,123456789012345

REM Send Instagram DM
SEND INSTAGRAM TO "1234567890" WITH "Hello via Instagram!"

Configuration Template

Complete channel configuration example:

# config.csv

# WhatsApp Business (Primary channel)
whatsapp-enabled,true
whatsapp-api-key,EAAQdlso6aM8BOwlhc3yM6bbJkGyibQPGJd87zFDHtfaFoJDJPohMl2c5nXs4yYuuHwoXJWx0rQKo0VXgTwThPYzqLEZArOZBhCWPBUpq7YlkEJXFAgB6ZAb3eoUzZAMgNZCZA1sg11rT2G8e1ZAgzpRVRffU4jmMChc7ybcyIwbtGOPKZAXKcNoMRfUwssoLhDWr
whatsapp-phone-number-id,1158433381968079
whatsapp-business-account-id,390727550789228
whatsapp-webhook-verify-token,4qIogZadggQ.BEoMeciXIdl_MlkV_1DTx8Z_i0bYPxtSJwKSbH0FKlY

# Twilio SMS (Backup channel)
twilio-enabled,false
twilio-account-sid,ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
twilio-auth-token,your_auth_token_here
twilio-from-number,+15551234567

# Instagram (Social channel)
instagram-enabled,false
instagram-access-token,EAAxxxx...
instagram-page-id,123456789012345

Troubleshooting

Common Issues

Issue: Phone number verification fails

  • Solution: Ensure “Phone Call” verification is selected (not SMS)
  • Solution: Verify Twilio webhook is configured correctly
  • See: Troubleshooting Guide

Issue: Messages not sending

  • Solution: Check access token validity
  • Solution: Verify phone number format: 5511999999999 (no +, no spaces)
  • Solution: Ensure webhook is subscribed to “messages” field

Issue: Rate limit errors

  • Solution: Implement rate limiting in your bot
  • Solution: Use message queues for bulk sending
  • See: Code Examples

Best Practices

  1. Never hardcode credentials - Always use config.csv
  2. Implement retry logic - Handle API failures gracefully
  3. Monitor rate limits - Respect platform limits
  4. Secure webhooks - Verify all incoming requests
  5. Test thoroughly - Use ngrok for local testing
  6. Log everything - Track message delivery and errors
  7. Use templates - Pre-approved templates for business-initiated messages
  8. Handle errors - Provide user-friendly error messages

Support

Next Steps

  1. Complete the Quick Start Guide
  2. Set up webhooks using Webhook Configuration
  3. Explore Code Examples for your use case
  4. Configure monitoring and error handling
  5. Test with your team before launching to users

For configuration of other services (LLM providers, databases, etc.), see Appendix B: External Services.

Service Catalog

This catalog provides detailed information about every external service that General Bots integrates with.

LLM Providers

OpenAI

| Property | Value |
|---|---|
| Service URL | https://api.openai.com/v1 |
| Config Key | llm-provider=openai |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | platform.openai.com/docs |
| BASIC Keywords | LLM |
| Supported Models | gpt-5, gpt-oss-120b, gpt-oss-20b |

Groq

| Property | Value |
|---|---|
| Service URL | https://api.groq.com/openai/v1 |
| Config Key | llm-provider=groq |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | console.groq.com/docs |
| BASIC Keywords | LLM |
| Supported Models | llama-4-scout, llama-4-maverick, qwen3, mixtral-8x22b |

Anthropic

| Property | Value |
|---|---|
| Service URL | https://api.anthropic.com/v1 |
| Config Key | llm-provider=anthropic |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | docs.anthropic.com |
| BASIC Keywords | LLM |
| Supported Models | claude-opus-4.5, claude-sonnet-4.5 |

Azure OpenAI

| Property | Value |
|---|---|
| Service URL | https://{resource}.openai.azure.com/ |
| Config Key | llm-provider=azure |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | learn.microsoft.com/azure/ai-services/openai |
| BASIC Keywords | LLM |

Google (Gemini)

| Property | Value |
|---|---|
| Service URL | https://generativelanguage.googleapis.com/v1 |
| Config Key | llm-provider=google |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | ai.google.dev/docs |
| BASIC Keywords | LLM |
| Supported Models | gemini-3-pro, gemini-2.5-pro, gemini-2.5-flash |

xAI (Grok)

| Property | Value |
|---|---|
| Service URL | https://api.x.ai/v1 |
| Config Key | llm-provider=xai |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | docs.x.ai |
| BASIC Keywords | LLM |
| Supported Models | grok-4 |

DeepSeek

| Property | Value |
|---|---|
| Service URL | https://api.deepseek.com/v1 |
| Config Key | llm-provider=deepseek |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | platform.deepseek.com/docs |
| BASIC Keywords | LLM |
| Supported Models | deepseek-v3.1, deepseek-r3 |

Mistral AI

| Property | Value |
|---|---|
| Service URL | https://api.mistral.ai/v1 |
| Config Key | llm-provider=mistral |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | docs.mistral.ai |
| BASIC Keywords | LLM |
| Supported Models | mixtral-8x22b |

Weather Services

OpenWeatherMap

| Property | Value |
|---|---|
| Service URL | https://api.openweathermap.org/data/2.5 |
| Config Key | weather-api-key |
| Documentation | openweathermap.org/api |
| BASIC Keywords | WEATHER |
| Free Tier | 1,000 calls/day |
| Required Plan | Free or higher |

Example Usage:

weather = WEATHER "Seattle"
TALK weather

Messaging Channels

WhatsApp Business API

| Property | Value |
|---|---|
| Service URL | https://graph.facebook.com/v17.0 |
| Config Keys | whatsapp-api-key, whatsapp-phone-number-id, whatsapp-business-account-id |
| Documentation | developers.facebook.com/docs/whatsapp |
| BASIC Keywords | SEND WHATSAPP, SEND FILE (WhatsApp) |
| Webhook URL | /api/channels/whatsapp/webhook |

Microsoft Teams

| Property | Value |
|---|---|
| Service URL | https://smba.trafficmanager.net/apis |
| Config Keys | teams-app-id, teams-app-password, teams-tenant-id |
| Documentation | learn.microsoft.com/microsoftteams/platform |
| BASIC Keywords | SEND TEAMS, SEND FILE (Teams) |
| Webhook URL | /api/channels/teams/messages |

Instagram Messaging

| Property | Value |
|---|---|
| Service URL | https://graph.facebook.com/v17.0 |
| Config Keys | instagram-access-token, instagram-page-id, instagram-account-id |
| Documentation | developers.facebook.com/docs/instagram-api |
| BASIC Keywords | SEND INSTAGRAM |
| Webhook URL | /api/channels/instagram/webhook |

Telegram

| Property | Value |
|---|---|
| Service URL | https://api.telegram.org/bot{token} |
| Config Keys | telegram-bot-token |
| Documentation | core.telegram.org/bots/api |
| BASIC Keywords | SEND TELEGRAM |
| Webhook URL | /api/channels/telegram/webhook |
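
Sending follows the same pattern as the other channels. A sketch (the SEND TELEGRAM form is assumed to mirror SEND WHATSAPP and SEND INSTAGRAM):

REM Send a Telegram message (form assumed)
SEND TELEGRAM TO "123456789" WITH "Hello via Telegram!"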

Storage Services

S3-Compatible Storage

General Bots uses S3-compatible object storage. Configuration is automatically managed by the Directory service (Zitadel).

| Property | Value |
|---|---|
| Local Default | MinIO on port 9000 |
| Management | Directory service (automatic) |
| Console Port | 9001 (when using MinIO) |
| BASIC Keywords | GET (file retrieval) |

Compatible Services:

  • MinIO (default local installation)
  • Backblaze B2
  • Wasabi
  • DigitalOcean Spaces
  • Cloudflare R2
  • Any S3-compatible provider

Storage credentials are provisioned and rotated automatically by the Directory service. No manual configuration required.
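
Since the BASIC keyword listed above is GET, retrieving a stored object might look like this sketch (the path and the file-retrieval form are assumptions):

' Retrieve a document from the bot's storage (form assumed)
proposal = GET "templates/proposal-template.docx"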


Directory Services

Zitadel (Identity Provider)

| Property | Value |
|---|---|
| Local Default | Port 8080 |
| Environment Variables | DIRECTORY_URL, DIRECTORY_CLIENT_ID, DIRECTORY_CLIENT_SECRET |
| Documentation | zitadel.com/docs |
| Purpose | User authentication, SSO, OAuth2/OIDC, service credential management |

The Directory service manages:

  • User authentication
  • Service credentials (database, storage, cache)
  • OAuth applications
  • Role-based access control

Email Services

Stalwart Mail Server

| Property | Value |
|---|---|
| Ports | 25 (SMTP), 993 (IMAPS), 587 (Submission) |
| Management | Directory service (automatic) |
| Documentation | stalw.art/docs |
| BASIC Keywords | SEND MAIL |

Email accounts are created and managed through the Directory service.

External IMAP/SMTP

| Property | Value |
|---|---|
| Config Keys | smtp-server, smtp-port, imap-server, imap-port, email-username, email-password |
| BASIC Keywords | SEND MAIL |
| Supported Providers | Gmail, Outlook, custom SMTP/IMAP |

Gmail Configuration Example (in config.csv):

smtp-server,smtp.gmail.com
smtp-port,587
imap-server,imap.gmail.com
imap-port,993
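
With those keys in place, SEND MAIL delivers through the configured account. A sketch using the SEND MAIL TO ... form that appears later in the webhook guide (the address is hypothetical):

REM Send through the configured Gmail account
SEND MAIL TO "customer@example.com" WITH SUBJECT "Welcome!" AND BODY "Thanks for signing up."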

Local Services (BotModels)

Image Generation

| Property | Value |
|---|---|
| Service URL | http://localhost:5000 (default) |
| Config Keys | botmodels-enabled, botmodels-url |
| BASIC Keywords | IMAGE |
| Requires | BotModels service running |

Video Generation

| Property | Value |
|---|---|
| Service URL | http://localhost:5000 (default) |
| Config Keys | botmodels-enabled, botmodels-url |
| BASIC Keywords | VIDEO |
| Requires | BotModels service running, GPU recommended |

Audio Generation (TTS)

| Property | Value |
|---|---|
| Service URL | http://localhost:5000 (default) |
| Config Keys | botmodels-enabled, botmodels-url |
| BASIC Keywords | AUDIO |
| Requires | BotModels service running |

Vision/Captioning

| Property | Value |
|---|---|
| Service URL | http://localhost:5000 (default) |
| Config Keys | botmodels-enabled, botmodels-url |
| BASIC Keywords | SEE |
| Requires | BotModels service running |
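
A combined sketch of the BotModels keywords, all of which call the local service (the exact expression forms are assumptions):

REM BotModels keyword sketches (expression forms assumed)
' Image generation
IMAGE "a friendly robot mascot, flat style"
' Vision / captioning
caption = SEE "storefront.jpg"
' Text-to-speech
AUDIO "Welcome to General Bots!"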

Internal Services

These services are deployed locally as part of the General Bots stack. All except BotModels are managed by the Directory service:

| Service | Default Port | Purpose | Management |
|---|---|---|---|
| PostgreSQL | 5432 | Primary database | Vault |
| Qdrant | 6333 | Vector storage for KB | Vault |
| Cache | 6379 | Caching | Vault |
| Stalwart | 25, 993 | Email server (optional) | Vault |
| BotModels | 5000 | AI model inference | config.csv |

Service Health Checks

All services can be checked via the monitoring API:

GET /api/monitoring/services

Response includes status for all configured external services.


Troubleshooting

Common Issues

  1. API Key Invalid - Verify key in config.csv, ensure no trailing whitespace
  2. Rate Limited - Check service quotas, implement caching with SET BOT MEMORY
  3. Connection Timeout - Verify network access to external URLs
  4. Service Unavailable - Check service status pages

Debug Logging

Enable trace logging to see external API calls:

RUST_LOG=trace ./botserver

Quick Start Guide

Get your WhatsApp Business bot up and running in 30 minutes with this streamlined setup guide.

Prerequisites Checklist

  • Twilio account with $10+ credit
  • Meta for Developers account
  • Meta Business Suite account
  • Publicly accessible webhook URL (use ngrok for testing)
  • Basic command line knowledge

30-Minute Setup

Step 1: Buy Twilio Number (5 minutes)

# Log into Twilio Console
# https://console.twilio.com/

# Navigate to: Phone Numbers > Buy a Number
# Select: Voice capability (required!)
# Purchase number
# Example: +553322980098

Tip: Choose a number from your target country for easier verification.

Step 2: Create Meta App (5 minutes)

# Go to Meta for Developers
# https://developers.facebook.com/apps/

# Click: Create App > Business type
# App name: "My WhatsApp Bot"
# Add product: WhatsApp
# Create WhatsApp Business Account (WABA)

Save these values:

WABA ID:          390727550789228
Application ID:   323250907549153
Phone Number ID:  (after verification)

Step 3: Configure Twilio Webhook (5 minutes)

Option A: TwiML Bin (Fastest)

<!-- Create TwiML Bin in Twilio Console -->
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Gather action="https://twimlets.com/voicemail?Email=your-email@example.com" method="POST">
    <Say voice="alice">Please enter your verification code.</Say>
  </Gather>
</Response>

Option B: ngrok + Node.js (Recommended)

# Install dependencies
npm install express twilio body-parser

# Create server.js
const express = require('express');
const twilio = require('twilio');
const app = express();

app.use(require('body-parser').urlencoded({ extended: false }));

app.post('/twilio/voice', (req, res) => {
  const twiml = new twilio.twiml.VoiceResponse();
  twiml.redirect('https://twimlets.com/voicemail?Email=your-email@example.com');
  res.type('text/xml');
  res.send(twiml.toString());
});

app.listen(3000);

# Start ngrok
ngrok http 3000

# Update Twilio number webhook to:
# https://abc123.ngrok.io/twilio/voice

Step 4: Verify Phone Number (5 minutes)

# In Meta Business Suite:
# 1. WhatsApp Accounts > Add Phone Number
# 2. Enter: +553322980098
# 3. Select: "Phone Call" (NOT SMS!)
# 4. Click: Verify

# Meta will call your Twilio number
# Check your email for the verification code
# Enter code in Meta dashboard

Critical: Select “Phone Call” verification - Twilio numbers don’t support SMS!

Step 5: Get API Credentials (3 minutes)

# In Meta for Developers:
# 1. Your App > WhatsApp > API Setup
# 2. Click: "Temporary Access Token"
# 3. Copy token (starts with EAAQ...)
# 4. Note Phone Number ID from URL

Required credentials:

whatsapp-api-key,EAAQdlso6aM8BOwlhc3yM6bbJkGyibQPGJd87zFDHtfaFoJDJPohMl2c5nXs4yYuuHwoXJWx0rQKo0VXgTwThPYzqLEZArOZBhCWPBUpq7YlkEJXFAgB6ZAb3eoUzZAMgNZCZA1sg11rT2G8e1ZAgzpRVRffU4jmMChc7ybcyIwbtGOPKZAXKcNoMRfUwssoLhDWr
whatsapp-phone-number-id,1158433381968079
whatsapp-business-account-id,390727550789228
whatsapp-webhook-verify-token,4qIogZadggQ.BEoMeciXIdl_MlkV_1DTx8Z_i0bYPxtSJwKSbH0FKlY
whatsapp-application-id,323250907549153
whatsapp-enabled,true

Step 6: Configure Webhook (5 minutes)

# Start your webhook server
node server.js

# In Meta Developers:
# 1. WhatsApp > API Setup > Webhook > Edit
# 2. Webhook URL: https://your-domain.com/webhooks/whatsapp
# 3. Verify Token: 4qIogZadggQ.BEoMeciXIdl_MlkV_1DTx8Z_i0bYPxtSJwKSbH0FKlY
# 4. Click: Verify and Save
# 5. Subscribe to: messages

Step 7: Configure General Bots (2 minutes)

# Edit .gbot/config.csv
key,value
whatsapp-enabled,true
whatsapp-api-key,EAAQdlso6aM8BOwlhc3yM6bbJkGyibQPGJd87zFDHtfaFoJDJPohMl2c5nXs4yYuuHwoXJWx0rQKo0VXgTwThPYzqLEZArOZBhCWPBUpq7YlkEJXFAgB6ZAb3eoUzZAMgNZCZA1sg11rT2G8e1ZAgzpRVRffU4jmMChc7ybcyIwbtGOPKZAXKcNoMRfUwssoLhDWr
whatsapp-phone-number-id,1158433381968079
whatsapp-business-account-id,390727550789228
whatsapp-webhook-verify-token,4qIogZadggQ.BEoMeciXIdl_MlkV_1DTx8Z_i0bYPxtSJwKSbH0FKlY
whatsapp-application-id,323250907549153

Step 8: Test Your Bot (5 minutes)

# Send test message via API
curl -X POST \
  'https://graph.facebook.com/v18.0/1158433381968079/messages' \
  -H 'Authorization: Bearer EAAQdlso6aM8BOwl...' \
  -H 'Content-Type: application/json' \
  -d '{
    "messaging_product": "whatsapp",
    "to": "5511999999999",
    "type": "text",
    "text": {"body": "Hello from General Bots!"}
  }'

# Or use BASIC
REM Test your WhatsApp integration
SEND WHATSAPP TO "+5511999999999" WITH "Hello from General Bots!"

Your First WhatsApp Bot

Create a simple echo bot:

REM Simple WhatsApp Echo Bot
ON WHATSAPP MESSAGE RECEIVED
  LET SENDER$ = GET WHATSAPP SENDER NUMBER
  LET MESSAGE$ = GET WHATSAPP MESSAGE BODY
  
  LOG "Message from " + SENDER$ + ": " + MESSAGE$
  
  REM Echo back with acknowledgment
  SEND WHATSAPP TO SENDER$ WITH "You said: " + MESSAGE$
END ON

Common First-Time Mistakes

❌ Don’t select SMS verification - Use “Phone Call”
❌ Don’t hardcode tokens - Use config.csv
❌ Don’t forget webhook subscriptions - Subscribe to “messages”
❌ Don’t use + in phone numbers - Format: 5511999999999
❌ Don’t ignore rate limits - Max 1,000 messages/second

Next Steps

  1. Create message templates for business-initiated conversations
  2. Set up persistent storage for conversation history
  3. Implement retry logic for failed messages
  4. Add monitoring for webhook health
  5. Review security best practices

Need Help?

Verification Checklist

  • Twilio number purchased with Voice capability
  • Meta app created with WhatsApp product
  • Phone number verified via phone call
  • Access token generated and saved
  • Webhook configured and verified
  • Webhook subscribed to “messages”
  • config.csv updated with all credentials
  • Test message sent successfully
  • Incoming webhook received
  • Bot replied to test message

All checked? Your WhatsApp bot is live!

Quick Reference: Essential Commands

# Test webhook connectivity
curl -X POST https://your-webhook.com/webhooks/whatsapp \
  -H "Content-Type: application/json" \
  -d '{"test":true}'

# Check Meta API status
curl https://developers.facebook.com/status/

# View Twilio call logs
# https://console.twilio.com/us1/develop/monitor/logs/calls

# Test access token
curl -X GET "https://graph.facebook.com/v18.0/me" \
  -H "Authorization: Bearer YOUR_TOKEN"

# Monitor bot logs
tail -f .gbot/logs/bot.log

Configuration Template

Copy this template and replace with your values:

# WhatsApp Business Configuration
whatsapp-enabled,true
whatsapp-api-key,YOUR_ACCESS_TOKEN_HERE
whatsapp-phone-number-id,YOUR_PHONE_NUMBER_ID_HERE
whatsapp-business-account-id,YOUR_WABA_ID_HERE
whatsapp-webhook-verify-token,YOUR_VERIFY_TOKEN_HERE
whatsapp-application-id,YOUR_APP_ID_HERE
whatsapp-from-number,+553322980098

# Optional: Advanced Settings
whatsapp-webhook-url,https://your-domain.com/webhooks/whatsapp
whatsapp-timeout,30000
whatsapp-retry-attempts,3
whatsapp-rate-limit,50

Time-Saving Tips

💡 Use ngrok for testing - No need to deploy to test webhooks
💡 Save all credentials immediately - Tokens won’t be shown again
💡 Test with your own number first - Verify everything works
💡 Enable debug logging - Troubleshoot issues faster
💡 Set up monitoring early - Catch problems before users do


Estimated total time: 30 minutes
Difficulty: Intermediate
Cost: ~$10/month (Twilio number + usage)

For detailed explanations, advanced configurations, and production deployment, see the complete documentation.

Webhook Configuration Guide

This guide provides detailed instructions for configuring webhooks for both Twilio (voice call handling) and Meta (WhatsApp message handling) in your General Bots integration.

Overview

The integration requires two separate webhook configurations:

  1. Twilio Voice Webhook - Handles incoming verification calls and captures verification codes
  2. Meta WhatsApp Webhook - Receives incoming WhatsApp messages and status updates

Twilio Webhook Configuration

Purpose

The Twilio webhook is critical during the initial phone number verification phase. Since Twilio numbers don’t support SMS verification, Meta must call your number and read a 6-digit code. Your webhook must:

  1. Answer the incoming call from Meta
  2. Capture the audio or DTMF tones (key presses)
  3. Forward the verification code to your email or logging system

Webhook URL Structure

POST https://your-domain.com/twilio/voice

Required HTTP Headers

Twilio sends these headers with every webhook request:

| Header | Description | Example |
|---|---|---|
| X-Twilio-Signature | Request signature for security | RCYmLs... |
| Content-Type | Always application/x-www-form-urlencoded | - |

Request Body Parameters

When a call comes in, Twilio POSTs these parameters:

| Parameter | Description | Example |
|---|---|---|
| CallSid | Unique call identifier | CA1234567890ABCDEF1234567890ABCDEF |
| From | Caller’s phone number | +1234567890 (Meta’s verification number) |
| To | Your Twilio number | +553322980098 |
| CallStatus | Current call status | ringing |
| Direction | Call direction | inbound |

TwiML Response Format

Your webhook must respond with TwiML (Twilio Markup Language) XML:

<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Gather action="https://your-domain.com/twilio/gather" method="POST" numDigits="6">
    <Say voice="alice" language="pt-BR">
      Please enter your verification code followed by the pound sign.
    </Say>
  </Gather>
  <Redirect>https://twimlets.com/voicemail?Email=your-email@example.com</Redirect>
</Response>

Implementation Examples

Node.js/Express

const express = require('express');
const twilio = require('twilio');
const app = express();

app.post('/twilio/voice', (req, res) => {
  const twiml = new twilio.twiml.VoiceResponse();
  
  const gather = twiml.gather({
    action: '/twilio/gather',
    method: 'POST',
    numDigits: 6,
    timeout: 10
  });
  
  gather.say({ 
    voice: 'alice', 
    language: 'pt-BR' 
  }, 'Please enter your verification code followed by the pound key.');
  
  // Fallback to voicemail if no input
  twiml.redirect('https://twimlets.com/voicemail?Email=your-email@example.com');
  
  res.type('text/xml');
  res.send(twiml.toString());
});

app.post('/twilio/gather', (req, res) => {
  const verificationCode = req.body.Digits;
  
  console.log('WhatsApp Verification Code:', verificationCode);
  
  // Send email notification
  sendEmail({
    to: 'your-email@example.com',
    subject: 'WhatsApp Verification Code',
    body: `Your verification code is: ${verificationCode}`
  });
  
  const twiml = new twilio.twiml.VoiceResponse();
  twiml.say('Thank you. Your code has been received.');
  
  res.type('text/xml');
  res.send(twiml.toString());
});

app.listen(3000, () => {
  console.log('Twilio webhook server running on port 3000');
});

Python/Flask

from flask import Flask, request, Response
from twilio.twiml.voice_response import VoiceResponse, Gather
import smtplib

app = Flask(__name__)

@app.route('/twilio/voice', methods=['POST'])
def voice_webhook():
    response = VoiceResponse()
    
    gather = Gather(
        action='/twilio/gather',
        method='POST',
        num_digits=6,
        timeout=10
    )
    gather.say(
        'Please enter your verification code followed by the pound key.',
        voice='alice',
        language='pt-BR'
    )
    response.append(gather)
    
    # Fallback to voicemail
    response.redirect('https://twimlets.com/voicemail?Email=your-email@example.com')
    
    return Response(str(response), mimetype='text/xml')

@app.route('/twilio/gather', methods=['POST'])
def gather_webhook():
    verification_code = request.form.get('Digits')
    
    print(f'WhatsApp Verification Code: {verification_code}')
    
    # Send email notification
    send_email(
        to='your-email@example.com',
        subject='WhatsApp Verification Code',
        body=f'Your verification code is: {verification_code}'
    )
    
    response = VoiceResponse()
    response.say('Thank you. Your code has been received.')
    
    return Response(str(response), mimetype='text/xml')

def send_email(to, subject, body):
    # Implement email sending logic
    pass

if __name__ == '__main__':
    app.run(port=3000)

BASIC (General Bots)

REM Twilio Voice Webhook Handler
ON WEBHOOK POST TO "/twilio/voice" DO
  REM Create TwiML response
  LET TWIML$ = "<?xml version=""1.0"" encoding=""UTF-8""?>"
  TWIML$ = TWIML$ + "<Response>"
  TWIML$ = TWIML$ + "<Gather action=""https://your-domain.com/twilio/gather"" method=""POST"" numDigits=""6"">"
  TWIML$ = TWIML$ + "<Say voice=""alice"" language=""pt-BR"">"
  TWIML$ = TWIML$ + "Please enter your verification code followed by the pound sign."
  TWIML$ = TWIML$ + "</Say>"
  TWIML$ = TWIML$ + "</Gather>"
  TWIML$ = TWIML$ + "<Redirect>https://twimlets.com/voicemail?Email=your-email@example.com</Redirect>"
  TWIML$ = TWIML$ + "</Response>"
  
  REM Set response content type
  SET RESPONSE HEADER "Content-Type" TO "text/xml"
  PRINT TWIML$
END ON

REM Gather Handler (receives the DTMF input)
ON WEBHOOK POST TO "/twilio/gather" DO
  REM Get the digits entered
  LET CODE$ = GET FORM VALUE "Digits"
  
  REM Log the verification code
  LOG "WhatsApp Verification Code: " + CODE$
  
  REM Send email notification
  SEND MAIL TO "your-email@example.com" WITH SUBJECT "WhatsApp Verification Code" AND BODY "Your verification code is: " + CODE$
  
  REM Create confirmation TwiML
  LET TWIML$ = "<?xml version=""1.0"" encoding=""UTF-8""?>"
  TWIML$ = TWIML$ + "<Response>"
  TWIML$ = TWIML$ + "<Say>Thank you. Your code has been received.</Say>"
  TWIML$ = TWIML$ + "</Response>"
  
  SET RESPONSE HEADER "Content-Type" TO "text/xml"
  PRINT TWIML$
END ON

Configuring Twilio

  1. Navigate to your phone number

    • Go to Twilio Console > Phone Numbers > Active Numbers
    • Click on your purchased number
  2. Configure Voice Webhook

    • Find “Voice & Fax” section
    • Set “A Call Comes In” to your webhook URL
    • Select HTTP POST method
    • Example: https://your-domain.com/twilio/voice
  3. Save changes

    • Click “Save” to apply the configuration

Webhook Security

Verify that requests come from Twilio:

const twilio = require('twilio');
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

app.post('/twilio/voice', (req, res) => {
  const url = `https://${req.headers.host}${req.path}`;
  const signature = req.headers['x-twilio-signature'];
  
  if (client.validateRequest(url, req.body, signature)) {
    // Request is from Twilio, process it
    handleVoiceWebhook(req, res);
  } else {
    // Invalid signature
    res.status(403).send('Invalid signature');
  }
});

Meta WhatsApp Webhook Configuration

Purpose

The Meta webhook receives:

  • Incoming WhatsApp messages from users
  • Message delivery status updates
  • Message read receipts
  • Webhook verification requests

Webhook URL Structure

POST https://your-domain.com/webhooks/whatsapp

Required HTTP Headers

| Header | Description | Example |
|---|---|---|
| X-Hub-Signature-256 | HMAC SHA-256 signature | sha256=... |

Webhook Verification

When you first configure the webhook, Meta sends a GET request to verify your URL:

GET https://your-domain.com/webhooks/whatsapp?hub.mode=subscribe&hub.verify_token=YOUR_TOKEN&hub.challenge=CHALLENGE_STRING

Your webhook must respond with the challenge:

app.get('/webhooks/whatsapp', (req, res) => {
  const mode = req.query['hub.mode'];
  const token = req.query['hub.verify_token'];
  const challenge = req.query['hub.challenge'];
  
  const VERIFY_TOKEN = '4qIogZadggQ.BEoMeciXIdl_MlkV_1DTx8Z_i0bYPxtSJwKSbH0FKlY';
  
  if (mode === 'subscribe' && token === VERIFY_TOKEN) {
    console.log('Webhook verified');
    res.status(200).send(challenge);
  } else {
    res.sendStatus(403);
  }
});

Message Payload Structure

Meta sends JSON payloads with message data:

{
  "object": "whatsapp_business_account",
  "entry": [{
    "id": "390727550789228",
    "changes": [{
      "value": {
        "messaging_product": "whatsapp",
        "metadata": {
          "display_phone_number": "+553322980098",
          "phone_number_id": "1158433381968079"
        },
        "contacts": [{
          "profile": {
            "name": "John Doe"
          },
          "wa_id": "5511999999999"
        }],
        "messages": [{
          "from": "5511999999999",
          "id": "wamid.HBgLNTE1OTk5OTk5OTk5FQIAERgSMzg1QTlCNkE2RTlFRTdFNDdF",
          "timestamp": "1704067200",
          "text": {
            "body": "Hello, how can I help you?"
          },
          "type": "text"
        }]
      },
      "field": "messages"
    }]
  }]
}

Implementation Examples

Node.js/Express

app.post('/webhooks/whatsapp', (req, res) => {
  try {
    const data = req.body;
    
    // Check if this is a WhatsApp message
    if (data.object === 'whatsapp_business_account') {
      data.entry.forEach(entry => {
        entry.changes.forEach(change => {
          // Status updates also arrive on the 'messages' field without a
          // messages array, so guard before indexing
          if (change.field === 'messages' && change.value.messages) {
            const message = change.value.messages[0];
            const from = message.from;
            const body = message.text.body;
            const messageId = message.id;
            
            console.log(`Received message from ${from}: ${body}`);
            
            // Process the message
            processWhatsAppMessage(from, body, messageId);
          }
        });
      });
    }
    
    res.status(200).send('OK');
  } catch (error) {
    console.error('Webhook error:', error);
    res.status(500).send('Error');
  }
});

async function processWhatsAppMessage(from, body, messageId) {
  // Implement your bot logic here
  const response = await generateResponse(body);
  
  // Send reply
  await sendWhatsAppMessage(from, response);
}

Python/Flask

@app.route('/webhooks/whatsapp', methods=['POST'])
def whatsapp_webhook():
    try:
        data = request.get_json()
        
        if data.get('object') == 'whatsapp_business_account':
            for entry in data.get('entry', []):
                for change in entry.get('changes', []):
                    # Status updates also arrive on the 'messages' field
                    # without a messages array, so guard before indexing
                    if change.get('field') == 'messages' and change['value'].get('messages'):
                        message = change['value']['messages'][0]
                        from_number = message['from']
                        body = message['text']['body']
                        message_id = message['id']
                        
                        print(f"Received message from {from_number}: {body}")
                        
                        # Process the message
                        process_whatsapp_message(from_number, body, message_id)
        
        return 'OK', 200
    except Exception as e:
        print(f'Webhook error: {e}')
        return 'Error', 500

def process_whatsapp_message(from_number, body, message_id):
    # Implement your bot logic here
    response = generate_response(body)
    
    # Send reply
    send_whatsapp_message(from_number, response)

BASIC (General Bots)

REM Meta WhatsApp Webhook Handler
ON WEBHOOK POST TO "/webhooks/whatsapp" DO
  REM Get the JSON payload
  LET PAYLOAD$ = GET REQUEST BODY
  
  REM Parse the JSON (requires JSON parser library)
  LET OBJ = PARSE JSON PAYLOAD$
  
  REM Check if this is a WhatsApp message
  IF GET JSON PATH OBJ, "object" = "whatsapp_business_account" THEN
    REM Get the message
    LET MESSAGE = GET JSON PATH OBJ, "entry[0].changes[0].value.messages[0]"
    
    REM Extract message details
    LET FROM$ = GET JSON PATH MESSAGE, "from"
    LET BODY$ = GET JSON PATH MESSAGE, "text.body"
    LET ID$ = GET JSON PATH MESSAGE, "id"
    
    REM Log the message
    LOG "WhatsApp message from " + FROM$ + ": " + BODY$
    
    REM Process the message asynchronously
    SPAWN PROCESS WHATSAPP MESSAGE FROM$, BODY$, ID$
  END IF
  
  REM Respond with 200 OK
  PRINT "OK"
  SET RESPONSE STATUS TO 200
END ON

REM Message processor
SUB PROCESS WHATSAPP MESSAGE FROM$, BODY$, ID$
  REM Generate a response
  LET RESPONSE$ = GENERATE RESPONSE BODY$
  
  REM Send the reply
  SEND WHATSAPP TO FROM$ WITH RESPONSE$
END SUB
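
The BASIC handler above covers POST deliveries only. Verification of Meta's initial GET request can be sketched in the same style; note that ON WEBHOOK GET TO and GET QUERY PARAM are assumed equivalents of the POST keywords shown above and may differ in your General Bots build:

REM Meta webhook verification (GET) - a sketch; ON WEBHOOK GET TO and
REM GET QUERY PARAM are assumed keywords, mirroring the POST handler above
ON WEBHOOK GET TO "/webhooks/whatsapp" DO
  LET MODE$ = GET QUERY PARAM "hub.mode"
  LET TOKEN$ = GET QUERY PARAM "hub.verify_token"
  LET CHALLENGE$ = GET QUERY PARAM "hub.challenge"
  
  IF MODE$ = "subscribe" AND TOKEN$ = "YOUR_VERIFY_TOKEN" THEN
    REM Echo the challenge back so Meta can confirm you own the URL
    PRINT CHALLENGE$
    SET RESPONSE STATUS TO 200
  ELSE
    SET RESPONSE STATUS TO 403
  END IF
END ON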

Configuring Meta

  1. Navigate to WhatsApp API Setup

    • Go to Meta for Developers > Your App > WhatsApp > API Setup
  2. Edit Webhook

    • Click “Edit” next to Webhook
    • Enter your webhook URL: https://your-domain.com/webhooks/whatsapp
    • Enter your Verify Token: 4qIogZadggQ.BEoMeciXIdl_MlkV_1DTx8Z_i0bYPxtSJwKSbH0FKlY
    • Click “Verify and Save”
  3. Subscribe to Webhook Fields

    • Subscribe to: messages
    • This ensures you receive all incoming messages

Webhook Security

Implement signature verification:

const crypto = require('crypto');
const express = require('express');

const app = express();

// The signature is computed over the exact raw bytes Meta sent,
// so capture the raw body instead of re-serializing parsed JSON
app.use(express.json({
  verify: (req, res, buf) => { req.rawBody = buf; }
}));

app.post('/webhooks/whatsapp', (req, res) => {
  const signature = req.headers['x-hub-signature-256'] || '';
  const appSecret = 'YOUR_APP_SECRET'; // From Meta dashboard
  
  const expectedSignature = 'sha256=' + crypto
    .createHmac('sha256', appSecret)
    .update(req.rawBody)
    .digest('hex');
  
  // Constant-time comparison avoids leaking timing information
  const valid = signature.length === expectedSignature.length &&
    crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expectedSignature));
  
  if (!valid) {
    console.error('Invalid webhook signature');
    return res.status(403).send('Invalid signature');
  }
  
  // Process the webhook
  processWebhook(req.body);
  res.status(200).send('OK');
});

Testing Webhooks

Using Ngrok for Local Development

  1. Install ngrok

    npm install -g ngrok
    
  2. Start your local server

    node server.js
    
  3. Start ngrok

    ngrok http 3000
    
  4. Use the ngrok URL

    • Your webhook URL: https://abc123.ngrok.io/webhooks/whatsapp

Testing Twilio Webhook

Simulate a Twilio webhook delivery with curl (Twilio's Debugger console shows real delivery attempts):

curl -X POST \
  'https://your-domain.com/twilio/voice' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'CallSid=CA123&From=+1234567890&To=+553322980098&CallStatus=ringing&Direction=inbound'

Testing Meta Webhook

Simulate a Meta webhook delivery with curl:

curl -X POST \
  'https://your-domain.com/webhooks/whatsapp' \
  -H 'Content-Type: application/json' \
  -H 'X-Hub-Signature-256: sha256=...' \
  -d '{
    "object": "whatsapp_business_account",
    "entry": [{
      "id": "390727550789228",
      "changes": [{
        "value": {
          "messaging_product": "whatsapp",
          "messages": [{
            "from": "5511999999999",
            "text": {"body": "Test message"}
          }]
        },
        "field": "messages"
      }]
    }]
  }'

Production Considerations

High Availability

  • Deploy webhooks behind a load balancer
  • Implement retry logic for failed deliveries
  • Use a message queue (RabbitMQ, Redis) for async processing
  • Monitor webhook health and set up alerts

Performance

  • Respond to webhooks quickly (< 3 seconds)
  • Process heavy operations asynchronously
  • Use worker queues for message processing (see the sketch below)
  • Implement rate limiting to prevent abuse
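
One way to satisfy both the fast-response and async-processing rules is to enqueue raw payloads and drain them from a scheduled worker. A minimal sketch in General Bots BASIC, reusing the webhook and scheduling keywords from this chapter; the message_queue table and the HANDLE QUEUED MESSAGE sub are illustrative names:

' Webhook handler: enqueue the payload and acknowledge immediately
ON WEBHOOK POST TO "/webhooks/whatsapp" DO
  LET PAYLOAD$ = GET REQUEST BODY
  INSERT "message_queue", PAYLOAD$, "pending", NOW()
  PRINT "OK"
  SET RESPONSE STATUS TO 200
END ON

' queue-worker.bas - drain pending payloads once per minute
SET SCHEDULE "queue-worker", "* * * * *"   ' every minute

pending = FIND "message_queue", "status='pending'"
FOR EACH item IN pending
    UPDATE "message_queue", "id=" + item.id, "processing", NOW()
    ' HANDLE QUEUED MESSAGE is a SUB like the processor shown earlier
    SPAWN HANDLE QUEUED MESSAGE item.payload
NEXT item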

Monitoring

  • Log all webhook requests and responses
  • Track delivery success rates
  • Monitor response times
  • Set up alerts for failures
  • Use tools like Sentry, Datadog, or New Relic

Troubleshooting

Common Issues

Problem: Webhook verification fails

  • Ensure verify token matches exactly
  • Check that your endpoint returns the challenge
  • Verify your URL is publicly accessible

Problem: Messages not received

  • Check webhook logs for errors
  • Verify subscription to messages field
  • Ensure your server is online and responding

Problem: Invalid signature errors

  • Verify your app secret is correct
  • Check that you’re computing the hash correctly
  • Ensure you’re using the raw request body

Problem: Timeout errors

  • Optimize your webhook handler
  • Move heavy processing to background jobs
  • Increase server capacity if needed

Debugging Tools

  • Twilio Debugger: View all Twilio webhook attempts
  • Meta Webhook Debugging: Enable in app settings
  • Ngrok Inspector: Inspect requests in real-time
  • Webhook.site: Test webhooks without a server

Next Steps

  • Set up persistent storage for message history
  • Implement message queue for reliability
  • Add webhook retry logic
  • Configure monitoring and alerting
  • Set up automated testing

For more information on webhook security, see Security Considerations.

Teams Channel

SMS Providers

Attendance Queue Module

Human-attendant queue management for hybrid bot/human support workflows, plus CRM automations for follow-ups, collections, scheduling, and sales.

Overview

The attendance queue module manages handoffs from bot to human agents, tracking conversation queues, attendant availability, and real-time assignment. It also provides automated CRM workflows that run without human intervention.


Configuration

Create attendant.csv in your bot’s .gbai folder:

id,name,channel,preferences,department
att-001,John Smith,whatsapp,sales,commercial
att-002,Jane Doe,web,support,customer-service
att-003,Bob Wilson,all,technical,engineering
att-004,Maria Santos,whatsapp,collections,finance

Queue Status

Status | Description
waiting | User waiting for attendant
assigned | Attendant assigned, not yet active
active | Conversation in progress
resolved | Conversation completed
abandoned | User left before assignment

Attendant Status

Status | Description
online | Available for new conversations
busy | Currently handling conversations
away | Temporarily unavailable
offline | Not working
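
Attendant status can also be driven from scripts rather than the UI. A hedged sketch using the same FIND and UPDATE keywords as the automations below; the attendants table matches the FIND "attendants" calls used throughout this chapter, while the exact column layout is an assumption:

' attendant-offhours.bas - mark online attendants as away after hours
SET SCHEDULE "attendant-offhours", "0 19 * * 1-5"   ' 19:00, Monday-Friday

online = FIND "attendants", "status='online'"
FOR EACH att IN online
    UPDATE "attendants", "id=" + att.id, "away"
NEXT att

PRINT "Attendants set to away: " + UBOUND(online)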

CRM Automations

The attendant module includes built-in CRM automations that handle common business workflows automatically.

Follow-Up Automation

Automated follow-up sequences for leads and customers.

' follow-up.bas
' Automated follow-up workflow

SET SCHEDULE "follow-ups", "0 9 * * 1-5"   ' 09:00, Monday-Friday

' Find leads needing follow-up
leads_1_day = FIND "leads", "status='new' AND DATEDIFF(NOW(), last_contact) = 1"
leads_3_day = FIND "leads", "status='contacted' AND DATEDIFF(NOW(), last_contact) = 3"
leads_7_day = FIND "leads", "status='contacted' AND DATEDIFF(NOW(), last_contact) = 7"

' 1-day follow-up: Thank you message
FOR EACH lead IN leads_1_day
    SEND TEMPLATE lead.phone, "follow_up_thanks", lead.name, lead.interest
    UPDATE "leads", "id=" + lead.id, "contacted", NOW()
    INSERT "activities", lead.id, "follow_up", "1-day thank you sent", NOW()
NEXT lead

' 3-day follow-up: Value proposition
FOR EACH lead IN leads_3_day
    SEND TEMPLATE lead.phone, "follow_up_value", lead.name, lead.interest
    UPDATE "leads", "id=" + lead.id, "nurturing", NOW()
    INSERT "activities", lead.id, "follow_up", "3-day value prop sent", NOW()
NEXT lead

' 7-day follow-up: Special offer
FOR EACH lead IN leads_7_day
    SEND TEMPLATE lead.phone, "follow_up_offer", lead.name, "10%"
    UPDATE "leads", "id=" + lead.id, "offer_sent", NOW()
    INSERT "activities", lead.id, "follow_up", "7-day offer sent", NOW()
    
    ' Alert sales team for hot leads
    IF lead.score >= 70 THEN
        attendant = FIND "attendants", "department='commercial' AND status='online'"
        IF attendant THEN
            SEND MAIL attendant.email, "Hot Lead Follow-up: " + lead.name, "Lead " + lead.name + " received 7-day offer. Score: " + lead.score
        END IF
    END IF
NEXT lead

PRINT "Follow-ups completed: " + UBOUND(leads_1_day) + " 1-day, " + UBOUND(leads_3_day) + " 3-day, " + UBOUND(leads_7_day) + " 7-day"

Collections Automation (Cobranças)

Automated payment reminders and collection workflow.

' collections.bas
' Automated payment collection workflow

SET SCHEDULE "collections", "0 8 * * 1-5"   ' 08:00, Monday-Friday

' Find overdue invoices by age
due_today = FIND "invoices", "status='pending' AND due_date = CURDATE()"
overdue_3 = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) = 3"
overdue_7 = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) = 7"
overdue_15 = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) = 15"
overdue_30 = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) >= 30"

' Due today: Friendly reminder via WhatsApp
FOR EACH invoice IN due_today
    customer = FIND "customers", "id=" + invoice.customer_id
    SEND TEMPLATE customer.phone, "payment_due_today", customer.name, invoice.id, invoice.amount
    INSERT "collection_log", invoice.id, "reminder_due_today", NOW()
NEXT invoice

' 3 days overdue: First collection notice
FOR EACH invoice IN overdue_3
    customer = FIND "customers", "id=" + invoice.customer_id
    SEND TEMPLATE customer.phone, "payment_overdue_3", customer.name, invoice.id, invoice.amount
    SEND MAIL customer.email, "Pagamento Pendente - Fatura #" + invoice.id, "Sua fatura está vencida há 3 dias. Valor: R$ " + invoice.amount
    UPDATE "invoices", "id=" + invoice.id, "first_notice_sent", NOW()
    INSERT "collection_log", invoice.id, "first_notice", NOW()
NEXT invoice

' 7 days overdue: Second notice with urgency
FOR EACH invoice IN overdue_7
    customer = FIND "customers", "id=" + invoice.customer_id
    SEND TEMPLATE customer.phone, "payment_overdue_7", customer.name, invoice.id, invoice.amount
    UPDATE "invoices", "id=" + invoice.id, "second_notice_sent", NOW()
    INSERT "collection_log", invoice.id, "second_notice", NOW()
    
    ' Notify collections team
    SEND MAIL "cobranca@empresa.com", "Cobrança 7 dias: " + customer.name, "Cliente: " + customer.name + "\nFatura: " + invoice.id + "\nValor: R$ " + invoice.amount
NEXT invoice

' 15 days overdue: Final notice before action
FOR EACH invoice IN overdue_15
    customer = FIND "customers", "id=" + invoice.customer_id
    late_fee = invoice.amount * 0.02
    interest = invoice.amount * 0.01 * 15
    total_due = invoice.amount + late_fee + interest
    
    SEND TEMPLATE customer.phone, "payment_final_notice", customer.name, invoice.id, total_due
    UPDATE "invoices", "id=" + invoice.id, late_fee, interest, total_due, "final_notice_sent", NOW()
    INSERT "collection_log", invoice.id, "final_notice", NOW()
    
    ' Assign to human attendant for follow-up call
    attendant = FIND "attendants", "department='finance' AND status='online'"
    IF attendant THEN
        INSERT "queue", invoice.customer_id, attendant.id, "collection_call", "high", NOW()
    END IF
NEXT invoice

' 30+ days overdue: Escalate to collections
FOR EACH invoice IN overdue_30
    IF invoice.status <> "collections" THEN
        customer = FIND "customers", "id=" + invoice.customer_id
        UPDATE "invoices", "id=" + invoice.id, "collections", NOW()
        UPDATE "customers", "id=" + customer.id, "suspended"
        
        SEND MAIL "juridico@empresa.com", "Inadimplência 30+ dias: " + customer.name, "Cliente enviado para cobrança jurídica.\n\nCliente: " + customer.name + "\nFatura: " + invoice.id + "\nValor total: R$ " + invoice.total_due
        INSERT "collection_log", invoice.id, "sent_to_collections", NOW()
    END IF
NEXT invoice

PRINT "Collections processed: " + UBOUND(due_today) + " due today, " + UBOUND(overdue_30) + " sent to collections"

Scheduling Automation (Agendamentos)

Automated appointment scheduling and reminders.

' scheduling.bas
' Automated appointment scheduling and reminders

SET SCHEDULE "appointment-reminders", "0 7 * * *"   ' 07:00 daily

' Find appointments for today and tomorrow
today_appointments = FIND "appointments", "DATE(scheduled_at) = CURDATE() AND status='confirmed'"
tomorrow_appointments = FIND "appointments", "DATE(scheduled_at) = DATE_ADD(CURDATE(), INTERVAL 1 DAY) AND status='confirmed'"

' Send day-before reminders
FOR EACH appt IN tomorrow_appointments
    customer = FIND "customers", "id=" + appt.customer_id
    staff = FIND "staff", "id=" + appt.staff_id
    
    appt_time = FORMAT(appt.scheduled_at, "HH:mm")
    appt_date = FORMAT(appt.scheduled_at, "DD/MM/YYYY")
    
    SEND TEMPLATE customer.phone, "appointment_reminder_24h", customer.name, appt.service, appt_date, appt_time, staff.name
    UPDATE "appointments", "id=" + appt.id, "reminder_24h_sent", NOW()
NEXT appt

' Send same-day reminders (2 hours before)
FOR EACH appt IN today_appointments
    IF DATEDIFF_HOURS(appt.scheduled_at, NOW()) <= 2 AND appt.reminder_2h_sent IS NULL THEN
        customer = FIND "customers", "id=" + appt.customer_id
        staff = FIND "staff", "id=" + appt.staff_id
        
        appt_time = FORMAT(appt.scheduled_at, "HH:mm")
        
        SEND TEMPLATE customer.phone, "appointment_reminder_2h", customer.name, appt.service, appt_time
        UPDATE "appointments", "id=" + appt.id, "reminder_2h_sent", NOW()
        
        ' Notify staff
        SEND TEMPLATE staff.phone, "staff_appointment_alert", staff.name, customer.name, appt.service, appt_time
    END IF
NEXT appt

' Check for no-shows (30 min past appointment time)
past_appointments = FIND "appointments", "scheduled_at < DATE_SUB(NOW(), INTERVAL 30 MINUTE) AND status='confirmed'"
FOR EACH appt IN past_appointments
    customer = FIND "customers", "id=" + appt.customer_id
    UPDATE "appointments", "id=" + appt.id, "no_show"
    INSERT "activities", appt.customer_id, "no_show", "Missed appointment: " + appt.service, NOW()
    
    ' Send reschedule offer
    SEND TEMPLATE customer.phone, "missed_appointment", customer.name, appt.service
NEXT appt

PRINT "Reminders sent: " + UBOUND(tomorrow_appointments) + " for tomorrow, " + UBOUND(today_appointments) + " for today"

Sales Automation (Vendas)

Automated sales pipeline and lead scoring.

' sales-automation.bas
' Automated sales pipeline management

SET SCHEDULE "sales-automation", "0 8,14,18 * * 1-5"   ' 08:00, 14:00 and 18:00, Monday-Friday

' Score and prioritize leads
new_leads = FIND "leads", "score IS NULL OR score = 0"
FOR EACH lead IN new_leads
    score = 0
    
    ' Score based on source
    IF lead.source = "website" THEN score = score + 20
    IF lead.source = "referral" THEN score = score + 30
    IF lead.source = "campaign" THEN score = score + 15
    
    ' Score based on company size
    IF lead.company_size = "enterprise" THEN score = score + 25
    IF lead.company_size = "mid-market" THEN score = score + 20
    IF lead.company_size = "small" THEN score = score + 10
    
    ' Score based on engagement
    page_views = FIND "analytics", "lead_id=" + lead.id + " AND type='page_view'"
    score = score + MIN(UBOUND(page_views) * 2, 20)
    
    ' Score based on email opens
    email_opens = FIND "email_tracking", "lead_id=" + lead.id + " AND opened=true"
    score = score + MIN(UBOUND(email_opens) * 5, 25)
    
    UPDATE "leads", "id=" + lead.id, score, NOW()
NEXT lead

' Auto-assign hot leads to sales reps
hot_leads = FIND "leads", "score >= 70 AND assigned_to IS NULL"
FOR EACH lead IN hot_leads
    ' Round-robin assignment
    available_reps = FIND "attendants", "department='commercial' AND status='online'"
    IF UBOUND(available_reps) > 0 THEN
        ' Get rep with fewest active leads
        rep = available_reps[0]
        min_leads = 999
        FOR EACH r IN available_reps
            rep_leads = FIND "leads", "assigned_to='" + r.id + "' AND status NOT IN ('converted', 'lost')"
            IF UBOUND(rep_leads) < min_leads THEN
                min_leads = UBOUND(rep_leads)
                rep = r
            END IF
        NEXT r
        
        UPDATE "leads", "id=" + lead.id, rep.id, NOW()
        
        ' Notify sales rep via WhatsApp
        SEND TEMPLATE rep.phone, "new_hot_lead", rep.name, lead.name, lead.company, lead.score
        
        ' Create follow-up task
        CREATE TASK "Contact hot lead: " + lead.name, rep.email, NOW()
    END IF
NEXT lead

' Move stale opportunities
stale_opportunities = FIND "opportunities", "DATEDIFF(NOW(), last_activity) > 14 AND stage NOT IN ('closed_won', 'closed_lost')"
FOR EACH opp IN stale_opportunities
    owner = FIND "attendants", "id=" + opp.owner_id
    
    ' Send reminder to owner
    SEND TEMPLATE owner.phone, "stale_opportunity", owner.name, opp.name, opp.amount, DATEDIFF(NOW(), opp.last_activity)
    
    ' Create urgent task
    CREATE TASK "URGENT: Update stale opportunity - " + opp.name, owner.email, NOW()
    
    INSERT "activities", opp.id, "stale_alert", "Opportunity marked as stale", NOW()
NEXT opp

' Generate daily pipeline report
pipeline = FIND "opportunities", "stage NOT IN ('closed_won', 'closed_lost')"
total_value = AGGREGATE "SUM", pipeline, "amount"
weighted_value = 0
FOR EACH opp IN pipeline
    weighted_value = weighted_value + (opp.amount * opp.probability / 100)
NEXT opp

report = "📊 Pipeline Diário\n\n"
report = report + "Total Pipeline: R$ " + FORMAT(total_value, "#,##0.00") + "\n"
report = report + "Valor Ponderado: R$ " + FORMAT(weighted_value, "#,##0.00") + "\n"
report = report + "Oportunidades Ativas: " + UBOUND(pipeline) + "\n"
report = report + "Leads Quentes: " + UBOUND(hot_leads)

SEND MAIL "vendas@empresa.com", "Pipeline Diário - " + FORMAT(NOW(), "DD/MM/YYYY"), report

PRINT "Sales automation completed. Hot leads assigned: " + UBOUND(hot_leads)

REST API Endpoints

Queue Management

GET /api/queue

List conversations in queue.

POST /api/queue/assign

Assign conversation to attendant.

{
    "session_id": "uuid",
    "attendant_id": "uuid"
}

POST /api/queue/transfer

Transfer conversation between attendants.

{
    "session_id": "uuid",
    "from_attendant_id": "uuid",
    "to_attendant_id": "uuid",
    "reason": "Specialist needed"
}
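
For example, assigning a conversation from the command line; the JSON body matches the schema above, and whatever authentication header your deployment requires is assumed:

curl -X POST 'https://your-domain.com/api/queue/assign' \
  -H 'Content-Type: application/json' \
  -d '{"session_id": "uuid", "attendant_id": "uuid"}'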

Attendant Management

GET /api/attendants

List all attendants with stats.

PUT /api/attendants/{id}/status

Update attendant status.

{
    "status": "online"
}

CRM Automation

GET /api/automation/status

Check automation job status.

POST /api/automation/trigger/

Manually trigger an automation job.


BASIC Keywords

Transfer to Human

' Transfer to any available human
TRANSFER TO HUMAN

' Transfer to specific department
TRANSFER TO HUMAN "sales"

' Transfer with priority
TRANSFER TO HUMAN "support", "high"

' Transfer with context
TRANSFER TO HUMAN "technical", "normal", "Customer needs help with API integration"
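
In a dialog, the handoff is typically offered after HEAR captures the user's intent. A short sketch (the wording and the "support" department are illustrative; TALK and HEAR are the keywords shown in the Designer section):

' Offer a human handoff inside a conversation
TALK "Would you like to speak with a human agent? (yes/no)"
HEAR answer AS STRING

IF answer = "yes" THEN
    TRANSFER TO HUMAN "support", "normal", "User requested a human agent"
ELSE
    TALK "No problem - how else can I help?"
END IF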

Create Lead

' Create lead from conversation
CREATE LEAD name, email, phone, source

' Create lead with company info
CREATE LEAD name, email, phone, "website", company, "enterprise"

Schedule Appointment

' Schedule appointment
BOOK customer_email, service, date, time, staff_id

' Schedule with duration
BOOK customer_email, "Consultation", "2025-01-20", "14:00", staff_id, 60
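
BOOK slots naturally into a conversation once TALK and HEAR have collected the details. A sketch; user.email follows the user.name/user.role accessor pattern shown later in this manual, and staff_id is assumed to be resolved by earlier dialog logic:

' Conversational booking sketch
TALK "Which service would you like to book?"
HEAR service AS STRING
TALK "What date works for you? (YYYY-MM-DD)"
HEAR date AS STRING
TALK "And what time? (HH:MM)"
HEAR time AS STRING

' staff_id is assumed to come from earlier dialog logic
BOOK user.email, service, date, time, staff_id
TALK "Booked! " + service + " on " + date + " at " + time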

WhatsApp Templates

Configure these templates in your WhatsApp Business account:

Template Name | Purpose | Variables
follow_up_thanks | 1-day follow-up | name, interest
follow_up_value | 3-day value proposition | name, interest
follow_up_offer | 7-day special offer | name, discount
payment_due_today | Payment due reminder | name, invoice_id, amount
payment_overdue_3 | 3-day overdue notice | name, invoice_id, amount
payment_overdue_7 | 7-day overdue notice | name, invoice_id, amount
payment_final_notice | 15-day final notice | name, invoice_id, total
appointment_reminder_24h | Day-before reminder | name, service, date, time, staff
appointment_reminder_2h | 2-hour reminder | name, service, time
missed_appointment | No-show reschedule | name, service
new_hot_lead | Hot lead alert for sales | rep_name, lead_name, company, score
stale_opportunity | Stale deal reminder | rep_name, deal_name, amount, days

See Also

LLM Providers

General Bots supports multiple Large Language Model (LLM) providers, both cloud-based services and local deployments. This guide helps you choose the right provider for your use case.

Overview

LLMs are the intelligence behind General Bots’ conversational capabilities. You can configure:

  • Cloud Providers — External APIs (OpenAI, Anthropic, Google, etc.)
  • Local Models — Self-hosted models via llama.cpp
  • Hybrid — Use local for simple tasks, cloud for complex reasoning

Cloud Providers

OpenAI (GPT Series)

The most widely known LLM provider, offering the GPT-5 flagship model.

Model | Context | Best For | Speed
GPT-5 | 1M | All-in-one advanced reasoning | Medium
GPT-oss 120B | 128K | Open-weight, agent workflows | Medium
GPT-oss 20B | 128K | Cost-effective open-weight | Fast

Configuration (config.csv):

name,value
llm-provider,openai
llm-model,gpt-5

Strengths:

  • Most advanced all-in-one model
  • Excellent general knowledge
  • Strong code generation
  • Good instruction following

Considerations:

  • API costs can add up
  • Data sent to external servers
  • Rate limits apply

Anthropic (Claude Series)

Known for safety, helpfulness, and extended thinking capabilities.

Model | Context | Best For | Speed
Claude Opus 4.5 | 200K | Most capable, complex reasoning | Slow
Claude Sonnet 4.5 | 200K | Best balance of capability/speed | Fast

Configuration (config.csv):

name,value
llm-provider,anthropic
llm-model,claude-sonnet-4.5

Strengths:

  • Extended thinking mode for multi-step tasks
  • Excellent at following complex instructions
  • Strong coding abilities
  • Better at refusing harmful requests

Considerations:

  • Premium pricing
  • Newer provider, smaller ecosystem

Google (Gemini & Vertex AI)

Google’s multimodal AI models with strong reasoning capabilities. General Bots natively supports both the public AI Studio API and Enterprise Vertex AI.

Model | Context | Best For | Speed
Gemini 1.5 Pro | 2M | Complex reasoning, benchmarks | Medium
Gemini 1.5 Flash | 1M | Fast multimodal tasks | Fast

Configuration for AI Studio (Public API):

name,value
llm-provider,google
llm-model,gemini-1.5-pro
llm-url,https://generativelanguage.googleapis.com
llm-key,AIza...

Configuration for Vertex AI (Enterprise):

name,value
llm-provider,vertex
llm-model,gemini-1.5-pro
llm-url,https://us-central1-aiplatform.googleapis.com
llm-key,~/.vertex.json

Note: General Bots handles the Google OAuth2 JWT authentication internally if you provide either the path to a Service Account JSON file or the raw JSON itself.

Strengths:

  • Largest context window (2M tokens)
  • Native multimodal (text, image, video, audio)
  • Vertex AI support enables enterprise VPC/IAM integration

Considerations:

  • Different endpoints for public vs enterprise deployments

xAI (Grok Series)

Integrates real-time data from the X platform.

Model | Context | Best For | Speed
Grok 4 | 128K | Real-time research, analysis | Fast

Configuration (config.csv):

name,value
llm-provider,xai
llm-model,grok-4

Strengths:

  • Real-time data access from X
  • Strong research and analysis
  • Good for trend analysis

Considerations:

  • Newer provider
  • X platform integration focus

Groq

Ultra-fast inference using custom LPU hardware. Offers open-source models at high speed.

Model | Context | Best For | Speed
Llama 4 Scout | 10M | Long context, multimodal | Very Fast
Llama 4 Maverick | 1M | Complex tasks | Very Fast
Qwen3 | 128K | Efficient MoE architecture | Extremely Fast

Configuration (config.csv):

name,value
llm-provider,groq
llm-model,llama-4-scout

Strengths:

  • Fastest inference speeds (500+ tokens/sec)
  • Competitive pricing
  • Open-source models
  • Great for real-time applications

Considerations:

  • Rate limits on free tier
  • Models may be less capable than GPT-5/Claude

Mistral AI

European AI company offering efficient, open-weight models.

Model | Context | Best For | Speed
Mixtral-8x22B | 64K | Multi-language, coding | Fast

Configuration (config.csv):

name,value
llm-provider,mistral
llm-model,mixtral-8x22b

Strengths:

  • European data sovereignty (GDPR)
  • Excellent code generation
  • Open-weight models available
  • Competitive pricing
  • Proficient in multiple languages

Considerations:

  • Smaller context than competitors
  • Less brand recognition

DeepSeek

Known for efficient, capable models with exceptional reasoning.

Model | Context | Best For | Speed
DeepSeek-V3.1 | 128K | General purpose, optimized cost | Fast
DeepSeek-R3 | 128K | Reasoning, math, science | Medium

Configuration (config.csv):

name,value
llm-provider,deepseek
llm-model,deepseek-r3
llm-server-url,https://api.deepseek.com

Strengths:

  • Extremely cost-effective
  • Strong reasoning (R3 model)
  • Rivals proprietary leaders in performance
  • Open-weight versions available (MIT/Apache 2.0)

Considerations:

  • Data processed in China
  • Newer provider

Amazon Bedrock

AWS managed service for foundation models, supporting Claude, Llama, Titan, and others.

Model | Context | Best For | Speed
Claude 3.5 Sonnet | 200K | High capability tasks | Fast
Llama 3.1 70B | 128K | Open-weight performance | Fast

Configuration (config.csv):

name,value
llm-provider,bedrock
llm-model,anthropic.claude-3-5-sonnet-20240620-v1:0
llm-url,https://bedrock-runtime.us-east-1.amazonaws.com/model/anthropic.claude-3-5-sonnet-20240620-v1:0/invoke
llm-key,YOUR_BEDROCK_API_KEY

Strengths:

  • Native AWS integration
  • Enterprise-grade security
  • Multiple model families in one API

Azure OpenAI

Enterprise-grade deployment of OpenAI models hosted on Microsoft Azure.

Model | Context | Best For | Speed
GPT-4o | 128K | Advanced multimodal | Fast

Configuration (config.csv):

name,value
llm-provider,azure
llm-model,gpt-4o
llm-url,https://YOUR_RESOURCE.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT/chat/completions?api-version=2024-02-15-preview
llm-key,YOUR_AZURE_API_KEY

Strengths:

  • High enterprise compliance (HIPAA, SOC2)
  • Azure VNet integration
  • Guaranteed provisioned throughput available

Cerebras

Ultra-fast inference powered by Wafer-Scale Engine hardware, specifically tuned for open-source models like Llama.

Model | Context | Best For | Speed
Llama 3.1 70B | 8K | High-speed general tasks | Extremely Fast

Configuration (config.csv):

name,value
llm-provider,cerebras
llm-model,llama3.1-8b
llm-url,https://api.cerebras.ai/v1/chat/completions
llm-key,YOUR_CEREBRAS_API_KEY

Strengths:

  • Highest tokens-per-second available
  • Excellent for real-time agent loops

Zhipu AI (GLM)

High-capability bilingual models (English/Chinese) directly competing with state-of-the-art global models.

Model | Context | Best For | Speed
GLM-4 | 128K | General purpose | Medium
GLM-4-Long | 1M | Long document analysis | Medium

Configuration (config.csv):

name,value
llm-provider,glm
llm-model,glm-4
llm-url,https://open.bigmodel.cn/api/paas/v4/chat/completions
llm-key,YOUR_ZHIPU_API_KEY

Strengths:

  • Excellent bilingual performance
  • Large context windows (up to 1M)

Local Models

Run models on your own hardware for privacy, cost control, and offline operation.

Setting Up Local LLM

General Bots uses llama.cpp server for local inference:

name,value
llm-provider,local
llm-server-url,http://localhost:8081
llm-model,DeepSeek-R3-Distill-Qwen-1.5B
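
To serve the model itself, start a llama.cpp HTTP server on the port from config.csv. The flags below assume a recent llama.cpp build that ships the llama-server binary; adjust the model path and context size to your hardware:

# Start llama.cpp's OpenAI-compatible server on port 8081
llama-server -m DeepSeek-R3-Distill-Qwen-1.5B-Q4_K_M.gguf --port 8081 -c 8192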

For High-End GPU (24GB+ VRAM)

Model | Size | VRAM | Quality
Llama 4 Scout 17B Q8 | 18GB | 24GB | Excellent
Qwen3 72B Q4 | 42GB | 48GB+ | Excellent
DeepSeek-R3 32B Q4 | 20GB | 24GB | Very Good

For Mid-Range GPU (12-16GB VRAM)

Model | Size | VRAM | Quality
Qwen3 14B Q8 | 15GB | 16GB | Very Good
GPT-oss 20B Q4 | 12GB | 16GB | Very Good
DeepSeek-R3-Distill 14B Q4 | 8GB | 12GB | Good
Gemma 3 27B Q4 | 16GB | 16GB | Good

For Small GPU or CPU (8GB VRAM or less)

Model | Size | VRAM | Quality
DeepSeek-R3-Distill 1.5B Q4 | 1GB | 4GB | Basic
Gemma 2 9B Q4 | 5GB | 8GB | Acceptable
Gemma 3 27B Q2 | 10GB | 8GB | Acceptable

Model Download URLs

Add models to installer.rs data_download_list:

// Entries for the data_download_list array in installer.rs:

// Qwen3 14B - Recommended for mid-range GPU
"https://huggingface.co/Qwen/Qwen3-14B-GGUF/resolve/main/qwen3-14b-q4_k_m.gguf",

// DeepSeek R3 Distill - For CPU or minimal GPU
"https://huggingface.co/unsloth/DeepSeek-R3-Distill-Qwen-1.5B-GGUF/resolve/main/DeepSeek-R3-Distill-Qwen-1.5B-Q4_K_M.gguf",

// GPT-oss 20B - Good balance for agents
"https://huggingface.co/openai/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-q4_k_m.gguf",

// Gemma 3 27B - For quality local inference
"https://huggingface.co/google/gemma-3-27b-it-GGUF/resolve/main/gemma-3-27b-it-q4_k_m.gguf",

Embedding Models

For vector search, you need an embedding model:

name,value
embedding-provider,local
embedding-server-url,http://localhost:8082
embedding-model,bge-small-en-v1.5

Recommended embedding models:

Model | Dimensions | Size | Quality
bge-small-en-v1.5 | 384 | 130MB | Good
bge-base-en-v1.5 | 768 | 440MB | Better
bge-large-en-v1.5 | 1024 | 1.3GB | Best
nomic-embed-text | 768 | 550MB | Good

Hybrid Configuration

Use different models for different tasks: a primary model for quality, a fast model for latency-sensitive replies, and a local fallback for when cloud APIs are unreachable:

name,value
llm-provider,anthropic
llm-model,claude-sonnet-4.5
llm-fast-provider,groq
llm-fast-model,llama-3.3-70b
llm-fallback-provider,local
llm-fallback-model,DeepSeek-R3-Distill-Qwen-1.5B
embedding-provider,local
embedding-model,bge-small-en-v1.5

Model Selection Guide

By Use Case

Use Case | Recommended | Why
Customer support | Claude Sonnet 4.5 | Best at following guidelines
Code generation | DeepSeek-R3, Claude Sonnet 4.5 | Specialized for code
Document analysis | Gemini Pro | 2M context window
Real-time chat | Groq Llama 3.3 | Fastest responses
Privacy-sensitive | Local DeepSeek-R3 | No external data transfer
Cost-sensitive | DeepSeek, Local models | Lowest cost per token
Complex reasoning | Claude Opus, Gemini Pro | Best reasoning ability
Real-time research | Grok | Live data access
Long context | Gemini Pro, Claude | Largest context windows

By Budget

Budget | Recommended Setup
Free | Local models only
Low ($10-50/mo) | Groq + Local fallback
Medium ($50-200/mo) | DeepSeek-V3.1 + Claude Sonnet 4.5
High ($200+/mo) | GPT-5 + Claude Opus 4.5
Enterprise | Private deployment + premium APIs

Configuration Reference

config.csv Parameters

All LLM configuration belongs in config.csv, not environment variables:

Parameter | Description | Example
llm-provider | Provider name | openai, anthropic, local
llm-model | Model identifier | gpt-5
llm-server-url | API endpoint (local only) | http://localhost:8081
llm-server-ctx-size | Context window size | 128000
llm-temperature | Response randomness (0-2) | 0.7
llm-max-tokens | Maximum response length | 4096
llm-cache-enabled | Enable semantic caching | true
llm-cache-ttl | Cache time-to-live (seconds) | 3600
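
For example, a config.csv that tightens randomness and response length while enabling the cache might look like this (the values are illustrative):

name,value
llm-provider,anthropic
llm-model,claude-sonnet-4.5
llm-temperature,0.3
llm-max-tokens,2048
llm-cache-enabled,true
llm-cache-ttl,3600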

API Keys

API keys are stored in Vault, not in config files or environment variables:

# Store API key in Vault
vault kv put gbo/llm/openai api_key="sk-..."
vault kv put gbo/llm/anthropic api_key="sk-ant-..."
vault kv put gbo/llm/google api_key="AIza..."

Reference in config.csv:

name,value
llm-provider,openai
llm-model,gpt-5
llm-api-key,vault:gbo/llm/openai/api_key

Security Considerations

Cloud Providers

  • API keys stored in Vault, never in config files
  • Consider data residency requirements (EU: Mistral)
  • Review provider data retention policies
  • Use separate keys for production/development

Local Models

  • All data stays on your infrastructure
  • No internet required after model download
  • Full control over model versions
  • Consider GPU security for sensitive deployments

Performance Optimization

Caching

Enable semantic caching to reduce API calls. With llm-cache-similarity-threshold set to 0.92, a cached response is reused when a new prompt is at least 92% semantically similar to an earlier one:

name,value
llm-cache-enabled,true
llm-cache-ttl,3600
llm-cache-similarity-threshold,0.92

Batching

For bulk operations, use batch APIs when available:

name,value
llm-batch-enabled,true
llm-batch-size,10

Context Management

Optimize context window usage with episodic memory:

name,value
episodic-memory-enabled,true
episodic-memory-threshold,4
episodic-memory-history,2
episodic-memory-auto-summarize,true

See Episodic Memory for details.

Troubleshooting

Common Issues

API Key Invalid

  • Verify key is stored correctly in Vault
  • Check if key has required permissions
  • Ensure billing is active on provider account

Model Not Found

  • Check model name spelling
  • Verify model is available in your region
  • Some models require waitlist access

Rate Limits

  • Implement exponential backoff
  • Use caching to reduce calls
  • Consider upgrading API tier

Local Model Slow

  • Check GPU memory usage
  • Reduce context size
  • Use quantized models (Q4 instead of F16)

Logging

Enable LLM logging for debugging:

name,value
llm-log-requests,true
llm-log-responses,false
llm-log-timing,true

2025 Model Comparison

Model | Creator | Type | Strengths
GPT-5 | OpenAI | Proprietary | Most advanced all-in-one
Claude Opus/Sonnet 4.5 | Anthropic | Proprietary | Extended thinking, complex reasoning
Gemini 1.5/3 Pro | Google | Proprietary | Benchmarks, reasoning, 2M context
Grok 4 | xAI | Proprietary | Real-time X data
Claude / Llama | Amazon Bedrock | Managed API | Enterprise AWS integration
GPT-4o / GPT-5 | Azure OpenAI | Managed API | Enterprise compliance, Azure VNet
Llama / Open Models | Cerebras | Hardware Cloud | Extreme inference speed
GLM-4 | Zhipu AI | Proprietary | English/Chinese bilingual, up to 1M context
DeepSeek-V3.1/R1 | DeepSeek | Open (MIT/Apache) | Cost-optimized, reasoning
Llama 4 | Meta | Open-weight | 10M context, multimodal
Qwen3 | Alibaba | Open (Apache) | Efficient MoE
Mixtral-8x22B | Mistral | Open (Apache) | Multi-language, coding
GPT-oss | OpenAI | Open (Apache) | Agent workflows
Gemma 2/3 | Google | Open-weight | Lightweight, efficient

Next Steps

Storage Services

Directory Services

Chapter 7: User Interface

Customize your bot’s appearance and build complete productivity suites.

Overview

Themes control colors, fonts, logos, and overall visual presentation of your bot interface.

Quick Start

# In config.csv
name,value
theme-color1,#0d2b55
theme-color2,#fff9c2
theme-title,My Bot
theme-logo,https://example.com/logo.svg

Theme Structure

mybot.gbai/
└── mybot.gbtheme/
    └── style.css

Configuration Options

Setting | Description | Example
theme-color1 | Primary color | #0d2b55
theme-color2 | Secondary color | #fff9c2
theme-title | Bot name in header | My Assistant
theme-logo | Logo URL | https://...

CSS Customization

Create style.css in your .gbtheme folder:

:root {
  --primary: #0d2b55;
  --secondary: #fff9c2;
}

.chat-header {
  background: var(--primary);
}

.user-message {
  background: var(--secondary);
}

Chapter Contents

See Also

General Bots Suite - User Manual

The Complete Productivity Workspace

AI-native productivity suite for modern teams


Welcome to General Bots Suite

General Bots Suite is your all-in-one workspace that combines communication, productivity, and AI assistance. All your productivity tools in one place - simple and powerful.

What makes it special:

  • Everything works together seamlessly
  • AI helps you with every task
  • No complex setup required
  • Works in your browser

Part I: Getting Started

Opening the Suite

  1. Open your web browser (Chrome, Firefox, Safari, or Edge)
  2. Go to your General Bots address (example: http://your-company.bot:9000)
  3. The Suite loads automatically - you’ll see the workspace in seconds

Your First Look

When the Suite opens, you see:

Suite Main Layout

The Apps Menu

Click the nine-dot grid (⋮⋮⋮) in the top right to see all applications:

Icon | App | What It Does
💬 | Chat | Talk with your AI assistant
📁 | Drive | Store and manage your files
- | Tasks | LLM-powered intelligent task execution
- | Mail | Send and receive emails
📅 | Calendar | Schedule meetings and events
🎥 | Meet | Video calls and meetings
📝 | Paper | Write documents with AI help
🔍 | Research | AI-powered search
📊 | Analytics | View reports and statistics

Part II: Chat - Your AI Assistant

What Chat Does

Chat is your personal AI assistant. Ask questions, get help with tasks, or have a conversation - just like talking to a helpful colleague.

How to Use Chat

Sending a Message:

  1. Click in the message box at the bottom
  2. Type your question or request
  3. Press Enter or click the send button

Example Conversations:

You: What meetings do I have today?
Bot: You have 2 meetings scheduled:
     • 10:00 AM - Team Standup
     • 2:00 PM - Project Review

You: Remind me to call John tomorrow at 3pm
Bot: Done! I've created a reminder for tomorrow at 3:00 PM 
     to call John.

You: Write a professional email to decline a meeting
Bot: Here's a draft:

     Subject: Unable to Attend Meeting
     
     Dear [Name],
     
     Thank you for the invitation. Unfortunately, I have 
     a scheduling conflict and won't be able to attend.
     
     Would it be possible to reschedule or receive a 
     summary of the discussion?
     
     Best regards

Voice Input

  1. Click the 🎤 microphone button
  2. Speak your message
  3. Click again to stop
  4. Your words appear as text automatically

Quick Suggestions

Below the message box, you’ll see suggestion chips - common actions you can click:

[📊 Show my tasks] [📧 Check email] [📅 Today's schedule] [❓ Help]

Keyboard Shortcuts for Chat

Shortcut | Action
Enter | Send message
Shift+Enter | New line (without sending)
↑ (Up Arrow) | Edit last message
/ | Show command menu

Part III: Drive - File Management

What Drive Does

Drive is your file storage - like Google Drive or OneDrive. Store documents, images, spreadsheets, and any file you need.

The Drive Interface

Drive Interface

Creating and Uploading

Upload Files:

  1. Click + New button
  2. Select Upload Files
  3. Choose files from your computer
  4. Or: Drag files directly into Drive

Create New Folder:

  1. Click + New
  2. Select New Folder
  3. Type the folder name
  4. Press Enter

Working with Files

Open a file: Double-click it

Select files: Click the checkbox beside the file name

Multiple selection: Hold Ctrl (or Cmd on Mac) and click files

Right-click menu options:

  • 📂 Open
  • ⬇️ Download
  • ✏️ Rename
  • 📋 Copy
  • 📁 Move to…
  • ⭐ Add to Starred
  • 🔗 Share
  • 🗑 Delete

View Options

View | Best For
Grid view | Images and visual files
List view | Documents and details

Keyboard Shortcuts for Drive

Shortcut | Action
Ctrl+U | Upload files
Ctrl+N | New folder
Delete | Move to trash
Ctrl+C | Copy
Ctrl+V | Paste
Enter | Open selected

Part IV: Tasks - LLM-Powered Execution

What Tasks Does

Tasks revolutionizes how you work. Instead of manually tracking to-do items, you describe what you want to accomplish in natural language, and the LLM compiles your intent into an executable plan with automatic step-by-step execution.

The Tasks Interface

Tasks Interface

Creating an Task

  1. Describe your intent in the text area (e.g., “Build a CRM for Deloitte with client management”)
  2. Choose execution mode:
    • Semi-Automatic (recommended) - Runs automatically, pauses for high-risk steps
    • Supervised - Pauses before each step for your approval
    • Fully Automatic - Runs everything without stopping
    • Dry Run - Simulates execution without making changes
  3. Set priority: Critical, High, Medium, Low, or Background
  4. Click 🚀 Compile & Plan

Pro tip: Be specific about outcomes! Instead of “make something”, write “Create a sales dashboard with revenue charts by region and export to PDF”

Understanding the Plan

After compilation, you’ll see:

Element | What It Shows
Confidence | How confident the LLM is (aim for 80%+)
Risk Level | None / Low / Medium / High / Critical
Duration | Estimated execution time
Cost | API and compute costs
Steps | Ordered execution plan with keywords

Execution Modes

Mode | Best For
Semi-Automatic | Most tasks - automatic with safety pauses
Supervised | Learning or sensitive operations
Fully Automatic | Trusted, tested workflows
Dry Run | Testing before real execution

Monitoring Tasks

  • Running - Currently executing (shows progress bar)
  • Pending Approval - Waiting for you to approve a high-risk step
  • Waiting Decision - Needs your input to continue
  • Completed - Successfully finished

Approvals & Decisions

High-impact actions pause for your approval:

  • Sending mass emails
  • Modifying databases
  • Deploying to production
  • Actions exceeding cost thresholds

Click ✅ Review & Approve to see details and continue.

Creating Tasks from Chat

In Chat, just say:

You: I need to build a customer portal for Acme Corp
Bot: I'll create a Task for that. Here's the plan:
     - 5 steps, estimated 3 hours
     - Risk: Low
     Should I execute this plan?
You: Yes, go ahead
Bot: 🚀 Task started!

Part V: Mail - Email Management

What Mail Does

Mail connects to your email accounts so you can read, write, and organize emails without leaving the Suite.

The Mail Interface

Mail Interface

Reading Email

  1. Click on Mail in the Apps menu
  2. Click any email in the list to read it
  3. The full email appears on the right

Composing Email

  1. Click ✏ Compose
  2. Fill in the fields:
    • To: recipient’s email
    • Subject: what it’s about
    • Body: your message
  3. Click Send

AI-Assisted Writing:

You: Help me write an email to reschedule tomorrow's meeting
Bot: Here's a draft:

     To: [recipient]
     Subject: Request to Reschedule Meeting
     
     Hi [Name],
     
     I hope this message finds you well. Would it be 
     possible to reschedule our meeting tomorrow? 
     I have an unexpected conflict.
     
     Please let me know what times work for you 
     later this week.
     
     Thank you for understanding.

Email Folders

Folder | Purpose
Inbox | New and unread messages
Sent | Emails you've sent
Drafts | Unfinished emails
Trash | Deleted emails (emptied after 30 days)

Email Actions

Button | Action
Reply | Respond to sender
Reply All | Respond to everyone
Forward | Send to someone else
Delete | Move to Trash
Archive | Remove from Inbox but keep

Part VI: Calendar - Scheduling

What Calendar Does

Calendar shows your schedule, meetings, and events. Plan your day, week, or month at a glance.

The Calendar Interface

Calendar Interface

Creating an Event

Method 1: Click and Create

  1. Click on a day/time slot
  2. Enter event details
  3. Click Save

Method 2: Ask the AI

You: Schedule a team meeting for next Tuesday at 2pm
Bot: Event created:
     📅 Team Meeting
     🕐 Tuesday, March 19 at 2:00 PM
     ⏱ Duration: 1 hour

Event Details

When creating an event, you can set:

  • Title - What the event is
  • Date & Time - When it happens
  • Duration - How long it lasts
  • Location - Where (room or video link)
  • Attendees - Who to invite
  • Reminder - When to notify you
  • Repeat - Daily, weekly, monthly

Calendar Views

View | Shows | Best For
Day | Hour by hour | Detailed daily planning
Week | 7 days | Seeing your week ahead
Month | Full month | Long-term planning

Keyboard Navigation

Key | Action
← / → | Previous/Next period
T | Jump to Today
D | Day view
W | Week view
M | Month view

Part VII: Meet - Video Calls

What Meet Does

Meet lets you have video calls with one person or many. Share your screen, record meetings, and get AI transcriptions.

Starting a Meeting

Start Instant Meeting:

  1. Click Meet in Apps menu
  2. Click Start Meeting
  3. Share the link with others

Schedule for Later:

You: Schedule a video call with the team for tomorrow at 10am
Bot: Meeting scheduled:
     🎥 Team Video Call
     📅 Tomorrow at 10:00 AM
     🔗 Link: meet.bot/abc-defg-hij
     
     Shall I send invites to the team?

The Meeting Interface

Meet Interface

Meeting Controls

Button | Function
🎤 Mute | Turn microphone on/off
📹 Video | Turn camera on/off
🖥 Share | Share your screen
🔴 Record | Record the meeting
📝 Transcribe | Get live captions
💬 Chat | Open meeting chat
👥 Participants | See who's in the call
📞 End | Leave the meeting

Screen Sharing

  1. Click 🖥 Share
  2. Choose what to share:
    • Entire Screen - Everything you see
    • Window - One application
    • Tab - One browser tab
  3. Click Share
  4. Click Stop Sharing when done

AI Features in Meetings

Live Transcription:

  • Enable with the 📝 Transcribe button
  • Words appear as people speak
  • Great for accessibility and note-taking

Meeting Summary: After the meeting, ask:

You: Summarize today's project meeting
Bot: Meeting Summary:
     
     Duration: 45 minutes
     Participants: You, John, Sarah
     
     Key Points:
     • Project deadline moved to April 15
     • John will handle client communication
     • Sarah completing design by Friday
     
     Action Items:
     • [You] Review budget proposal
     • [John] Send client update
     • [Sarah] Share design mockups

Part VIII: Paper - AI Writing

What Paper Does

Paper is your writing space with AI assistance. Write documents, notes, reports - and let AI help you write better.

The Paper Interface

Paper Interface

Creating a Document

  1. Click + New in the sidebar
  2. Choose a template:
    • Blank - Start fresh
    • Meeting Notes - Pre-formatted for meetings
    • To-Do List - Checkbox format
    • Research - Sections for sources

Formatting Toolbar

Button | Function | Shortcut
B | Bold | Ctrl+B
I | Italic | Ctrl+I
U | Underline | Ctrl+U
H1 | Heading 1 | Ctrl+1
H2 | Heading 2 | Ctrl+2
- | Bullet list | Ctrl+Shift+8
- | Numbered list | Ctrl+Shift+7
🔗 | Insert link | Ctrl+K
📷 | Insert image | -

AI Writing Assistant ✨

Click the AI ✨ button or type /ai for AI help:

Commands:

/ai improve     → Make the text better
/ai shorter     → Make it more concise  
/ai longer      → Expand with more detail
/ai formal      → Make it professional
/ai friendly    → Make it casual
/ai translate   → Translate to another language
/ai summarize   → Create a summary

Example:

You wrote: "The thing we need to do is make the stuff better"

/ai formal

AI suggests: "Our objective is to enhance the quality of 
             our deliverables to meet higher standards."

Auto-Save

Paper saves automatically as you type. Look for:

  • “Saving…” - Currently saving
  • “Saved” - All changes saved
  • “Offline” - Will save when connected

Part IX: Research - AI Search

What Research Does

Research is like having a research assistant. Search the web, your documents, and knowledge bases - then get AI-synthesized answers.

The Research Interface

Research Interface

Search Modes

Mode | Icon | Searches
All | 🌐 | Everything
Academic | 📚 | Research papers, journals
Code | 💻 | Documentation, code examples
Internal | 🏠 | Your company's knowledge base

Using Research

  1. Type your question in the search box
  2. Select a focus mode (optional)
  3. Press Enter
  4. Read the AI-synthesized answer
  5. Click sources to see original content

Collections

Save important searches and sources:

  1. Click + New Collection
  2. Name it (e.g., “Q1 Project Research”)
  3. Add sources by clicking Save to Collection
  4. Access anytime from the sidebar

Pro Tips

Be specific:

  • ❌ “marketing”
  • ✅ “B2B SaaS marketing strategies for startups under 50 employees”

Use follow-up questions:

Search: What is machine learning?
Follow-up: How is it different from deep learning?
Follow-up: What are practical business applications?

Part X: Analytics - Reports & Insights

What Analytics Does

Analytics shows you reports about usage, conversations, and performance. Understand how the bot is being used and what’s working.

The Analytics Interface

Analytics Interface

Key Metrics

Metric | What It Means
Messages | Total conversations
Success Rate | % of questions answered well
Avg Response Time | How fast the bot replies
Users | Number of people using the bot
Popular Topics | What people ask about most

Time Ranges

Select different periods to analyze:

  • Last Hour
  • Last 6 Hours
  • Last 24 Hours
  • Last 7 Days
  • Last 30 Days
  • Custom Range

Exporting Data

Click Export to download reports as:

  • CSV - For spreadsheets
  • PDF - For sharing
  • JSON - For developers

Part XI: Designer - Visual Dialog Builder

What Designer Does

Designer lets you create bot conversations visually. Drag and drop blocks to build dialogs without coding.

The Designer Interface

Designer Interface

Building a Dialog

Step 1: Drag Blocks

  • Drag from Toolbox to Canvas
  • Blocks snap to grid

Step 2: Connect Blocks

  • Drag from output port (●) to input port
  • Lines show conversation flow

Step 3: Configure Properties

  • Click a block
  • Edit settings in Properties panel

Step 4: Export

  • Click Export to .bas
  • Save your dialog file

Block Types

Block | Icon | Purpose | Example
TALK | 💬 | Bot speaks | "Welcome! How can I help?"
HEAR | 👂 | Wait for user input | Store response in name
SET | 📝 | Set a variable | total = price * quantity
IF | 🔀 | Make decisions | If age > 18 then...
FOR | 🔄 | Repeat for items | For each item in cart...
SWITCH | 🔃 | Multiple choices | Switch on category...
CALL | 📞 | Call another dialog | Call "checkout"
SEND MAIL | 📧 | Send email | Send confirmation
SAVE | 💾 | Save data | Save to database
WAIT | - | Pause | Wait 5 seconds

Example: Simple Greeting Dialog

The Designer canvas shows flow diagrams like the one in the interface above. A simple greeting dialog flows from a TALK node (“What’s your name?”) to a HEAR node (capturing the name as a string variable) to another TALK node (“Nice to meet you, {name}!”).

Generated Code:

TALK "What's your name?"
HEAR name AS STRING
TALK "Nice to meet you, " + name + "!"

Keyboard Shortcuts in Designer

Shortcut | Action
Ctrl+S | Save
Ctrl+O | Open file
Ctrl+Z | Undo
Ctrl+Y | Redo
Ctrl+C | Copy block
Ctrl+V | Paste block
Delete | Delete selected
Escape | Deselect

Part XII: Sources - Prompts & Templates

What Sources Does

Sources is your library of prompts, templates, tools, and AI models. Find and use pre-built components to extend your bot.

The Sources Interface

Sources Interface

Tabs Explained

Tab | Contains | Use For
Prompts | Pre-written AI instructions | Starting conversations
Templates | Complete bot packages | Full solutions
MCP Servers | External tool connections | Integrations
LLM Tools | AI functions | Extending capabilities
Models | AI model options | Choosing AI provider

Using a Prompt

  1. Browse or search prompts
  2. Click on a prompt card
  3. Click Use to apply it
  4. Customize if needed

Installing a Template

  1. Go to Templates tab
  2. Find a template (e.g., “CRM Contacts”)
  3. Click Install
  4. Configure settings
  5. Template is now active

Available Models

Model | Provider | Best For
Claude Sonnet 4.5 | Anthropic | General tasks, coding
Claude Opus 4.5 | Anthropic | Complex analysis
Gemini Pro | Google | Long documents
Llama 3.3 | Meta | Open source, privacy

Part XIII: Tools - System Utilities

Compliance Scanner

Compliance Scanner Interface

What It Checks:

  • Hardcoded passwords
  • Exposed API keys
  • SQL injection risks
  • Deprecated keywords
  • Security best practices

Part XIV: Settings

Accessing Settings

  1. Click your avatar (top right)
  2. Select Settings

Setting Categories

Profile:

  • Display name
  • Avatar image
  • Email address
  • Language preference

Notifications:

  • Email notifications
  • Desktop alerts
  • Sound preferences

Appearance:

  • Theme (Light/Dark/Auto)
  • Accent color
  • Font size

Privacy:

  • Data retention
  • Conversation history
  • Usage analytics

Connections:

  • Email accounts
  • Calendar sync
  • Cloud storage

Part XV: Keyboard Shortcuts Reference

Global Shortcuts

Shortcut | Action
Alt+1 | Open Chat
Alt+2 | Open Drive
Alt+3 | Open Tasks
Alt+4 | Open Mail
Alt+5 | Open Calendar
Escape | Close dialog/menu
/ | Focus search
Ctrl+K | Command palette

Common Shortcuts

Shortcut | Action
Ctrl+S | Save
Ctrl+Z | Undo
Ctrl+Y | Redo
Ctrl+C | Copy
Ctrl+V | Paste
Ctrl+A | Select all
Ctrl+F | Find

Part XVI: Tips & Best Practices

Daily Workflow

Morning:

  1. Open Suite
  2. Check Chat for overnight messages
  3. Review Tasks for the day
  4. Check Calendar for meetings

During Work:

  • Use Chat for quick questions
  • Upload files to Drive
  • Update Tasks as you complete them
  • Take notes in Paper

End of Day:

  • Mark completed tasks done
  • Archive old emails
  • Review tomorrow’s calendar

Productivity Tips

In Chat:

  • Be specific in your questions
  • Use follow-up questions
  • Say “summarize” for long responses

In Drive:

  • Use folders to organize
  • Star important files
  • Use search instead of browsing

In Tasks:

  • Break big tasks into smaller ones
  • Set realistic due dates
  • Use categories to organize

In Mail:

  • Process emails once
  • Archive instead of delete
  • Use AI for drafting

Getting Help

Ask the Bot:

You: How do I upload a file?
You: What keyboard shortcuts are there?
You: Help me with tasks

Resources:

  • This manual
  • In-app help (click ?)
  • Support team

Appendix A: Troubleshooting

Common Issues

Suite won’t load:

  • Refresh the page (F5 or Ctrl+R)
  • Clear browser cache
  • Try a different browser

Files won’t upload:

  • Check file size (max 100MB)
  • Check internet connection
  • Try a smaller file first

Bot not responding:

  • Wait a few seconds
  • Refresh the page
  • Check internet connection

Video/audio not working:

  • Allow camera/microphone in browser
  • Check device permissions
  • Try different browser

Error Messages

Message | Solution
"Connection lost" | Check internet, refresh page
"File too large" | Reduce file size
"Permission denied" | Contact administrator
"Session expired" | Log in again

Appendix B: Glossary

Term | Definition
Bot | AI assistant that responds to your messages
Dialog | A conversation flow or script
HTMX | Technology that makes pages interactive
KB | Knowledge Base - stored information
MCP | Model Context Protocol - tool connections
Suite | The complete workspace application
Template | Pre-built bot configuration


© General Bots - Built with ❤️ and AI

For the latest documentation, visit the online manual

Admin vs User Views

The General Bots Suite separates functionality into two distinct interfaces: the User View for personal productivity and the Admin View for organization management. This separation ensures users only see features relevant to their role while administrators have access to system-wide controls.

Overview

View | Access | Purpose
User View | All authenticated users | Personal settings, files, tasks, calendar
Admin View | Users with admin role | Organization management, user provisioning, DNS

User View

The User View is the default interface for all authenticated users. It provides access to personal productivity tools and settings.

Accessing User Settings

  1. Click your avatar in the top-right corner
  2. Select Settings

User Settings Sections

Profile

  • Display name and avatar
  • Email address
  • Language and timezone

Security

  • Change password
  • Two-factor authentication (2FA)
  • Active sessions management
  • Trusted devices

Appearance

  • Theme selection (dark, light, blue, purple, green, orange)
  • Accent color
  • Font size preferences

Notifications

  • Email notification preferences
  • Desktop alerts
  • Sound settings

Storage

  • View storage quota usage
  • Manage connected storage providers
  • Clear cache

Integrations

  • API keys for external access
  • Webhook configurations
  • Connected OAuth providers

Privacy

  • Data visibility settings
  • Online status preferences
  • Data export and account deletion

User API Endpoints

All user endpoints use the /api/user/ prefix:

Endpoint | Method | Description
/api/user/profile | GET, PUT | User profile data
/api/user/password | POST | Change password
/api/user/security/2fa/status | GET | 2FA status
/api/user/security/2fa/enable | POST | Enable 2FA
/api/user/security/sessions | GET | Active sessions
/api/user/notifications/preferences | GET, PUT | Notification settings
/api/user/storage | GET | Storage quota
/api/user/api-keys | GET, POST, DELETE | API key management
/api/user/webhooks | GET, POST, DELETE | Webhook management
/api/user/data/export | POST | Request data export

Admin View

The Admin View provides organization-wide management capabilities. Access requires the admin role.

Accessing Admin Panel

  1. Click your avatar in the top-right corner
  2. Select Admin Panel

If you don’t see “Admin Panel”, you don’t have administrator privileges.

Admin Panel Sections

Dashboard

  • Quick statistics (users, groups, bots, storage)
  • System health overview
  • Recent activity feed
  • Quick action buttons

Users

  • View all organization users
  • Create new users
  • Edit user details and roles
  • Disable or delete accounts
  • Reset user passwords

Groups

  • Create and manage groups
  • Assign users to groups
  • Set group permissions
  • Manage group invitations

Bots

  • View deployed bots
  • Bot configuration management
  • Usage statistics per bot

DNS

  • Register custom hostnames
  • Manage DNS records
  • SSL certificate status

Audit Log

  • View all system events
  • Filter by user, action, or date
  • Export audit reports

Organization Billing (Admin-level)

  • Organization subscription status
  • Usage across all users
  • Payment methods for organization
  • Invoice history

Admin API Endpoints

All admin endpoints use the /api/admin/ prefix and require admin role:

Endpoint | Method | Description
/api/admin/dashboard | GET | Dashboard statistics
/api/admin/users | GET, POST | List/create users
/api/admin/users/:id | GET, PUT, DELETE | Manage specific user
/api/admin/groups | GET, POST | List/create groups
/api/admin/groups/:id | GET, PUT, DELETE | Manage specific group
/api/admin/bots | GET | List organization bots
/api/admin/dns | GET, POST, DELETE | DNS management
/api/admin/audit | GET | Audit log entries
/api/admin/stats/* | GET | Various statistics
/api/admin/health | GET | System health status
/api/admin/activity/recent | GET | Recent activity feed
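
As a sketch, provisioning a user from the command line might look like the following; the auth header and JSON field names are assumptions:

# Create a user (requires admin role)
curl -X POST http://localhost:9000/api/admin/users \
  -H 'Authorization: Bearer <admin-token>' \
  -H 'Content-Type: application/json' \
  -d '{"email": "new.user@example.com", "name": "New User", "role": "user"}'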

Permission Levels

The system uses role-based access control (RBAC):

Role | User View | Admin View | Description
guest | Limited | No access | Read-only chat access
user | Full | No access | Standard user features
manager | Full | Partial | Can view monitoring
admin | Full | Full | Full system access

Checking User Role

In BASIC scripts, check the user’s role:

role = user.role

IF role = "admin" THEN
    TALK "Welcome, administrator!"
ELSE
    TALK "Welcome, " + user.name
END IF

Desktop App Considerations

When running the Suite as a desktop application (via Tauri), additional features become available:

Desktop-Only Features

  • Local file system access
  • Rclone-based file synchronization
  • System tray integration
  • Native notifications

Sync Feature

The desktop app can sync local folders with cloud Drive using rclone:

  1. Configure remote in Settings → Storage → Sync
  2. Select local folder to sync
  3. Start/stop sync from Drive sidebar

Note: Sync controls (/files/sync/start, /files/sync/stop) communicate with the local rclone process on the desktop. These features are not available in the web-only version.

Security Best Practices

For Users

  • Enable 2FA on your account
  • Review active sessions regularly
  • Use strong, unique passwords
  • Revoke unused API keys

For Administrators

  • Follow principle of least privilege
  • Review audit logs regularly
  • Rotate service account credentials
  • Monitor for unusual activity
  • Keep user list current (remove departed employees)

UI Structure

The botserver UI system provides two interface implementations designed for different deployment scenarios. Choose the right interface based on your use case and performance requirements.

General Bots

Directory Layout

ui/
├── suite/       # Full-featured interface
│   ├── index.html
│   ├── base.html
│   ├── home.html
│   ├── default.gbui
│   ├── single.gbui
│   ├── designer.html
│   ├── editor.html
│   ├── settings.html
│   ├── js/
│   │   ├── htmx-app.js
│   │   ├── theme-manager.js
│   │   └── vendor/
│   ├── css/
│   │   ├── app.css
│   │   ├── apps-extended.css
│   │   ├── components.css
│   │   └── global.css
│   ├── public/
│   ├── partials/
│   ├── auth/
│   ├── attendant/
│   ├── chat/
│   │   ├── chat.html
│   │   ├── chat.css
│   │   └── projector.html
│   ├── drive/
│   ├── mail/
│   ├── tasks/
│   ├── calendar/
│   ├── meet/
│   ├── paper/
│   ├── research/
│   ├── analytics/
│   ├── sources/
│   ├── tools/
│   └── monitoring/
│
└── minimal/     # Lightweight interface
    ├── index.html
    └── js/

Suite Interface

The Suite interface (ui/suite/) delivers a comprehensive, desktop-class experience with multi-application integration. It includes Chat, Drive, Tasks, and Mail modules in a unified workspace.

The Suite interface provides multi-application integration with seamless navigation between modules, rich interactions for complex workflows, and responsive design that adapts across desktop, tablet, and mobile form factors. Customizable GBUI templates allow you to choose between default.gbui for the full layout or single.gbui for a chat-focused experience. Tauri integration enables native desktop packaging for distribution outside the browser.

The Suite interface is best suited for enterprise deployments requiring full functionality, power users working with multiple services simultaneously, desktop application distribution via Tauri builds, and multi-service integrations where context switching between modules matters.

You can access the Suite interface via web at http://localhost:9000/suite or as a native desktop application using the botui Tauri app (see BotUI Desktop).

Minimal Interface

The Minimal interface (ui/minimal/) prioritizes speed and simplicity. It loads fast, uses minimal resources, and focuses on essential chat interactions.

This lightweight interface provides core chat and basic interactions only, fast loading with minimal dependencies, and low resource usage suitable for constrained environments. The design supports easy embedding into existing applications and takes a mobile-first approach to responsive layout.

The Minimal interface excels for mobile web access, embedded chatbots in external websites, low-bandwidth environments, quick access terminals and kiosks, and scenarios where simplicity matters more than features.

Access the Minimal interface at the root URL http://localhost:9000 where it is served by default, explicitly at http://localhost:9000/minimal, or embedded via iframe or WebView in your own applications.

Configuration

Server Configuration

UI paths are configured in several locations throughout the codebase.

The main server configuration in src/main.rs sets the static path:

let static_path = std::path::Path::new("./ui/suite");

The UI server module at src/core/ui_server/mod.rs defines its own path:

let static_path = PathBuf::from("./ui/suite");

For Tauri desktop builds, tauri.conf.json specifies the frontend distribution:

{
  "build": {
    "frontendDist": "./ui/suite"
  }
}

Asset Serving Strategy

BotUI supports two methods for serving static assets:

  1. FileSystem (Default): Reads files from ./ui/ directory at runtime. Best for development as changes are reflected immediately.
  2. Embedded (embed-ui): Compiles all assets into the binary using rust-embed. Best for CI/CD and single-file distribution.

To enable embedded assets:

cargo build -p botui --features embed-ui

The CI pipeline automatically enables this feature, producing a standalone botui binary that requires no external ui/ folder.
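
Internally, rust-embed works by deriving an asset struct from a folder attribute. A minimal sketch of what the embed-ui path might look like; the struct name and helper are illustrative, not the project's actual code:

use rust_embed::RustEmbed;

// Compile everything under ui/suite into the binary
#[derive(RustEmbed)]
#[folder = "ui/suite"]
struct SuiteAssets;

// At runtime, assets are read from the binary instead of disk
fn index_html() -> Option<Vec<u8>> {
    SuiteAssets::get("index.html").map(|f| f.data.into_owned())
}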

Routing

Both interfaces can be served simultaneously with different routes:

Router::new()
    .route("/", get(serve_minimal))
    .route("/minimal", get(serve_minimal))
    .route("/suite", get(serve_suite))

The minimal interface serves at root by default, providing faster loading for most users who need quick chat interactions.

API Compliance

The Minimal UI implements full compliance with the Bot Core API. Both interfaces support the same backend endpoints, ensuring consistent functionality regardless of which interface you choose.

Endpoint | Method | Purpose
/ws | WebSocket | Real-time messaging
/api/auth | GET | Authentication
/api/sessions | GET/POST | Session management
/api/sessions/current/message | POST | Send message (current session)
/api/sessions/current/history | GET | Message history (current session)
/api/sessions/:id | GET | Session details
/api/sessions/:id/history | GET | Message history by ID
/api/sessions/:id/start | POST | Start session
/api/sessions/:id/end | POST | End session
/api/voice/start | POST | Voice input start
/api/voice/stop | POST | Voice input stop
/api/voice/status | GET | Voice status

Note: The frontend uses /api/sessions/current/* endpoints which resolve to the active session automatically.

Both interfaces use the same WebSocket message types for communication. TEXT (1) handles regular text messages, VOICE (2) handles voice messages, CONTINUE (3) continues interrupted responses, CONTEXT (4) manages context changes, and SYSTEM (5) delivers system messages.
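
A client-side handler can branch on that numeric type field. This is a minimal sketch; the exact frame shape and the handler functions are illustrative assumptions:

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  switch (msg.type) {
    case 1: renderText(msg.content); break;       // TEXT
    case 2: playVoice(msg.content); break;        // VOICE
    case 3: appendToLast(msg.content); break;     // CONTINUE
    case 4: updateContext(msg.content); break;    // CONTEXT
    case 5: showSystemNotice(msg.content); break; // SYSTEM
  }
};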

Performance Characteristics

Suite Interface

Metric | Typical Value
Initial load | ~500KB
Time to interactive | ~1.5s
Memory usage | ~80MB
Best for | Full productivity

Minimal Interface

Metric | Typical Value
Initial load | ~50KB
Time to interactive | ~200ms
Memory usage | ~20MB
Best for | Quick interactions

Browser Support

Both interfaces support modern browsers with full functionality:

Browser | Minimum Version | WebSocket | Voice
Chrome | 90+ | ✓ | ✓
Firefox | 88+ | ✓ | ✓
Safari | 14+ | ✓ | ✓
Edge | 90+ | ✓ | ✓
Mobile Chrome | 90+ | ✓ | ✓
Mobile Safari | 14+ | ✓ | ✓

Switching Interfaces

Users can switch between interfaces by navigating to the appropriate URL. For programmatic switching, update the ui_server/mod.rs to change the default:

// Serve minimal at root (default)
match fs::read_to_string("ui/minimal/index.html")

// Or serve suite at root
match fs::read_to_string("ui/suite/index.html")

Troubleshooting

If you encounter 404 errors, clear your browser cache, rebuild the project with cargo clean && cargo build, and verify the files exist in the ui/suite/ or ui/minimal/ directories.

For Tauri build failures, check that tauri.conf.json has the correct frontendDist path and ensure ui/suite/index.html exists.

When static files aren’t loading, verify the ServeDir configuration in the router and check that subdirectories (js, css, public) exist with their expected contents.

Debug commands can help diagnose issues:

# Verify UI structure
ls -la ui/suite/
ls -la ui/minimal/

# Test interfaces
curl http://localhost:9000/
curl http://localhost:9000/suite/

# Check static file serving
curl http://localhost:9000/js/htmx-app.js

Customization

GBUI Templates

The Suite interface uses GBUI templates for layout customization. The default.gbui template provides the full multi-app layout with sidebar navigation, while single.gbui offers a streamlined chat-focused view. Edit these files to customize the interface structure without modifying core code.

CSS Theming

Both interfaces support CSS customization through their respective stylesheets. The Suite interface provides more extensive theming options through CSS custom properties, allowing you to adjust colors, spacing, and typography to match your brand.

Future Enhancements

Planned improvements include dynamic UI selection based on device capabilities to automatically serve the most appropriate interface, progressive enhancement from minimal to suite as users need additional features, service worker implementation for offline support, and WebAssembly components for high-performance features that require client-side computation.

single.gbui - Simplified Chat Interface

The single.gbui template provides a streamlined, single-page chat interface focused on conversation without distractions.

Overview

Location: ui/suite/single.gbui

This minimalist chat interface delivers a clean, focused chat experience with WebSocket real-time messaging, dark mode support, mobile-responsive design, and fast loading under 50KB.

Features

Core Components

The interface consists of four main components. The header displays the bot name, status, and connection indicator with minimal branding. The messages area provides an auto-scrolling message list with clear user and bot message distinction, timestamps, and smooth animations. The input area offers a single-line text input with a send button, Enter key support, and auto-focus on load. The typing indicator shows a three-dot animation when the bot is processing a response.

Design Philosophy

The single.gbui template embraces minimalism by eliminating unnecessary UI elements. Speed is prioritized so the interface loads instantly and works on slow connections. Accessibility features include keyboard navigation and screen reader support. Visual clarity comes from a clear hierarchy that guides users naturally through the conversation.

Responsive Behavior

Desktop

On desktop displays, the interface uses a centered container with 800px maximum width for comfortable reading, ample whitespace, and optimal line lengths for extended conversations.

Mobile

On mobile devices, the layout expands to full width with larger touch targets meeting the 44px minimum requirement. The input remains bottom-aligned and adjusts appropriately when the virtual keyboard appears.

Styling

The interface uses minimal inline CSS for maximum performance:

/* Core styles only */
body {
  font-family: system-ui, -apple-system, sans-serif;
  margin: 0;
  height: 100vh;
  display: flex;
  flex-direction: column;
}

.chat-container {
  flex: 1;
  display: flex;
  flex-direction: column;
  max-width: 800px;
  margin: 0 auto;
  width: 100%;
}

Dark Mode

Automatic dark mode activates based on system preference:

@media (prefers-color-scheme: dark) {
  :root {
    --background: #111827;
    --text: #f9fafb;
    --message-bot: #374151;
  }
}

WebSocket Integration

Connection handling is simplified for reliability:

const ws = new WebSocket('ws://localhost:9000/ws');
const input = document.querySelector('.message-input'); // selector assumed from the template

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  addMessage(data.content, 'bot');
};

function sendMessage() {
  const message = input.value.trim();
  if (message) {
    ws.send(JSON.stringify({
      type: 'message',
      content: message
    }));
    addMessage(message, 'user');
    input.value = '';
  }
}

// Append a message bubble to the messages list
function addMessage(content, sender) {
  const el = document.createElement('div');
  el.className = 'message ' + sender;
  el.textContent = content;
  document.querySelector('.messages').appendChild(el);
}

Use Cases

Embedded Widget

The single.gbui template is perfect for embedding in existing websites:

<iframe src="http://localhost:9000/ui/suite/single.gbui" 
        width="400" 
        height="600">
</iframe>

Kiosk Mode

The interface works well for public terminals with no navigation elements, focus on conversation, and easy reset between users.

Mobile-First

Optimization for mobile devices includes fast loading, minimal data usage, and touch-friendly controls.

Customization

Change Colors

Edit the CSS variables to match your brand:

:root {
  --primary: #3b82f6;     /* Your brand color */
  --background: #ffffff;   /* Background */
  --text: #1f2937;        /* Text color */
}

Modify Welcome Message

Update the initial bot message in the HTML:

<div class="message bot">
  <div class="message-content">
    Your custom welcome message here
  </div>
</div>

Insert a logo in the header:

<header class="header">
  <img src="logo.png" alt="Logo" height="32">
  <span>Bot Name</span>
</header>

Performance

The single.gbui template achieves first paint in under 100ms and becomes interactive within 200ms. Total size stays under 50KB with no external dependencies since everything is inline.

Accessibility

The template uses semantic HTML structure throughout, ARIA labels on interactive elements, full keyboard navigation support, proper focus management, and high contrast mode support for users who need it.

Browser Support

The interface works on all modern browsers including Chrome 90+, Firefox 88+, Safari 14+, Edge 90+, and their mobile counterparts. It degrades gracefully on older browsers, maintaining core functionality.

Next Step

For terminal users, see Console Mode.

Console Mode

The botserver console mode provides a text-based interface for monitoring your bot’s operation directly in the terminal.

Starting Console Mode

# Start botserver with console UI
./botserver --console

Console Interface

The console displays real-time information about your running botserver instance:

╔════════════════════════════════════════════════════════════╗
║                    botserver Console                       ║
╠════════════════════════════════════════════════════════════╣
║ Status: Running                                            ║
║ Uptime: 2h 34m 12s                                         ║
║ Port: 8080                                                 ║
║                                                            ║
║ Components:                                                ║
║   PostgreSQL: ✓ Connected                                  ║
║   Cache:      ✓ Connected                                  ║
║   Storage:    ✓ Connected                                  ║
║   Vectors:    ✓ Connected                                  ║
║                                                            ║
║ Active Sessions: 12                                        ║
║ Messages Today: 1,234                                      ║
║                                                            ║
║ Press 'q' to quit, 'r' to refresh                          ║
╚════════════════════════════════════════════════════════════╝

Console Features

Status Overview

The status overview displays the server’s current state including whether it is running or stopped, an uptime counter showing how long the server has been active, the port the server is listening on, and health checks for all connected components.

Session Information

Session information provides visibility into current activity with a count of active sessions, the total number of messages processed today, and recent activity indicators that show when the last interactions occurred.

Component Status

Real-time status monitoring covers all infrastructure components including database connectivity to PostgreSQL, cache service status, storage availability for file operations, and vector database connection status for semantic search functionality.

Keyboard Controls

Key | Action
q | Quit console mode
r | Force refresh display
c | Clear console
h | Show help

Console Output

The console provides basic logging output showing timestamped events as they occur:

[2024-01-15 10:23:45] Server started on port 8080
[2024-01-15 10:23:46] Database connected
[2024-01-15 10:23:47] Cache initialized
[2024-01-15 10:23:48] Storage mounted
[2024-01-15 10:24:01] New session: abc123
[2024-01-15 10:24:15] Message processed

Using Console Mode

Development

Console mode is particularly useful during development for monitoring component initialization, tracking connection status, observing error messages as they occur, and watching session activity in real time.

Production

In production environments, console mode helps with quick status checks when you need immediate visibility, basic monitoring of system health, and troubleshooting connection issues without accessing the web interface.

Limitations

Console mode provides basic monitoring only and is not intended for detailed analytics. For comprehensive data analysis, query PostgreSQL directly for session data. System logs contain detailed error information for debugging. The cache service provides its own statistics interface. Application logs offer the most complete picture for troubleshooting complex issues.

Terminal Requirements

Console mode supports any terminal with basic text output capabilities. UTF-8 support is recommended to properly render box drawing characters. A minimum width of 80 columns is recommended for optimal display. The console works over SSH connections, making it suitable for remote server monitoring.

Tips

Console mode operates in read-only fashion and does not accept bot commands. For interactive bot testing, use the web interface available at http://localhost:9000. The display refreshes automatically every few seconds to show current status. Output is buffered for performance to avoid slowing down the server during high activity periods.

Troubleshooting

Console Not Updating

If the console stops updating, check terminal compatibility with your emulator, ensure the process has proper permissions to write to the terminal, and verify that all components are running and responsive.

Display Issues

Display problems can often be resolved by trying a different terminal emulator. Check that your terminal encoding is set to UTF-8 for proper character rendering. If text appears cut off, resize the terminal window to provide adequate width for the display.

Summary

Console mode provides a simple, lightweight way to monitor botserver status without needing a web browser. It’s ideal for quick checks and basic monitoring, but for full functionality including interactive bot testing and detailed analytics, use the web interface.

Monitoring Dashboard

The Monitoring Dashboard is the live operations homepage for your General Bots deployment. It provides real-time visibility into system health, active sessions, and resource utilization through an animated, interactive SVG visualization.

Live System Architecture

Live Monitoring Dashboard

The dashboard displays botserver at the center orchestrating all interactions, with animated data packets flowing between components:

  • Left Side (Data Layer): PostgreSQL, Qdrant vector database, and MinIO storage
  • Right Side (Services): BotModels AI, Cache, and Vault security
  • Center: botserver core with pulsing rings indicating activity
  • Top: Real-time metrics panels for sessions, messages, and response time
  • Bottom: Resource utilization bars and activity ticker

Accessing the Dashboard

The monitoring dashboard is the default homepage when accessing Suite:

http://localhost:9000/monitoring

Or from within Suite:

  1. Click the apps menu (⋮⋮⋮)
  2. Select Monitoring

Real-Time Metrics

Active Sessions Panel

Displays current conversation sessions:

Active Sessions: 12
Peak Today: 47
Avg Duration: 8m 32s
Trend: ↑ +3 in last hour

Messages Panel

Shows message throughput:

Today: 1,234 messages
This Hour: 89
Avg Response: 1.2s
Rate: 14.8 msg/min

Resource Utilization

Real-time system resources:

Resource | Current | Threshold
CPU | 65% | Warning > 80%
Memory | 72% | Warning > 85%
GPU | 45% | Warning > 90%
Disk | 28% | Warning > 90%

Service Health Status

Each service has a status indicator:

Service | Status | Health Check
PostgreSQL | 🟢 Running | Connection pool, query latency
Qdrant | 🟢 Running | Vector count, search time
MinIO | 🟢 Running | Storage usage, object count
BotModels | 🟢 Running | Token usage, response time
Cache | 🟢 Running | Hit rate, memory usage
Vault | 🟢 Running | Seal status, policy count

Status Indicators

Status | Color | Animation
Running | 🟢 Green | Gentle pulse
Warning | 🟡 Amber | Fast pulse
Stopped | 🔴 Red | No animation

Live Data Endpoints

The dashboard pulls real data from these HTMX endpoints:

Endpoint | Interval | Data
/api/monitoring/metric/sessions | 5s | Session count, trend
/api/monitoring/metric/messages | 10s | Message count, rate
/api/monitoring/metric/response_time | 10s | Avg response time
/api/monitoring/resources/bars | 15s | CPU, memory, GPU, disk
/api/monitoring/services/status | 30s | Service health JSON
/api/monitoring/activity/latest | 5s | Activity ticker text
/api/monitoring/bots/active | 30s | Active bot list
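
Each panel is plain HTMX polling one of these endpoints. A sketch of how the sessions metric could be wired, based on the table above (the element id is illustrative):

<div id="metric-sessions"
     hx-get="/api/monitoring/metric/sessions"
     hx-trigger="load, every 5s"
     hx-swap="innerHTML">
    Loading...
</div>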

API Access

Full Status Endpoint

GET /api/monitoring/status

Returns complete system status:

{
  "sessions": {
    "active": 12,
    "peak_today": 47,
    "avg_duration_seconds": 512
  },
  "messages": {
    "today": 1234,
    "this_hour": 89,
    "avg_response_ms": 1200
  },
  "resources": {
    "cpu_percent": 65,
    "memory_percent": 72,
    "gpu_percent": 45,
    "disk_percent": 28
  },
  "services": {
    "postgresql": "running",
    "qdrant": "running",
    "cache": "running",
    "drive": "running",
    "botmodels": "running",
    "vault": "running"
  }
}
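
You can also script against this endpoint directly. For instance, a quick health probe with curl and jq:

# Show service health and current CPU load
curl -s http://localhost:9000/api/monitoring/status | jq '.services'
curl -s http://localhost:9000/api/monitoring/status | jq '.resources.cpu_percent'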

Active Bots Endpoint

GET /api/monitoring/bots

Returns list of deployed bots with metrics:

{
  "bots": [
    {
      "name": "default",
      "status": "active",
      "sessions_today": 34,
      "messages_today": 567,
      "avg_response_ms": 980
    },
    {
      "name": "support",
      "status": "active",
      "sessions_today": 12,
      "messages_today": 234,
      "avg_response_ms": 1100
    }
  ]
}

Historical Data

GET /api/monitoring/history?period=24h

Returns time-series data for charting.

Prometheus Export

GET /api/monitoring/prometheus

Returns metrics in Prometheus format for external monitoring systems.


View Modes

Toggle between two views using the grid button or press V:

Live View (Default)

The animated SVG visualization showing the complete system topology with flowing data packets. This is the recommended view for operations dashboards.

Grid View

Traditional panel-based layout with detailed metrics:

  • Sessions Panel: Active, peak, average duration
  • Messages Panel: Counts, rates, response times
  • Resources Panel: Progress bars with thresholds
  • Services Panel: Health status for each component
  • Bots Panel: List of active bots with metrics

Keyboard Shortcuts

Shortcut | Action
V | Toggle between Live and Grid view
R | Refresh all metrics immediately
F | Toggle fullscreen mode
? | Show keyboard shortcuts

Alert Configuration

Configure alert thresholds in config.csv:

key,value
alert-cpu-threshold,80
alert-memory-threshold,85
alert-disk-threshold,90
alert-response-time-ms,5000
alert-email,admin@example.com

When thresholds are exceeded:

  1. Service status turns amber/red
  2. Alert notification sent to configured email
  3. Activity ticker shows alert message

Console Mode Monitoring

For terminal-based monitoring or headless servers:

./botserver --console --monitor

Output:

[MONITOR] 2025-01-15 14:32:00
Sessions: 12 active (peak: 47)
Messages: 1,234 today (89/hour)
CPU: 65% | MEM: 72% | GPU: 45%
Services: 6/6 running
Latest: User enrolled in Computer Science course

Component Health Details

Component | Metrics Monitored | Warning Signs
PostgreSQL | Connection count, query rate, replication lag | > 80 connections, queries > 100ms
Qdrant | Vector count, search latency, memory | > 50ms search, > 80% memory
MinIO | Storage usage, object count, bandwidth | > 80% storage, high error rate
BotModels | Token usage, response latency, queue depth | > 2s response, queue > 10
Vault | Seal status, policy count, auth failures | Sealed, repeated auth failures
Cache | Hit rate, memory usage, evictions | < 80% hit rate, frequent evictions

Best Practices

  1. Keep monitoring visible — Use a dedicated screen or dashboard monitor for operations
  2. Set appropriate thresholds — Configure alerts before issues become critical
  3. Watch data flow — Animated packets indicate active communication between components
  4. Monitor trends — The session trend indicator (↑/↓) shows direction of change
  5. Use historical data — Query /api/monitoring/history for trend analysis
  6. Enable Prometheus export — Integrate with existing monitoring infrastructure

Troubleshooting

Dashboard Not Loading

  1. Check browser console for errors
  2. Verify /api/monitoring/status returns data
  3. Ensure WebSocket connection is established
  4. Refresh the page

Metrics Showing “–”

  1. Wait 5-10 seconds for initial data load
  2. Check network tab for failed API requests
  3. Verify all services are running
  4. Check botserver logs for errors

Animations Stuttering

  1. Close unused browser tabs
  2. Enable hardware acceleration in browser settings
  3. Use Grid view for lower resource usage
  4. Check if system CPU is overloaded

Service Showing Red

  1. Check service-specific logs in botserver-stack/logs/
  2. Verify Vault is unsealed
  3. Check database connection limits
  4. Restart the affected service

HTMX Architecture

Overview

General Bots Suite uses HTMX for its user interface - a modern approach that delivers the interactivity of a single-page application without the complexity of JavaScript frameworks like React, Vue, or Angular.

Why HTMX?

  • Simpler code, easier maintenance
  • Server-rendered HTML (fast, SEO-friendly)
  • Progressive enhancement
  • No build step required
  • Smaller payload than SPA frameworks

How HTMX Works

Traditional Web vs HTMX

Traditional (Full Page Reload):

User clicks → Browser requests full page → Server returns entire HTML → Browser replaces everything

HTMX (Partial Update):

User clicks → HTMX requests fragment → Server returns HTML snippet → HTMX updates only that part

Core Concept

HTMX extends HTML with attributes that define:

  1. What triggers the request (hx-trigger)
  2. Where to send it (hx-get, hx-post)
  3. What to update (hx-target)
  4. How to update it (hx-swap)

HTMX Attributes Reference

Request Attributes

Attribute | Purpose | Example
hx-get | GET request to URL | hx-get="/api/tasks"
hx-post | POST request | hx-post="/api/tasks"
hx-put | PUT request | hx-put="/api/tasks/1"
hx-patch | PATCH request | hx-patch="/api/tasks/1"
hx-delete | DELETE request | hx-delete="/api/tasks/1"

Trigger Attributes

Attribute | Purpose | Example
hx-trigger | Event that triggers request | hx-trigger="click"
hx-trigger | Load on page | hx-trigger="load"
hx-trigger | Periodic polling | hx-trigger="every 5s"
hx-trigger | Keyboard event | hx-trigger="keyup changed delay:300ms"

Target & Swap Attributes

Attribute | Purpose | Example
hx-target | Element to update | hx-target="#results"
hx-swap | How to insert content | hx-swap="innerHTML"
hx-swap | Replace entire target element | hx-swap="outerHTML"
hx-swap | Append inside target | hx-swap="beforeend"
hx-swap | Prepend inside target | hx-swap="afterbegin"

Suite Architecture

File Structure

ui/suite/
├── index.html          # Main entry point with navigation
├── base.html           # Base template
├── home.html           # Home page
├── default.gbui        # Full desktop layout
├── single.gbui         # Simple chat layout
├── designer.html       # Visual dialog designer
├── editor.html         # Code editor
├── settings.html       # User settings
├── css/
│   ├── app.css         # Application styles
│   ├── apps-extended.css # Extended app styles
│   ├── components.css  # UI components
│   └── global.css      # Global styles
├── js/
│   ├── htmx-app.js     # HTMX application logic
│   ├── theme-manager.js # Theme switching
│   └── vendor/         # Third-party libraries
├── partials/           # Reusable HTML fragments
├── auth/               # Authentication views
├── attendant/          # Attendant interface
├── chat/
│   ├── chat.html       # Chat component
│   ├── chat.css        # Chat styles
│   └── projector.html  # Projector view
├── drive/              # File manager
├── tasks/              # Task manager
├── mail/               # Email client
├── calendar/           # Calendar view
├── meet/               # Video meetings
├── paper/              # Document editor
├── research/           # AI search
├── analytics/          # Dashboards
├── sources/            # Prompts & templates
├── tools/              # Developer tools
└── monitoring/         # System monitoring

Loading Pattern

The Suite uses lazy loading - components load only when needed:

<!-- Main navigation in index.html -->
<a href="#chat" 
   data-section="chat"
   hx-get="/ui/suite/chat/chat.html" 
   hx-target="#main-content"
   hx-swap="innerHTML">
    Chat
</a>

When user clicks “Chat”:

  1. HTMX requests /ui/suite/chat/chat.html
  2. Server returns the Chat HTML fragment
  3. HTMX inserts it into #main-content
  4. Only Chat code loads, not entire app

Component Patterns

1. Load on Page View

<!-- Tasks load immediately when component is shown -->
<div id="task-list"
     hx-get="/api/tasks"
     hx-trigger="load"
     hx-swap="innerHTML">
    <div class="loading">Loading tasks...</div>
</div>

2. Form Submission

<!-- Add task form -->
<form hx-post="/api/tasks"
      hx-target="#task-list"
      hx-swap="afterbegin"
      hx-on::after-request="this.reset()">
    <input type="text" name="text" placeholder="New task..." required>
    <button type="submit">Add</button>
</form>

Flow:

  1. User types task, clicks Add
  2. HTMX POSTs form data to /api/tasks
  3. Server creates task, returns HTML for new task item
  4. HTMX inserts at beginning of #task-list
  5. Form resets automatically

3. Click Actions

<!-- Task item with actions -->
<div class="task-item" id="task-123">
    <input type="checkbox" 
           hx-patch="/api/tasks/123"
           hx-vals='{"completed": true}'
           hx-target="#task-123"
           hx-swap="outerHTML">
    <span>Review quarterly report</span>
    <button hx-delete="/api/tasks/123"
            hx-target="#task-123"
            hx-swap="outerHTML"
            hx-confirm="Delete this task?">
        🗑
    </button>
</div>

4. Search with Debounce

<!-- Search input with 300ms delay -->
<input type="text" 
       name="q"
       placeholder="Search..."
       hx-get="/api/search"
       hx-trigger="keyup changed delay:300ms"
       hx-target="#search-results"
       hx-indicator="#search-spinner">

<span id="search-spinner" class="htmx-indicator">🔄</span>
<div id="search-results"></div>

Flow:

  1. User types in search box
  2. After 300ms of no typing, HTMX sends request
  3. Spinner shows during request
  4. Results replace #search-results content

5. Real-time Updates (WebSocket)

<!-- Chat with WebSocket -->
<div id="chat-app" hx-ext="ws" ws-connect="/ws">
    <div id="messages"
         hx-get="/api/sessions/current/history"
         hx-trigger="load"
         hx-swap="innerHTML">
    </div>
    
    <form ws-send>
        <input name="content" type="text">
        <button type="submit">Send</button>
    </form>
</div>

Flow:

  1. WebSocket connects on load
  2. History loads via HTMX GET
  3. New messages sent via WebSocket (ws-send)
  4. Server pushes updates to all connected clients

6. Polling for Updates

<!-- Analytics that refresh every 30 seconds -->
<div class="metric-card"
     hx-get="/api/analytics/messages/count"
     hx-trigger="load, every 30s"
     hx-swap="innerHTML">
    <!-- Content updates automatically -->
</div>

7. Infinite Scroll

<!-- File list with infinite scroll -->
<div id="file-list">
    <!-- Files here -->
    
    <div hx-get="/api/files?page=2"
         hx-trigger="revealed"
         hx-swap="afterend">
        Loading more...
    </div>
</div>

API Response Patterns

Server Returns HTML Fragments

The server doesn’t return JSON - it returns ready-to-display HTML:

Request:

GET /api/tasks

Response:

<div class="task-item" id="task-1">
    <input type="checkbox">
    <span>Review quarterly report</span>
</div>
<div class="task-item" id="task-2">
    <input type="checkbox">
    <span>Update documentation</span>
</div>

Swap Strategies

Strategy | Effect
innerHTML | Replace contents of target
outerHTML | Replace entire target element
beforeend | Append inside target (at end)
afterbegin | Prepend inside target (at start)
beforebegin | Insert before target
afterend | Insert after target
delete | Delete target element
none | Don’t swap (for side effects)

CSS Integration

Loading Indicators

/* Hidden by default */
.htmx-indicator {
    display: none;
}

/* Shown during request */
.htmx-request .htmx-indicator {
    display: inline-block;
}

/* Or when indicator IS the requesting element */
.htmx-request.htmx-indicator {
    display: inline-block;
}

Transition Effects

/* Fade in new content */
.htmx-settling {
    opacity: 0;
}

.htmx-swapping {
    opacity: 0;
    transition: opacity 0.2s ease-out;
}

JavaScript Integration

HTMX Events

// After any HTMX swap
document.body.addEventListener('htmx:afterSwap', (e) => {
    console.log('Content updated:', e.detail.target);
});

// Before request
document.body.addEventListener('htmx:beforeRequest', (e) => {
    console.log('Sending request to:', e.detail.pathInfo.path);
});

// After request completes
document.body.addEventListener('htmx:afterRequest', (e) => {
    if (e.detail.successful) {
        console.log('Request succeeded');
    } else {
        console.error('Request failed');
    }
});

// On WebSocket message
document.body.addEventListener('htmx:wsAfterMessage', (e) => {
    console.log('Received:', e.detail.message);
});

Triggering HTMX from JavaScript

// Trigger an HTMX request programmatically
htmx.trigger('#task-list', 'load');

// Make an AJAX request
htmx.ajax('GET', '/api/tasks', {
    target: '#task-list',
    swap: 'innerHTML'
});

// Process new HTMX content
htmx.process(document.getElementById('new-content'));

Designer Page Architecture

The visual dialog designer uses a hybrid approach:

Canvas Management (JavaScript)

// State managed in JavaScript
const state = {
    nodes: new Map(),      // Node data
    connections: [],       // Connections between nodes
    zoom: 1,               // Canvas zoom level
    pan: { x: 0, y: 0 }    // Canvas position
};

File Operations (HTMX)

<!-- Load file via HTMX -->
<button hx-get="/api/v1/designer/files"
        hx-target="#file-list-content">
    Open File
</button>

<!-- Save via HTMX -->
<button hx-post="/api/v1/designer/save"
        hx-include="#designer-data">
    Save
</button>

Drag-and-Drop (JavaScript)

// Toolbox items are draggable
toolboxItems.forEach(item => {
    item.addEventListener('dragstart', (e) => {
        e.dataTransfer.setData('nodeType', item.dataset.nodeType);
    });
});

// Canvas handles drop
canvas.addEventListener('drop', (e) => {
    const nodeType = e.dataTransfer.getData('nodeType');
    createNode(nodeType, e.clientX, e.clientY);
});

Performance Considerations

1. Minimize Request Size

Return only what’s needed:

<!-- Good: Return just the updated row -->
<tr id="row-123">...</tr>

<!-- Bad: Return entire table -->
<table>...</table>

2. Use Appropriate Triggers

<!-- Don't poll too frequently -->
hx-trigger="every 30s"  <!-- Good for dashboards -->
hx-trigger="every 1s"   <!-- Too frequent! -->

<!-- Debounce user input -->
hx-trigger="keyup changed delay:300ms"  <!-- Good -->
hx-trigger="keyup"                       <!-- Too many requests -->

3. Lazy Load Heavy Content

<!-- Load tab content only when tab is clicked -->
<div role="tabpanel" 
     hx-get="/api/heavy-content"
     hx-trigger="intersect once">
</div>

4. Use hx-boost for Navigation

<!-- Boost all links in nav -->
<nav hx-boost="true">
    <a href="/page1">Page 1</a>  <!-- Now uses HTMX -->
    <a href="/page2">Page 2</a>
</nav>

Security

CSRF Protection

HTMX can be configured to include a CSRF token on every request:

<meta name="csrf-token" content="abc123...">

// Configure HTMX to send the CSRF token with each request
document.body.addEventListener('htmx:configRequest', (e) => {
    e.detail.headers['X-CSRF-Token'] = document.querySelector('meta[name="csrf-token"]').content;
});

Content Security

  • Server validates all inputs
  • HTML is sanitized before rendering
  • Authentication checked on every request

Comparison: HTMX vs React

Aspect | HTMX | React
Learning Curve | Low (HTML attributes) | High (JSX, hooks, state)
Bundle Size | ~14KB | ~40KB + app code
Build Step | None | Required
Server Load | More (renders HTML) | Less (returns JSON)
Client Load | Less | More
SEO | Excellent | Requires SSR
Complexity | Simple | Complex
Best For | Content sites, dashboards | Complex SPAs, offline apps

Suite - Full Desktop Interface

Complete productivity suite with integrated applications

Suite Interface Screen

Overview

The Suite provides a complete desktop interface with multiple integrated applications for web, desktop, and mobile platforms. It serves as the primary interface for General Bots, combining AI-powered chat with productivity tools.


Features

Multi-Application Layout

The Suite includes integrated applications:

App | Purpose
💬 Chat | AI assistant conversations
📁 Drive | File management
⚡ Tasks | LLM-powered intelligent execution
✉ Mail | Email client
📅 Calendar | Scheduling
🎥 Meet | Video calls
🎬 Player | Media viewer
📝 Paper | AI writing
🔍 Research | AI search
📊 Analytics | Dashboards

Responsive Design

The Suite adapts to any screen size:

  • Desktop (>1024px): Full multi-panel layout with persistent navigation
  • Tablet (768-1024px): Collapsible sidebar with touch-optimized controls
  • Mobile (<768px): Single column with bottom navigation and swipe gestures

Theme Support

  • Light and dark mode
  • Custom color schemes via .gbtheme
  • Automatic system preference detection

Header Bar

The header provides quick access to:

  • Logo and branding
  • App selector grid
  • Theme switcher
  • User menu and settings

App Launcher

Click the grid icon (⋮⋮⋮) to access all applications in a dropdown menu.

Keyboard Shortcuts

Shortcut | Action
Alt+1 | Chat
Alt+2 | Drive
Alt+3 | Tasks
Alt+4 | Mail
Alt+5 | Calendar
Alt+6 | Meet
Esc | Close menus

Usage

As Desktop App

Use the botui Tauri application for native desktop experience:

# From the botui directory
cd botui
cargo tauri dev
# Opens Suite in native window

Or build the standalone app:

cd botui
cargo tauri build
# Creates distributable app in target/release

As Web Interface

Default template for browser access:

./botserver
# Browse to http://localhost:9000
# Loads Suite interface

As Mobile PWA

Install as Progressive Web App:

  1. Open in mobile browser
  2. Add to home screen
  3. Launches as standalone app

Example Workflows

Typical workflows include quick task creation, file sharing, and meeting scheduling.


Performance

Metric | Target
Initial Load | < 200KB
WebSocket Latency | < 50ms
Touch Response | 60fps animations
Offline Support | Service worker caching

Browser Support

  • Chrome/Edge 90+
  • Firefox 88+
  • Safari 14+
  • Mobile browsers (iOS Safari, Chrome Mobile)

Configuration

Customize Suite behavior in config.csv:

key,value
theme-color1,#0d2b55
theme-color2,#e3f2fd
theme-title,My Company Suite
theme-logo,https://example.com/logo.svg
suite-default-app,chat
suite-sidebar-collapsed,false

Vibe — AI Development Environment

Chat-driven coding, deployment, and infrastructure management

Vibe Interface Screen

Overview

Vibe is the integrated development environment inside General Bots Suite. Describe what you want to build in plain language and Mantis AI agents generate task nodes, write code, run commands, and deploy — all from a single interface.


Features

Chat-Driven Development

Type a request in the chat panel. Mantis #1 classifies the intent via POST /api/autotask/classify, generates a plan, and creates task nodes on the canvas.

Canvas (Task Nodes)

Each task is represented as a node on the horizontal canvas showing:

  • File count, estimated time, token usage
  • Status (Planning → In Progress → Done)
  • Sub-tasks (expandable file list)
  • Details button — fetches full task info from GET /api/autotask/tasks/:id
  • Delete button — removes node from canvas

Canvas state is persisted in localStorage (vibe-canvas-nodes) and restored on page load.

Command Palette

Press Cmd+K (or Ctrl+K) to open the command palette:

Command | Action
New file | Opens editor panel
Open terminal | Opens terminal panel
Git status | Opens git panel
Database schema | Opens database panel
Clear canvas | Removes all task nodes
Deploy | Triggers deployment

Press Escape to close.

Monaco Editor

Full code editor with:

  • File tree sidebar → GET /api/editor/files
  • Click to open files → GET /api/editor/file/*path
  • Ctrl+S to save → POST /api/editor/file/*path
  • Syntax highlighting for Rust, JS, HTML, CSS, TOML

Terminal

Embedded xterm.js terminal connected via WebSocket → /api/terminal/ws.

Create, list, and kill terminal sessions via POST /api/terminal/create, GET /api/terminal/list, POST /api/terminal/kill.

Database Tool

  • ER diagram of all tables
  • Table viewer with pagination → GET /api/database/table/:name/data
  • SQL query builder → POST /api/database/query
  • Row insert/update/delete → POST /api/database/table/:name/row

Git Integration

  • Status and diff viewer → GET /api/git/status, GET /api/git/diff/:file
  • Commit → POST /api/git/commit
  • Push → POST /api/git/push
  • Branch management → GET /api/git/branches, POST /api/git/branch/:name
  • Log → GET /api/git/log

Deployment

Click Deploy to trigger POST /api/bots/:id/deploy. Real-time progress streams via the task progress WebSocket, shown in the chat panel.


Enabling Vibe

Vibe is always available in the suite — no feature gate required. Access it from the desktop icon or via http://localhost:3000/suite/vibe.


See Also

  • Tasks — AutoTask system that powers Vibe
  • Designer — Visual bot designer
  • Drive — File storage backing the editor

Chat - AI Assistant

Your intelligent conversation partner

Chat Interface Screen

Overview

Chat is the heart of General Bots Suite - your AI-powered assistant that understands context, remembers conversations, and helps you get things done. Built with WebSocket for real-time communication and HTMX for seamless updates.


Features

Real-Time Messaging

Messages are sent and received instantly via WebSocket connection.

Voice Input

Click the microphone button to speak your message:

  1. Click 🎤 to start recording
  2. Speak your message clearly
  3. Click again to stop
  4. Message converts to text automatically

Quick Suggestions

Pre-built action chips for common requests:

Chip | Action
📊 Tasks | Show your task list
📧 Check mail | Display unread emails
📅 Schedule | Today’s calendar
❓ Help | Available commands

Message History

  • Auto-loads previous messages on page open
  • Scroll up to load older messages
  • Click “Scroll to bottom” button to return to latest

Markdown Support

Bot responses support full Markdown rendering:

  • Bold and italic text
  • code snippets and code blocks
  • Bullet and numbered lists
  • Links and images
  • Tables

Keyboard Shortcuts

Shortcut | Action
Enter | Send message
Shift+Enter | New line (without sending)
↑ (Up arrow) | Edit last message
/ | Open command menu
Escape | Cancel current action

API Endpoints

WebSocket Connection

ws://your-server:9000/ws

Message Types:

  • TEXT (1) - Regular text messages
  • VOICE (2) - Voice messages
  • CONTINUE (3) - Continue interrupted responses
  • CONTEXT (4) - Context changes
  • SYSTEM (5) - System messages

REST Endpoints

Endpoint | Method | Description
/api/sessions | GET | List sessions
/api/sessions/current/message | POST | Send message (current session)
/api/sessions/current/history | GET | Get chat history (current session)
/api/sessions/:id | GET | Get session details
/api/sessions/:id/history | GET | Get chat history by ID
/api/sessions/:id/start | POST | Start session
/api/sessions/:id/end | POST | End session
/api/voice/start | POST | Start voice recording
/api/voice/stop | POST | Stop voice recording
/api/voice/status | GET | Get voice status

Note: The frontend uses /api/sessions/current/* endpoints which resolve to the active session automatically.
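
As a sketch, sending a message to the active session from the command line (the content field matches the chat form’s input name; auth is omitted):

curl -X POST http://localhost:9000/api/sessions/current/message \
  -H 'Content-Type: application/json' \
  -d '{"content": "Show my tasks for today"}'

# Then read the conversation back
curl http://localhost:9000/api/sessions/current/history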


Example Conversations

Typical conversations include getting information, creating tasks, and drafting emails.


Accessibility

  • Full keyboard navigation
  • Screen reader announcements for new messages
  • High contrast mode support
  • Adjustable font sizes
  • ARIA labels on all interactive elements

Troubleshooting

Messages Not Sending

  1. Check connection status indicator
  2. Verify WebSocket is connected
  3. Try refreshing the page
  4. Check browser console for errors

Voice Not Working

  1. Allow microphone permissions in browser
  2. Check device microphone settings
  3. Try a different browser
  4. Ensure HTTPS connection (required for voice)

History Not Loading

  1. Check network connection
  2. Verify API endpoint is accessible
  3. Clear browser cache
  4. Check for JavaScript errors

Drive - File Management

Your cloud storage workspace

Drive Interface Screen

Overview

Drive is your personal cloud storage within General Bots Suite. Upload, organize, and share files with a familiar interface. Built with HTMX for smooth interactions and SeaweedFS for reliable object storage.


Features

Upload Files

Drag and Drop:

  1. Drag files from your computer
  2. Drop anywhere in the file area
  3. Upload progress shows automatically

Click to Upload:

  1. Click + New button
  2. Select Upload Files or Upload Folder
  3. Choose files from file picker

File Operations

Action | How to Access
Open | Double-click file
Download | Right-click > Download
Rename | Right-click > Rename
Copy | Right-click > Copy
Move | Right-click > Move to
Star | Right-click > Star
Share | Right-click > Share
Delete | Right-click > Delete

View Modes

Mode | Description
Grid | Large thumbnails with previews
List | Detailed table with columns

  • Breadcrumb: Click any folder in the path to jump back
  • Sidebar: Quick access to My Drive, Starred, Recent, Trash
  • Search: Find files by name or content

Labels & Organization

Label | Icon | Use For
Work | 🔵 | Professional files
Personal | 🟢 | Private documents
Projects | 🟡 | Project-specific files
Custom | 🟣 | Create your own

File Sync (Desktop Only)

The desktop app provides bidirectional file synchronization between your local machine and cloud Drive using rclone.

Requirements:

  • General Bots desktop app (Tauri)
  • rclone installed on your system

Setup:

  1. Install rclone: https://rclone.org/install/
  2. Open Drive in the desktop app
  3. Click SettingsSync
  4. Configure your sync folder (default: ~/GeneralBots)
  5. Click Start Sync

Sync Controls: Located in the Drive sidebar under “Sync Status”

Control | Description
Start | Begin synchronization
Stop | Stop current sync
Status | Shows idle, syncing, or error

Sync Modes:

Mode | Description
Push | Local → Cloud only
Pull | Cloud → Local only
Bisync | Bidirectional (default)

Note: Sync features are only available in the desktop app. Web users see an “unavailable” status as sync requires local filesystem access.


Keyboard Shortcuts

Shortcut | Action
Enter | Open selected file
Delete | Move to trash
Ctrl+C | Copy selected
Ctrl+V | Paste
Ctrl+X | Cut selected
Ctrl+A | Select all
F2 | Rename selected
Ctrl+F | Search files
Ctrl+U | Upload files
↑ / ↓ | Navigate files

Managing Files via Chat

From the chat you can find files, share files, and create folders using plain-language requests.


API Endpoints

Endpoint | Method | Description
/api/drive/list | GET | List files
/api/drive/upload | POST | Upload file
/api/drive/download/:path | GET | Download file
/api/drive/delete/:path | DELETE | Delete file
/api/drive/move | POST | Move/rename file
/api/drive/copy | POST | Copy file
/api/drive/mkdir | POST | Create folder
/api/drive/share | POST | Share file

Query Parameters

Parameter | Values | Default
path | Folder path | /
sort | name, size, modified | name
order | asc, desc | asc
view | grid, list | grid
filter | starred, recent, trash | none
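
Combining these, a filtered listing request looks like the following (illustrative):

curl 'http://localhost:9000/api/drive/list?path=/Projects&sort=modified&order=desc&filter=starred'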

Response Format

{
    "path": "/Projects/2024",
    "files": [
        {
            "name": "Report.pdf",
            "type": "file",
            "size": 245000,
            "modified": "2024-03-15T10:30:00Z",
            "starred": false,
            "shared": true
        },
        {
            "name": "Documents",
            "type": "folder",
            "modified": "2024-03-14T09:00:00Z",
            "starred": true
        }
    ],
    "storage": {
        "used": 4500000000,
        "total": 10737418240
    }
}

File Type Icons

Type | Extensions | Icon
Document | .pdf, .doc, .docx | 📄
Spreadsheet | .xls, .xlsx, .csv | 📊
Presentation | .ppt, .pptx | 📽️
Image | .jpg, .png, .gif, .svg | 🖼️
Video | .mp4, .webm, .mov | 🎬
Audio | .mp3, .wav, .ogg | 🎵
Archive | .zip, .tar, .gz | 📦
Code | .js, .py, .rs, .html | 💻
Folder | - | 📁

Storage Backend

Drive uses SeaweedFS for object storage:

  • Scalable: Grows with your needs
  • Redundant: Data replicated across nodes
  • Fast: Optimized for small and large files
  • S3 Compatible: Works with standard S3 tools

Configure storage in config.csv:

key,value
drive-server,http://localhost:9000
drive-bucket,bot-files
drive-quota-gb,10

Troubleshooting

Upload Fails

  1. Check file size (default limit: 100MB)
  2. Verify storage quota isn’t exceeded
  3. Check network connection
  4. Try smaller files or compress first

Files Not Displaying

  1. Refresh the page
  2. Check folder path is correct
  3. Verify file permissions
  4. Clear browser cache

Sharing Not Working

  1. Verify recipient email address
  2. Check sharing permissions
  3. Ensure file isn’t in Trash

Tasks - Autonomous App Builder

Describe what you want, get a working application

Tasks Interface Screen

Overview

Tasks is the command center for autonomous application creation in General Bots Suite. Instead of coding, you describe what you need in plain language, and the system builds complete, working applications with databases, UI, and automation.

This is not a simple to-do list—it’s an AI-powered development environment that understands intent and executes multi-step plans to create real applications.


Features

Natural Language App Creation

Describe your application in plain English, for example: “CRM for cellphone store with customer tracking, repair status, and inventory”.

Intent Classification

The system automatically classifies your request and routes it appropriately:

Intent Type | Example Request | What Gets Created
APP_CREATE | “Create app for clinic” | Full HTMX application with tables
TODO | “Call John tomorrow at 3pm” | Task entry with reminder
MONITOR | “Alert me when stock is low” | ON CHANGE event handler
ACTION | “Email all customers now” | Immediate execution
SCHEDULE | “Daily sales report at 9am” | SET SCHEDULE automation
GOAL | “Increase response time by 20%” | Autonomous optimization loop
TOOL | “When I say ‘check stock’, show inventory” | Voice/chat command

Real-Time Progress Tracking

Watch your application being built step by step:

Status | Meaning
Completed | Step finished successfully
Running | Currently executing
Pending | Waiting to start
Attention | Needs user input
Failed | Error occurred

Execution Modes

Choose how much control you want:

Mode | Description | Best For
Automatic | Executes all steps without confirmation | Trusted, simple tasks
Supervised | Asks approval before each step | Learning, reviewing changes
Dry Run | Shows what would happen without executing | Testing, validation

Task Types

Task Type | Description
To-Do | Simple reminders and action items
Application | Full application generation
Scheduled | Recurring automations
Monitor | Event-driven triggers
Goal | Autonomous optimization with metrics


Generated Applications

When you create an app, it generates a complete structure:

.gbdrive/apps/cellphone-crm/
├── index.html          # Main HTMX application
├── _assets/
│   ├── htmx.min.js     # HTMX library
│   └── styles.css      # Application styles
└── schema.json         # Table definitions

Direct API Access

Generated apps use HTMX to communicate directly with botserver:

Operation | HTMX Attribute | Example
List records | hx-get | hx-get="/api/db/customers"
Create record | hx-post | hx-post="/api/db/customers"
Update record | hx-put | hx-put="/api/db/customers/123"
Delete record | hx-delete | hx-delete="/api/db/customers/123"
Search | hx-get with trigger | hx-trigger="keyup changed delay:300ms"
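
A generated index.html therefore needs no custom JavaScript. A trimmed sketch of a list-plus-search fragment; the table name and element ids are illustrative:

<input type="text" name="q"
       placeholder="Search customers..."
       hx-get="/api/db/customers"
       hx-trigger="keyup changed delay:300ms"
       hx-target="#customer-rows">

<table>
    <tbody id="customer-rows"
           hx-get="/api/db/customers"
           hx-trigger="load"
           hx-swap="innerHTML">
    </tbody>
</table>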

Keyboard Shortcuts

Shortcut | Action
Enter | Add new task
Space | Toggle task complete
Delete | Delete selected task
S | Star/unstar task
E | Edit task
P | Set priority
D | Set due date
/ | Search tasks

API Endpoints

Endpoint | Method | Description
/api/tasks | GET | List all tasks
/api/tasks | POST | Create new task
/api/tasks/:id | GET | Get task details
/api/tasks/:id | PUT | Update task
/api/tasks/:id | DELETE | Delete task
/api/tasks/:id/execute | POST | Execute task plan
/api/tasks/:id/pause | POST | Pause running task
/api/tasks/:id/resume | POST | Resume paused task
/api/tasks/:id/cancel | POST | Cancel task
/api/tasks/:id/steps | GET | Get task steps
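
A typical lifecycle from the command line might look like this sketch; the creation payload field and the task id are assumptions:

# Create a task from a natural-language request
curl -X POST http://localhost:9000/api/tasks \
  -H 'Content-Type: application/json' \
  -d '{"text": "CRM for my repair shop with customer tracking"}'

# Execute the generated plan, then watch its steps
curl -X POST http://localhost:9000/api/tasks/42/execute
curl http://localhost:9000/api/tasks/42/steps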

Task Actions

Action | When Available | What It Does
Execute | Task planned | Start executing the plan
Pause | Task running | Temporarily stop execution
Resume | Task paused | Continue from last step
Cancel | Anytime | Stop and discard changes
Retry | Step failed | Retry the failed step
Modify | Task planned | Edit the plan before executing

Writing Effective Requests

Be Specific

✅ Good Request | ❌ Vague Request
“CRM for cellphone store with customer tracking, repair status, and inventory” | “Make an app”
“Inventory system with low stock alerts when below 10 units” | “Track stuff”
“Daily sales report emailed at 9am with revenue chart” | “Send reports”
“Alert when any customer hasn’t been contacted in 30 days” | “Monitor customers”

Include Context

  • What data? Customers, products, orders, etc.
  • What workflow? Status changes, approvals, notifications
  • What output? Reports, alerts, dashboards
  • What schedule? Daily, weekly, on-change

Examples

Cellphone Repair Shop

"CRM for my repair shop with:
- Customers (name, phone, email)
- Repairs with status: received, diagnosing, waiting parts, repairing, ready, delivered
- Parts inventory with low stock alerts
- Daily summary of pending repairs"

Restaurant Reservations

"Reservation system with:
- Tables (number, capacity, location)
- Reservations (date, time, party size, notes)
- Waitlist when fully booked
- SMS confirmation to customers"

Project Management

"Project tracker with:
- Projects (name, client, deadline)
- Tasks with assignees and status
- Time tracking per task
- Weekly progress report"

Troubleshooting

Task Stuck on “Running”

  1. Check the step details for errors
  2. Try pausing and resuming
  3. Check server logs for issues
  4. Cancel and retry with modified request

Generated App Not Working

  1. Verify tables were created in database
  2. Check browser console for JavaScript errors
  3. Ensure API endpoints are accessible
  4. Review generated HTML for issues

Intent Misclassified

  1. Be more explicit in your request
  2. Use keywords like “create app”, “remind me”, “every day”
  3. Break complex requests into smaller parts

Mail - Email Client

Your intelligent inbox

Mail Interface Screen

Overview

Mail is the email application in General Bots Suite. Read, compose, and organize your emails with AI assistance. Mail helps you write better emails, find important messages, and stay on top of your inbox without the clutter.


Features

Folders

Folder | Description
📥 Inbox | Incoming messages
⭐ Starred | Important emails
📤 Sent | Sent messages
📝 Drafts | Unsent drafts
🗑️ Trash | Deleted emails

Labels

Label | Icon | Use For
Urgent | 🔴 | Time-sensitive
Personal | 🟢 | Private emails
Work | 🔵 | Professional
Finance | 🟡 | Bills & money
Custom | 🟣 | Create your own

Email Actions

Action | Description
Reply | Respond to sender
Reply All | Respond to all recipients
Forward | Send to someone else
Archive | Remove from inbox
Star | Mark as important
Label | Organize with labels
Delete | Move to trash

Keyboard Shortcuts

Shortcut | Action
C | Compose new email
R | Reply
A | Reply all
F | Forward
E | Archive
S | Star/unstar
Delete | Move to trash
Ctrl+Enter | Send email
Escape | Close/cancel
↑ / ↓ | Navigate emails

Email via Chat

From the chat you can check your inbox, read and compose emails, get AI-assisted writing help, and search your messages.


API Endpoints

Endpoint | Method | Description
/api/email/inbox | GET | List inbox messages
/api/email/send | POST | Send email
/api/email/draft | POST | Save draft
/api/email/:id | GET | Read email
/api/email/:id | DELETE | Delete email
/api/email/:id/star | POST | Toggle star
/api/email/:id/label | POST | Add label
/api/email/search | GET | Search emails

Query Parameters

Parameter | Values | Default
folder | inbox, sent, drafts, trash, starred | inbox
label | Label name | none
unread | true, false | none
limit | 1-100 | 25
offset | Number | 0
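
For example, fetching the ten most recent unread inbox messages (illustrative):

curl 'http://localhost:9000/api/email/inbox?folder=inbox&unread=true&limit=10'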

Send Email Request

{
    "to": ["john@company.com"],
    "cc": [],
    "bcc": [],
    "subject": "Project Update",
    "body": "Hi John,\n\nHere's the latest update...",
    "attachments": ["file-id-123"]
}

Email Response

{
    "id": "msg-456",
    "from": "sarah@company.com",
    "to": ["you@company.com"],
    "subject": "Q2 Report Review",
    "body": "Hi,\n\nPlease review the attached...",
    "date": "2025-05-15T10:32:00Z",
    "read": false,
    "starred": true,
    "labels": ["work"],
    "attachments": [
        {
            "id": "att-789",
            "name": "Q2-Report.pdf",
            "size": 2457600
        }
    ]
}

Configuration

Configure email in config.csv:

key,value
smtp-server,smtp.gmail.com
smtp-port,587
imap-server,imap.gmail.com
imap-port,993
email-from,Your Name <you@gmail.com>

Note: Use app-specific passwords for Gmail, not your main password.


Troubleshooting

Emails Not Loading

  1. Check internet connection
  2. Verify email credentials
  3. Check IMAP settings
  4. Refresh the page

Send Fails

  1. Check recipient address
  2. Verify SMTP settings
  3. Check attachment size (max 25MB)
  4. Try again in a moment

Missing Emails

  1. Check spam/junk folder
  2. Verify filters aren’t hiding emails
  3. Check trash folder
  4. Sync may take a few minutes

Integration Features

Snooze

Hide an email until later. Click the snooze button in the toolbar to pick a time (later today, tomorrow, next week). The email reappears automatically at the chosen time via POST /api/email/snooze.
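
The snooze payload is not documented here; a plausible sketch, assuming the request carries the message id and the wake-up time, is:

POST /api/email/snooze
{
    "id": "msg-456",
    "until": "2025-05-16T09:00:00Z"
}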

CRM Panel

When viewing an email, the CRM panel automatically looks up the sender via GET /api/crm/contact/by-email/:email and shows linked deals. Click Log to CRM to record the email against a contact or opportunity.

AI Lead Suggestion

If the email looks like a sales inquiry, an AI banner appears offering to create a lead via POST /api/ai/extract-lead.

Campaign Actions

Add the sender to a marketing list directly from the email via POST /api/crm/lists/:id.

Smart Replies

AI-suggested short replies appear below the email content.

Enabling Mail

Add mail to apps= in botserver/.product:

apps=...,mail

See Also

Calendar - Scheduling

Your personal scheduling assistant

Calendar Interface Screen

Overview

Calendar is your scheduling hub in General Bots Suite. Create events, manage appointments, schedule meetings, and let the AI help you find the perfect time. Calendar syncs with your other apps so you never miss an important date.


Features

Views

View | Description
--- | ---
Day | Hourly breakdown of single day
Week | 7-day overview
Month | Full month grid
Agenda | List of upcoming events

Calendars

Calendar | Icon | Use For
--- | --- | ---
Personal | 🟢 | Private appointments
Work | 🔵 | Professional meetings
Team | 🟣 | Shared team events
Holidays | 🔴 | Public holidays

Event Types

Type | Icon | Description
--- | --- | ---
Meeting | 👥 | Group meetings
Call | 📞 | Phone/video calls
Reminder | 🔔 | Personal reminders
All-Day | 📅 | Full day events
Recurring | 🔄 | Repeating events

Keyboard Shortcuts

Shortcut | Action
--- | ---
C | Create new event
T | Go to today
D | Day view
W | Week view
M | Month view
← / → | Previous / Next period
Delete | Delete selected event
Enter | Open event details

Scheduling via Chat

Creating an Event

Checking Your Schedule

Finding Available Time

Rescheduling Events

Setting Reminders


API Endpoints

Endpoint | Method | Description
--- | --- | ---
/api/calendar/events | GET | List events
/api/calendar/events | POST | Create event
/api/calendar/events/:id | GET | Get event details
/api/calendar/events/:id | PATCH | Update event
/api/calendar/events/:id | DELETE | Delete event
/api/calendar/availability | GET | Check free/busy

Query Parameters

Parameter | Values | Default
--- | --- | ---
start | ISO date | Today
end | ISO date | +30 days
calendar | Calendar name | All
view | day, week, month | month
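
For instance, fetching this month's work events in week view (dates illustrative):

GET /api/calendar/events?start=2025-05-01&end=2025-05-31&calendar=work&view=week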

Create Event Request

{
    "title": "Team Meeting",
    "start": "2025-05-16T14:00:00Z",
    "end": "2025-05-16T15:00:00Z",
    "calendar": "work",
    "attendees": ["sarah@company.com"],
    "location": "Conference Room A",
    "reminder": 15,
    "recurrence": null
}

Event Response

{
    "id": "evt-123",
    "title": "Team Meeting",
    "start": "2025-05-16T14:00:00Z",
    "end": "2025-05-16T15:00:00Z",
    "calendar": "work",
    "attendees": [
        {
            "email": "sarah@company.com",
            "status": "accepted"
        }
    ],
    "location": "Conference Room A",
    "reminder": 15,
    "created": "2025-05-15T10:30:00Z"
}

Integration with Tasks

Tasks with due dates automatically appear on your calendar. When you complete a task, it’s marked as done on the calendar too.


Troubleshooting

Events Not Syncing

  1. Refresh the calendar
  2. Check internet connection
  3. Verify calendar is enabled in sidebar
  4. Wait a few minutes for sync

Can’t Create Events

  1. Verify you have write access to the calendar
  2. Check that start time is before end time
  3. Ensure date is not in the past

Missing Invitations

  1. Check spam/junk folder in email
  2. Verify attendee email addresses
  3. Check notification settings

See Also

Meet - Video Calls

Your virtual meeting room

Meet Interface Screen

Overview

Meet is the video conferencing app in General Bots Suite. Host video calls, share your screen, collaborate in real-time, and let the AI take notes for you. Meet integrates seamlessly with Calendar so joining meetings is just one click away.


Features

Meeting Controls

Control | Description
--- | ---
🎤 Mute | Toggle microphone
📹 Video | Toggle camera
🖥️ Share | Share screen
✋ Raise Hand | Get attention
💬 Chat | In-meeting chat
👥 Participants | View attendees
⚙️ Settings | Audio/video settings
📞 Leave | End call

Meeting Features

Feature | Description
--- | ---
Screen Sharing | Share your entire screen or specific window
Recording | Record meetings for later review
Chat | Send messages during the meeting
Reactions | Quick emoji reactions
Raise Hand | Queue to speak
Breakout Rooms | Split into smaller groups
AI Notes | Automatic meeting notes
Transcription | Live captions

Layouts

Layout | Best For
--- | ---
Gallery | Group discussions
Speaker | Presentations
Sidebar | Screen sharing
Spotlight | Featured speaker

Keyboard Shortcuts

Shortcut | Action
--- | ---
M | Toggle mute
V | Toggle video
S | Share screen
H | Raise/lower hand
C | Toggle chat
P | Toggle participants
R | Start/stop recording
Space | Push to talk (when muted)
Escape | Exit fullscreen

Meetings via Chat

Starting a Meeting

Inviting Participants

Scheduling a Meeting

Getting Meeting Notes

Recording Access


API Endpoints

Endpoint | Method | Description
--- | --- | ---
/api/meet/rooms | GET | List meeting rooms
/api/meet/rooms | POST | Create room
/api/meet/rooms/:id | GET | Get room details
/api/meet/rooms/:id/join | POST | Join meeting
/api/meet/rooms/:id/leave | POST | Leave meeting
/api/meet/rooms/:id/invite | POST | Invite participants
/api/meet/recordings | GET | List recordings
/api/meet/recordings/:id | GET | Get recording

Create Room Request

{
    "name": "Team Sync",
    "scheduled": "2025-05-16T15:00:00Z",
    "duration": 60,
    "participants": ["sarah@company.com", "john@company.com"],
    "settings": {
        "recording": true,
        "transcription": true,
        "waitingRoom": false
    }
}

Room Response

{
    "id": "room-abc123",
    "name": "Team Sync",
    "url": "https://meet.gb/abc-123",
    "host": "you@company.com",
    "scheduled": "2025-05-16T15:00:00Z",
    "status": "scheduled",
    "participants": [
        {
            "email": "sarah@company.com",
            "status": "invited"
        }
    ],
    "settings": {
        "recording": true,
        "transcription": true
    }
}

Configuration

Configure Meet in config.csv:

key,value
meet-provider,livekit
meet-server-url,wss://localhost:7880
meet-recording-enabled,true
meet-transcription-enabled,true
meet-max-participants,50

Requirements

Browser Support

  • Chrome 90+
  • Firefox 88+
  • Safari 14+
  • Edge 90+

Hardware

  • Webcam (optional)
  • Microphone
  • Speakers or headphones
  • Stable internet connection

Permissions

Allow browser access to:

  • Camera
  • Microphone
  • Screen sharing (when needed)

Troubleshooting

No Audio/Video

  1. Check browser permissions
  2. Verify correct device selected in settings
  3. Try refreshing the page
  4. Check if another app is using the camera

Poor Quality

  1. Check internet connection speed
  2. Close other bandwidth-heavy applications
  3. Try disabling HD video
  4. Move closer to your router

Can’t Join Meeting

  1. Verify the meeting link is correct
  2. Check if meeting has started
  3. Ensure you’re not blocked by waiting room
  4. Try a different browser

Recording Not Working

  1. Verify recording is enabled for the room
  2. Check storage quota
  3. Ensure you have host permissions

See Also

CRM - Customer Relationship Management

Manage your sales pipeline from lead to close


Overview

CRM is your complete sales management solution following Microsoft Dynamics nomenclature. Track leads through qualification, manage opportunities in a visual pipeline, and convert them to accounts and contacts when deals close.


Key Concepts (Dynamics Nomenclature)

Entity | Description
--- | ---
Lead | Unqualified prospect - someone who might become a customer
Opportunity | Qualified lead in the active sales process
Account | Company/organization (converted customer)
Contact | Person at an Account
Activity | Tasks, calls, emails linked to any entity

Entity Flow

Lead ──(qualify)──► Opportunity ──(convert)──► Account + Contact

Features

Pipeline View (Kanban)

The default view shows your sales pipeline as a Kanban board with drag-and-drop functionality:

Stage | Description
--- | ---
Lead | New unqualified prospects
Qualified | Leads that meet your criteria
Proposal | Opportunities with sent proposals
Negotiation | Active deal discussions
Won | Successfully closed deals
Lost | Deals that didn’t close

Drag cards between columns to update opportunity stages instantly.

Leads Management

Track and qualify incoming prospects:

  • Name - Contact name
  • Company - Organization name
  • Email - Primary contact email
  • Phone - Contact phone number
  • Source - Where the lead came from (web, referral, event, etc.)
  • Status - New, Contacted, Qualified

Opportunities

Manage active sales deals:

  • Opportunity Name - Deal identifier
  • Account - Associated company
  • Value - Expected deal amount
  • Stage - Current pipeline position
  • Probability - Win likelihood percentage
  • Expected Close - Target close date
  • Owner - Sales representative

Accounts

Company records for your customers:

  • Account Name - Company name
  • Industry - Business sector
  • Phone - Main phone number
  • City - Location
  • Annual Revenue - Company size indicator
  • Contacts - Number of associated contacts

Contacts

People at your customer accounts:

  • Name - Full name
  • Account - Associated company
  • Title - Job title/role
  • Email - Contact email
  • Phone - Direct phone

Pipeline Summary Metrics

Real-time dashboard showing:

Metric | Description
--- | ---
Total Pipeline Value | Sum of all active opportunity values
Conversion Rate | Percentage of leads that convert to wins
Avg Deal Size | Average value of won opportunities
Won This Month | Total value closed this month

Tab | View
--- | ---
Pipeline | Kanban board of opportunities
Leads | Table of all leads
Opportunities | Table of all opportunities
Accounts | Table of all accounts
Contacts | Table of all contacts

Email Tab

The Email tab in CRM shows all emails linked to the selected contact. Click any contact row to load their emails via GET /api/ui/email/list?contact_email=.

Click Compose Email to open the email modal, which submits via POST /api/crm/email/send. The email is automatically linked to the contact via email_crm_links.
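
The send payload likely mirrors Mail's Send Email Request; the contact_id field below is an assumption to illustrate the automatic linking:

POST /api/crm/email/send
{
    "to": ["jane@acme.com"],
    "subject": "Follow-up on proposal",
    "body": "Hi Jane, ...",
    "contact_id": "contact-123"
}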

Enabling CRM

Add crm to apps= in botserver/.product:

apps=...,crm

API Endpoints

Endpoint | Method | Description
--- | --- | ---
/api/crm/leads | GET | List leads with filters
/api/crm/leads | POST | Create new lead
/api/crm/leads/:id | GET | Get lead details
/api/crm/leads/:id | PUT | Update lead
/api/crm/leads/:id/qualify | POST | Qualify lead to opportunity
/api/crm/opportunities | GET | List opportunities
/api/crm/opportunities | POST | Create opportunity
/api/crm/opportunity/:id/stage | POST | Update opportunity stage
/api/crm/accounts | GET | List accounts
/api/crm/accounts | POST | Create account
/api/crm/contacts | GET | List contacts
/api/crm/contacts | POST | Create contact
/api/crm/pipeline | GET | Get pipeline data by stage
/api/crm/count | GET | Get counts by stage
/api/crm/stats/* | GET | Get various statistics
/api/crm/search | GET | Search across all CRM entities
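
For example, qualifying a lead promotes it to an opportunity per the entity flow above. The response shape below is an assumption for illustration:

POST /api/crm/leads/lead-123/qualify

{
    "opportunity_id": "opp-456",
    "stage": "Qualified"
}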

@ Mentions in Chat

Reference CRM entities directly in chat conversations:

Mention | Example
--- | ---
@lead: | @lead:John Smith
@opportunity: | @opportunity:Enterprise Deal
@account: | @account:Acme Corp
@contact: | @contact:Jane Doe

Hover over a mention to see entity details. Click to navigate to the record.


Keyboard Shortcuts

Shortcut | Action
--- | ---
N | New lead (when in CRM)
Escape | Close modal
/ | Focus search

Filtering Options

Leads Filter

  • All Leads
  • New
  • Contacted
  • Qualified

Best Practices

Lead Management

  1. Respond quickly - Follow up on new leads within 24 hours
  2. Qualify early - Move quality leads to Opportunities promptly
  3. Track source - Know where your best leads come from

Pipeline Health

  1. Update stages daily - Keep pipeline accurate
  2. Set realistic close dates - Update Expected Close as needed
  3. Review weekly - Identify stuck opportunities

Data Quality

  1. Complete profiles - Fill in all available information
  2. Link contacts to accounts - Maintain relationships
  3. Log activities - Track all customer interactions

See Also

  • Billing — Create invoices from won opportunities
  • Products — Add products to quotes and invoices
  • Analytics — CRM reports and dashboards
  • Tasks — Create follow-up tasks from CRM

Billing - Invoices, Payments & Quotes

Manage your financial transactions from quote to payment


Overview

Billing is your complete invoicing solution following Microsoft Dynamics nomenclature. Create quotes for opportunities, convert them to invoices, and track payments through to completion.


Key Concepts (Dynamics Nomenclature)

Entity | Description
--- | ---
Quote | Price quotation sent to prospect/customer
Invoice | Bill sent to customer for payment
Payment | Payment received against an invoice

Entity Flow

Quote ──(accept)──► Invoice ──(pay)──► Payment

Features

Invoices Management

Track all your billing documents:

  • Invoice Number - Unique identifier
  • Account - Customer being billed
  • Date - Invoice creation date
  • Due Date - Payment deadline
  • Amount - Total invoice value
  • Status - Draft, Sent, Paid, Overdue, Cancelled

Invoice Statuses

Status | Description
--- | ---
Draft | Invoice being prepared, not yet sent
Sent | Invoice delivered to customer
Paid | Payment received in full
Overdue | Past due date, unpaid
Cancelled | Invoice voided

Payments

Record and track incoming payments:

  • Payment ID - Unique identifier
  • Invoice - Associated invoice
  • Account - Paying customer
  • Date - Payment received date
  • Amount - Payment amount
  • Method - Payment method used

Payment Methods

Method | Description
--- | ---
Bank Transfer | Wire/ACH transfer
Credit Card | Card payment
PIX | Brazilian instant payment
Boleto | Brazilian bank slip
Cash | Cash payment

Quotes

Create proposals for potential deals:

  • Quote Number - Unique identifier
  • Account - Customer receiving quote
  • Opportunity - Associated sales opportunity
  • Date - Quote creation date
  • Valid Until - Expiration date
  • Amount - Total quoted value
  • Status - Draft, Sent, Accepted, Rejected, Expired

Summary Dashboard

Real-time financial metrics:

Metric | Description
--- | ---
Pending | Total value of unpaid invoices
Overdue | Total value past due date
Paid This Month | Payments received this month
Revenue This Month | Total revenue for current month

Tab | View
--- | ---
Invoices | All invoice records
Payments | Payment history
Quotes | Price quotations

API Endpoints

Endpoint | Method | Description
--- | --- | ---
/api/billing/invoices | GET | List invoices with filters
/api/billing/invoices | POST | Create new invoice
/api/billing/invoices/:id | GET | Get invoice details
/api/billing/invoices/:id | PUT | Update invoice
/api/billing/invoices/:id/send | POST | Mark invoice as sent
/api/billing/invoices/:id/cancel | POST | Cancel invoice
/api/billing/invoices/export | GET | Export invoices
/api/billing/payments | GET | List payments
/api/billing/payments | POST | Record new payment
/api/billing/quotes | GET | List quotes
/api/billing/quotes | POST | Create new quote
/api/billing/quotes/:id/accept | POST | Convert quote to invoice
/api/billing/search | GET | Search billing records
/api/billing/stats/* | GET | Get billing statistics
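
For example, accepting a quote converts it to an invoice per the entity flow above; the response fields shown are assumptions:

POST /api/billing/quotes/qt-2024-007/accept

{
    "invoice_id": "INV-2024-001",
    "status": "Draft"
}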

@ Mentions in Chat

Reference billing entities directly in chat:

Mention | Example
--- | ---
@invoice: | @invoice:INV-2024-001

Hover over a mention to see invoice details. Click to navigate to the record.


Filtering Options

Invoice Filters

Filter | Options
--- | ---
Status | All, Draft, Sent, Paid, Overdue, Cancelled
Period | All Time, This Month, This Quarter, This Year

Payment Filters

Filter | Options
--- | ---
Method | All Methods, Bank Transfer, Credit Card, PIX, Boleto, Cash

Quote Filters

Filter | Options
--- | ---
Status | All, Draft, Sent, Accepted, Rejected, Expired

Keyboard Shortcuts

Shortcut | Action
--- | ---
N | New invoice (when in Billing)
Escape | Close modal
/ | Focus search

Integration with CRM

Billing integrates seamlessly with CRM:

  1. Quote from Opportunity - Create quotes linked to opportunities
  2. Convert on Win - When opportunity is won, convert quote to invoice
  3. Account Linking - Invoices automatically linked to customer accounts

Best Practices

Invoice Management

  1. Send promptly - Issue invoices immediately after delivery
  2. Set clear terms - Include payment terms and due dates
  3. Follow up - Track overdue invoices proactively

Payment Tracking

  1. Record immediately - Log payments as soon as received
  2. Match correctly - Ensure payments match the right invoices
  3. Reconcile regularly - Review payment records weekly

Quote Management

  1. Include details - List all line items with descriptions
  2. Set expiration - Use reasonable validity periods
  3. Follow up - Check on pending quotes before expiration

Reports

Available in Analytics:

Report | Description
--- | ---
Revenue Summary | Total revenue over time
Aging Report | Overdue invoices by age
Payment History | Payments received over time
Monthly Revenue | Month-over-month comparison

See Also

  • CRM — Link invoices to accounts and opportunities
  • Products — Add products to invoices and quotes
  • Analytics — Billing reports and dashboards

Tickets - AI-Assisted Support Cases

Manage customer support with AI-powered resolution suggestions


Overview

Tickets is your AI-assisted support system following Microsoft Dynamics nomenclature. Create and manage support cases, track resolution times, and leverage AI to suggest solutions and automate common responses.


Key Concepts (Dynamics Nomenclature)

Entity | Description
--- | ---
Case | Support ticket/request from a customer
Resolution | AI-suggested or manual solution to a case
Activity | Actions taken on a case (responses, calls, etc.)

Features

Case Management

Track and resolve customer support requests:

  • Case Number - Unique identifier
  • Subject - Brief description of the issue
  • Account - Customer reporting the issue
  • Contact - Person who reported the issue
  • Priority - Urgency level
  • Status - Current case state
  • Category - Issue classification
  • Description - Full issue details
  • Assigned To - Support agent handling the case

Case Statuses

Status | Description
--- | ---
Open | New case awaiting attention
Pending | Waiting for customer response or external input
In Progress | Being actively worked on
Resolved | Solution provided, awaiting confirmation
Closed | Case completed and closed

Priority Levels

Priority | Description
--- | ---
Critical | System down, immediate attention required
High | Major issue affecting business operations
Medium | Standard issue with workaround available
Low | Minor issue or general inquiry

AI Assistance

The AI assistant helps with:

  • Auto-categorization - Automatically classify incoming cases
  • Solution Suggestions - Recommend resolutions based on similar cases
  • Response Templates - Generate contextual reply drafts
  • Priority Detection - Identify urgent cases from description
  • Knowledge Search - Find relevant KB articles automatically

Summary Dashboard

Real-time support metrics:

Metric | Description
--- | ---
Open Cases | Number of unresolved cases
Urgent | Cases with critical/high priority
Resolved Today | Cases closed today
AI Resolved | Percentage of cases resolved by AI

Tab | View
--- | ---
All Cases | Complete case list
Open | Only open cases
Pending | Cases awaiting response
Resolved | Completed cases

API Endpoints

Endpoint | Method | Description
--- | --- | ---
/api/tickets | GET | List cases with filters
/api/tickets | POST | Create new case
/api/tickets/:id | GET | Get case details
/api/tickets/:id | PUT | Update case
/api/tickets/:id/resolve | POST | Mark case as resolved
/api/tickets/:id/close | POST | Close case
/api/tickets/:id/reopen | POST | Reopen closed case
/api/tickets/:id/assign | POST | Assign to agent
/api/tickets/:id/activities | GET | Get case activities
/api/tickets/:id/activities | POST | Add activity to case
/api/tickets/:id/ai-suggest | GET | Get AI resolution suggestions
/api/tickets/search | GET | Search cases
/api/tickets/stats/* | GET | Get support statistics
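
A suggestion request might return something like the sketch below; the field names are illustrative assumptions based on the AI Assistance features listed above:

GET /api/tickets/CS-2024-001/ai-suggest

{
    "suggestions": [
        {
            "summary": "Reset the user's API token",
            "similar_case": "CS-2023-118",
            "confidence": 0.82
        }
    ],
    "draft_reply": "Hi, please try regenerating your API token..."
}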

@ Mentions in Chat

Reference cases directly in chat conversations:

Mention | Example
--- | ---
@case: | @case:CS-2024-001

Hover over a mention to see case details. Click to navigate to the record.


Filtering Options

Status Filters

Filter | Options
--- | ---
Status | All, Open, Pending, In Progress, Resolved, Closed

Priority Filters

Filter | Options
--- | ---
Priority | All, Critical, High, Medium, Low

Category Filters

Filter | Options
--- | ---
Category | All, Technical, Billing, General, Feature Request

Keyboard Shortcuts

Shortcut | Action
--- | ---
N | New case (when in Tickets)
Escape | Close modal
/ | Focus search
R | Reply to selected case

AI Resolution Flow

  1. Case Created - Customer submits support request
  2. AI Analysis - System analyzes case description
  3. Auto-Categorize - Priority and category assigned
  4. Suggest Solutions - AI finds similar resolved cases
  5. Generate Response - Draft reply created for review
  6. Agent Review - Support agent approves or modifies
  7. Resolution - Customer receives response

Integration with CRM

Tickets integrates with your CRM data:

  1. Account Linking - Cases linked to customer accounts
  2. Contact Association - Track who reported each issue
  3. History Access - View customer’s previous cases
  4. Activity Sync - Support activities appear in CRM timeline

Best Practices

Case Management

  1. Respond quickly - Acknowledge cases within SLA
  2. Set accurate priority - Ensure urgent issues get attention
  3. Document thoroughly - Record all resolution steps
  4. Update status - Keep case status current

Using AI Assistance

  1. Review suggestions - Always verify AI recommendations
  2. Train the model - Mark good suggestions to improve accuracy
  3. Personalize responses - Edit AI drafts for customer context
  4. Escalate when needed - Don’t rely on AI for complex issues

SLA Management

  1. Define SLAs - Set response and resolution time targets
  2. Monitor compliance - Track SLA performance
  3. Escalate proactively - Flag cases approaching SLA breach

Reports

Available in Analytics:

Report | Description
--- | ---
Open Cases by Priority | Distribution of active cases
Resolution Time | Average time to resolve by category
Cases by Category | Volume breakdown by issue type
AI Resolution Rate | Percentage resolved with AI assistance
Agent Performance | Cases handled per agent
SLA Compliance | Percentage meeting SLA targets

Case Categories

Category | Description
--- | ---
Technical | Product bugs, errors, technical issues
Billing | Invoice questions, payment issues
General | General inquiries, how-to questions
Feature Request | Suggestions for new features

Activity Types

Activities logged on cases:

Activity | Description
--- | ---
Email | Email sent to/from customer
Phone Call | Phone conversation logged
Note | Internal note added
Status Change | Case status updated
Assignment | Case reassigned
Resolution | Solution provided

See Also

  • CRM — Link cases to accounts and contacts
  • Chat — AI assistant for support queries
  • Analytics — Support reports and dashboards
  • Tasks — Create follow-up tasks from cases

Analytics - Dashboards

Your business intelligence center

Analytics Interface Screen

Overview

Analytics is the data visualization and reporting app in General Bots Suite. Track key metrics, build custom dashboards, generate reports, and get AI-powered insights about your business. Analytics turns your data into actionable information.


Features

Dashboard Overview

Dashboards are collections of widgets that display your data visually.

Default Dashboards:

Dashboard | What It Shows
--- | ---
Overview | Key metrics across all areas
Sales | Revenue, deals, pipeline
Marketing | Campaigns, leads, conversion
Support | Tickets, response time, satisfaction
HR | Headcount, hiring, retention

Creating a Dashboard

Step 1: Click “+ New” in the sidebar

Fill in the dashboard details:

  • Dashboard Name - A descriptive title (e.g., “Q2 Performance”)
  • Description - Optional context for the dashboard
  • Template - Start blank, use a template, or copy from existing

Step 2: Add Widgets

Click + Widget and choose a visualization type.


Widget Types

Numbers:

  • Number - Single metric display
  • Comparison - Metric with percentage change
  • Progress - Goal tracking with progress bar

Charts:

  • Line - Trends over time
  • Bar - Category comparisons
  • Area - Volume visualization
  • Pie - Proportional breakdown

Tables & Lists:

  • Table - Data grid with sorting
  • Leaderboard - Ranked list
  • List - Simple bullet items

Special:

  • Geography - Map visualization
  • Heatmap - Intensity grid
  • AI Summary - AI-generated insights

Configuring Widgets

After selecting a widget type, configure the data source:

Setting | Description
--- | ---
Title | Widget display name
Source | Database or data connection
Table | Specific table to query
X-Axis | Horizontal dimension
Y-Axis | Vertical measure
Group By | Time period or category
Aggregate | Sum, Count, Average, etc.
Filters | Conditions to apply

Key Metric Cards

Display important numbers with context:

Element | Purpose
--- | ---
Value | The main metric number
Trend Arrow | Up/down indicator
Comparison | vs last period
Progress Bar | Visual goal tracking
Goal | Target value

Color Indicators:

Color | Meaning
--- | ---
🟢 Green (▲) | Positive trend / On target
🔴 Red (▼) | Negative trend / Below target
🟡 Yellow (─) | No change / Needs attention
⚪ Gray | No comparison available

AI Insights ✨

Let AI analyze your data and surface insights:

Insight Types:

Category | What It Shows
--- | ---
📈 Trends | Patterns and momentum in your data
⚠️ Alerts | Issues that need attention
💡 Recommendations | Suggested actions
🎯 Predictions | Forecasts based on current data

Ask the AI questions in plain language, such as “Why did revenue drop last month?” or “What’s unusual this month?”


Reports

Generate and schedule reports:

Creating a Report:

  1. Click + New under Reports
  2. Select report type
  3. Configure data and format
  4. Schedule delivery (optional)

Report Options:

Setting | Options
--- | ---
Content | Dashboard, AI insights, raw data
Date Range | Last 7/30/90 days, quarter, custom
Format | PDF, Interactive Web, Excel, PowerPoint
Schedule | Daily, Weekly, Monthly
Recipients | Email addresses for delivery

Data Sources

Connect Analytics to various data sources:

Source Type | Examples
--- | ---
Databases | PostgreSQL, MySQL, SQLite
Files | Excel, CSV, JSON
APIs | REST endpoints, GraphQL
Apps | CRM, Support, Calendar data
Bot Data | Conversation logs, user data

Adding a Data Source:

  1. Go to Settings → Data Sources
  2. Click + Add Source
  3. Select source type
  4. Enter connection details
  5. Test and save

Sharing Dashboards

Share dashboards with your team:

  1. Click Share on any dashboard
  2. Set permissions (View, Edit, Owner)
  3. Copy link or invite by email

Permission Levels:

Level | Can Do
--- | ---
View | See dashboard, apply filters
Edit | Modify widgets, change layout
Owner | Full control, manage sharing

Link Sharing:

  • Off - Only specific people can access
  • On - Anyone with link can view

Keyboard Shortcuts

Shortcut | Action
--- | ---
R | Refresh dashboard
F | Toggle fullscreen
E | Edit mode
N | New widget
D | Duplicate widget
Delete | Delete selected widget
Ctrl+S | Save dashboard
Ctrl+P | Print / Export PDF
Ctrl+F | Find / Filter
/ | Quick search
← / → | Navigate dashboards
Escape | Exit edit mode

Tips & Tricks

Dashboard Design

💡 Keep it simple - 5-7 widgets per dashboard is optimal

💡 Most important metrics at top - Readers scan dashboards in an F-pattern, so lead with what matters

💡 Use consistent colors - Same metric = same color across widgets

💡 Group related widgets - Keep sales metrics together

Data Tips

💡 Set up daily sync for data sources that change frequently

💡 Use filters to let viewers customize their view

💡 Add comparison periods (vs last month, vs last year)

💡 Include goals/targets to show progress

AI Tips

💡 Ask “why” questions - AI excels at explaining trends

💡 Request predictions for planning

💡 Use AI for anomaly detection - “What’s unusual this month?”

💡 Generate executive summaries before board meetings


Troubleshooting

Dashboard not loading

Possible causes:

  1. Data source disconnected
  2. Query timeout
  3. Permission issues

Solution:

  1. Check data source status in Settings
  2. Reduce date range or add filters
  3. Verify you have dashboard access
  4. Refresh the page

Data not updating

Possible causes:

  1. Sync schedule not running
  2. Source data hasn’t changed
  3. Cache showing old data

Solution:

  1. Click Refresh on the dashboard
  2. Check data source sync status
  3. Go to Settings → Clear cache
  4. Verify source data has new records

Charts showing wrong numbers

Possible causes:

  1. Filter applied incorrectly
  2. Wrong aggregation method
  3. Date range mismatch

Solution:

  1. Check widget filters
  2. Verify aggregation (Sum vs Count vs Average)
  3. Confirm date range matches expectations
  4. Edit widget and review query

Export not working

Possible causes:

  1. Dashboard too large
  2. Browser blocking download
  3. Permission restrictions

Solution:

  1. Try exporting individual widgets
  2. Check browser download settings
  3. Use a different export format
  4. Contact administrator for permissions

BASIC Integration

Use Analytics in your bot dialogs:

Query Metrics

revenue = GET METRIC "total_revenue" FOR "this month"
lastMonth = GET METRIC "total_revenue" FOR "last month"

growth = ((revenue - lastMonth) / lastMonth) * 100

TALK "Revenue this month: $" + FORMAT(revenue, "#,##0")
TALK "Growth: " + FORMAT(growth, "#0.0") + "%"

Generate Reports

HEAR period AS TEXT "Which period? (weekly/monthly/quarterly)"

report = GENERATE REPORT "Sales Summary" FOR period

TALK "Here's your " + period + " sales report:"
SEND FILE report.pdf

TALK "Key highlights:"
TALK report.summary

Get AI Insights

insights = GET INSIGHTS FOR "Sales Dashboard"

TALK "Here are today's insights:"
FOR EACH insight IN insights.trends
    TALK "📈 " + insight
NEXT

TALK "Alerts:"
FOR EACH alert IN insights.alerts
    TALK "⚠️ " + alert
NEXT

Create Dashboard Widget

widget = NEW OBJECT
widget.type = "line_chart"
widget.title = "Daily Active Users"
widget.source = "bot_analytics"
widget.xAxis = "date"
widget.yAxis = "active_users"
widget.dateRange = "last 30 days"

ADD WIDGET widget TO "Overview Dashboard"
TALK "Widget added successfully"

Scheduled Reports

' This dialog runs on a schedule
report = GENERATE REPORT "Weekly Metrics" FOR "last 7 days"

recipients = ["ceo@company.com", "team@company.com"]

FOR EACH recipient IN recipients
    SEND MAIL recipient, "Weekly Metrics Report - " + TODAY, 
        "Please find attached the weekly metrics report.", [report.pdf]
NEXT

LOG "Weekly report sent to " + COUNT(recipients) + " recipients"

See Also

Designer - Visual Builder

Your no-code bot building studio

Designer Interface Screen

Overview

Designer is the visual bot builder in General Bots Suite. Create conversation flows, design user interfaces, and build automations without writing code. Designer uses a drag-and-drop interface that makes bot development accessible to everyone.


Features

Creating a New Flow

Step 1: Open Designer

  1. Click the apps menu (⋮⋮⋮)
  2. Select Designer
  3. Click + New Flow

Step 2: Configure Flow

Setting | Description
--- | ---
Flow Name | Descriptive title (e.g., “Customer Support”)
Description | Brief explanation of what the flow does
Start from | Blank canvas, Template, or Import from file

Step 3: Add Components

Drag components from the left panel onto the canvas.

Step 4: Connect Components

Click and drag from one component’s output to another’s input.


Component Types

Communication Components

Component | Icon | Purpose
--- | --- | ---
Talk | 💬 | Send a message to the user
Hear | 👂 | Wait for user input
Ask | | Ask a question and capture response
Show | 🖼️ | Display an image, card, or media
Menu | 📋 | Show clickable options

Talk Component Options:

  • Message text with variations (AI picks randomly)
  • Use AI to personalize
  • Include typing indicator
  • Delay before sending

Ask Component Options:

  • Question text
  • Variable name to save response
  • Expected type: Text, Number, Email, Phone, Date, Yes/No, Multiple Choice
  • Validation message for invalid input

Logic Components

Component | Icon | Purpose
--- | --- | ---
Branch | 🔀 | Conditional logic (if/else)
Loop | 🔄 | Repeat actions
Switch | 🔃 | Multiple conditions
Wait | ⏱️ | Pause execution
End | 🏁 | End the flow

Branch Configuration:

  • Set condition using variable comparisons
  • Add multiple AND/OR conditions
  • TRUE and FALSE output paths

Action Components

Component | Icon | Purpose
--- | --- | ---
Action | | Execute a BASIC keyword
API Call | 🌐 | Call external API
Database | 🗄️ | Query or update data
Email | ✉️ | Send an email
Set Variable | 📝 | Store a value

Action Error Handling:

  • Stop flow and show error
  • Continue to error path
  • Retry N times

AI Components

Component | Icon | Purpose
--- | --- | ---
AI Chat | 🤖 | Natural language conversation
Search KB | 🔍 | Search knowledge base
Generate | | Generate text with AI
Classify | 🏷️ | Categorize user input
Extract | 📤 | Extract data from text

Classify Example Categories:

  • support - Customer needs help with a problem
  • sales - Customer interested in buying
  • billing - Payment or invoice questions
  • feedback - Customer giving feedback
  • other - Anything else

Working with the Canvas

Action | How To
--- | ---
Pan | Click and drag on empty space
Zoom in | Scroll up or click [+]
Zoom out | Scroll down or click [-]
Fit to screen | Click [⌖] or press F
Select multiple | Hold Shift and click
Box select | Hold Ctrl and drag

Canvas Controls

Control | Purpose
--- | ---
[+] [-] | Zoom in/out
[⌖] | Fit to view
Grid | Show/hide grid
Snap | Snap to grid
Auto | Auto-arrange components

Using Variables

Variables store information during the conversation.

System Variables (read-only):

Variable | Description
--- | ---
{{user.name}} | User’s display name
{{user.email}} | User’s email address
{{user.phone}} | User’s phone number
{{channel}} | Current channel (web, whatsapp, etc)
{{today}} | Today’s date
{{now}} | Current date and time
{{botName}} | Name of this bot

Flow Variables: Variables you create using Ask or Set Variable components.

Reference variables with double curly braces: {{variableName}}
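
For example, a Talk component can mix system and flow variables in one message (orderNumber here is a flow variable you would set earlier with Ask or Set Variable):

Thanks, {{user.name}}! Order {{orderNumber}} will be confirmed at {{user.email}}.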


Testing Your Flow

Preview Mode:

  1. Click Preview button
  2. A chat window opens
  3. Test the conversation
  4. Watch the flow highlight active steps

The Preview panel shows:

  • Flow visualization with active step highlighted
  • Test conversation area
  • Current variable values
  • Clear and Reset buttons

Deploying Your Flow

When your flow is ready:

  1. Click Deploy
  2. Choose deployment options:
    • Production or Staging only
    • Immediate or Scheduled
  3. Configure triggers:
    • Specific phrases (e.g., “help”, “support”)
    • As default fallback
    • On schedule
  4. Review changes since last deploy
  5. Confirm deployment

Templates

Start faster with pre-built templates:

Template | Description
--- | ---
📋 FAQ Bot | Answer common questions from knowledge base
🎫 Support | Ticket creation and tracking
💰 Sales | Lead capture and qualification
📅 Appointment | Schedule meetings and appointments
📝 Feedback | Collect customer feedback
🚀 Onboarding | New user welcome and setup guide

Keyboard Shortcuts

Canvas

Shortcut | Action
--- | ---
Space + Drag | Pan canvas
Ctrl + + | Zoom in
Ctrl + - | Zoom out
Ctrl + 0 | Reset zoom
F | Fit to screen
G | Toggle grid
Delete | Delete selected
Ctrl + D | Duplicate selected
Ctrl + Z | Undo
Ctrl + Y | Redo

Components

Shortcut | Action
--- | ---
T | Add Talk component
H | Add Hear component
A | Add Ask component
B | Add Branch component
E | Edit selected component
Ctrl + C | Copy component
Ctrl + V | Paste component
Ctrl + X | Cut component

Flow

Shortcut | Action
--- | ---
Ctrl + S | Save flow
Ctrl + P | Preview flow
Ctrl + Enter | Deploy flow
Ctrl + E | Export flow
Ctrl + I | Import flow

Tips & Tricks

Design Tips

💡 Keep flows simple - Break complex flows into smaller sub-flows

💡 Use descriptive names - “Ask for Email” is better than “Step 3”

💡 Add comments - Right-click any component to add notes

💡 Test often - Preview after every few changes

Organization Tips

💡 Use folders to organize related flows

💡 Version your flows - Save before major changes

💡 Use templates for consistent starting points

💡 Color-code paths - Use colors for different intents

Performance Tips

💡 Minimize API calls - Cache results when possible

💡 Use AI classification early - Route users quickly

💡 Set timeouts - Don’t let flows hang indefinitely

💡 Handle errors - Always add error paths


Troubleshooting

Flow not triggering

Possible causes:

  1. Flow not deployed
  2. Trigger words not matching
  3. Another flow has priority

Solution:

  1. Click Deploy and confirm it’s active
  2. Check trigger configuration
  3. Review flow priority in settings
  4. Test with exact trigger phrases

Variables not working

Possible causes:

  1. Typo in variable name
  2. Variable not set yet in flow
  3. Wrong scope

Solution:

  1. Check spelling matches exactly (case-sensitive)
  2. Ensure variable is set before being used
  3. Use Preview mode to watch variable values
  4. Check the Variables panel for current values

Component errors

Possible causes:

  1. Missing required configuration
  2. Invalid connection
  3. Action failed

Solution:

  1. Click the red error icon for details
  2. Fill in all required fields
  3. Check that connections make logical sense
  4. Review error logs in Preview mode

Preview not matching production

Possible causes:

  1. Changes not deployed
  2. Different data in production
  3. External service differences

Solution:

  1. Deploy latest changes
  2. Test with same data as production
  3. Check API connections are identical
  4. Review production logs

BASIC Integration

Designer flows generate BASIC code. You can view and customize it.

View Generated Code

Right-click any component and select “View Code”:

' Generated from "Customer Support" flow

TALK "Hello! How can I help you today?"

HEAR userMessage AS TEXT

intent = CLASSIFY userMessage INTO ["support", "sales", "billing", "other"]

IF intent = "support" THEN
    TALK "I'm sorry to hear you're having issues!"
    HEAR orderNumber AS TEXT "What's your order number?"
    result = SEARCH KB "order " + orderNumber
    TALK result.answer
ELSE IF intent = "sales" THEN
    ' ... sales flow
END IF

Mix Designer and Code

Use the Code component to add custom BASIC:

' Custom calculation
discount = 0

IF userType = "premium" THEN
    discount = orderTotal * 0.15
ELSE IF orderTotal > 100 THEN
    discount = orderTotal * 0.05
END IF

finalPrice = orderTotal - discount

See Also

How To: Create Your First Bot

Tutorial 1 of the Getting Started Series

Follow these simple steps to create a working bot in 10 minutes


┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│   ┌─────────────────────────────────────────────────────────────────┐   │
│   │                                                                 │   │
│   │     🤖  CREATE YOUR FIRST BOT                                   │   │
│   │                                                                 │   │
│   │     ┌─────────┐    ┌─────────┐    ┌─────────┐    ┌─────────┐   │   │
│   │     │  Step   │───▶│  Step   │───▶│  Step   │───▶│  Step   │   │   │
│   │     │   1     │    │   2     │    │   3     │    │   4     │   │   │
│   │     │ Access  │    │ Create  │    │Configure│    │  Test   │   │   │
│   │     │ Suite   │    │  Bot    │    │  Bot    │    │  Bot    │   │   │
│   │     └─────────┘    └─────────┘    └─────────┘    └─────────┘   │   │
│   │                                                                 │   │
│   └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Objective

By the end of this tutorial, you will have:

  • Created a new bot instance
  • Configured basic settings
  • Written a simple greeting
  • Tested your bot by talking to it

Time Required

⏱️ 10 minutes


Prerequisites

Before you begin, make sure you have:

  • Access to General Bots Suite (URL provided by your administrator)
  • A web browser (Chrome, Firefox, Safari, or Edge)
  • Administrator or Bot Creator permissions

Step 1: Access the Suite

1.1 Open Your Browser

Launch your preferred web browser by clicking its icon.

┌─────────────────────────────────────────────────────────────────────────┐
│  🌐 Browser                                                     [─][□][×]│
├─────────────────────────────────────────────────────────────────────────┤
│  ← → ↻  │ https://your-company.bot:9000                          │ ☆ │  │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│                          Loading...                                     │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

1.2 Navigate to Your General Bots URL

Type your General Bots address in the address bar and press Enter.

💡 Tip: Your URL will look something like:

  • http://localhost:9000 (development)
  • https://bots.yourcompany.com (production)
  • https://app.pragmatismo.cloud (cloud hosted)

1.3 Log In (If Required)

If you see a login screen:

  1. Enter your username or email
  2. Enter your password
  3. Click Sign In
┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│                    ┌────────────────────────────┐                       │
│                    │    🤖 General Bots         │                       │
│                    │                            │                       │
│                    │  Username:                 │                       │
│                    │  ┌────────────────────┐    │                       │
│                    │  │ admin@company.com  │    │                       │
│                    │  └────────────────────┘    │                       │
│                    │                            │                       │
│                    │  Password:                 │                       │
│                    │  ┌────────────────────┐    │                       │
│                    │  │ ••••••••••••       │    │                       │
│                    │  └────────────────────┘    │                       │
│                    │                            │                       │
│                    │  ┌────────────────────┐    │                       │
│                    │  │     Sign In  ──►   │    │                       │
│                    │  └────────────────────┘    │                       │
│                    │                            │                       │
│                    └────────────────────────────┘                       │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Checkpoint: You should now see the General Bots Suite interface.


Step 2: Create a New Bot

2.1 Open the Apps Menu

Click the nine-dot grid icon (⋮⋮⋮) in the top-right corner of the screen.

┌─────────────────────────────────────────────────────────────────────────┐
│  🤖 General Bots                                    [⋮⋮⋮] ◄── Click here │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│                                                                         │

2.2 Select “Sources”

From the apps menu that appears, click Sources.

┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│                         ┌───────────────────┐                           │
│                         │   💬 Chat         │                           │
│                         │   📁 Drive        │                           │
│                         │   ✓  Tasks        │                           │
│                         │   ✉  Mail         │                           │
│                         │   📝 Paper        │                           │
│                         │   📊 Analytics    │                           │
│                         │ ▶ 📋 Sources ◀───┼─── Click here             │
│                         │   🎨 Designer     │                           │
│                         │   ⚙️  Settings     │                           │
│                         └───────────────────┘                           │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

2.3 Click “New Bot”

In the Sources application, locate and click the New Bot button.

┌─────────────────────────────────────────────────────────────────────────┐
│  Sources                                                                │
├─────────────────────────────────────────────────────────────────────────┤
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐                  │
│  │  Templates   │  │   Prompts    │  │    Bots      │ ◄── Active Tab   │
│  └──────────────┘  └──────────────┘  └──────────────┘                  │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  Your Bots                              ┌─────────────────┐             │
│  ─────────                              │  ➕ New Bot     │ ◄── Click  │
│                                         └─────────────────┘             │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │  No bots yet. Create your first bot!                            │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

2.4 Enter Bot Details

A dialog box will appear. Fill in the following fields:

Field | What to Enter | Example
--- | --- | ---
Bot Name | A unique identifier (no spaces) | mycompany
Display Name | Friendly name shown to users | My Company Assistant
Description | What your bot does | Helps employees find information
Template | Starting point (select from dropdown) | default
┌─────────────────────────────────────────────────────────────────────────┐
│                          Create New Bot                           [×]   │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  Bot Name *                                                             │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ mycompany                                                       │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│  ⚠️ Use lowercase letters, numbers, and hyphens only                    │
│                                                                         │
│  Display Name *                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ My Company Assistant                                            │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  Description                                                            │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ Helps employees find information and complete tasks             │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  Template                                                               │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ default                                                     [▼] │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│                    ┌──────────┐  ┌──────────────────┐                  │
│                    │  Cancel  │  │  Create Bot ──►  │                  │
│                    └──────────┘  └──────────────────┘                  │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

2.5 Click “Create Bot”

Click the Create Bot button to create your bot.

💡 Tip: The bot creation process takes a few seconds. You’ll see a progress indicator.

Checkpoint: Your new bot should appear in the bot list.


Step 3: Configure Basic Settings

3.1 Open Bot Settings

Click on your new bot to select it, then click Settings (or the ⚙️ icon).

┌─────────────────────────────────────────────────────────────────────────┐
│  Your Bots                                                              │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │  🤖 mycompany                                              [⚙️]  │◄──│
│  │     My Company Assistant                                        │   │
│  │     Status: ● Active                                            │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
                                                                    │
                                                          Click the ⚙️ icon

3.2 Set the Welcome Message

Find the Welcome Message field and enter a friendly greeting:

Welcome Message:

┌─────────────────────────────────────────────────────────────────────────┐
│ Hello! 👋 I'm your Company Assistant. I can help you with:             │
│                                                                         │
│ • Finding documents and information                                     │
│ • Answering questions about policies                                    │
│ • Creating tasks and reminders                                          │
│                                                                         │
│ How can I help you today?                                               │
└─────────────────────────────────────────────────────────────────────────┘

3.3 Configure AI Model (Optional)

If you have API keys for AI services, configure them:

Setting | Description | Example Value
--- | --- | ---
LLM Provider | AI service to use | anthropic
Model | Specific model | claude-sonnet-4.5
API Key | Your API key | sk-...

⚠️ Warning: Keep your API keys secret. Never share them.

3.4 Save Settings

Click the Save button to save your configuration.

┌─────────────────────────────────────────────────────────────────────────┐
│  Bot Settings                                                     [×]   │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  [General]  [AI Model]  [Channels]  [Advanced]                         │
│                                                                         │
│  ─────────────────────────────────────────────────────────────────     │
│                                                                         │
│                                              ┌────────────────────┐     │
│                                              │    💾 Save         │◄────│
│                                              └────────────────────┘     │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
                                                         Click Save

Checkpoint: Your settings are saved. The bot is ready to test.


Step 4: Test Your Bot

4.1 Open Chat

Click the Chat app from the Apps Menu (⋮⋮⋮).

4.2 Select Your Bot

If you have multiple bots, select yours from the bot dropdown:

┌─────────────────────────────────────────────────────────────────────────┐
│  💬 Chat                              [mycompany           ▼]           │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│      ┌─────────────────────────────────────────────────────────────┐   │
│      │  🤖 My Company Assistant                                    │   │
│      │                                                             │   │
│      │  Hello! 👋 I'm your Company Assistant. I can help          │   │
│      │  you with:                                                  │   │
│      │                                                             │   │
│      │  • Finding documents and information                        │   │
│      │  • Answering questions about policies                       │   │
│      │  • Creating tasks and reminders                             │   │
│      │                                                             │   │
│      │  How can I help you today?                                  │   │
│      └─────────────────────────────────────────────────────────────┘   │
│                                                                         │
├─────────────────────────────────────────────────────────────────────────┤
│  Type your message...                                            [↑]   │
└─────────────────────────────────────────────────────────────────────────┘

4.3 Send a Test Message

Type a simple message and press Enter:

You: Hello!

4.4 Verify the Response

Your bot should respond! If it does, congratulations — your bot is working!

┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│      ┌─────────────────────────────────────────────────────────────┐   │
│      │  👤 You                                                     │   │
│      │  Hello!                                                     │   │
│      └─────────────────────────────────────────────────────────────┘   │
│                                                                         │
│      ┌─────────────────────────────────────────────────────────────┐   │
│      │  🤖 My Company Assistant                                    │   │
│      │  Hello! How can I assist you today?                         │   │
│      └─────────────────────────────────────────────────────────────┘   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Checkpoint: Your bot responds to messages. Setup complete!


🎉 Congratulations!

You have successfully created your first bot! Here’s what you accomplished:

┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│    ✓ Accessed General Bots Suite                                        │
│    ✓ Created a new bot instance                                         │
│    ✓ Configured basic settings                                          │
│    ✓ Tested the bot with a conversation                                 │
│                                                                         │
│    Your bot "mycompany" is now ready to use!                            │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Troubleshooting

Problem: “Create Bot” button is disabled

Cause: Required fields are empty or invalid.

Solution:

  1. Check that Bot Name contains only lowercase letters, numbers, and hyphens
  2. Ensure Display Name is not empty
  3. Verify a template is selected

Problem: Bot doesn’t respond

Cause: AI model not configured or API key invalid.

Solution:

  1. Open bot settings
  2. Verify AI model configuration
  3. Check that API key is correct
  4. Ensure you have API credits remaining

Problem: “Permission denied” error

Cause: Your account doesn’t have bot creation rights.

Solution:

  1. Contact your administrator
  2. Request “Bot Creator” or “Administrator” role

Problem: Page won’t load

Cause: Network or server issue.

Solution:

  1. Check your internet connection
  2. Try refreshing the page (F5 or Ctrl+R)
  3. Clear browser cache
  4. Contact your system administrator

What You Learned

In this tutorial, you learned:

| Concept | Description |
|---------|-------------|
| Bot Instance | A unique bot with its own configuration |
| Bot Name | Technical identifier used internally |
| Display Name | Friendly name shown to users |
| Template | Pre-built starting point for your bot |
| Welcome Message | First message users see |

Next Steps

Now that you have a working bot, continue learning:

| Next Tutorial | What You'll Learn |
|---------------|-------------------|
| Your First Conversation | Understanding how conversations work |
| Add Knowledge Base Documents | Teaching your bot from files |
| Write Your First Dialog | Creating custom conversation flows |

Quick Reference

Bot Naming Rules

  • mycompany — Good
  • hr-assistant — Good
  • support2024 — Good
  • My Company — No spaces
  • HR_Bot — No underscores
  • Support@2024 — No special characters

Essential Settings Checklist

  • Bot Name (unique identifier)
  • Display Name (user-friendly)
  • Welcome Message (first impression)
  • AI Model (for responses)
  • Language (for localization)

Tutorial 1 of 30 • Back to How-To Index • Next: Your First Conversation →

How To: Write Your First Dialog

Tutorial 5 of the BASIC Dialogs Series

Create a simple conversation script in 20 minutes


┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│   ┌─────────────────────────────────────────────────────────────────┐   │
│   │                                                                 │   │
│   │     📝  WRITE YOUR FIRST DIALOG                                 │   │
│   │                                                                 │   │
│   │     ┌─────────┐    ┌─────────┐    ┌─────────┐    ┌─────────┐   │   │
│   │     │  Step   │───▶│  Step   │───▶│  Step   │───▶│  Step   │   │   │
│   │     │   1     │    │   2     │    │   3     │    │   4     │   │   │
│   │     │ Create  │    │  Write  │    │  Test   │    │ Enhance │   │   │
│   │     │  File   │    │  Code   │    │ Dialog  │    │  Logic  │   │   │
│   │     └─────────┘    └─────────┘    └─────────┘    └─────────┘   │   │
│   │                                                                 │   │
│   └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Objective

By the end of this tutorial, you will have:

  • Created a .bas dialog file
  • Written code using TALK and HEAR keywords
  • Used conditional logic (IF/THEN/ELSE)
  • Stored and retrieved user information
  • Tested your dialog in the chat interface

Time Required

⏱️ 20 minutes


Prerequisites

Before you begin, make sure you have:

  • A working bot (see Create Your First Bot)
  • Access to the Designer or Drive app
  • Basic understanding of the chat interface

What is a Dialog?

A dialog is a conversation script written in BASIC that controls how your bot talks with users. Think of it like a script for a play — you write what the bot should say and how it should respond to the user.

┌─────────────────────────────────────────────────────────────────────────┐
│                        HOW DIALOGS WORK                                 │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│    User says: "Hello"                                                   │
│         │                                                               │
│         ▼                                                               │
│    ┌─────────────────┐                                                 │
│    │  Dialog Script  │  ◄── Your BASIC code runs here                  │
│    │  (greeting.bas) │                                                 │
│    └────────┬────────┘                                                 │
│             │                                                           │
│             ▼                                                           │
│    Bot says: "Hi there! What's your name?"                              │
│         │                                                               │
│         ▼                                                               │
│    User says: "Sarah"                                                   │
│         │                                                               │
│         ▼                                                               │
│    Bot says: "Nice to meet you, Sarah!"                                 │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
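In code, that entire exchange takes only a few lines. You will build exactly this in Step 2:

TALK "Hi there! What's your name?"
HEAR name
TALK "Nice to meet you, " + name + "!"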

Step 1: Create the Dialog File

1.1 Open the Drive App

Click the Apps Menu (⋮⋮⋮) and select Drive.

┌─────────────────────────────────────────────────────────────────────────┐
│  📁 Drive                                                               │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  📂 mycompany.gbai                                                      │
│     ├── 📂 mycompany.gbdialog    ◄── Dialog files go here              │
│     ├── 📂 mycompany.gbot                                               │
│     ├── 📂 mycompany.gbkb                                               │
│     └── 📂 mycompany.gbdrive                                            │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

1.2 Navigate to the Dialog Folder

Double-click mycompany.gbai, then mycompany.gbdialog.

1.3 Create a New File

Click New File (or press Ctrl+N) and name it:

greeting.bas

⚠️ Warning: The file must end with .bas to be recognized as a dialog.

┌─────────────────────────────────────────────────────────────────────────┐
│                           New File                                [×]   │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  File Name:                                                             │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ greeting.bas                                                    │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  Location: mycompany.gbai / mycompany.gbdialog /                       │
│                                                                         │
│                    ┌──────────┐  ┌──────────────────┐                  │
│                    │  Cancel  │  │  Create  ──►     │                  │
│                    └──────────┘  └──────────────────┘                  │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Checkpoint: You should see greeting.bas in your dialog folder.


Step 2: Write the Dialog Code

2.1 Open the File for Editing

Double-click greeting.bas to open it in the editor.

2.2 Write Your First Line

Type the following code:

TALK "Hello! Welcome to our service. 👋"

This is the simplest possible dialog — the bot just says one thing.

2.3 Add User Input

Now let’s ask for the user’s name:

TALK "Hello! Welcome to our service. 👋"
TALK "What is your name?"
HEAR name
TALK "Nice to meet you, " + name + "!"

Let’s break this down:

| Line | What It Does |
|------|--------------|
| TALK "..." | Bot displays a message |
| HEAR name | Bot waits for user input, stores it in name |
| "..." + name + "..." | Combines text with the variable |

2.4 The Complete First Dialog

Here’s your complete greeting.bas:

' ============================================
' GREETING DIALOG
' A friendly welcome conversation
' ============================================

' Greet the user
TALK "Hello! Welcome to our service. 👋"

' Ask for their name
TALK "What is your name?"
HEAR name

' Respond with their name
TALK "Nice to meet you, " + name + "!"
TALK "How can I help you today?"

💡 Tip: Lines starting with ' are comments — they’re ignored by the bot but help you understand the code.

┌─────────────────────────────────────────────────────────────────────────┐
│  📝 greeting.bas                                              [Save] ⌘S │
├─────────────────────────────────────────────────────────────────────────┤
│  1 │ ' ============================================                     │
│  2 │ ' GREETING DIALOG                                                  │
│  3 │ ' A friendly welcome conversation                                  │
│  4 │ ' ============================================                     │
│  5 │                                                                    │
│  6 │ ' Greet the user                                                   │
│  7 │ TALK "Hello! Welcome to our service. 👋"                           │
│  8 │                                                                    │
│  9 │ ' Ask for their name                                               │
│ 10 │ TALK "What is your name?"                                          │
│ 11 │ HEAR name                                                          │
│ 12 │                                                                    │
│ 13 │ ' Respond with their name                                          │
│ 14 │ TALK "Nice to meet you, " + name + "!"                             │
│ 15 │ TALK "How can I help you today?"                                   │
│    │                                                                    │
└─────────────────────────────────────────────────────────────────────────┘

2.5 Save the File

Press Ctrl+S or click the Save button.

Checkpoint: Your dialog file is saved and ready to test.


Step 3: Test Your Dialog

3.1 Open Chat

Click the Apps Menu (⋮⋮⋮) and select Chat.

3.2 Trigger the Dialog

Type the command to run your dialog:

/greeting

Or simply type something that matches “greeting” — the system will recognize it.

3.3 Have the Conversation

Watch your dialog run:

┌─────────────────────────────────────────────────────────────────────────┐
│  💬 Chat                                                                │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│      ┌─────────────────────────────────────────────────────────────┐   │
│      │  👤 You                                                     │   │
│      │  /greeting                                                  │   │
│      └─────────────────────────────────────────────────────────────┘   │
│                                                                         │
│      ┌─────────────────────────────────────────────────────────────┐   │
│      │  🤖 Bot                                                     │   │
│      │  Hello! Welcome to our service. 👋                          │   │
│      │  What is your name?                                         │   │
│      └─────────────────────────────────────────────────────────────┘   │
│                                                                         │
│      ┌─────────────────────────────────────────────────────────────┐   │
│      │  👤 You                                                     │   │
│      │  Sarah                                                      │   │
│      └─────────────────────────────────────────────────────────────┘   │
│                                                                         │
│      ┌─────────────────────────────────────────────────────────────┐   │
│      │  🤖 Bot                                                     │   │
│      │  Nice to meet you, Sarah!                                   │   │
│      │  How can I help you today?                                  │   │
│      └─────────────────────────────────────────────────────────────┘   │
│                                                                         │
├─────────────────────────────────────────────────────────────────────────┤
│  Type your message...                                            [↑]   │
└─────────────────────────────────────────────────────────────────────────┘

Checkpoint: Your dialog runs and responds correctly!


Step 4: Enhance with Logic

Now let’s make our dialog smarter with conditional logic.

4.1 Add Input Validation

Update your dialog to handle different types of input:

' ============================================
' GREETING DIALOG (Enhanced)
' A friendly welcome with input validation
' ============================================

TALK "Hello! Welcome to our service. 👋"
TALK "What is your name?"
HEAR name

' Check if name was provided
IF name = "" THEN
    TALK "I didn't catch your name. That's okay!"
    name = "friend"
END IF

TALK "Nice to meet you, " + name + "!"

4.2 Add Menu Options

Let’s give the user choices:

' ============================================
' GREETING DIALOG (Full Version)
' Welcome with menu options
' ============================================

TALK "Hello! Welcome to our service. 👋"
TALK "What is your name?"
HEAR name

IF name = "" THEN
    name = "friend"
END IF

TALK "Nice to meet you, " + name + "!"
TALK ""
TALK "How can I help you today?"
TALK "1. Learn about our services"
TALK "2. Contact support"
TALK "3. Check my account"
TALK ""
TALK "Please type 1, 2, or 3:"

HEAR choice

SELECT CASE choice
    CASE "1"
        TALK "Great! We offer AI-powered automation for businesses."
        TALK "Would you like to schedule a demo?"
    CASE "2"
        TALK "I'll connect you with our support team."
        TALK "Please describe your issue:"
        HEAR issue
        TALK "Thank you. A support agent will contact you about: " + issue
    CASE "3"
        TALK "To check your account, I'll need to verify your identity."
        TALK "Please enter your email address:"
        HEAR email
        TALK "Looking up account for: " + email
    CASE ELSE
        TALK "I didn't understand that choice."
        TALK "Please type 1, 2, or 3 next time."
END SELECT

TALK ""
TALK "Is there anything else I can help with, " + name + "?"

4.3 Understanding SELECT CASE

┌─────────────────────────────────────────────────────────────────────────┐
│                      SELECT CASE EXPLAINED                              │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│    User types: "2"                                                      │
│         │                                                               │
│         ▼                                                               │
│    ┌─────────────────────────────────────────────────────────────┐     │
│    │  SELECT CASE choice                                          │     │
│    │    ┌─────────────┐                                          │     │
│    │    │ CASE "1"    │──▶ Skip (not matched)                    │     │
│    │    └─────────────┘                                          │     │
│    │    ┌─────────────┐                                          │     │
│    │    │ CASE "2"  ★ │──▶ EXECUTE! ───▶ "I'll connect you..."   │     │
│    │    └─────────────┘                                          │     │
│    │    ┌─────────────┐                                          │     │
│    │    │ CASE "3"    │──▶ Skip (not checked after match)        │     │
│    │    └─────────────┘                                          │     │
│    │    ┌─────────────┐                                          │     │
│    │    │ CASE ELSE   │──▶ Skip (only runs if nothing matched)   │     │
│    │    └─────────────┘                                          │     │
│    │  END SELECT                                                  │     │
│    └─────────────────────────────────────────────────────────────┘     │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Step 5: Remember User Information

5.1 Store User Data

Use SET USER MEMORY to remember information between conversations:

' After getting the name
SET USER MEMORY "name", name

' Later, in another dialog, retrieve it:
savedName = GET USER MEMORY "name"

IF savedName <> "" THEN
    TALK "Welcome back, " + savedName + "!"
ELSE
    TALK "Hello! I don't think we've met before."
END IF

5.2 Store Bot-Wide Data

Use SET BOT MEMORY for data that applies to all users:

' Store a bot-wide counter
visitorCount = GET BOT MEMORY "visitor_count"
IF visitorCount = "" THEN
    visitorCount = 0
END IF
visitorCount = visitorCount + 1
SET BOT MEMORY "visitor_count", visitorCount

TALK "You are visitor number " + visitorCount + " today!"

Complete Example: Support Request Dialog

Here’s a complete, practical dialog you can use as a template:

' ============================================
' SUPPORT REQUEST DIALOG
' Collects support ticket information
' ============================================

' Check if we know this user
userName = GET USER MEMORY "name"

IF userName = "" THEN
    TALK "Hello! I'm here to help you create a support request."
    TALK "First, what's your name?"
    HEAR userName
    SET USER MEMORY "name", userName
ELSE
    TALK "Welcome back, " + userName + "!"
END IF

' Get contact information
TALK "What email should we use to contact you?"
HEAR AS email email

IF email = "" THEN
    TALK "I'll need an email to send you updates."
    HEAR AS email email
END IF

' Get issue category
TALK ""
TALK "What type of issue are you experiencing?"
TALK ""
TALK "1. 🔧 Technical problem"
TALK "2. 💳 Billing question"
TALK "3. 📦 Order status"
TALK "4. ❓ General question"
TALK ""

HEAR category

SELECT CASE category
    CASE "1"
        categoryName = "Technical"
        TALK "I'm sorry you're having technical difficulties."
    CASE "2"
        categoryName = "Billing"
        TALK "I can help with billing questions."
    CASE "3"
        categoryName = "Orders"
        TALK "Let me check on your order."
    CASE ELSE
        categoryName = "General"
        TALK "I'll make sure the right team sees this."
END SELECT

' Get description
TALK ""
TALK "Please describe your issue in detail:"
HEAR description

' Get urgency
TALK ""
TALK "How urgent is this?"
TALK "1. 🔴 Critical - I can't work"
TALK "2. 🟡 High - Affecting my work"
TALK "3. 🟢 Normal - When you get a chance"
HEAR urgency

SELECT CASE urgency
    CASE "1"
        urgencyLevel = "Critical"
    CASE "2"
        urgencyLevel = "High"
    CASE ELSE
        urgencyLevel = "Normal"
END SELECT

' Confirm ticket
TALK ""
TALK "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
TALK "📋 SUPPORT REQUEST SUMMARY"
TALK "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
TALK "Name: " + userName
TALK "Email: " + email
TALK "Category: " + categoryName
TALK "Urgency: " + urgencyLevel
TALK "Issue: " + description
TALK "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
TALK ""
TALK "Should I submit this request? (yes/no)"

HEAR confirm

IF confirm = "yes" OR confirm = "Yes" OR confirm = "YES" THEN
    ' Here you would typically save to a database
    ' For now, just confirm
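    ' A possible persistence sketch (hypothetical; verify the exact
    ' SAVE syntax for your General Bots version before relying on it):
    ' SAVE "tickets.xlsx", userName, email, categoryName, urgencyLevel, description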
    TALK "✅ Your support request has been submitted!"
    TALK "Ticket ID: SR-" + FORMAT(NOW, "yyyyMMddHHmm")
    TALK "You'll receive a confirmation email at " + email
    TALK "Our team typically responds within 24 hours."
ELSE
    TALK "No problem! Your request was not submitted."
    TALK "Feel free to start over when you're ready."
END IF

TALK ""
TALK "Is there anything else I can help with?"

🎉 Congratulations!

You’ve written your first dialog! Here’s what you learned:

┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│    ✓ Created a .bas dialog file                                         │
│    ✓ Used TALK to display messages                                      │
│    ✓ Used HEAR to get user input                                        │
│    ✓ Combined text with variables                                       │
│    ✓ Used IF/THEN/ELSE for decisions                                    │
│    ✓ Used SELECT CASE for menus                                         │
│    ✓ Stored data with SET USER MEMORY                                   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Troubleshooting

Problem: Dialog doesn’t start

Cause: File name or location is incorrect.

Solution:

  1. Verify file ends with .bas
  2. Confirm file is in the .gbdialog folder
  3. Check there are no syntax errors

Problem: “Unexpected token” error

Cause: Syntax error in your code.

Solution:

  1. Check all strings have opening and closing quotes
  2. Verify IF statements have matching END IF
  3. Ensure SELECT CASE has END SELECT

Problem: Variable is empty

Cause: User skipped the HEAR prompt.

Solution:

  1. Add validation: IF variable = "" THEN
  2. Provide a default value
  3. Ask again if needed (see the sketch below)
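
A minimal sketch that combines all three fixes, using only keywords from this tutorial:

TALK "What's your name?"
HEAR name

IF name = "" THEN
    TALK "Sorry, I didn't catch that. Let's try once more:"
    HEAR name
    IF name = "" THEN
        name = "friend"
    END IF
END IF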

Problem: Bot doesn’t remember data

Cause: Not using memory keywords correctly.

Solution:

  1. Use SET USER MEMORY "key", value to save
  2. Use GET USER MEMORY "key" to retrieve
  3. Ensure key names match exactly (case-sensitive)

Quick Reference

Essential Keywords

| Keyword | Purpose | Example |
|---------|---------|---------|
| TALK | Display message | TALK "Hello!" |
| HEAR | Get user input | HEAR name |
| HEAR AS type | Get typed input | HEAR AS email emailVar |
| SET | Set variable | SET x = 5 |
| IF/THEN/ELSE | Conditional | IF x > 5 THEN ... END IF |
| SELECT CASE | Menu choice | SELECT CASE x ... END SELECT |
| SET USER MEMORY | Save user data | SET USER MEMORY "key", value |
| GET USER MEMORY | Load user data | x = GET USER MEMORY "key" |
| SET BOT MEMORY | Save bot data | SET BOT MEMORY "key", value |
| GET BOT MEMORY | Load bot data | x = GET BOT MEMORY "key" |

Common Patterns

Greeting with memory:

name = GET USER MEMORY "name"
IF name = "" THEN
    TALK "What's your name?"
    HEAR name
    SET USER MEMORY "name", name
ELSE
    TALK "Welcome back, " + name + "!"
END IF

Menu with validation:

TALK "Choose: 1, 2, or 3"
HEAR choice
IF choice < "1" OR choice > "3" THEN
    TALK "Invalid choice, using default."
    choice = "1"
END IF

Loop for retries:

attempts = 0
valid = FALSE
WHILE valid = FALSE AND attempts < 3
    TALK "Enter your email:"
    HEAR AS email input
    IF input <> "" THEN
        valid = TRUE
    END IF
    attempts = attempts + 1
WEND

Next Steps

| Next Tutorial | What You'll Learn |
|---------------|-------------------|
| Store User Information | Advanced memory patterns |
| Call External APIs | Connect to web services |
| Send Automated Messages | Scheduled broadcasts |

Best Practices

  1. Comment your code — Use ' for explanations
  2. Validate all input — Never assume users type correctly
  3. Provide defaults — Handle empty responses gracefully
  4. Use clear prompts — Tell users exactly what to type
  5. Confirm important actions — Ask before submitting forms
  6. Use spaces in keywords — SET BOT MEMORY, not SET_BOT_MEMORY
  7. Test thoroughly — Try all menu options and edge cases

Tutorial 5 of 30 • Back to How-To Index • Next: Store User Information →

How To: Add Documents to Knowledge Base

Tutorial 3 of the Knowledge Base Series

Teach your bot from files in 15 minutes


[Diagram: Add Documents to KB - 4 Step Process]

Objective

By the end of this tutorial, you will have:

  • Prepared documents for the knowledge base
  • Uploaded files to your bot’s .gbkb folder
  • Indexed documents for semantic search
  • Tested that your bot can answer questions from the documents

Time Required

⏱️ 15 minutes


Prerequisites

Before you begin, make sure you have:

  • A working bot (see Create Your First Bot)
  • Access to the Drive app
  • Documents to upload (PDF, Word, Text, or Markdown files)

What is a Knowledge Base?

A Knowledge Base (KB) is a collection of documents that your bot uses to answer questions. When a user asks something, the bot searches through these documents to find relevant information.

[Diagram: Knowledge Base Semantic Search Flow]
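
In a dialog, tapping the knowledge base can be as simple as activating it and letting the bot answer. This is a preview of the keywords covered under "Advanced: Using KB in Dialogs" later in this tutorial:

' Activate the knowledge base, then let the bot answer from it
USE KB "mycompany"
TALK "What would you like to know about our documents?"
HEAR question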

Supported File Formats

| Format | Extension | Best For |
|--------|-----------|----------|
| PDF | .pdf | Manuals, reports, official documents |
| Word | .docx, .doc | Policies, procedures, articles |
| Text | .txt | Simple content, FAQs |
| Markdown | .md | Technical documentation |
| Excel | .xlsx, .xls | FAQs, structured data |
| PowerPoint | .pptx | Training materials |
| HTML | .html | Web content |

Step 1: Prepare Your Documents

1.1 Gather Your Files

Collect the documents you want your bot to learn from. Good candidates include:

  • ✅ Product manuals
  • ✅ FAQ documents
  • ✅ Company policies
  • ✅ Help articles
  • ✅ Training materials

1.2 Review Document Quality

Before uploading, check that your documents:

| Check | Why It Matters |
|-------|----------------|
| Text is selectable | Scanned images can't be indexed |
| Content is accurate | Bot will repeat incorrect info |
| Information is current | Outdated docs confuse users |
| No sensitive data | Protect confidential information |

⚠️ Warning: The bot will use exactly what’s in your documents. Remove any outdated or incorrect information first.

1.3 Organize Files (Optional)

For large knowledge bases, organize files into folders by topic:

mycompany.gbkb/
├── 📁 products/
│   ├── product-guide.pdf
│   └── specifications.docx
├── 📁 policies/
│   ├── refund-policy.pdf
│   └── privacy-policy.md
├── 📁 support/
│   ├── faq.docx
│   └── troubleshooting.pdf
└── 📁 training/
    └── onboarding-guide.pptx

Checkpoint: You have documents ready to upload.


Step 2: Upload Files to Knowledge Base

2.1 Open the Drive App

Click the Apps Menu (⋮⋮⋮) and select Drive.

2.2 Navigate to Your Bot’s KB Folder

Navigate to your bot’s knowledge base folder:

📂 mycompany.gbai
   └── 📂 mycompany.gbkb    ◄── Open this folder
┌─────────────────────────────────────────────────────────────────────────┐
│  📁 Drive                                                               │
├─────────────────────────────────────────────────────────────────────────┤
│  📂 mycompany.gbai                                                      │
│     ├── 📂 mycompany.gbdialog                                          │
│     ├── 📂 mycompany.gbot                                               │
│     ├── 📂 mycompany.gbkb     ◄── Knowledge base folder                │
│     │      └── (your documents go here)                                │
│     └── 📂 mycompany.gbdrive                                            │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

2.3 Upload Your Documents

Method A: Drag and Drop

  1. Open your file explorer
  2. Select the documents you want to upload
  3. Drag them into the Drive window

Method B: Upload Button

  1. Click the Upload button (📤)
  2. Select files from your computer
  3. Click Open
┌─────────────────────────────────────────────────────────────────────────┐
│  📁 Drive > mycompany.gbai > mycompany.gbkb                            │
├─────────────────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐  ┌─────────────────┐                              │
│  │ 📤 Upload       │  │ 📁 New Folder   │                              │
│  └─────────────────┘  └─────────────────┘                              │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  📄 company-faq.pdf                              2.3 MB   Just now     │
│  📄 product-manual.docx                          1.1 MB   Just now     │
│  📄 refund-policy.pdf                            0.5 MB   Just now     │
│                                                                         │
│  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ │
│  ↑ Drag files here to upload                                           │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

2.4 Wait for Upload to Complete

You’ll see a progress indicator for each file. Wait until all uploads finish.

💡 Tip: Large files may take longer. PDF files typically upload fastest.

Checkpoint: Your documents appear in the .gbkb folder.


Step 3: Index the Knowledge Base

After uploading, you need to index the documents so the bot can search them.

3.1 Automatic Indexing

In most cases, indexing happens automatically when files are uploaded. Look for:

  • A “Processing…” indicator
  • Files changing from gray to normal color
  • A completion notification

3.2 Manual Indexing (If Needed)

If automatic indexing doesn’t start, trigger it manually:

From Chat:

/reindex

From a BASIC Dialog:

' Clear and rebuild the knowledge base
CLEAR KB
USE KB "mycompany"

3.3 Check Indexing Status

You can check how many documents are indexed:

From Chat:

/kb stats

Expected Output:

┌─────────────────────────────────────────────────────────────────────────┐
│  📊 Knowledge Base Statistics                                          │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  Collection: mycompany                                                  │
│  Documents:  3                                                          │
│  Vectors:    847                                                        │
│  Status:     ● Ready                                                    │
│  Last Index: 2 minutes ago                                              │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Checkpoint: Documents are indexed and ready to search.


Step 4: Test the Knowledge Base

4.1 Open Chat

Click the Apps Menu (⋮⋮⋮) and select Chat.

4.2 Ask a Question from Your Documents

Type a question that can be answered by your uploaded documents:

You: What is the refund policy?

4.3 Verify the Response

The bot should answer using information from your documents:

┌─────────────────────────────────────────────────────────────────────────┐
│  💬 Chat                                                                │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│      ┌─────────────────────────────────────────────────────────────┐   │
│      │  👤 You                                                     │   │
│      │  What is the refund policy?                                 │   │
│      └─────────────────────────────────────────────────────────────┘   │
│                                                                         │
│      ┌─────────────────────────────────────────────────────────────┐   │
│      │  🤖 Bot                                                     │   │
│      │                                                             │   │
│      │  Based on our refund policy document:                       │   │
│      │                                                             │   │
│      │  Customers may request a full refund within 30 days of      │   │
│      │  purchase. After 30 days, refunds are prorated based on     │   │
│      │  usage. To request a refund, contact support@company.com    │   │
│      │  with your order number.                                    │   │
│      │                                                             │   │
│      │  📄 Source: refund-policy.pdf                               │   │
│      └─────────────────────────────────────────────────────────────┘   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

4.4 Test Different Questions

Try several questions to ensure the knowledge base is working:

| Test Question | Expected Source |
|---------------|-----------------|
| "How do I return a product?" | refund-policy.pdf |
| "What are the product specs?" | product-manual.docx |
| "How do I contact support?" | company-faq.pdf |

Checkpoint: Your bot answers questions using the uploaded documents!


🎉 Congratulations!

You’ve successfully added documents to your knowledge base! Here’s what you accomplished:

┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│    ✓ Prepared documents for upload                                      │
│    ✓ Uploaded files to the .gbkb folder                                │
│    ✓ Indexed documents for semantic search                              │
│    ✓ Tested that the bot can answer from documents                      │
│                                                                         │
│    Your bot can now answer questions from your documents!               │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Troubleshooting

Problem: Bot doesn’t find information from documents

Cause: Documents may not be indexed yet.

Solution:

  1. Check indexing status with /kb stats
  2. Wait a few minutes for processing to complete
  3. Try /reindex to force re-indexing

Problem: Bot gives wrong information

Cause: Document contains outdated or incorrect content.

Solution:

  1. Review the source document
  2. Update or replace the incorrect document
  3. Re-index the knowledge base

Problem: “No relevant information found”

Cause: Question doesn’t match document content well enough.

Solution:

  1. Try rephrasing the question
  2. Use keywords that appear in your documents
  3. Check that the document actually contains the answer

Problem: Upload fails

Cause: File too large or unsupported format.

Solution:

  1. Check file size (max 50MB per file)
  2. Verify file format is supported
  3. Try converting to PDF if format issues persist

Problem: PDF text not extracted

Cause: PDF contains scanned images, not selectable text.

Solution:

  1. Use OCR software to convert image-based PDFs
  2. Or recreate the document as a text-based PDF
  3. Consider using Word format instead

Best Practices

Document Organization

┌─────────────────────────────────────────────────────────────────────────┐
│                    RECOMMENDED KB STRUCTURE                             │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  mycompany.gbkb/                                                        │
│  │                                                                      │
│  ├── 📁 policies/          ◄── Company policies                        │
│  │   ├── refund-policy.pdf                                              │
│  │   ├── privacy-policy.pdf                                             │
│  │   └── terms-of-service.pdf                                           │
│  │                                                                      │
│  ├── 📁 products/          ◄── Product documentation                   │
│  │   ├── product-guide.pdf                                              │
│  │   ├── user-manual.pdf                                                │
│  │   └── specifications.xlsx                                            │
│  │                                                                      │
│  ├── 📁 support/           ◄── Support resources                       │
│  │   ├── faq.docx                                                       │
│  │   └── troubleshooting.pdf                                            │
│  │                                                                      │
│  └── 📁 internal/          ◄── Internal documentation                  │
│      ├── processes.docx                                                 │
│      └── guidelines.pdf                                                 │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Content Guidelines

  1. Be specific — Clear, detailed content produces better answers
  2. Use headings — Helps the bot find relevant sections
  3. Include keywords — Use terms users are likely to search for
  4. Update regularly — Keep documents current
  5. Remove duplicates — Avoid conflicting information

Naming Conventions

| ✅ Good Names | ❌ Bad Names |
|--------------|--------------|
| refund-policy-2024.pdf | doc1.pdf |
| product-manual-v2.docx | final final (2).docx |
| employee-handbook.pdf | new document.pdf |

Advanced: Using KB in Dialogs

You can reference the knowledge base in your BASIC dialogs:

' Activate a specific knowledge base
USE KB "mycompany"

' Ask the user what they want to know
TALK "What would you like to know about?"
HEAR question

' The bot will automatically search the KB and respond

Multiple Knowledge Bases

You can have different knowledge bases for different purposes:

' Switch between knowledge bases based on topic
TALK "Are you asking about Products or Policies?"
HEAR topic

IF topic = "Products" THEN
    USE KB "products"
ELSE IF topic = "Policies" THEN
    USE KB "policies"
END IF

TALK "What would you like to know?"
HEAR question

Next Steps

| Next Tutorial | What You'll Learn |
|---------------|-------------------|
| Import a Website | Crawl web pages into your KB |
| Create FAQ Responses | Define question-answer pairs |
| Manage Collections | Organize knowledge by topic |

Quick Reference

Chat Commands

| Command | Description |
|---------|-------------|
| /kb stats | Show knowledge base statistics |
| /reindex | Rebuild the search index |
| /kb list | List all KB collections |

BASIC Keywords

| Keyword | Description | Example |
|---------|-------------|---------|
| USE KB | Activate a KB | USE KB "mycompany" |
| CLEAR KB | Clear current KB | CLEAR KB |
| KB STATISTICS | Get KB info | stats = KB STATISTICS |

File Size Limits

| File Type | Max Size |
|-----------|----------|
| PDF | 50 MB |
| Word | 25 MB |
| Excel | 25 MB |
| Text/MD | 10 MB |

Tutorial 3 of 30 • Back to How-To Index • Next: Import a Website →

How To: Connect WhatsApp

Tutorial 5 of the Channels Series

Connect your bot to WhatsApp in 20 minutes


[Diagram: Connect WhatsApp - 4 Step Process]

Objective

By the end of this tutorial, you will have:

  • Created a Meta Business account
  • Set up a WhatsApp Business App
  • Connected WhatsApp to your General Bots instance
  • Tested the connection with a real message

Time Required

⏱️ 20 minutes


Prerequisites

Before you begin, make sure you have:

  • A working bot (see Create Your First Bot)
  • A phone number for WhatsApp Business (cannot be used with regular WhatsApp)
  • A Facebook account
  • Administrator access to General Bots

Understanding WhatsApp Integration

[Diagram: WhatsApp Integration Flow]

Step 1: Set Up Meta Business Account

1.1 Go to Meta for Developers

Open your browser and navigate to:

https://developers.facebook.com

┌─────────────────────────────────────────────────────────────────────────┐
│  🌐 Browser                                                     [─][□][×]│
├─────────────────────────────────────────────────────────────────────────┤
│  ← → ↻  │ https://developers.facebook.com                        │ ☆ │  │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│                     Meta for Developers                                 │
│                                                                         │
│                   ┌─────────────────────┐                               │
│                   │      Log In         │                               │
│                   └─────────────────────┘                               │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

1.2 Log In with Facebook

  1. Click Log In
  2. Enter your Facebook credentials
  3. Click Log In

1.3 Create a Meta Business Account (If Needed)

If you don’t have a business account:

  1. Go to https://business.facebook.com
  2. Click Create Account
  3. Enter your business name
  4. Enter your name and business email
  5. Click Submit

💡 Note: You can use your personal Facebook account, but a business account is recommended for production use.

Checkpoint: You should now be logged into Meta for Developers.


Step 2: Create a WhatsApp App

2.1 Go to My Apps

Click My Apps in the top navigation.

┌─────────────────────────────────────────────────────────────────────────┐
│  Meta for Developers                           [My Apps ▼] [👤 Account] │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│                              My Apps                                    │
│                              ───────                                    │
│                                                                         │
│                   ┌─────────────────────────┐                           │
│                   │     + Create App        │ ◄── Click here            │
│                   └─────────────────────────┘                           │
│                                                                         │
│                   You don't have any apps yet.                          │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

2.2 Create a New App

  1. Click Create App
  2. Select Business as the app type
  3. Click Next
┌─────────────────────────────────────────────────────────────────────────┐
│  Create an App                                                    [×]   │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  Select an app type:                                                    │
│                                                                         │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐         │
│  │    Consumer     │  │   ● Business    │  │     Gaming      │         │
│  │                 │  │   ◄── Select    │  │                 │         │
│  │  For consumer   │  │                 │  │  For game       │         │
│  │  apps           │  │  For business   │  │  integrations   │         │
│  │                 │  │  integrations   │  │                 │         │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘         │
│                                                                         │
│                                                        [Next]           │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

2.3 Fill In App Details

| Field | What to Enter | Example |
|-------|---------------|---------|
| App Name | Your bot's name | My Company Bot |
| App Contact Email | Your email | admin@company.com |
| Business Account | Select or create | My Company |
┌─────────────────────────────────────────────────────────────────────────┐
│  Add App Details                                                  [×]   │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  App Name:                                                              │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ My Company Bot                                                  │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  App Contact Email:                                                     │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ admin@company.com                                               │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  Business Account:                                                      │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ My Company                                              [▼]     │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│                                                    [Create App]         │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
  1. Click Create App
  2. Complete the security check if prompted

2.4 Add WhatsApp to Your App

  1. In the app dashboard, scroll to Add Products
  2. Find WhatsApp and click Set Up
┌─────────────────────────────────────────────────────────────────────────┐
│  Add Products to Your App                                               │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐         │
│  │   Messenger     │  │   📱 WhatsApp   │  │   Instagram     │         │
│  │                 │  │                 │  │                 │         │
│  │   [Set Up]      │  │   [Set Up] ◄──  │  │   [Set Up]      │         │
│  │                 │  │   Click here    │  │                 │         │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘         │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Checkpoint: WhatsApp should now appear in your app’s left sidebar.


Step 3: Configure WhatsApp Settings

3.1 Get Your API Credentials

In the left sidebar, click WhatsApp → API Setup.

You’ll see:

  • Phone number ID - Identifies your WhatsApp number
  • WhatsApp Business Account ID - Your business account
  • Temporary access token - For testing (expires in 24 hours)
┌─────────────────────────────────────────────────────────────────────────┐
│  WhatsApp > API Setup                                                   │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  STEP 1: Select Phone Numbers                                           │
│  ────────────────────────────                                           │
│                                                                         │
│  From: [Test Number - 15550001234         ▼]                           │
│                                                                         │
│  To: (Add a recipient phone number for testing)                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ +1 555 123 4567                                                 │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  ─────────────────────────────────────────────────────────────────────  │
│                                                                         │
│  STEP 2: Send Messages with the API                                     │
│  ──────────────────────────────────                                     │
│                                                                         │
│  Temporary Access Token:                                                │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ EAAGm0PX4ZCp...                                        [Copy]   │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│  ⚠️ This token expires in 24 hours. Use System User for production.     │
│                                                                         │
│  Phone Number ID: 123456789012345                          [Copy]       │
│  WhatsApp Business Account ID: 987654321098765             [Copy]       │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

📝 Write down these values - You’ll need them in the next step:

  • Phone Number ID: _______________
  • Access Token: _______________

3.2 Create a Permanent Access Token

For production, you need a permanent token:

  1. Go to Business Settings → System Users
  2. Click Add to create a system user
  3. Name it (e.g., “WhatsApp Bot”)
  4. Set role to Admin
  5. Click Generate Token
  6. Select your app and the whatsapp_business_messaging permission
  7. Click Generate Token

💡 Important: Save this token securely! You won’t be able to see it again.

3.3 Configure the Webhook

The webhook tells Meta where to send incoming messages.

  1. In the left sidebar, click WhatsApp → Configuration
  2. Under Webhook, click Edit
┌─────────────────────────────────────────────────────────────────────────┐
│  Webhook Configuration                                            [×]   │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  Callback URL:                                                          │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ https://your-bot-server.com/webhook/whatsapp                    │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  Verify Token:                                                          │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ your-custom-verify-token-here                                   │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  ⚠️ Your server must respond to Meta's verification request             │
│                                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │                    Verify and Save                              │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Enter these values:

| Field | Value |
|-------|-------|
| Callback URL | https://your-server.com/webhook/whatsapp |
| Verify Token | A secret string you create (e.g., my_bot_verify_123) |

  3. Click Verify and Save
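
When you click Verify and Save, Meta sends a one-time GET request to your Callback URL to confirm you control the server. The parameter names below come from Meta's documented webhook handshake; the challenge value is only an example:

GET https://your-server.com/webhook/whatsapp
    ?hub.mode=subscribe
    &hub.verify_token=my_bot_verify_123
    &hub.challenge=1158201444

Your server must check that hub.verify_token matches the Verify Token you configure (see Step 4.2) and respond with the hub.challenge value (here, 1158201444) and HTTP 200. If verification fails, the two tokens almost certainly don't match.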

3.4 Subscribe to Webhook Events

After verifying, select which events to receive:

┌─────────────────────────────────────────────────────────────────────────┐
│  Webhook Fields                                                         │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  ☑ messages              ◄── Required! Receive incoming messages       │
│  ☐ message_template_status_update                                       │
│  ☐ phone_number_name_update                                             │
│  ☐ phone_number_quality_update                                          │
│  ☑ account_review_update                                                │
│  ☐ account_update                                                       │
│  ☐ business_capability_update                                           │
│  ☐ flows                                                                │
│  ☑ security                                                             │
│  ☑ message_echoes                                                       │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

At minimum, select:

  • messages (required - to receive user messages)

Checkpoint: Webhook should show as “Active” with a green indicator.


Step 4: Configure General Bots

4.1 Open Bot Settings

  1. In General Bots, go to Sources
  2. Click ⚙️ on your bot
  3. Go to the Channels tab
┌─────────────────────────────────────────────────────────────────────────┐
│  Bot Settings: support                                            [×]   │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  ┌──────────┬──────────┬──────────┬──────────┬──────────┐              │
│  │ General  │    AI    │*Channels*│ Security │ Advanced │              │
│  └──────────┴──────────┴──────────┴──────────┴──────────┘              │
│                                                                         │
│  CONNECTED CHANNELS                                                     │
│  ──────────────────                                                     │
│                                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ 🌐 Web Chat                                      Status: ● On   │   │
│  │    Embedded widget on your website                              │   │
│  │    [Configure]                                                  │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ 📱 WhatsApp                                      Status: ○ Off  │   │
│  │    Not configured                                               │   │
│  │    [Configure] ◄── Click here                                   │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

4.2 Enter WhatsApp Credentials

Click Configure for WhatsApp and enter your credentials:

┌─────────────────────────────────────────────────────────────────────────┐
│  WhatsApp Configuration                                           [×]   │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  META API CREDENTIALS                                                   │
│  ────────────────────                                                   │
│                                                                         │
│  Phone Number ID:                                                       │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ 123456789012345                                                 │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  Access Token:                                                          │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ ••••••••••••••••••••••••••••••••••••••                          │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  Verify Token:                                                          │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ my_bot_verify_123                                               │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│  ⚠️ Must match the token you set in Meta Developer Portal               │
│                                                                         │
│  ─────────────────────────────────────────────────────────────────────  │
│                                                                         │
│  WEBHOOK URL (provide this to Meta)                                     │
│  ──────────────────────────────────                                     │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ https://your-server.com/webhook/whatsapp                [Copy]  │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  ┌─────────────────┐  ┌─────────────────┐                              │
│  │  Test Connection │  │      Save       │                              │
│  └─────────────────┘  └─────────────────┘                              │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

4.3 Test the Connection

  1. Click Test Connection
  2. You should see “Connection successful!”

If the test fails, check:

  • Token is correct and not expired
  • Phone Number ID is correct
  • Your server is accessible from the internet

4.4 Save and Enable

  1. Click Save
  2. Toggle WhatsApp to On

Checkpoint: WhatsApp should now show Status: ● On


Step 5: Test Your WhatsApp Bot

5.1 Add Test Phone Number

In Meta Developer Portal:

  1. Go to WhatsApp → API Setup
  2. Under “To”, add your phone number
  3. Click Send to receive a test message

5.2 Send a Test Message

  1. Open WhatsApp on your phone
  2. Message the bot’s number (the test number from Meta)
  3. Send: “Hello”
┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│                    WhatsApp                                             │
│                                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │                                                                 │   │
│  │  My Company Bot                                                 │   │
│  │  +1 555 000 1234                                               │   │
│  │                                                                 │   │
│  │  ─────────────────────────────────────────────────────────────  │   │
│  │                                                                 │   │
│  │                                              ┌───────────────┐  │   │
│  │                                              │    Hello      │  │   │
│  │                                              │         10:30 │  │   │
│  │                                              └───────────────┘  │   │
│  │                                                                 │   │
│  │  ┌─────────────────────────────────────────┐                    │   │
│  │  │ Hello! How can I help you today?        │                    │   │
│  │  │ 10:30                                   │                    │   │
│  │  └─────────────────────────────────────────┘                    │   │
│  │                                                                 │   │
│  │                                              ┌───────────────┐  │   │
│  │                                              │ I need help   │  │   │
│  │                                              │ with my order │  │   │
│  │                                              │         10:31 │  │   │
│  │                                              └───────────────┘  │   │
│  │                                                                 │   │
│  │  ┌─────────────────────────────────────────┐                    │   │
│  │  │ I'd be happy to help with your order!   │                    │   │
│  │  │ What's your order number?               │                    │   │
│  │  │ 10:31                                   │                    │   │
│  │  └─────────────────────────────────────────┘                    │   │
│  │                                                                 │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
│  ┌─────────────────────────────────────────────────────────────────┐   │
│  │ Type a message...                                          📎   │   │
│  └─────────────────────────────────────────────────────────────────┘   │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

5.3 Verify in General Bots

Check that the conversation appears in Analytics:

  1. Go to Analytics → Sessions
  2. You should see a new session with channel “WhatsApp”

🎉 Congratulations!

Your bot is now connected to WhatsApp! Users can message your WhatsApp Business number and receive responses from your bot.

┌─────────────────────────────────────────────────────────────────────────┐
│                                                                         │
│                         ✓ SUCCESS!                                      │
│                                                                         │
│                    Your WhatsApp bot is live!                           │
│                                                                         │
│     ┌───────────┐              ┌───────────┐              ┌──────────┐ │
│     │           │              │           │              │          │ │
│     │    📱     │  ────────▶   │    🤖     │  ────────▶   │    💬    │ │
│     │  WhatsApp │              │  General  │              │   Bot    │ │
│     │   User    │  ◀────────   │   Bots    │  ◀────────   │ Response │ │
│     │           │              │           │              │          │ │
│     └───────────┘              └───────────┘              └──────────┘ │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘

Going to Production

Add a Real Phone Number

The test number has limitations. For production:

  1. Go to WhatsApp → Phone Numbers in Meta Developer Portal
  2. Click Add Phone Number
  3. Enter your business phone number
  4. Verify via SMS or voice call
  5. Update the Phone Number ID in General Bots settings

Submit for App Review

Before going live with real customers:

  1. Go to App Review in Meta Developer Portal
  2. Request whatsapp_business_messaging permission
  3. Complete the review process (takes 2-5 business days)

Set Up Message Templates

For proactive messages (not replies), you need approved templates:

  1. Go to WhatsApp → Message Templates
  2. Create templates for notifications, alerts, etc.
  3. Wait for Meta approval (usually 24 hours)

Troubleshooting

Problem: “Webhook verification failed”

Possible causes:

  1. Verify token doesn’t match
  2. Server not responding
  3. HTTPS not configured

Solution:

  1. Check the verify token matches exactly in both places (see the handshake sketch below)
  2. Verify your server is accessible: curl https://your-server.com/webhook/whatsapp
  3. Ensure you have a valid SSL certificate
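
For reference, Meta verifies a webhook with a GET request carrying hub.mode, hub.verify_token, and hub.challenge query parameters; the server must echo the challenge back when the token matches. A minimal Python sketch of that handshake, assuming Flask (botserver already answers this at /webhook/whatsapp; this is only useful when debugging a proxy of your own):

# Minimal sketch of Meta's verification handshake. botserver already
# implements this at /webhook/whatsapp; Flask here is an assumption.
from flask import Flask, request

app = Flask(__name__)
VERIFY_TOKEN = "my_bot_verify_123"  # must match the Meta Developer Portal value

@app.get("/webhook/whatsapp")
def verify_webhook():
    # Meta sends hub.mode=subscribe plus your token and a challenge.
    if (request.args.get("hub.mode") == "subscribe"
            and request.args.get("hub.verify_token") == VERIFY_TOKEN):
        # Echo the challenge with HTTP 200 to complete verification.
        return request.args.get("hub.challenge"), 200
    return "Verification failed", 403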

Problem: Messages not arriving

Possible causes:

  1. Webhook not subscribed to “messages”
  2. App in development mode with unlisted numbers
  3. Access token expired

Solution:

  1. Check webhook subscriptions include “messages”
  2. Add test phone numbers in API Setup
  3. Generate a new access token

Problem: Bot responds slowly

Possible causes:

  1. Network latency
  2. Bot processing time
  3. Meta rate limits

Solution:

  1. Ensure server is geographically close to users
  2. Optimize bot dialog processing
  3. Check rate limit status in Meta dashboard

Problem: “Error 190: Invalid OAuth access token”

Possible causes:

  1. Token expired (temporary tokens last 24 hours)
  2. Token was revoked
  3. Wrong token used

Solution:

  1. Generate a new System User token (permanent)
  2. Update the token in General Bots settings
  3. Verify you’re using the WhatsApp access token, not a Facebook token

Problem: Phone number shows as unverified

Possible causes:

  1. Verification not completed
  2. Two-factor authentication issue
  3. Number already in use

Solution:

  1. Re-request verification code
  2. Check business verification status
  3. Contact Meta support if number was previously registered

What You Learned

In this tutorial, you:

  • ✅ Created a Meta Developer account and app
  • ✅ Configured WhatsApp Cloud API
  • ✅ Set up webhook for incoming messages
  • ✅ Connected WhatsApp to General Bots
  • ✅ Tested the integration with real messages

Next Steps

Now that WhatsApp is connected, try these:

Next Tutorial | What You’ll Learn
Write Your First Dialog | Create custom conversation flows
Add KB Documents | Make your bot smarter
Monitor Sessions | Track WhatsApp conversations

Quick Reference

WhatsApp Message Limits

Tier | Messages/Day | How to Qualify
Unverified | 250 | New accounts
Verified | 1,000 | Complete business verification
Tier 1 | 10,000 | Good quality rating
Tier 2 | 100,000 | Maintain quality
Tier 3 | Unlimited | High volume, good quality

Quality Rating

Meta monitors your WhatsApp quality based on:

  • User blocks and reports
  • Message template quality
  • Response time

Keep quality high by:

  • Responding to all messages
  • Not spamming users
  • Using approved templates for outbound messages

Key URLs

Resource | URL
Meta for Developers | https://developers.facebook.com
Meta Business Suite | https://business.facebook.com
WhatsApp Cloud API Docs | https://developers.facebook.com/docs/whatsapp
API Status | https://metastatus.com

Configuration via config.csv

Configure WhatsApp in your bot’s config.csv:

name,value
whatsapp-phone-number-id,123456789012345
whatsapp-access-token,EAAGm0PX4ZCp...
whatsapp-verify-token,my_bot_verify_123
whatsapp-business-account-id,987654321098765

Note: Do not use environment variables for WhatsApp configuration. All settings belong in config.csv.


Theme Structure

The gbtheme package is simply CSS files that style the bot’s UI. Themes don’t include HTML or JavaScript - they only control appearance.

theme-name.gbtheme/
├── default.css            # Main theme file (required)
├── dark.css              # Optional dark mode variant
├── print.css             # Optional print styles
└── assets/              # Optional theme resources
    ├── images/
    ├── fonts/
    └── icons/

Design Principles

  • CSS-only theming – Themes are pure CSS files, no HTML or JavaScript modifications
  • CSS Variables – Use CSS custom properties for colors, spacing, and other values
  • Responsive design – Use media queries within your CSS for mobile-first layouts
  • Asset locality – Optional assets/ folder for theme-specific images, fonts, and icons

Creating Your Theme

  1. Create a .gbtheme folder in your bot package
  2. Add a default.css file with your styles
  3. Override CSS variables to change colors and spacing
  4. Add optional assets like fonts or background images

The system automatically picks up any theme placed under @/templates/… when the bot’s configuration (.gbtheme entry in config.csv) points to the folder name.

Theme Loading Process

  1. Discovery: Bot looks for theme folder in work/{bot_name}/{bot_name}.gbtheme/
  2. Validation: Checks for required files (at least one CSS file)
  3. Registration: Theme becomes available in theme selector
  4. Activation: User selects theme or bot loads default
  5. Hot Reload: Changes apply immediately without restart

File Organization Best Practices

CSS File Options

You can have multiple CSS files in your theme:

mybot.gbtheme/
├── default.css       # Main theme (loaded automatically)
├── dark.css         # Dark mode variant
├── mobile.css       # Mobile-specific overrides
└── print.css        # Print media styles

Or keep everything in a single file - your choice!

Asset Management

assets/
├── images/
│   ├── logo.svg      # Vector graphics preferred
│   ├── bg.webp       # Modern formats for performance
│   └── icons/        # Icon set
├── fonts/
│   └── custom.woff2  # Web fonts if needed
└── data/
    └── theme.json    # Theme metadata

Creating a Custom Theme

Step 1: Create Theme Folder

mkdir -p work/mybot/mybot.gbtheme

Step 2: Create Your CSS

Create default.css with CSS variables:

:root {
  /* Brand Colors */
  --brand-primary: #your-color;
  --brand-secondary: #your-color;
  
  /* Semantic Colors */
  --color-success: #10b981;
  --color-warning: #f59e0b;
  --color-error: #ef4444;
  
  /* Typography */
  --font-family: 'Inter', system-ui, sans-serif;
  --font-size-base: 16px;
  --line-height: 1.5;
  
  /* Spacing Scale */
  --space-xs: 0.25rem;
  --space-sm: 0.5rem;
  --space-md: 1rem;
  --space-lg: 2rem;
  --space-xl: 4rem;
}

Step 3: Style Components

Add your component styles in the same file:

/* Custom message bubbles */
.message-user {
  background: var(--brand-primary);
  color: white;
  border-radius: 18px 18px 4px 18px;
}

.message-bot {
  background: #f3f4f6;
  border: 1px solid #e5e7eb;
  border-radius: 18px 18px 18px 4px;
}

Using the Theme

Once you’ve created your CSS file, the bot will automatically load it. You can switch between themes using BASIC:

' Switch to a different theme
CHANGE THEME "dark"

' Back to default
CHANGE THEME "default"

Performance Optimization

CSS Loading Strategy

  1. Critical CSS: Inline essential styles in HTML
  2. Async Loading: Load non-critical CSS asynchronously
  3. Minification: Minify CSS for production
  4. Purging: Remove unused CSS rules

Asset Optimization

  • Use SVG for logos and icons
  • Implement lazy loading for images
  • Serve WebP with fallbacks
  • Enable gzip compression

Theme Selection

Themes are switched via BASIC commands, not JavaScript. The system handles the CSS file swapping automatically.

Accessibility Considerations

  • Maintain WCAG 2.1 AA contrast ratios
  • Support high contrast mode
  • Include focus indicators
  • Test with screen readers

Advanced Personalization Options

Beyond CSS Theming

While themes handle visual styling, you have more options for deeper UI customization:

  1. Extend default.gbui - The UI templates in .gbui packages can be modified:

    • Copy the default UI templates to your bot’s .gbui folder
    • Modify the HTML structure to fit your needs
    • Add custom components and layouts
    • The system will use your UI instead of the default
  2. Create Your Own UI Type - Build a completely custom interface:

    • Design your own UI framework
    • Implement custom WebSocket handlers
    • Create unique interaction patterns
    • Full control over the user experience

Join the Community

We encourage you to contribute! The General Bots project welcomes:

  • UI Improvements - Submit pull requests with better default UIs
  • Theme Collections - Share your creative themes
  • Custom UI Types - Develop new interaction paradigms
  • Documentation - Help improve these guides

Using General Bots as a Foundation

General Bots is designed to be a starting point for your own projects:

Fork the project → Customize the UI → Build your product

You can:

  • Use it as a base for commercial products
  • Create industry-specific bot interfaces
  • Develop specialized UI frameworks
  • Build on top of the core engine

The architecture is intentionally modular - take what you need, replace what you don’t.

Getting Started with UI Development

  1. Study the default.gbui - Understand the current structure
  2. Fork the repository - Create your own version
  3. Experiment freely - The UI layer is independent
  4. Share your work - Help others learn from your innovations

Remember: The UI is just HTML/CSS/JS talking to the bot via WebSocket. You have complete freedom to reimagine how users interact with your bot!

Next Step

Continue to CSS Customization for detailed styling techniques.

CSS Customization

The gbtheme CSS files define the visual style of the bot UI. They are split into three layers to make them easy to extend.

Files

File | Role
main.css | Core layout, typography, and global variables.
components.css | Styles for reusable UI components (buttons, cards, modals).
responsive.css | Media queries for mobile, tablet, and desktop breakpoints.

CSS Variables (in main.css)

:root {
  --primary-color: #2563eb;
  --secondary-color: #64748b;
  --background-color: #ffffff;
  --text-color: #1e293b;
  --border-radius: 8px;
  --spacing-unit: 8px;
}

Changing a variable updates the entire theme without editing individual rules.

Extending the Theme

  1. Add a new variable – Append to :root and reference it in any selector.
  2. Override a component – Duplicate the selector in components.css after the original definition; the later rule wins.
  3. Create a dark mode – Add a @media (prefers-color-scheme: dark) block that redefines the variables.
@media (prefers-color-scheme: dark) {
  :root {
    --primary-color: #3b82f6;
    --background-color: #111827;
    --text-color: #f9fafb;
  }
}

Best Practices

  • Keep the file size small – avoid large image data URIs; store images in assets/.
  • Use rem units for font sizes; they scale with the root font-size.
  • Limit the depth of nesting; flat selectors improve performance.

All CSS files are loaded in index.html in the order: main.css, components.css, responsive.css.

Component Styling Guide

Message Bubbles

Customize chat message appearance:

/* User messages */
.message-user {
  background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
  color: white;
  padding: 12px 16px;
  border-radius: 18px 18px 4px 18px;
  max-width: 70%;
  margin-left: auto;
}

/* Bot messages */
.message-bot {
  background: #f7fafc;
  color: #2d3748;
  padding: 12px 16px;
  border-radius: 18px 18px 18px 4px;
  max-width: 70%;
  border: 1px solid #e2e8f0;
}

/* Typing indicator */
.typing-indicator {
  display: inline-flex;
  padding: 16px;
  background: #edf2f7;
  border-radius: 18px;
}

.typing-indicator span {
  height: 8px;
  width: 8px;
  background: #718096;
  border-radius: 50%;
  margin: 0 2px;
  animation: typing 1.4s infinite;
}

Input Field

Style the message input area:

.input-container {
  padding: 16px;
  background: white;
  border-top: 1px solid #e2e8f0;
}

.input-wrapper {
  display: flex;
  align-items: center;
  background: #f7fafc;
  border: 2px solid #e2e8f0;
  border-radius: 24px;
  padding: 8px 16px;
  transition: all 0.2s;
}

.input-wrapper:focus-within {
  border-color: var(--primary-color);
  background: white;
  box-shadow: 0 0 0 3px rgba(66, 153, 225, 0.1);
}

.message-input {
  flex: 1;
  border: none;
  background: transparent;
  outline: none;
  font-size: 16px;
}

.send-button {
  background: var(--primary-color);
  color: white;
  border: none;
  border-radius: 50%;
  width: 36px;
  height: 36px;
  display: flex;
  align-items: center;
  justify-content: center;
  cursor: pointer;
  transition: transform 0.2s;
}

.send-button:hover {
  transform: scale(1.1);
}

.send-button:active {
  transform: scale(0.95);
}

Buttons

Consistent button styling:

/* Primary button */
.btn-primary {
  background: var(--primary-color);
  color: white;
  border: none;
  padding: 10px 20px;
  border-radius: 8px;
  font-weight: 500;
  cursor: pointer;
  transition: all 0.2s;
}

.btn-primary:hover {
  filter: brightness(110%);
  box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
}

/* Secondary button */
.btn-secondary {
  background: transparent;
  color: var(--primary-color);
  border: 2px solid var(--primary-color);
  padding: 8px 18px;
  border-radius: 8px;
  font-weight: 500;
  cursor: pointer;
  transition: all 0.2s;
}

.btn-secondary:hover {
  background: var(--primary-color);
  color: white;
}

/* Icon button */
.btn-icon {
  background: transparent;
  border: none;
  width: 40px;
  height: 40px;
  border-radius: 50%;
  display: flex;
  align-items: center;
  justify-content: center;
  cursor: pointer;
  transition: background 0.2s;
}

.btn-icon:hover {
  background: rgba(0, 0, 0, 0.05);
}

Animation Library

Entrance Animations

@keyframes slideInUp {
  from {
    transform: translateY(20px);
    opacity: 0;
  }
  to {
    transform: translateY(0);
    opacity: 1;
  }
}

@keyframes fadeIn {
  from { opacity: 0; }
  to { opacity: 1; }
}

@keyframes scaleIn {
  from {
    transform: scale(0.95);
    opacity: 0;
  }
  to {
    transform: scale(1);
    opacity: 1;
  }
}

/* Apply animations */
.message {
  animation: slideInUp 0.3s ease-out;
}

.modal {
  animation: scaleIn 0.2s ease-out;
}

Loading States

/* Spinner */
.spinner {
  width: 40px;
  height: 40px;
  border: 3px solid #e2e8f0;
  border-top-color: var(--primary-color);
  border-radius: 50%;
  animation: spin 0.8s linear infinite;
}

@keyframes spin {
  to { transform: rotate(360deg); }
}

/* Skeleton loader */
.skeleton {
  background: linear-gradient(90deg, #f0f0f0 25%, #e0e0e0 50%, #f0f0f0 75%);
  background-size: 200% 100%;
  animation: loading 1.5s infinite;
}

@keyframes loading {
  0% { background-position: 200% 0; }
  100% { background-position: -200% 0; }
}

Responsive Design Patterns

Mobile-First Approach

/* Base mobile styles */
.container {
  padding: 16px;
  width: 100%;
}

/* Tablet and up */
@media (min-width: 768px) {
  .container {
    padding: 24px;
    max-width: 768px;
    margin: 0 auto;
  }
}

/* Desktop */
@media (min-width: 1024px) {
  .container {
    padding: 32px;
    max-width: 1024px;
  }
}

/* Wide screens */
@media (min-width: 1440px) {
  .container {
    max-width: 1280px;
  }
}

Touch-Friendly Styles

/* Increase touch targets on mobile */
@media (pointer: coarse) {
  button, a, input, select {
    min-height: 44px;
    min-width: 44px;
  }
  
  .btn-primary, .btn-secondary {
    padding: 12px 24px;
    font-size: 16px;
  }
}

/* Disable hover effects on touch devices */
@media (hover: none) {
  .btn-primary:hover {
    filter: none;
    box-shadow: none;
  }
}

Theme Variants

Dark Mode

@media (prefers-color-scheme: dark) {
  :root {
    --primary-color: #60a5fa;
    --secondary-color: #94a3b8;
    --background-color: #0f172a;
    --text-color: #f1f5f9;
    --border-color: #334155;
  }
  
  .message-bot {
    background: #1e293b;
    color: #f1f5f9;
    border-color: #334155;
  }
  
  .input-wrapper {
    background: #1e293b;
    border-color: #334155;
  }
}

High Contrast

@media (prefers-contrast: high) {
  :root {
    --primary-color: #0066cc;
    --text-color: #000000;
    --background-color: #ffffff;
  }
  
  * {
    border-width: 2px !important;
  }
  
  button:focus, input:focus {
    outline: 3px solid #000000 !important;
    outline-offset: 2px !important;
  }
}

Performance Tips

  1. Use CSS Variables: Change themes by updating variables, not entire stylesheets
  2. Minimize Specificity: Keep selectors simple for faster parsing
  3. Avoid Deep Nesting: Maximum 3 levels deep
  4. Use Transform/Opacity: For animations instead of layout properties
  5. Lazy Load Non-Critical CSS: Load theme variations on demand

Browser Compatibility

/* Provide fallbacks for older browsers */
.gradient-bg {
  background: #3b82f6; /* Fallback */
  background: linear-gradient(135deg, #3b82f6 0%, #8b5cf6 100%);
}

/* Use @supports for progressive enhancement */
@supports (backdrop-filter: blur(10px)) {
  .modal-backdrop {
    backdrop-filter: blur(10px);
  }
}

Next Step

Return to Chapter 5 Overview or continue to Chapter 6: BASIC Dialogs.

Chapter 8: REST API & Tools

HTTP API endpoints for integrating with botserver.

Overview

botserver exposes REST endpoints organized by functional area. All endpoints follow consistent patterns for authentication, pagination, and error handling.

Base URL

http://localhost:8000/api/v1

Authentication

Authorization: Bearer <token>

API Categories

Category | Prefix | Description
User APIs | /api/user/* | Personal settings, profile, preferences
Admin APIs | /api/admin/* | Organization management (requires admin role)
Files | /files/* | Drive operations
Chat | /chat/* | Conversations and messages

User vs Admin Endpoints

The API separates user-level and admin-level operations:

User Endpoints (/api/user/*):

  • Personal profile and settings
  • User’s own files and data
  • Individual preferences
  • Accessible by all authenticated users

Admin Endpoints (/api/admin/*):

  • Organization-wide settings
  • User management
  • Group management
  • DNS, billing, audit logs
  • Requires admin role

Quick Example

curl -X POST http://localhost:8000/api/v1/chat \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello", "session_id": "abc123"}'

Response Format

{
  "success": true,
  "data": { ... },
  "error": null
}
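
A minimal Python sketch that calls the chat endpoint from the Quick Example and unwraps this envelope (the bearer token is a placeholder):

import requests

BASE_URL = "http://localhost:8000/api/v1"
TOKEN = "your-token"  # placeholder bearer token

resp = requests.post(
    f"{BASE_URL}/chat",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"message": "Hello", "session_id": "abc123"},
)
envelope = resp.json()

# Every endpoint wraps its payload in the same envelope:
# check success before touching data.
if envelope["success"]:
    print(envelope["data"])
else:
    print("API error:", envelope["error"])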

Files API Reference

Complete file and document management operations including upload, download, copy, move, search, sharing, and synchronization.

Overview

The Files API provides comprehensive file management capabilities built on top of S3-compatible storage. All file operations support both single files and folders with recursive operations.

Base Path: /api/files

Authentication

All endpoints require authentication. Include session token in headers:

Authorization: Bearer <token>

File Operations

List Files

List files and folders in a bucket or path.

Endpoint: GET /api/files/list

Query Parameters:

  • bucket (optional) - Bucket name
  • path (optional) - Folder path

Response:

{
  "success": true,
  "data": [
    {
      "name": "document.pdf",
      "path": "/documents/document.pdf",
      "is_dir": false,
      "size": 1048576,
      "modified": "2024-01-15T10:30:00Z",
      "icon": "📄"
    },
    {
      "name": "images",
      "path": "/images",
      "is_dir": true,
      "size": null,
      "modified": "2024-01-15T09:00:00Z",
      "icon": "📁"
    }
  ]
}

Example:

curl -X GET "http://localhost:3000/api/files/list?bucket=my-bucket&path=/documents" \
  -H "Authorization: Bearer <token>"

Read File

Read file content from storage.

Endpoint: POST /api/files/read

Request Body:

{
  "bucket": "my-bucket",
  "path": "/documents/file.txt"
}

Response:

{
  "content": "File content here..."
}

Example:

curl -X POST "http://localhost:3000/api/files/read" \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"bucket":"my-bucket","path":"/file.txt"}'

Get File Contents

Alias for read file with alternative naming.

Endpoint: POST /api/files/getContents

Same parameters and response as /api/files/read.

Write File

Write or update file content.

Endpoint: POST /api/files/write

Request Body:

{
  "bucket": "my-bucket",
  "path": "/documents/file.txt",
  "content": "New file content"
}

Response:

{
  "success": true,
  "message": "File written successfully"
}

Save File

Alias for write file.

Endpoint: POST /api/files/save

Same parameters and response as /api/files/write.

Upload File

Upload file to storage.

Endpoint: POST /api/files/upload

Request Body:

{
  "bucket": "my-bucket",
  "path": "/documents/upload.pdf",
  "content": "base64_encoded_content_or_text"
}

Response:

{
  "success": true,
  "message": "File uploaded successfully"
}
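
Because the endpoint accepts base64-encoded content, binary files can be encoded client-side before upload. A minimal Python sketch (bucket, path, and token are placeholders):

import base64
import requests

TOKEN = "your-token"  # placeholder

# Base64-encode the binary file so it survives the JSON payload.
with open("report.pdf", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "http://localhost:3000/api/files/upload",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "bucket": "my-bucket",
        "path": "/documents/report.pdf",
        "content": encoded,
    },
)
print(resp.json())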

Download File

Download file from storage.

Endpoint: POST /api/files/download

Request Body:

{
  "bucket": "my-bucket",
  "path": "/documents/file.pdf"
}

Response:

{
  "content": "file_content"
}

Copy File

Copy file or folder to another location.

Endpoint: POST /api/files/copy

Request Body:

{
  "source_bucket": "my-bucket",
  "source_path": "/documents/original.pdf",
  "dest_bucket": "my-bucket",
  "dest_path": "/backup/copy.pdf"
}

Response:

{
  "success": true,
  "message": "File copied successfully"
}

Move File

Move file or folder to another location.

Endpoint: POST /api/files/move

Request Body:

{
  "source_bucket": "my-bucket",
  "source_path": "/documents/file.pdf",
  "dest_bucket": "archive-bucket",
  "dest_path": "/archived/file.pdf"
}

Response:

{
  "success": true,
  "message": "File moved successfully"
}

Note: Move operation copies the file and then deletes the source.

Delete File

Delete file or folder.

Endpoint: POST /api/files/delete

Request Body:

{
  "bucket": "my-bucket",
  "path": "/documents/file.pdf"
}

Response:

{
  "success": true,
  "message": "Deleted successfully"
}

Note: If path ends with /, all objects with that prefix are deleted (recursive folder deletion).

Create Folder

Create a new folder.

Endpoint: POST /api/files/createFolder

Request Body:

{
  "bucket": "my-bucket",
  "path": "/documents",
  "name": "new-folder"
}

Response:

{
  "success": true,
  "message": "Folder created successfully"
}

Alternative Endpoint: POST /api/files/create-folder (dash notation)

List Folder Contents

List contents of a specific folder.

Endpoint: POST /api/files/dirFolder

Request Body:

{
  "bucket": "my-bucket",
  "path": "/documents"
}

Response:

[
  {
    "name": "file1.pdf",
    "path": "/documents/file1.pdf",
    "is_dir": false,
    "size": 1024,
    "modified": "2024-01-15T10:30:00Z",
    "icon": "📄"
  }
]

Search and Discovery

Search Files

Search for files across buckets.

Endpoint: GET /api/files/search

Query Parameters:

  • bucket (optional) - Limit search to specific bucket
  • query (required) - Search term
  • file_type (optional) - File extension filter (e.g., “.pdf”)

Response:

[
  {
    "name": "matching-file.pdf",
    "path": "/documents/matching-file.pdf",
    "is_dir": false,
    "size": 2048576,
    "modified": "2024-01-15T10:30:00Z",
    "icon": "📄"
  }
]

Example:

curl -X GET "http://localhost:3000/api/files/search?query=report&file_type=.pdf" \
  -H "Authorization: Bearer <token>"

Recent Files

Get recently modified files.

Endpoint: GET /api/files/recent

Query Parameters:

  • bucket (optional) - Filter by bucket

Response:

[
  {
    "name": "recent-file.txt",
    "path": "/documents/recent-file.txt",
    "is_dir": false,
    "size": 1024,
    "modified": "2024-01-15T14:30:00Z",
    "icon": "📃"
  }
]

Note: Returns up to 50 most recently modified files, sorted by modification date descending.

Favorite Files

List user’s favorite files.

Endpoint: GET /api/files/favorite

Response:

[]

Note: Currently returns empty array. Favorite functionality to be implemented.

Sharing and Permissions

Share Folder

Share folder with other users.

Endpoint: POST /api/files/shareFolder

Request Body:

{
  "bucket": "my-bucket",
  "path": "/documents/shared",
  "users": ["user1@example.com", "user2@example.com"],
  "permissions": "read-write"
}

Response:

{
  "share_id": "550e8400-e29b-41d4-a716-446655440000",
  "url": "https://share.example.com/550e8400-e29b-41d4-a716-446655440000",
  "expires_at": "2024-01-22T10:30:00Z"
}

List Shared Files

Get files and folders shared with user.

Endpoint: GET /api/files/shared

Response:

[]

Get Permissions

Get permissions for file or folder.

Endpoint: GET /api/files/permissions

Query Parameters:

  • bucket (required) - Bucket name
  • path (required) - File/folder path

Response:

{
  "bucket": "my-bucket",
  "path": "/documents/file.pdf",
  "permissions": {
    "read": true,
    "write": true,
    "delete": true,
    "share": true
  },
  "shared_with": []
}

Storage Management

Get Quota

Check storage quota information.

Endpoint: GET /api/files/quota

Response:

{
  "total_bytes": 100000000000,
  "used_bytes": 45678901234,
  "available_bytes": 54321098766,
  "percentage_used": 45.68
}

Example:

curl -X GET "http://localhost:3000/api/files/quota" \
  -H "Authorization: Bearer <token>"

Synchronization

Sync Status

Get current synchronization status.

Endpoint: GET /api/files/sync/status

Response:

{
  "status": "idle",
  "last_sync": "2024-01-15T10:30:00Z",
  "files_synced": 0,
  "bytes_synced": 0
}

Status values:

  • idle - No sync in progress
  • syncing - Sync in progress
  • error - Sync error occurred
  • paused - Sync paused

Start Sync

Start file synchronization.

Endpoint: POST /api/files/sync/start

Response:

{
  "success": true,
  "message": "Sync started"
}

Stop Sync

Stop file synchronization.

Endpoint: POST /api/files/sync/stop

Response:

{
  "success": true,
  "message": "Sync stopped"
}
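
A minimal Python sketch that starts a sync and polls the status endpoint until it leaves the syncing state (the token is a placeholder and the 5-second interval is an arbitrary choice):

import time
import requests

BASE = "http://localhost:3000/api/files"
HEADERS = {"Authorization": "Bearer your-token"}  # placeholder token

requests.post(f"{BASE}/sync/start", headers=HEADERS)

# Poll until the sync leaves the "syncing" state.
while True:
    status = requests.get(f"{BASE}/sync/status", headers=HEADERS).json()
    if status["status"] != "syncing":
        break
    time.sleep(5)  # arbitrary polling interval

print("Final state:", status["status"], "- files synced:", status["files_synced"])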

File Icons

Files are automatically assigned icons based on extension:

Extension | Icon | Type
.bas | ⚙️ | BASIC script
.ast | 🔧 | AST file
.csv | 📊 | Spreadsheet
.gbkb | 📚 | Knowledge base
.json | 🔖 | JSON data
.txt, .md | 📃 | Text
.pdf | 📕 | PDF document
.zip, .tar, .gz | 📦 | Archive
.jpg, .png, .gif | 🖼️ | Image
folder | 📁 | Directory
.gbai | 🤖 | Bot package
default | 📄 | Generic file

Error Handling

Common error responses:

Service Unavailable:

{
  "error": "S3 service not available"
}

Status: 503

File Not Found:

{
  "error": "Failed to read file: NoSuchKey"
}

Status: 500

Invalid UTF-8:

{
  "error": "File is not valid UTF-8"
}

Status: 500

Best Practices

  1. Large Files: For files > 5MB, consider chunked uploads
  2. Batch Operations: Use batch endpoints when operating on multiple files
  3. Path Naming: Use forward slashes, avoid special characters
  4. Permissions: Always check permissions before operations
  5. Error Handling: Implement retry logic for transient failures (see the sketch after this list)
  6. Quotas: Monitor quota usage to prevent storage exhaustion
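
A minimal Python sketch of the retry pattern from practice 5, assuming exponential backoff on 5xx responses:

import time
import requests

def post_with_retry(url, headers, payload, attempts=3):
    """POST with exponential backoff on transient (5xx) failures."""
    for attempt in range(attempts):
        resp = requests.post(url, headers=headers, json=payload, timeout=30)
        if resp.status_code < 500:
            return resp  # success, or a client error not worth retrying
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    resp.raise_for_status()

Client errors (4xx) are returned immediately, since retrying a bad request never helps.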

Examples

Upload and Share Workflow

// 1. Upload file
const uploadResponse = await fetch('/api/files/upload', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer token',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    bucket: 'my-bucket',
    path: '/documents/report.pdf',
    content: fileContent
  })
});

// 2. Share with team
const shareResponse = await fetch('/api/files/shareFolder', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer token',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    bucket: 'my-bucket',
    path: '/documents',
    users: ['team@example.com'],
    permissions: 'read-write'
  })
});

const { url } = await shareResponse.json();
console.log('Share URL:', url);

Search and Download

import requests

# Search for files
response = requests.get(
    'http://localhost:3000/api/files/search',
    params={'query': 'report', 'file_type': '.pdf'},
    headers={'Authorization': 'Bearer token'}
)

files = response.json()

# Download first result
if files:
    download_response = requests.post(
        'http://localhost:3000/api/files/download',
        json={
            'bucket': 'my-bucket',
            'path': files[0]['path']
        },
        headers={'Authorization': 'Bearer token'}
    )
    
    content = download_response.json()['content']
    with open('downloaded.pdf', 'w') as f:
        f.write(content)

Users API

The Users API provides endpoints for user management operations. User authentication is handled through Zitadel, with botserver maintaining session associations and user preferences.

Overview

User management in General Bots follows a federated model:

  • Zitadel: Primary identity provider (authentication, SSO, user creation)
  • botserver: Session management, preferences, bot-specific user data

Endpoints

Get Current User

GET /api/users/me

Returns current authenticated user information.

Headers:

Authorization: Bearer {session_token}

Response:

{
  "user_id": "user-123",
  "username": "john_doe",
  "email": "john@example.com",
  "display_name": "John Doe",
  "avatar_url": "/api/users/user-123/avatar",
  "roles": ["user", "manager"],
  "created_at": "2024-01-01T00:00:00Z",
  "last_login": "2024-01-15T10:30:00Z"
}

Get User by ID

GET /api/users/:id

Retrieve specific user details.

Required Permission: admin:users or same user

Response:

{
  "user_id": "user-123",
  "username": "john_doe",
  "email": "john@example.com",
  "display_name": "John Doe",
  "status": "active",
  "created_at": "2024-01-01T00:00:00Z"
}

List Users

GET /api/users

List users in the organization.

Required Permission: admin:users

Query Parameters:

  • limit - Number of results (default: 50, max: 100)
  • offset - Pagination offset
  • status - Filter by status (active/suspended/inactive)
  • role - Filter by role
  • search - Search by name or email

Response:

{
  "users": [
    {
      "user_id": "user-123",
      "username": "john_doe",
      "email": "john@example.com",
      "display_name": "John Doe",
      "status": "active",
      "roles": ["user", "manager"]
    },
    {
      "user_id": "user-456",
      "username": "jane_smith",
      "email": "jane@example.com",
      "display_name": "Jane Smith",
      "status": "active",
      "roles": ["user"]
    }
  ],
  "total": 47,
  "limit": 50,
  "offset": 0
}
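
A minimal Python sketch that pages through all active users with limit and offset (the server URL and token are placeholders):

import requests

HEADERS = {"Authorization": "Bearer your-token"}  # placeholder token
users, offset, limit = [], 0, 50

# Keep requesting pages until every user has been collected.
while True:
    page = requests.get(
        "http://localhost:8000/api/users",
        params={"limit": limit, "offset": offset, "status": "active"},
        headers=HEADERS,
    ).json()
    users.extend(page["users"])
    offset += limit
    if offset >= page["total"]:
        break

print(f"Fetched {len(users)} active users")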

Update User

PUT /api/users/:id

Update user information.

Required Permission: admin:users or same user (limited fields)

Request:

{
  "display_name": "John D. Doe",
  "avatar_url": "https://example.com/avatar.jpg"
}

Admin-only fields:

{
  "status": "suspended",
  "roles": ["user"]
}

Response:

{
  "user_id": "user-123",
  "status": "updated",
  "updated_fields": ["display_name"]
}

Update User Settings

PUT /api/users/:id/settings

Update user preferences.

Request:

{
  "theme": "dark",
  "language": "en",
  "notifications": {
    "email": true,
    "push": false,
    "digest": "daily"
  },
  "default_bot": "support-bot"
}

Response:

{
  "status": "updated",
  "settings": {
    "theme": "dark",
    "language": "en"
  }
}

Get User Settings

GET /api/users/:id/settings

Retrieve user preferences.

Response:

{
  "theme": "dark",
  "language": "en",
  "timezone": "America/New_York",
  "notifications": {
    "email": true,
    "push": false,
    "digest": "daily"
  },
  "default_bot": "support-bot"
}

Suspend User

POST /api/users/:id/suspend

Suspend a user account.

Required Permission: admin:users

Request:

{
  "reason": "Policy violation"
}

Response:

{
  "user_id": "user-123",
  "status": "suspended",
  "suspended_at": "2024-01-15T10:30:00Z"
}

Activate User

POST /api/users/:id/activate

Reactivate a suspended user.

Required Permission: admin:users

Response:

{
  "user_id": "user-123",
  "status": "active",
  "activated_at": "2024-01-15T10:30:00Z"
}

Delete User

DELETE /api/users/:id

Deactivate/delete user account.

Required Permission: admin:users

Response:

{
  "user_id": "user-123",
  "status": "deleted",
  "deleted_at": "2024-01-15T10:30:00Z"
}

User Sessions

List User Sessions

GET /api/users/:id/sessions

List active sessions for a user.

Response:

{
  "sessions": [
    {
      "session_id": "sess-001",
      "bot_id": "support-bot",
      "started_at": "2024-01-15T09:00:00Z",
      "last_activity": "2024-01-15T10:30:00Z",
      "device": "Chrome on Windows"
    }
  ]
}

Terminate Session

DELETE /api/users/:id/sessions/:session_id

End a specific user session.

Response:

{
  "session_id": "sess-001",
  "status": "terminated"
}

Terminate All Sessions

DELETE /api/users/:id/sessions

End all user sessions (logout everywhere).

Response:

{
  "terminated_count": 3,
  "status": "all_sessions_terminated"
}

User Authentication Flow

Login

POST /api/users/login

Authenticate user (redirects to Zitadel).

Request:

{
  "email": "user@example.com",
  "password": "password",
  "remember_me": true
}

Response:

{
  "redirect_url": "https://auth.yourdomain.com/oauth/authorize?..."
}

Logout

POST /api/users/logout

End current session.

Response:

{
  "status": "logged_out",
  "redirect_url": "/"
}

Register

POST /api/users/register

Register new user (if self-registration enabled).

Request:

{
  "email": "newuser@example.com",
  "username": "newuser",
  "password": "SecurePassword123!",
  "display_name": "New User"
}

Response:

{
  "user_id": "user-789",
  "status": "pending_verification",
  "message": "Check your email to verify your account"
}

User Management via Zitadel

For full user management, access Zitadel admin console:

  1. Access Console: http://localhost:9000 (or your Zitadel URL)
  2. Create Users: Organization → Users → Add
  3. Manage Roles: Users → Select User → Authorizations
  4. Reset Passwords: Users → Select User → Actions → Reset Password
  5. Configure SSO: Settings → Identity Providers

Database Schema

botserver maintains minimal user data:

-- users table (synced from Zitadel)
CREATE TABLE users (
    id UUID PRIMARY KEY,
    zitadel_id TEXT UNIQUE,
    username TEXT,
    email TEXT,
    display_name TEXT,
    avatar_url TEXT,
    status TEXT DEFAULT 'active',
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- user_settings table
CREATE TABLE user_settings (
    id UUID PRIMARY KEY,
    user_id UUID REFERENCES users(id),
    setting_key TEXT NOT NULL,
    setting_value TEXT,
    UNIQUE(user_id, setting_key)
);

-- user_sessions table
CREATE TABLE sessions (
    id UUID PRIMARY KEY,
    user_id UUID REFERENCES users(id),
    bot_id UUID,
    status TEXT DEFAULT 'active',
    device_info TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    last_activity TIMESTAMPTZ DEFAULT NOW()
);

Error Handling

Status Code | Error | Description
400 | invalid_request | Malformed request
401 | unauthorized | Not authenticated
403 | forbidden | Insufficient permissions
404 | user_not_found | User doesn’t exist
409 | conflict | Username/email already exists
422 | validation_error | Invalid field values

Rate Limits

Endpoint | Limit
Login | 10/minute per IP
Register | 5/hour per IP
User List | 60/minute per user
User Update | 30/minute per user

BASIC Integration

Access user information in scripts:

' Get current user info
user_name = GET user_name
user_email = GET user_email

' Greet by name
TALK "Hello, " + user_name + "!"

' Check user role
role = GET role
IF role = "admin" THEN
    TALK "Welcome, administrator!"
END IF

Groups API

The Groups API provides endpoints for managing groups and organizations through Zitadel integration.

Overview

Groups in botserver represent organizations in Zitadel. They provide multi-tenant support and user grouping capabilities.

Endpoints

Create Group

POST /groups/create

Creates a new group/organization.

Request:

{
  "name": "Engineering Team",
  "description": "Software engineering department",
  "domain": "engineering.example.com"
}

Response:

{
  "id": "org-123",
  "name": "Engineering Team",
  "created_at": "2024-01-20T10:00:00Z"
}

Update Group

PUT /groups/:id/update

Updates group information.

Request:

{
  "name": "Updated Name",
  "description": "Updated description"
}

Response:

{
  "id": "org-123",
  "name": "Updated Name",
  "updated_at": "2024-01-20T11:00:00Z"
}

Delete Group

DELETE /groups/:id/delete

Deletes a group/organization.

Response:

{
  "success": true,
  "message": "Group deleted successfully"
}

List Groups

GET /groups/list

Lists all groups accessible to the user.

Query Parameters:

  • limit - Maximum number of results (default: 20)
  • offset - Pagination offset

Response:

{
  "groups": [
    {
      "id": "org-123",
      "name": "Engineering Team",
      "member_count": 25,
      "created_at": "2024-01-20T10:00:00Z"
    }
  ],
  "total": 1
}

Get Group Members

GET /groups/:id/members

Retrieves members of a specific group.

Response:

{
  "members": [
    {
      "user_id": "user-456",
      "username": "john_doe",
      "email": "john@example.com",
      "role": "member",
      "joined_at": "2024-01-15T09:00:00Z"
    }
  ],
  "total": 1
}

Add Group Member

POST /groups/:id/members/add

Adds a user to a group.

Request:

{
  "user_id": "user-789",
  "role": "member"
}

Response:

{
  "success": true,
  "message": "Member added successfully"
}

Remove Group Member

DELETE /groups/:id/members/remove

Removes a user from a group.

Request:

{
  "user_id": "user-789"
}

Response:

{
  "success": true,
  "message": "Member removed successfully"
}

Implementation Details

Zitadel Integration

All group operations are proxied to Zitadel:

  • Groups map to Zitadel organizations
  • Members are managed through Zitadel’s org API
  • Permissions inherited from Zitadel roles

Data Model

Groups are not stored in botserver’s database. All data comes from Zitadel:

  • Group metadata from Zitadel orgs
  • Membership from Zitadel org members
  • Permissions from Zitadel policies

Error Responses

All endpoints may return standard error responses:

{
  "error": "Group not found",
  "code": "GROUP_NOT_FOUND",
  "status": 404
}

Common error codes:

  • GROUP_NOT_FOUND - Group doesn’t exist
  • UNAUTHORIZED - User lacks permission
  • MEMBER_EXISTS - User already in group
  • MEMBER_NOT_FOUND - User not in group
  • ZITADEL_ERROR - Upstream service error

Permissions

Group operations require appropriate Zitadel permissions:

  • Create: Organization admin
  • Update: Organization owner or admin
  • Delete: Organization owner
  • List: Authenticated user
  • View Members: Group member
  • Add/Remove Members: Group admin

Rate Limiting

Group endpoints are rate-limited:

  • 100 requests per minute for read operations
  • 20 requests per minute for write operations

Best Practices

  1. Cache Group Data: Groups change infrequently (see the caching sketch after this list)
  2. Batch Operations: Use bulk endpoints when available
  3. Handle Zitadel Errors: Gracefully handle upstream failures
  4. Validate Permissions: Check user has required role
  5. Audit Changes: Log all group modifications
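
A minimal Python sketch of the caching pattern from practice 1, using an in-process TTL cache (the 5-minute TTL, server URL, and token are placeholders):

import time
import requests

HEADERS = {"Authorization": "Bearer your-token"}  # placeholder token
TTL_SECONDS = 300  # arbitrary; groups change infrequently
_cache = {}  # group_id -> (fetched_at, members)

def get_group_members(group_id):
    """Return cached members while fresh; otherwise refetch from the API."""
    hit = _cache.get(group_id)
    if hit and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]
    resp = requests.get(
        f"http://localhost:8000/groups/{group_id}/members", headers=HEADERS
    )
    members = resp.json()["members"]
    _cache[group_id] = (time.time(), members)
    return members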

Conversations API

The Conversations API provides endpoints for managing chat conversations, message history, and real-time communication.

Overview

Conversations in General Bots are handled primarily through WebSocket connections for real-time messaging, with REST endpoints for history retrieval and session management.

Endpoints

Start Conversation

POST /api/conversations/start

Initiates a new conversation with a bot.

Request:

{
  "bot_id": "bot-123",
  "initial_message": "Hello"
}

Response:

{
  "conversation_id": "conv-456",
  "session_id": "session-789",
  "status": "active"
}

Send Message

POST /api/conversations/:id/messages

Sends a message in an existing conversation.

Request:

{
  "content": "User message",
  "attachments": []
}

Response:

{
  "message_id": "msg-123",
  "timestamp": "2024-01-15T10:30:00Z",
  "status": "delivered"
}

Get Conversation History

GET /api/conversations/:id/history

Retrieves message history for a conversation.

Query Parameters:

  • limit - Number of messages (default: 50, max: 100)
  • before - Messages before timestamp
  • after - Messages after timestamp

Response:

{
  "messages": [
    {
      "id": "msg-001",
      "sender": "user",
      "content": "Hello",
      "timestamp": "2024-01-15T10:00:00Z"
    },
    {
      "id": "msg-002",
      "sender": "bot",
      "content": "Hi! How can I help you?",
      "timestamp": "2024-01-15T10:00:01Z"
    }
  ],
  "has_more": false
}
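
Putting the three endpoints together, a minimal Python sketch of a REST-only flow: start a conversation, send a message, then read the history (the server URL and token are placeholders):

import requests

BASE = "http://localhost:8000/api/conversations"
HEADERS = {"Authorization": "Bearer your-token"}  # placeholder token

# 1. Start a conversation with a bot.
conv = requests.post(
    f"{BASE}/start",
    headers=HEADERS,
    json={"bot_id": "bot-123", "initial_message": "Hello"},
).json()

# 2. Send a follow-up message.
requests.post(
    f"{BASE}/{conv['conversation_id']}/messages",
    headers=HEADERS,
    json={"content": "I need help with my order", "attachments": []},
)

# 3. Read the history back.
history = requests.get(
    f"{BASE}/{conv['conversation_id']}/history",
    headers=HEADERS,
    params={"limit": 10},
).json()
for msg in history["messages"]:
    print(f"{msg['sender']}: {msg['content']}")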

List Conversations

GET /api/conversations

Lists user’s conversations.

Query Parameters:

  • bot_id - Filter by bot
  • status - Filter by status (active/archived)
  • limit - Number of results
  • offset - Pagination offset

Response:

{
  "conversations": [
    {
      "id": "conv-456",
      "bot_id": "bot-123",
      "bot_name": "Support Bot",
      "last_message": "Thank you!",
      "last_activity": "2024-01-15T10:30:00Z",
      "status": "active"
    }
  ],
  "total": 1
}

WebSocket Protocol

Real-time messaging uses WebSocket connections at /ws.

Message Types

Type | Direction | Description
message | Both | Chat message
typing | Server→Client | Bot is typing
suggestion | Server→Client | Quick reply suggestions
status | Server→Client | Connection status
error | Server→Client | Error notification

Send Message Format

{
  "type": "message",
  "content": "Hello",
  "session_id": "session-123"
}

Receive Message Format

{
  "type": "message",
  "sender": "bot",
  "content": "Hi! How can I help you?",
  "timestamp": "2024-01-15T10:00:01Z"
}
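
A minimal Python client sketch using the third-party websockets library and the message formats above (the server URL and session ID are placeholders):

import asyncio
import json
import websockets  # third-party: pip install websockets

async def chat():
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        # Send a chat message in the documented format.
        await ws.send(json.dumps({
            "type": "message",
            "content": "Hello",
            "session_id": "session-123",
        }))
        # Frames such as "typing" may arrive before the actual reply.
        async for frame in ws:
            event = json.loads(frame)
            if event["type"] == "message":
                print("bot:", event["content"])
                break

asyncio.run(chat())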

Anonymous Conversations

Anonymous users can chat without authentication:

  • Session created automatically on WebSocket connect
  • Limited to default bot only
  • No history persistence
  • Session expires after inactivity

Authenticated Conversations

Logged-in users get additional features:

  • Full conversation history
  • Multiple bot access
  • Cross-device sync
  • Persistent sessions

Database Schema

Conversations are stored in:

-- sessions table
CREATE TABLE sessions (
    id UUID PRIMARY KEY,
    user_id UUID,
    bot_id UUID,
    status TEXT,
    created_at TIMESTAMPTZ,
    updated_at TIMESTAMPTZ
);

-- message_history table  
CREATE TABLE message_history (
    id UUID PRIMARY KEY,
    session_id UUID REFERENCES sessions(id),
    sender TEXT,
    content TEXT,
    metadata JSONB,
    created_at TIMESTAMPTZ
);

Error Handling

Status Code | Error | Description
400 | invalid_message | Malformed message content
401 | unauthorized | Authentication required
403 | forbidden | No access to conversation
404 | not_found | Conversation doesn’t exist
429 | rate_limited | Too many messages

Rate Limits

Endpoint | Limit
Messages | 60/minute per user
History | 100/minute per user
List | 30/minute per user

Email API

The Email API provides endpoints for email operations including sending, receiving, and managing email accounts through the Stalwart mail server integration.

Overview

Email functionality in General Bots is available through:

  1. REST API - Documented in this chapter
  2. BASIC Keywords - SEND MAIL for scripts
  3. Email Module - Background processing and IMAP/SMTP integration

Endpoints

Send Email

POST /api/email/send

Send an email message.

Request:

{
  "to": ["recipient@example.com"],
  "cc": ["cc@example.com"],
  "bcc": [],
  "subject": "Meeting Tomorrow",
  "body": "Hi, just a reminder about our meeting.",
  "body_type": "text",
  "attachments": []
}

Response:

{
  "message_id": "msg-abc123",
  "status": "sent",
  "timestamp": "2024-01-15T10:30:00Z"
}

Body Types:

  • text - Plain text
  • html - HTML formatted
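
A minimal Python sketch that sends the plain-text example above through this endpoint (the server URL and token are placeholders):

import requests

resp = requests.post(
    "http://localhost:8000/api/email/send",
    headers={"Authorization": "Bearer your-token"},  # placeholder token
    json={
        "to": ["recipient@example.com"],
        "cc": [],
        "bcc": [],
        "subject": "Meeting Tomorrow",
        "body": "Hi, just a reminder about our meeting.",
        "body_type": "text",
        "attachments": [],
    },
)
print(resp.json())  # expect status "sent" with a message_id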

List Emails

GET /api/email/inbox

Retrieve inbox messages.

Query Parameters:

  • folder - Folder name (default: INBOX)
  • limit - Number of messages (default: 50)
  • offset - Pagination offset
  • unread - Filter unread only (boolean)
  • since - Messages since date (ISO 8601)

Response:

{
  "messages": [
    {
      "id": "email-001",
      "from": "sender@example.com",
      "subject": "Hello",
      "preview": "Just wanted to say hi...",
      "date": "2024-01-15T09:00:00Z",
      "read": false,
      "has_attachments": false
    }
  ],
  "total": 142,
  "unread_count": 5
}

Get Email

GET /api/email/:id

Get specific email details.

Response:

{
  "id": "email-001",
  "from": {
    "name": "John Doe",
    "email": "john@example.com"
  },
  "to": [
    {
      "name": "You",
      "email": "you@example.com"
    }
  ],
  "cc": [],
  "subject": "Meeting Notes",
  "body": "Here are the notes from today's meeting...",
  "body_html": "<p>Here are the notes from today's meeting...</p>",
  "date": "2024-01-15T09:00:00Z",
  "read": true,
  "attachments": [
    {
      "id": "att-001",
      "filename": "notes.pdf",
      "size": 102400,
      "content_type": "application/pdf"
    }
  ]
}

Delete Email

DELETE /api/email/:id

Delete an email message.

Response:

{
  "status": "deleted",
  "message_id": "email-001"
}

Get Attachment

GET /api/email/:id/attachments/:attachment_id

Download an email attachment.

Response: Binary file with appropriate Content-Type header.

Mark as Read

PUT /api/email/:id/read

Mark email as read.

Request:

{
  "read": true
}

Move Email

PUT /api/email/:id/move

Move email to a different folder.

Request:

{
  "folder": "Archive"
}

List Folders

GET /api/email/folders

List available email folders.

Response:

{
  "folders": [
    {
      "name": "INBOX",
      "path": "INBOX",
      "unread_count": 5,
      "total_count": 142
    },
    {
      "name": "Sent",
      "path": "Sent",
      "unread_count": 0,
      "total_count": 89
    },
    {
      "name": "Drafts",
      "path": "Drafts",
      "unread_count": 0,
      "total_count": 3
    }
  ]
}

Create Draft

POST /api/email/drafts

Create an email draft.

Request:

{
  "to": ["recipient@example.com"],
  "subject": "Draft subject",
  "body": "Draft content..."
}

Response:

{
  "draft_id": "draft-001",
  "status": "saved"
}

Send Draft

POST /api/email/drafts/:id/send

Send a previously saved draft.

Response:

{
  "message_id": "msg-abc123",
  "status": "sent"
}

Email Accounts

List Accounts

GET /api/email/accounts

List configured email accounts.

Response:

{
  "accounts": [
    {
      "id": "account-001",
      "email": "user@example.com",
      "provider": "stalwart",
      "status": "connected"
    }
  ]
}

Add Account

POST /api/email/accounts

Add a new email account.

Request:

{
  "email": "user@example.com",
  "imap_server": "imap.example.com",
  "imap_port": 993,
  "smtp_server": "smtp.example.com",
  "smtp_port": 587,
  "username": "user@example.com",
  "password": "app-specific-password"
}

Response:

{
  "account_id": "account-002",
  "status": "connected",
  "message": "Account added successfully"
}

BASIC Integration

Use email in your BASIC scripts:

' Simple email
SEND MAIL "recipient@example.com", "Subject", "Body"

' With variables
TALK "Who should I email?"
recipient = HEAR

TALK "What's the subject?"
subject = HEAR

TALK "What's the message?"
body = HEAR

SEND MAIL recipient, subject, body
TALK "Email sent!"

Configuration

Configure email in config.csv:

key,value
smtp-server,smtp.gmail.com
smtp-port,587
imap-server,imap.gmail.com
imap-port,993
email-username,your-email@gmail.com
email-password,your-app-password
email-from,Your Name <your-email@gmail.com>

Gmail Configuration:

  • Use App Passwords (not your main password)
  • Enable IMAP in Gmail settings
  • Allow less secure apps or use OAuth

Stalwart Mail Server

When using the built-in Stalwart mail server:

Automatic Configuration:

  • Server runs on standard ports (25, 993, 587)
  • Accounts created through Zitadel integration
  • TLS certificates auto-managed

Manual Configuration:

key,value
stalwart-enabled,true
stalwart-domain,mail.yourdomain.com
stalwart-admin-password,secure-password

Error Handling

Status Code | Error | Description
400 | invalid_recipient | Invalid email address
401 | unauthorized | Authentication required
403 | forbidden | No access to mailbox
404 | not_found | Email not found
422 | send_failed | SMTP delivery failed
503 | service_unavailable | Mail server offline

Rate Limits

Endpoint | Limit
Send | 100/hour per user
Inbox | 300/hour per user
Attachments | 50/hour per user

Email Read Tracking

General Bots supports email read tracking via an invisible 1x1 pixel embedded in HTML emails. When enabled, you can track when recipients open your emails.

Configuration

Enable tracking in config.csv:

name,value
email-read-pixel,true
server-url,https://yourdomain.com

How It Works

  1. When sending an HTML email, a tracking pixel is automatically injected
  2. When the recipient opens the email, their email client loads the pixel
  3. The server records the open event with timestamp and metadata
  4. You can query the tracking status via API (see the sketch below) or view it in the Suite UI
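
A minimal Python sketch that checks whether a tracked email has been opened, using the status endpoint documented below (the tracking ID, server URL, and token are placeholders):

import requests

tracking_id = "550e8400-e29b-41d4-a716-446655440000"  # placeholder
resp = requests.get(
    f"http://localhost:8000/api/email/tracking/status/{tracking_id}",
    headers={"Authorization": "Bearer your-token"},  # placeholder token
)
status = resp.json()["data"]
if status["is_read"]:
    print(f"Opened {status['read_count']} time(s), first at {status['read_at']}")
else:
    print("Not opened yet")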

Tracking Endpoints

Serve Tracking Pixel

GET /api/email/tracking/pixel/:tracking_id

This endpoint is called automatically by email clients when loading the tracking pixel. It returns a 1x1 transparent GIF and records the read event.

Response: Binary GIF image (1x1 pixel)

Headers Set:

  • Content-Type: image/gif
  • Cache-Control: no-store, no-cache, must-revalidate, max-age=0

Get Tracking Status

GET /api/email/tracking/status/:tracking_id

Get the read status for a specific sent email.

Response:

{
  "success": true,
  "data": {
    "tracking_id": "550e8400-e29b-41d4-a716-446655440000",
    "to_email": "recipient@example.com",
    "subject": "Meeting Tomorrow",
    "sent_at": "2024-01-15T10:30:00Z",
    "is_read": true,
    "read_at": "2024-01-15T14:22:00Z",
    "read_count": 3
  }
}

List Tracked Emails

GET /api/email/tracking/list

List all sent emails with their tracking status.

Query Parameters:

  • account_id - Filter by email account (optional)
  • limit - Number of results (default: 50)
  • offset - Pagination offset (default: 0)
  • filter - Filter by status: all, read, unread (default: all)

Response:

{
  "success": true,
  "data": [
    {
      "tracking_id": "550e8400-e29b-41d4-a716-446655440000",
      "to_email": "recipient@example.com",
      "subject": "Meeting Tomorrow",
      "sent_at": "2024-01-15T10:30:00Z",
      "is_read": true,
      "read_at": "2024-01-15T14:22:00Z",
      "read_count": 3
    },
    {
      "tracking_id": "661e8400-e29b-41d4-a716-446655440001",
      "to_email": "another@example.com",
      "subject": "Project Update",
      "sent_at": "2024-01-15T11:00:00Z",
      "is_read": false,
      "read_at": null,
      "read_count": 0
    }
  ]
}

Get Tracking Statistics

GET /api/email/tracking/stats

Get aggregate statistics for email tracking.

Response:

{
  "success": true,
  "data": {
    "total_sent": 150,
    "total_read": 98,
    "read_rate": 65.33,
    "avg_time_to_read_hours": 4.5
  }
}

Tracking Data Stored

For each tracked email, the following data is recorded:

Field | Description
tracking_id | Unique ID embedded in the pixel URL
to_email | Recipient email address
subject | Email subject line
sent_at | Timestamp when email was sent
is_read | Whether email has been opened
read_at | Timestamp of first open
read_count | Number of times opened
first_read_ip | IP address of first open
last_read_ip | IP address of most recent open
user_agent | Browser/client user agent string

Privacy Considerations

  • Email tracking should be used responsibly
  • Consider disclosing tracking in your email footer
  • Some email clients block tracking pixels by default
  • Users may have images disabled, preventing tracking
  • GDPR/LGPD may require consent for tracking

Suite UI Integration

The Suite email interface shows tracking status:

  • 📊 Tracking folder shows all tracked emails
  • Green checkmarks (✓✓) indicate opened emails
  • Gray checkmarks indicate sent but unread
  • Hover over emails to see open timestamp
  • Statistics panel shows open rates

Security Notes

  1. Never hardcode credentials - Use config.csv
  2. Use App Passwords - Not main account passwords
  3. Enable TLS - Always use encrypted connections
  4. Audit sending - Log all outbound emails

Database Schema

-- user_email_accounts
CREATE TABLE user_email_accounts (
    id UUID PRIMARY KEY,
    user_id UUID REFERENCES users(id),
    email TEXT NOT NULL,
    imap_server TEXT,
    smtp_server TEXT,
    encrypted_password TEXT,
    created_at TIMESTAMPTZ
);

-- email_drafts
CREATE TABLE email_drafts (
    id UUID PRIMARY KEY,
    user_id UUID REFERENCES users(id),
    recipients JSONB,
    subject TEXT,
    body TEXT,
    attachments JSONB,
    created_at TIMESTAMPTZ,
    updated_at TIMESTAMPTZ
);

-- sent_email_tracking (for read receipts)
CREATE TABLE sent_email_tracking (
    id UUID PRIMARY KEY,
    tracking_id UUID NOT NULL UNIQUE,
    bot_id UUID NOT NULL,
    account_id UUID NOT NULL,
    from_email VARCHAR(255) NOT NULL,
    to_email VARCHAR(255) NOT NULL,
    cc TEXT,
    bcc TEXT,
    subject TEXT NOT NULL,
    sent_at TIMESTAMPTZ NOT NULL,
    is_read BOOLEAN NOT NULL DEFAULT FALSE,
    read_at TIMESTAMPTZ,
    read_count INTEGER NOT NULL DEFAULT 0,
    first_read_ip VARCHAR(45),
    last_read_ip VARCHAR(45),
    user_agent TEXT,
    created_at TIMESTAMPTZ NOT NULL,
    updated_at TIMESTAMPTZ NOT NULL
);
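
As an illustration, the aggregate read rate reported by the stats endpoint can be derived directly from this table (a PostgreSQL sketch; the actual query used by botserver may differ):

-- Aggregate read rate for one bot, similar to /api/email/tracking/stats
SELECT
    COUNT(*) AS total_sent,
    COUNT(*) FILTER (WHERE is_read) AS total_read,
    ROUND(100.0 * COUNT(*) FILTER (WHERE is_read) / NULLIF(COUNT(*), 0), 2) AS read_rate
FROM sent_email_tracking
WHERE bot_id = $1;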

Calendar API

The Calendar API provides endpoints for managing events, schedules, and time-based activities within botserver.

Status

⚠️ NOT IMPLEMENTED

This API is planned for future development but is not currently available in botserver.

Planned Features

The Calendar API will enable event creation and management, meeting scheduling, availability checking, recurring events, calendar synchronization, and reminders with notifications.

Planned Endpoints

Event Management

Event management endpoints handle the lifecycle of calendar events. Create events with POST /api/v1/calendar/events, list events with GET /api/v1/calendar/events, retrieve specific event details with GET /api/v1/calendar/events/{event_id}, update events with PATCH /api/v1/calendar/events/{event_id}, and delete events with DELETE /api/v1/calendar/events/{event_id}.

Scheduling

Scheduling endpoints help coordinate meetings. Find available time slots with POST /api/v1/calendar/schedule, schedule meetings with POST /api/v1/calendar/meeting, and check availability with GET /api/v1/calendar/availability.

Recurring Events

Recurring event endpoints manage events that repeat on a schedule. Create recurring events with POST /api/v1/calendar/events/recurring and update recurrence patterns with PATCH /api/v1/calendar/events/{event_id}/recurrence.

Reminders

Reminder endpoints manage notifications for upcoming events. Add reminders with POST /api/v1/calendar/events/{event_id}/reminders and list upcoming reminders with GET /api/v1/calendar/reminders.

Planned Integration with BASIC

When implemented, calendar features will be accessible via BASIC keywords:

' Create event (not yet available)
event_id = CREATE EVENT "Team Meeting", "2024-02-01 14:00"
SET EVENT DURATION event_id, 60  ' 60 minutes

' Check availability (not yet available)
available = CHECK AVAILABILITY "user123", "2024-02-01"
IF available THEN
    TALK "User is available"
END IF

' Schedule meeting (not yet available)
meeting_id = SCHEDULE MEETING participants, datetime, duration
SEND INVITES meeting_id

Planned Data Models

Event

{
  "event_id": "evt_123",
  "title": "Team Meeting",
  "description": "Weekly sync",
  "start_time": "2024-02-01T14:00:00Z",
  "end_time": "2024-02-01T15:00:00Z",
  "location": "Conference Room A",
  "attendees": ["user123", "user456"],
  "recurrence": {
    "frequency": "weekly",
    "interval": 1,
    "days_of_week": ["monday"],
    "end_date": "2024-12-31"
  },
  "reminders": [
    {"minutes_before": 15, "method": "notification"},
    {"minutes_before": 60, "method": "email"}
  ]
}

Availability

{
  "user_id": "user123",
  "date": "2024-02-01",
  "time_slots": [
    {"start": "09:00", "end": "10:00", "available": true},
    {"start": "10:00", "end": "11:00", "available": false},
    {"start": "11:00", "end": "12:00", "available": true}
  ]
}

Planned Features Detail

Event Types

The API will support several event types. Single events are one-time occurrences. Recurring events follow daily, weekly, or monthly patterns. All-day events span the full day without specific start and end times. Multi-day events extend across multiple consecutive days.

Notification Methods

Notifications can be delivered through in-app notifications, email reminders, SMS alerts when configured, and bot messages through the chat interface.

Calendar Views

The API will support multiple calendar views including day view for detailed hourly scheduling, week view for weekly planning, month view for long-term visibility, and agenda view for a list-based perspective.

Time Zone Support

Time zone handling will include user-specific time zones, automatic daylight saving time adjustments, and cross-timezone meeting coordination to ensure events display correctly for all participants.

Integration Points

The calendar system will integrate with external calendar systems like Google Calendar and Outlook, video conferencing platforms, the task management system, and the notification system for reminders.

Implementation Considerations

When implemented, the Calendar API will use PostgreSQL for event storage, support iCal format for import and export, handle time zones properly across all operations, provide conflict detection for scheduling, include role-based access control for event management, support delegation for assistants, and enable calendar sharing between users.

Alternative Solutions

Until the Calendar API is implemented, consider these alternatives.

External Calendar Services

You can integrate with external providers such as Google Calendar API, Microsoft Graph API for Outlook, or CalDAV servers for standards-based calendar access.

Simple Scheduling in BASIC

For basic appointment tracking, you can store appointments in bot memory:

' Store appointments in bot memory
appointment = "Meeting with client at 2 PM"
SET BOT MEMORY "appointment_" + date, appointment

' Retrieve appointments
today_appointment = GET BOT MEMORY "appointment_" + TODAY()
IF today_appointment <> "" THEN
    TALK "Today's appointment: " + today_appointment
END IF

Task-based Scheduling

An alternative approach uses the Tasks API with due dates, creates tasks for time-sensitive items, and sets reminders via scheduled BASIC scripts.
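
For example, using the task keywords documented in the Tasks API chapter (a sketch; see that chapter for the full keyword reference):

' Track a time-sensitive item as a task instead of a calendar event
task_id = CREATE TASK "Call client about contract renewal", "user456"
SET TASK PRIORITY task_id, "high"
SET TASK DUE DATE task_id, NOW() + 24 * 3600    ' due in 24 hours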

Future Integration

The Calendar API will integrate with the Tasks API to link tasks to calendar events, the Notifications API for event reminders, the User API for user availability, and the Meeting API for video conferencing.

Workaround Example

Until the Calendar API is available, you can implement basic scheduling:

' Simple appointment booking system
FUNCTION BookAppointment(date, time, description)
    key = "appointment_" + date + "_" + time
    existing = GET BOT MEMORY key
    
    IF existing = "" THEN
        SET BOT MEMORY key, description
        TALK "Appointment booked for " + date + " at " + time
        RETURN TRUE
    ELSE
        TALK "That time slot is already taken"
        RETURN FALSE
    END IF
END FUNCTION

' Check availability
FUNCTION CheckAvailability(date)
    slots = ["09:00", "10:00", "11:00", "14:00", "15:00", "16:00"]
    available = []
    
    FOR EACH slot IN slots
        key = "appointment_" + date + "_" + slot
        appointment = GET BOT MEMORY key
        IF appointment = "" THEN
            available = APPEND(available, slot)
        END IF
    NEXT
    
    RETURN available
END FUNCTION

Status Updates

Check the GitHub repository for updates on Calendar API implementation status.

Tasks API

botserver provides RESTful endpoints for creating, managing, and tracking tasks and workflows within bot conversations.

Overview

The Tasks API enables:

  • Task creation and assignment
  • Workflow management
  • Task tracking and status updates
  • Deadline management
  • Task prioritization
  • Collaboration features

Base URL

http://localhost:9000/api/v1/tasks

Authentication

All Tasks API requests require authentication:

Authorization: Bearer <token>

Endpoints

Create Task

POST /tasks

Create a new task.

Request Body:

{
  "title": "Review customer complaint",
  "description": "Investigate and respond to customer issue #1234",
  "assignee": "user456",
  "due_date": "2024-01-20T17:00:00Z",
  "priority": "high",
  "tags": ["support", "urgent"],
  "context": {
    "conversation_id": "conv_abc123",
    "bot_id": "support_bot"
  }
}

Response:

{
  "task_id": "tsk_xyz789",
  "title": "Review customer complaint",
  "status": "pending",
  "created_at": "2024-01-15T10:00:00Z",
  "created_by": "user123"
}
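
The same request as cURL (illustrative):

curl -X POST \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"title": "Review customer complaint", "assignee": "user456", "priority": "high"}' \
  http://localhost:9000/api/v1/tasks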

Get Task

GET /tasks/{task_id}

Retrieve task details.

Response:

{
  "task_id": "tsk_xyz789",
  "title": "Review customer complaint",
  "description": "Investigate and respond to customer issue #1234",
  "status": "in_progress",
  "assignee": {
    "user_id": "user456",
    "name": "Jane Smith",
    "avatar_url": "https://example.com/avatar.jpg"
  },
  "priority": "high",
  "due_date": "2024-01-20T17:00:00Z",
  "created_at": "2024-01-15T10:00:00Z",
  "updated_at": "2024-01-15T14:30:00Z",
  "progress": 60,
  "time_spent_minutes": 45,
  "comments_count": 3,
  "attachments_count": 2
}

Update Task

PATCH /tasks/{task_id}

Update task properties.

Request Body:

{
  "status": "in_progress",
  "progress": 60,
  "assignee": "user789"
}

Response:

{
  "task_id": "tsk_xyz789",
  "updated": true,
  "updated_fields": ["status", "progress", "assignee"],
  "updated_at": "2024-01-15T14:30:00Z"
}

List Tasks

GET /tasks

List tasks with filtering and pagination.

Query Parameters:

  • status - Filter by status: pending, in_progress, completed, cancelled
  • assignee - Filter by assignee user ID
  • priority - Filter by priority: low, medium, high, critical
  • due_before - Tasks due before date
  • due_after - Tasks due after date
  • tags - Comma-separated tags
  • page - Page number (default: 1)
  • limit - Items per page (default: 20)
  • sort - Sort by: created_at, due_date, priority, updated_at
  • order - Sort order: asc, desc

Response:

{
  "tasks": [
    {
      "task_id": "tsk_xyz789",
      "title": "Review customer complaint",
      "status": "in_progress",
      "assignee": "user456",
      "priority": "high",
      "due_date": "2024-01-20T17:00:00Z",
      "progress": 60
    }
  ],
  "total": 42,
  "page": 1,
  "limit": 20
}
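
For example, fetching high-priority work in progress, earliest deadline first (illustrative):

curl -H "Authorization: Bearer <token>" \
  "http://localhost:9000/api/v1/tasks?status=in_progress&priority=high&sort=due_date&order=asc"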

Complete Task

POST /tasks/{task_id}/complete

Mark a task as completed.

Request Body:

{
  "resolution": "Issue resolved - refund processed",
  "time_spent_minutes": 90,
  "outcomes": ["customer_satisfied", "refund_issued"]
}

Response:

{
  "task_id": "tsk_xyz789",
  "status": "completed",
  "completed_at": "2024-01-15T16:00:00Z",
  "completed_by": "user456"
}

Delete Task

DELETE /tasks/{task_id}

Delete a task.

Response:

{
  "deleted": true,
  "task_id": "tsk_xyz789"
}

Task Comments

Add Comment

POST /tasks/{task_id}/comments

Add a comment to a task.

Request Body:

{
  "text": "Contacted customer via email, waiting for response",
  "mentions": ["user123"],
  "attachments": ["file_abc123"]
}

Response:

{
  "comment_id": "cmt_123",
  "task_id": "tsk_xyz789",
  "text": "Contacted customer via email, waiting for response",
  "author": "user456",
  "created_at": "2024-01-15T14:30:00Z"
}

List Comments

GET /tasks/{task_id}/comments

Get task comments.

Response:

{
  "comments": [
    {
      "comment_id": "cmt_123",
      "text": "Contacted customer via email",
      "author": {
        "user_id": "user456",
        "name": "Jane Smith"
      },
      "created_at": "2024-01-15T14:30:00Z"
    }
  ],
  "total": 3
}

Task Attachments

Upload Attachment

POST /tasks/{task_id}/attachments

Attach a file to a task.

Request:

  • Method: POST
  • Content-Type: multipart/form-data
  • Form fields: file (binary)

Response:

{
  "attachment_id": "att_789",
  "task_id": "tsk_xyz789",
  "filename": "screenshot.png",
  "size_bytes": 102400,
  "mime_type": "image/png",
  "uploaded_at": "2024-01-15T14:45:00Z"
}

Task Templates

Create Template

POST /templates

Create a reusable task template.

Request Body:

{
  "name": "Customer Complaint",
  "description_template": "Investigate issue: {{issue_id}}",
  "default_priority": "high",
  "default_tags": ["support"],
  "checklist": [
    "Review conversation history",
    "Contact customer",
    "Provide resolution",
    "Follow up"
  ]
}

Create Task from Template

POST /tasks/from-template

Create a task from a template.

Request Body:

{
  "template_id": "tpl_123",
  "variables": {
    "issue_id": "#1234"
  },
  "assignee": "user456",
  "due_date": "2024-01-20T17:00:00Z"
}

Workflows

Create Workflow

POST /workflows

Create a multi-step workflow.

Request Body:

{
  "name": "Customer Onboarding",
  "steps": [
    {
      "name": "Account Setup",
      "assignee": "user456",
      "duration_hours": 2
    },
    {
      "name": "Training",
      "assignee": "user789",
      "duration_hours": 4,
      "depends_on": ["Account Setup"]
    }
  ]
}

Get Workflow Status

GET /workflows/{workflow_id}/status

Get workflow progress.

Response:

{
  "workflow_id": "wf_123",
  "name": "Customer Onboarding",
  "status": "in_progress",
  "progress": 50,
  "completed_steps": 1,
  "total_steps": 2,
  "current_step": "Training",
  "estimated_completion": "2024-01-16T12:00:00Z"
}

Task Automation

Create Automation Rule

POST /automations

Create rules for automatic task creation.

Request Body:

{
  "name": "High Priority Support",
  "trigger": {
    "type": "conversation_tag",
    "value": "urgent"
  },
  "action": {
    "type": "create_task",
    "template": "tpl_urgent",
    "auto_assign": true,
    "priority": "critical"
  }
}

Notifications

Task Notifications

Configure notifications for task events:

{
  "events": [
    "task_assigned",
    "task_completed",
    "task_overdue",
    "comment_added"
  ],
  "channels": ["email", "in_app"],
  "recipients": ["assignee", "watchers"]
}

Analytics

Task Analytics

GET /tasks/analytics

Get task performance metrics.

Response:

{
  "summary": {
    "total_tasks": 234,
    "completed": 189,
    "in_progress": 35,
    "overdue": 10,
    "completion_rate": 0.81,
    "average_completion_time_hours": 4.5
  },
  "by_priority": {
    "critical": {"total": 10, "completed": 8},
    "high": {"total": 45, "completed": 40},
    "medium": {"total": 120, "completed": 100},
    "low": {"total": 59, "completed": 41}
  },
  "by_assignee": [
    {
      "user_id": "user456",
      "name": "Jane Smith",
      "tasks_completed": 45,
      "average_time_hours": 3.2
    }
  ]
}

Error Responses

400 Bad Request

{
  "error": "invalid_due_date",
  "message": "Due date must be in the future"
}

404 Not Found

{
  "error": "task_not_found",
  "message": "Task tsk_xyz789 not found"
}

403 Forbidden

{
  "error": "permission_denied",
  "message": "You don't have permission to modify this task"
}

Best Practices

  1. Clear Titles: Use descriptive, action-oriented task titles
  2. Set Priorities: Always set appropriate priority levels
  3. Add Context: Include conversation or bot context
  4. Use Templates: Create templates for recurring task types
  5. Track Progress: Update progress regularly
  6. Set Realistic Deadlines: Allow adequate time for completion
  7. Use Tags: Categorize tasks with consistent tags

Integration with BASIC

Tasks can be created from BASIC scripts:

' Create task from conversation
task_id = CREATE TASK "Follow up with customer", "user456"
SET TASK PRIORITY task_id, "high"
SET TASK DUE DATE task_id, NOW() + 24 * 3600

' Check task status
status = GET TASK STATUS task_id
IF status = "completed" THEN
    TALK "Task has been completed"
END IF

Rate Limits

Operation | Limit | Window
Create Task | 100/hour | Per user
Update Task | 200/hour | Per user
List Tasks | 60/minute | Per user
Add Comment | 50/hour | Per user

Storage API

botserver provides a RESTful API for file storage and object management through its S3-compatible storage backend.

Overview

The Storage API allows you to:

  • Upload and download files
  • Manage buckets and objects
  • Generate presigned URLs
  • Handle binary data and documents
  • Organize bot assets

Base URL

http://localhost:8080/api/v1/storage

Authentication

All storage API requests require authentication:

Authorization: Bearer <token>

Endpoints

List Buckets

GET /buckets

List all available storage buckets.

Response:

{
  "buckets": [
    {
      "name": "mybot.gbai",
      "created": "2024-01-15T10:00:00Z",
      "size": 1048576
    }
  ]
}

Create Bucket

POST /buckets

Create a new storage bucket.

Request Body:

{
  "name": "newbot.gbai",
  "region": "us-east-1",
  "versioning": false
}

Response:

{
  "bucket": "newbot.gbai",
  "created": true,
  "location": "/newbot.gbai"
}

List Objects

GET /buckets/{bucket}/objects

List objects in a bucket.

Query Parameters:

  • prefix - Filter objects by prefix
  • delimiter - Delimiter for grouping
  • max_keys - Maximum number of results (default: 1000)
  • continuation_token - Pagination token

Response:

{
  "objects": [
    {
      "key": "documents/manual.pdf",
      "size": 2048576,
      "last_modified": "2024-01-15T10:30:00Z",
      "etag": "d41d8cd98f00b204e9800998ecf8427e"
    }
  ],
  "is_truncated": false,
  "continuation_token": null
}

Upload Object

PUT /buckets/{bucket}/objects/{key}

Upload a file to storage.

Headers:

  • Content-Type - MIME type of the file
  • Content-Length - Size of the file
  • x-amz-meta-* - Custom metadata

Request Body: Binary file data

Response:

{
  "bucket": "mybot.gbai",
  "key": "documents/report.pdf",
  "etag": "d41d8cd98f00b204e9800998ecf8427e",
  "version_id": null
}

Download Object

GET /buckets/{bucket}/objects/{key}

Download a file from storage.

Headers:

  • Range - Partial content request (optional)
  • If-None-Match - ETag for caching (optional)

Response: Binary file data with appropriate headers

Delete Object

DELETE /buckets/{bucket}/objects/{key}

Delete an object from storage.

Response:

{
  "deleted": true,
  "key": "documents/old-file.pdf"
}

Copy Object

POST /buckets/{bucket}/objects/{key}/copy

Copy an object to a new location.

Request Body:

{
  "source_bucket": "source.gbai",
  "source_key": "file.pdf",
  "destination_bucket": "dest.gbai",
  "destination_key": "copied-file.pdf"
}

Response:

{
  "copied": true,
  "source": "source.gbai/file.pdf",
  "destination": "dest.gbai/copied-file.pdf"
}

Generate Presigned URL

POST /buckets/{bucket}/objects/{key}/presign

Generate a presigned URL for temporary access.

Request Body:

{
  "operation": "GET",
  "expires_in": 3600,
  "content_type": "application/pdf"
}

Response:

{
  "url": "http://localhost:8080/mybot.gbai/file.pdf?X-Amz-Algorithm=...",
  "expires_at": "2024-01-15T11:30:00Z"
}
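
Example request (illustrative):

curl -X POST \
  -H "Authorization: Bearer token123" \
  -H "Content-Type: application/json" \
  -d '{"operation": "GET", "expires_in": 3600}' \
  http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects/file.pdf/presign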

Object Metadata

HEAD /buckets/{bucket}/objects/{key}

Get object metadata without downloading.

Response Headers:

  • Content-Type - MIME type
  • Content-Length - File size
  • Last-Modified - Modification time
  • ETag - Entity tag
  • x-amz-meta-* - Custom metadata

Multipart Upload

POST /buckets/{bucket}/objects/{key}/multipart

Initiate multipart upload for large files.

Response:

{
  "upload_id": "abc123...",
  "bucket": "mybot.gbai",
  "key": "large-file.zip"
}

Upload Part: PUT /buckets/{bucket}/objects/{key}/multipart/{uploadId}/{partNumber}

Complete Upload: POST /buckets/{bucket}/objects/{key}/multipart/{uploadId}/complete
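
Putting the three steps together with cURL (an illustrative sketch; the completion request may need a body listing part numbers and ETags, depending on the backend):

# 1. Initiate the upload and note the returned upload_id
curl -X POST -H "Authorization: Bearer token123" \
  http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects/large-file.zip/multipart

# 2. Upload each part (at the 5 MB chunk size)
curl -X PUT -H "Authorization: Bearer token123" \
  --data-binary @part-1.bin \
  "http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects/large-file.zip/multipart/<upload_id>/1"

# 3. Complete the upload
curl -X POST -H "Authorization: Bearer token123" \
  "http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects/large-file.zip/multipart/<upload_id>/complete"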

Error Responses

404 Not Found

{
  "error": "not_found",
  "message": "Object not found",
  "resource": "mybot.gbai/missing.pdf"
}

409 Conflict

{
  "error": "conflict",
  "message": "Bucket already exists",
  "bucket": "existing.gbai"
}

507 Insufficient Storage

{
  "error": "insufficient_storage",
  "message": "Storage quota exceeded",
  "quota": 10737418240,
  "used": 10737418240
}

Usage Examples

Upload File with cURL

curl -X PUT \
  -H "Authorization: Bearer token123" \
  -H "Content-Type: application/pdf" \
  --data-binary @document.pdf \
  http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects/docs/manual.pdf

Download File

curl -X GET \
  -H "Authorization: Bearer token123" \
  http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects/docs/manual.pdf \
  -o downloaded.pdf

List Objects with Prefix

curl -X GET \
  -H "Authorization: Bearer token123" \
  "http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects?prefix=docs/"

Storage Organization

bucket/
├── .gbkb/           # Knowledge base files
│   ├── docs/
│   └── data/
├── .gbdialog/       # Dialog scripts
│   ├── scripts/
│   └── tools/
├── .gbtheme/        # Theme assets
│   ├── css/
│   └── images/
└── .gbdrive/        # User uploads
    ├── attachments/
    └── temp/

Quotas and Limits

Limit | Default Value
Max file size | 100 MB
Max bucket size | 10 GB
Max objects per bucket | 10,000
Presigned URL validity | 7 days
Multipart chunk size | 5 MB

Performance Tips

  1. Use Multipart Upload for files > 5MB
  2. Enable Caching with ETags
  3. Compress Large Files before upload
  4. Use Presigned URLs for direct client uploads
  5. Implement Retry Logic for network failures

Security Considerations

  • All uploads are scanned for malware
  • File types are validated
  • Presigned URLs expire automatically
  • Access control per bucket
  • Encryption at rest
  • SSL/TLS for transfers

Analytics API

The Analytics API provides endpoints for tracking, analyzing, and reporting on bot usage and performance metrics.

Status: Roadmap

This API is on the development roadmap. The endpoints documented below represent the planned interface design.

Base URL

http://localhost:9000/api/v1/analytics

Authentication

Uses the standard botserver authentication mechanism with appropriate role-based permissions.

Endpoints

Usage Statistics

Method | Endpoint | Description
GET | /api/v1/analytics/usage | Get overall usage statistics
GET | /api/v1/analytics/usage/daily | Get daily usage breakdown
GET | /api/v1/analytics/usage/monthly | Get monthly usage summary

Conversation Metrics

Method | Endpoint | Description
GET | /api/v1/analytics/conversations | Get conversation metrics
GET | /api/v1/analytics/conversations/volume | Get conversation volume over time
GET | /api/v1/analytics/conversations/duration | Get average conversation duration
GET | /api/v1/analytics/conversations/resolution | Get resolution rate metrics

User Engagement

Method | Endpoint | Description
GET | /api/v1/analytics/engagement | Get user engagement metrics
GET | /api/v1/analytics/engagement/retention | Get user retention data
GET | /api/v1/analytics/engagement/satisfaction | Get satisfaction scores

Reports

Method | Endpoint | Description
POST | /api/v1/analytics/reports | Generate a custom report
GET | /api/v1/analytics/reports/{report_id} | Get report by ID
GET | /api/v1/analytics/reports | List all reports

Real-time Metrics

Method | Endpoint | Description
GET | /api/v1/analytics/realtime | Get real-time metrics
GET | /api/v1/analytics/realtime/active | Get active sessions count

Request Examples

Get Usage Statistics

stats = GET "/api/v1/analytics/usage"
TALK "Total conversations: " + stats.total_conversations
TALK "Active users: " + stats.active_users

Get Daily Usage

daily = GET "/api/v1/analytics/usage/daily?days=7"
FOR EACH day IN daily.data
    TALK day.date + ": " + day.conversations + " conversations"
NEXT

Generate Custom Report

report_config = NEW OBJECT
report_config.type = "engagement"
report_config.start_date = "2025-01-01"
report_config.end_date = "2025-01-31"
report_config.format = "pdf"

report = POST "/api/v1/analytics/reports", report_config
TALK "Report ID: " + report.id

Get Real-time Metrics

realtime = GET "/api/v1/analytics/realtime"
TALK "Active sessions: " + realtime.active_sessions
TALK "Messages per minute: " + realtime.messages_per_minute

Response Codes

Code | Description
200 | Success
400 | Bad Request (invalid parameters)
401 | Unauthorized
403 | Forbidden (insufficient permissions)
404 | Not Found
500 | Internal Server Error

Query Parameters

Time Range Filters

Parameter | Type | Description
start_date | String | Start date (ISO 8601 format)
end_date | String | End date (ISO 8601 format)
days | Integer | Number of days to include
period | String | Predefined period (today, week, month, year)

Grouping Options

Parameter | Type | Description
group_by | String | Group results by (hour, day, week, month)
bot_id | UUID | Filter by specific bot
user_id | UUID | Filter by specific user
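
Filters and grouping can be combined on a single request. A BASIC sketch (these endpoints are still on the roadmap; the response field names follow the daily-usage example above):

usage = GET "/api/v1/analytics/usage?start_date=2025-01-01&end_date=2025-01-31&group_by=day"
FOR EACH bucket IN usage.data
    TALK bucket.date + ": " + bucket.conversations + " conversations"
NEXT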

Required Permissions

Endpoint Category | Required Role
Usage Statistics | analytics_viewer or admin
Conversation Metrics | analytics_viewer or admin
User Engagement | analytics_viewer or admin
Reports | analytics_admin or admin
Real-time Metrics | analytics_viewer or admin

Admin API

The Admin API provides endpoints for system administration, user management, and configuration management.

Status: Roadmap

This API is on the development roadmap. The endpoints documented below represent the planned interface design.

Base URL

http://localhost:9000/api/v1/admin

Authentication

Uses the standard botserver authentication mechanism with administrator-level permissions required.

Endpoints

System Configuration

Method | Endpoint | Description
GET | /api/v1/admin/config | Retrieve system configuration
PUT | /api/v1/admin/config | Update system configuration

User Management

Method | Endpoint | Description
GET | /api/v1/admin/users | List all users
POST | /api/v1/admin/users | Create a new user
GET | /api/v1/admin/users/{user_id} | Get user details
PUT | /api/v1/admin/users/{user_id} | Update user
DELETE | /api/v1/admin/users/{user_id} | Delete user

Bot Management

Method | Endpoint | Description
GET | /api/v1/admin/bots | List all bots
GET | /api/v1/admin/bots/{bot_id} | Get bot details
POST | /api/v1/admin/bots/{bot_id}/restart | Restart a bot
DELETE | /api/v1/admin/bots/{bot_id} | Delete a bot

System Health

Method | Endpoint | Description
GET | /api/v1/admin/health | System health check
GET | /api/v1/admin/metrics | System metrics

Audit Logs

Method | Endpoint | Description
GET | /api/v1/admin/audit | Retrieve audit logs
GET | /api/v1/admin/audit/{event_id} | Get specific audit event

Request Examples

Get System Configuration

config = GET "/api/v1/admin/config"
TALK "Server port: " + config.server_port

Create User

user_data = NEW OBJECT
user_data.email = "admin@example.com"
user_data.role = "administrator"

result = POST "/api/v1/admin/users", user_data
TALK "Created user: " + result.id

Restart Bot

POST "/api/v1/admin/bots/my-bot/restart", {}
TALK "Bot restart initiated"

Response Codes

Code | Description
200 | Success
201 | Created
204 | No Content (successful deletion)
400 | Bad Request
401 | Unauthorized
403 | Forbidden (insufficient permissions)
404 | Not Found
500 | Internal Server Error

Required Permissions

Endpoint Category | Required Role
System Configuration | admin
User Management | admin
Bot Management | admin or bot_manager
System Health | admin or monitor
Audit Logs | admin or auditor

AI API

The AI API provides endpoints for managing AI models, inference, training, and advanced AI operations.

Status: Roadmap

This API is on the development roadmap. The endpoints documented below represent the planned interface design.

Base URL

http://localhost:9000/api/v1/ai

Authentication

Uses the standard botserver authentication mechanism with appropriate role-based permissions.

Endpoints

Model Management

Method | Endpoint | Description
GET | /api/v1/ai/models | List available models
GET | /api/v1/ai/models/{model_id} | Get model details
POST | /api/v1/ai/models/deploy | Deploy a new model
DELETE | /api/v1/ai/models/{model_id} | Remove a model

Inference

Method | Endpoint | Description
POST | /api/v1/ai/inference | Run inference on input data
POST | /api/v1/ai/chat/completions | Chat completion endpoint
POST | /api/v1/ai/embeddings | Generate embeddings

Training

Method | Endpoint | Description
POST | /api/v1/ai/training/start | Start a training job
GET | /api/v1/ai/training/{job_id}/status | Get training job status
POST | /api/v1/ai/training/{job_id}/cancel | Cancel training job

Model Configuration

Method | Endpoint | Description
GET | /api/v1/ai/models/{model_id}/config | Get model configuration
PUT | /api/v1/ai/models/{model_id}/config | Update model configuration

Request Examples

List Available Models

models = GET "/api/v1/ai/models"
FOR EACH model IN models
    TALK model.name + " - " + model.status
NEXT

Chat Completion

request = NEW OBJECT
request.model = "gpt-4"
request.messages = NEW ARRAY
request.messages.ADD({"role": "user", "content": "Hello, how are you?"})

response = POST "/api/v1/ai/chat/completions", request
TALK response.choices[0].message.content

Generate Embeddings

request = NEW OBJECT
request.input = "Convert this text to embeddings"
request.model = "text-embedding-3-small"

result = POST "/api/v1/ai/embeddings", request
embedding = result.data[0].embedding

Start Training Job

training_config = NEW OBJECT
training_config.base_model = "llama-2-7b"
training_config.dataset = "my-training-data"
training_config.epochs = 3

job = POST "/api/v1/ai/training/start", training_config
TALK "Training job started: " + job.id

Response Codes

Code | Description
200 | Success
201 | Created
202 | Accepted (for async operations)
400 | Bad Request
401 | Unauthorized
403 | Forbidden
404 | Model or resource not found
429 | Rate limit exceeded
500 | Internal Server Error

Required Permissions

Endpoint Category | Required Role
Model Management | admin or model_manager
Inference | user or higher
Training | admin or trainer
Model Configuration | admin

Tool Definition

In botserver, a tool is simply a .bas file. That’s it!

How It Works

  1. Create a .bas file in your .gbdialog/ folder
  2. The LLM automatically discovers it and can call it when needed
  3. No manual registration required - it just works!

Tool Discovery and Execution Flow

[Diagram: LLM tool discovery and execution pipeline. A request such as "Send an email to John about the meeting" is analyzed by the LLM ("need email tool"); tool discovery scans .gbdialog/ and selects send-email.bas over create-task.bas and get-weather.bas; parameters are collected (to → "John", subject → "Meeting", body → generated); send-email.bas executes; the result ("Email sent!") is returned.]

Simple Example

Create get-weather.bas:

' This tool gets weather information
' The LLM will call this when users ask about weather

TALK "Let me check the weather for you..."
weather = GET "/api/weather/San Francisco"
TALK "The weather is: " + weather

That’s a tool! The LLM now knows it can call this when users ask about weather.

Tool with Parameters

Create send-email.bas:

' Send an email to someone
PARAM to AS STRING
PARAM subject AS STRING
PARAM body AS STRING

GET "/email/send" WITH to, subject, body
TALK "Email sent to " + to

The PARAM declarations tell the LLM what parameters this tool accepts.

Making Tools Available

Method 1: Automatic Discovery (Default)

All .bas files in your .gbdialog/ folder are automatically available.

mybot.gbai/
  mybot.gbdialog/
    start.bas           ← Entry point
    get-weather.bas     ← Tool (auto-discovered)
    send-email.bas      ← Tool (auto-discovered)
    create-task.bas     ← Tool (auto-discovered)

Method 2: Manual Registration

In your start.bas, explicitly add tools:

' Register tools for this conversation
USE TOOL "get-weather"
USE TOOL "send-email"
USE TOOL "create-task"

TALK "Hello! I can help with weather, email, and tasks."

Method 3: LLM-Driven Tool Selection

Let the LLM decide which tools to use naturally:

' In start.bas
' Load all available tools - LLM decides when to use them
USE TOOL "weather"
USE TOOL "email"
USE TOOL "enrollment"

TALK "I can help with various tasks. What do you need?"
' The LLM will automatically call the right tool based on user intent

Tool Format Conversion

botserver automatically converts your .bas tools to:

  • MCP (Model Context Protocol) format
  • Groq/OpenAI-compatible function calling format
  • Other LLM provider formats

You never write these formats manually - just write .bas files!

Conversion Pipeline

[Diagram: tool format conversion pipeline. send-email.bas passes through the BASIC parser (extract PARAM, parse DESCRIPTION, analyze code) and is emitted as MCP format, OpenAI function, Claude tool, or local-model format, so each LLM provider receives its native format.]

Complete Example

Here’s a real tool from the codebase - enrollment.bas:

PARAM name AS string          LIKE "Abreu Silva"                DESCRIPTION "Required full name of the individual."
PARAM birthday AS date        LIKE "23/09/2001"                 DESCRIPTION "Required birth date of the individual in DD/MM/YYYY format."
PARAM email AS string         LIKE "abreu.silva@example.com"    DESCRIPTION "Required email address for contact purposes."
PARAM personalid AS integer   LIKE "12345678900"                DESCRIPTION "Required Personal ID number of the individual (only numbers)."
PARAM address AS string       LIKE "Rua das Flores, 123 - SP"   DESCRIPTION "Required full address of the individual."

DESCRIPTION  "This is the enrollment process, called when the user wants to enroll. Once all information is collected, confirm the details and inform them that their enrollment request has been successfully submitted."

' The actual tool logic is simple
SAVE "enrollments.csv", id, name, birthday, email, personalid, address
TALK "Successfully enrolled " + name + "!"

' That's it! The LLM handles:
' - Natural conversation to collect parameters
' - Validation and error handling  
' - Confirming details with the user
' - All the complex interaction flow

That’s It!

To create a tool:

  1. ✅ Create a .bas file
  2. ✅ Add PARAM declarations if you need parameters
  3. ✅ Write your logic using TALK, HEAR, CALL, etc.
  4. ✅ Done!

The LLM will automatically:

  • Discover your tool
  • Understand what it does (from comments and code)
  • Know when to call it
  • Pass the right parameters

No JSON schemas, no manual registration, no complex configuration. Just write BASIC!

Best Practices

1. Add Comments

The LLM reads your comments to understand the tool:

' This tool books a meeting room
' It checks availability and sends calendar invites
PARAM room_name AS STRING
PARAM date AS STRING
PARAM attendees AS ARRAY

2. Validate Parameters

Always validate input:

IF room_name IS NULL THEN
    TALK "Please specify which room you want to book."
    RETURN
END IF

3. Provide Feedback

Let users know what’s happening:

TALK "Checking room availability..."
available = GET "/calendar/check" WITH room_name, date

IF available THEN
    TALK "Great! Booking the room now..."
    GET "/calendar/book" WITH room_name, date, attendees
    TALK "Meeting room booked successfully!"
ELSE
    TALK "Sorry, that room is not available on " + date
END IF

Tool Discovery

The LLM discovers tools by:

  1. Reading .bas files in your .gbdialog/ folder
  2. Extracting comments to understand purpose
  3. Parsing PARAM declarations to understand parameters
  4. Building a function signature automatically

Example tool discovery from send-email.bas:

Function: send-email
Description: Send an email to someone
Parameters:
  - to: STRING (required)
  - subject: STRING (required)  
  - body: STRING (required)

This is generated automatically from your .bas file!

Removing Tools

Dynamic Tool Management

' Remove a specific tool
REMOVE TOOL "send-email"

' Clear all tools
CLEAR TOOLS

' List active tools
tools = LIST TOOLS
TALK "Available tools: " + tools

PARAM Declaration

The PARAM keyword defines input parameters for tools, enabling type checking, validation, and documentation.

Syntax

PARAM parameter_name AS type LIKE "example" DESCRIPTION "description text"

Components

  • parameter_name: The name used to reference the parameter in the script
  • AS type: The data type (string, integer, number, boolean, date, etc.)
  • LIKE "example": An example value showing expected format
  • DESCRIPTION "text": Explanation of what the parameter represents

Supported Types

  • string: Text values (default if no type specified)
  • integer: Whole numbers
  • number: Decimal numbers
  • boolean: True/false values
  • date: Date values
  • datetime: Date and time values
  • array: Lists of values
  • object: Structured data

Examples

Basic Parameter

PARAM username AS string LIKE "john_doe" DESCRIPTION "User's unique identifier"

Multiple Parameters

PARAM first_name AS string LIKE "John" DESCRIPTION "User's first name"
PARAM last_name AS string LIKE "Doe" DESCRIPTION "User's last name" 
PARAM age AS integer LIKE "25" DESCRIPTION "User's age in years"
PARAM email AS string LIKE "john@example.com" DESCRIPTION "User's email address"

Complex Types

PARAM preferences AS object LIKE "{""theme"": ""dark"", ""notifications"": true}" DESCRIPTION "User preference settings"
PARAM tags AS array LIKE "[""urgent"", ""follow-up""]" DESCRIPTION "Item categorization tags"

Type Validation

Parameters are validated when tools are called:

  • string: Any text value accepted
  • integer: Must be a whole number
  • number: Must be a valid number
  • boolean: Converted from “true”/“false” or 1/0
  • date: Parsed according to locale format

Usage in Tools

Parameters become available as variables in the tool script:

PARAM product_id AS integer LIKE "12345" DESCRIPTION "Product identifier"

REM product_id variable is now available
TALK "Fetching details for product " + product_id

Documentation Generation

Parameter declarations are used to automatically generate:

  • Tool documentation
  • API schemas (OpenAI tools format)
  • MCP (Model Context Protocol) definitions
  • User interface forms

Required vs Optional

All parameters are required by default. For optional parameters, check for empty values:

PARAM phone AS string LIKE "+1-555-0123" DESCRIPTION "Optional phone number"

IF phone <> "" THEN
    TALK "We'll contact you at " + phone
ELSE
    TALK "No phone number provided"
END IF

Parameter declarations make tools self-documenting and enable rich integration with AI systems that can understand and use the defined interfaces.

Tool Compilation

botserver compiles BASIC scripts (.bas files) into tool definitions that can be called by the LLM. The compilation process extracts parameters, descriptions, and generates metadata for tool discovery.

Overview

The compilation process reads .bas files from .gbdialog directories and parses parameter declarations along with descriptions. It then generates tool definitions in both MCP and OpenAI formats, stores the compiled tools in the database, and makes them available for LLM invocation.

The Compilation Pipeline

File Detection

The DriveMonitor service watches for changes in .gbdialog directories. It monitors .bas files in drive storage, detects new or modified scripts, and triggers compilation automatically when changes occur.

Source Processing

When a .bas file changes, the compiler downloads the file from drive and creates a local working directory. It then invokes the BasicCompiler to process the script and extract the necessary metadata.

Parameter Extraction

The compiler parses BASIC script headers for PARAM declarations with types and examples, DESCRIPTION statements for tool documentation, and variable names with default values.

Example script header:

PARAM name AS string LIKE "John Smith" DESCRIPTION "User's full name"
PARAM age AS number LIKE 25 DESCRIPTION "User's age"
DESCRIPTION "Processes user registration"

Tool Definition Generation

The compiler creates structured tool definitions from the parsed script. The tool name is derived from the filename without the .bas extension. Parameters are extracted from PARAM declarations, the description comes from the DESCRIPTION statement, and the script path provides a reference to the source file.

Database Storage

Compiled tools are stored in the basic_tools table, which contains tool metadata including name, description, and parameters. The table also stores source script content, bot association, and compilation timestamp for tracking when tools were last updated.
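
Conceptually, the table shape resembles this sketch (column names here are illustrative assumptions, not the exact schema):

-- Illustrative sketch of basic_tools; actual columns may differ
CREATE TABLE basic_tools (
    id UUID PRIMARY KEY,
    bot_id UUID NOT NULL,
    name TEXT NOT NULL,
    description TEXT,
    parameters JSONB,
    source_script TEXT,
    compiled_at TIMESTAMPTZ
);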

Compilation Output Formats

MCP (Model Context Protocol) Format

The compiler generates MCP-compatible tool definitions:

{
  "name": "user_registration",
  "description": "Processes user registration",
  "input_schema": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "User's full name"
      },
      "age": {
        "type": "number",
        "description": "User's age"
      }
    },
    "required": ["name", "age"]
  }
}

OpenAI Function Format

The compiler also generates OpenAI-compatible function definitions for API compatibility:

{
  "name": "user_registration",
  "description": "Processes user registration",
  "parameters": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "User's full name"
      },
      "age": {
        "type": "number",
        "description": "User's age"
      }
    },
    "required": ["name", "age"]
  }
}

Automatic Recompilation

Tools are recompiled automatically when the source .bas file is modified, when the file’s ETag changes in drive storage, or when a manual recompilation is triggered through the system.

Working Directory Structure

The compiler maintains a local working directory structured as ./work/bot-name.gbai/bot-name.gbdialog/ containing the individual tool files like tool1.bas, tool2.bas, and so on. This directory is used for caching compiled scripts, temporary processing during compilation, and debug inspection when troubleshooting issues.
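
Laid out as a tree:

work/
└── bot-name.gbai/
    └── bot-name.gbdialog/
        ├── tool1.bas
        └── tool2.bas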

Error Handling

Compilation errors are handled gracefully to ensure the system remains stable. Syntax errors are logged with line numbers for easy debugging. Missing parameters are reported clearly, invalid types are highlighted in error messages, and compilation continues for other tools even when one fails. Common compilation errors include missing DESCRIPTION statements, invalid PARAM syntax, unsupported parameter types, and general script parsing failures.

Tool Activation

After successful compilation, the tool is stored in the database and becomes available for the USE TOOL keyword. The LLM can discover the tool through its metadata and invoke it during conversations with users.

Performance Considerations

Compilation is triggered asynchronously to avoid blocking other operations. Multiple tools can be compiled in parallel for efficiency, and results are cached in the database to avoid redundant processing. Only changed files are recompiled, minimizing unnecessary work.

Debugging Compilation

To debug compilation issues, check the logs for compilation errors that include file names and line numbers. Inspect the working directory files to see the raw script content. Verify that parameter syntax follows the expected format, and test the tool manually with USE TOOL to confirm it functions correctly.

Best Practices

Always include a DESCRIPTION statement to help the LLM understand the tool’s purpose. Use clear parameter names that make the code self-documenting. Provide LIKE examples with realistic values to improve LLM parameter filling accuracy. Test tools after making changes to verify compilation succeeded, and check logs regularly to monitor for compilation errors.

Limitations

Parameters must be declared at the start of the script before any executable code. The compiler only supports basic types including string, number, and boolean. All parameters are required since optional parameters are not currently supported. Nested object parameters and array parameters are also not available in the current implementation.

Summary

The compilation process transforms BASIC scripts into callable tools that the LLM can discover and invoke. This automatic compilation ensures that changes to scripts are immediately available for use in conversations, making development iteration fast and seamless.

MCP Format

Model Context Protocol (MCP) is a standardized format for defining tools that language models can discover and invoke. botserver generates MCP-compatible tool definitions from BASIC scripts.

Overview

MCP provides a structured way to describe:

  • Tool name and purpose
  • Input parameters and types
  • Parameter descriptions and examples
  • Output format expectations

MCP Tool Structure

A compiled MCP tool definition contains:

{
  "name": "tool_name",
  "description": "Tool description from DESCRIPTION statement",
  "input_schema": {
    "type": "object",
    "properties": {
      "param1": {
        "type": "string",
        "description": "Parameter description"
      },
      "param2": {
        "type": "number",
        "description": "Another parameter"
      }
    },
    "required": ["param1", "param2"]
  }
}

From BASIC to MCP

Source BASIC Script

PARAM customer_name AS string LIKE "John Doe" DESCRIPTION "Customer's full name"
PARAM order_amount AS number LIKE 99.99 DESCRIPTION "Total order amount"
PARAM shipping_address AS string LIKE "123 Main St" DESCRIPTION "Delivery address"

DESCRIPTION "Process a new customer order"

' Script logic here
TALK "Processing order for " + customer_name
' ...

Generated MCP Definition

{
  "name": "process_order",
  "description": "Process a new customer order",
  "input_schema": {
    "type": "object",
    "properties": {
      "customer_name": {
        "type": "string",
        "description": "Customer's full name",
        "example": "John Doe"
      },
      "order_amount": {
        "type": "number",
        "description": "Total order amount",
        "example": 99.99
      },
      "shipping_address": {
        "type": "string",
        "description": "Delivery address",
        "example": "123 Main St"
      }
    },
    "required": ["customer_name", "order_amount", "shipping_address"]
  }
}

Parameter Types

MCP supports these parameter types in botserver:

BASIC Type | MCP Type | JSON Schema Type
string | string | "type": "string"
number | number | "type": "number"
boolean | boolean | "type": "boolean"

Input Schema

The input_schema follows JSON Schema specification:

Required Fields

  • type: Always “object” for tool parameters
  • properties: Object containing parameter definitions
  • required: Array of required parameter names

Parameter Properties

  • type: Data type of the parameter
  • description: Human-readable description
  • example: Example value from LIKE clause

Tool Discovery

MCP tools are discoverable through:

  1. Tool Listing: LLM can query available tools
  2. Parameter Inspection: LLM examines input schema
  3. Description Matching: LLM matches user intent to tool description

Integration with LLM

When the LLM invokes an MCP tool:

  1. Parameter Collection: LLM extracts values from context
  2. Schema Validation: Parameters validated against input_schema
  3. Tool Execution: BASIC script runs with provided parameters
  4. Result Return: Output returned to LLM for processing

Example Use Cases

Form Processing Tool

{
  "name": "submit_application",
  "description": "Submit a job application form",
  "input_schema": {
    "type": "object",
    "properties": {
      "applicant_name": {
        "type": "string",
        "description": "Full name of applicant"
      },
      "position": {
        "type": "string",
        "description": "Position applying for"
      },
      "experience_years": {
        "type": "number",
        "description": "Years of relevant experience"
      }
    },
    "required": ["applicant_name", "position", "experience_years"]
  }
}

Data Query Tool

{
  "name": "search_inventory",
  "description": "Search product inventory",
  "input_schema": {
    "type": "object",
    "properties": {
      "product_name": {
        "type": "string",
        "description": "Product to search for"
      },
      "min_quantity": {
        "type": "number",
        "description": "Minimum quantity available"
      }
    },
    "required": ["product_name"]
  }
}

Storage and Retrieval

MCP definitions are stored in the basic_tools table:

  • Tool metadata serialized as JSON
  • Indexed for fast retrieval
  • Associated with bot ID
  • Cached for performance

Advantages of MCP Format

  1. Standardized: Compatible with multiple LLM providers
  2. Self-Documenting: Contains all necessary metadata
  3. Type-Safe: Schema validation prevents errors
  4. Discoverable: LLMs can understand tool capabilities
  5. Extensible: Can add custom properties as needed

Limitations in botserver

Current MCP implementation limitations:

  • No nested object parameters
  • No array parameters
  • All parameters are required (no optional)
  • No enum/choice constraints
  • No pattern validation

Best Practices

  1. Clear Descriptions: Make tool purpose obvious
  2. Meaningful Names: Use descriptive parameter names
  3. Provide Examples: LIKE values help LLM understand expected input
  4. Type Accuracy: Use correct types (string vs number)
  5. Complete Documentation: Every parameter needs description

Validation

MCP tools are validated during compilation:

  • Parameter names must be valid identifiers
  • Types must be supported
  • Descriptions cannot be empty
  • Tool name must be unique per bot

Summary

The MCP format provides a structured way to expose BASIC scripts as callable tools for LLMs. By generating MCP-compatible definitions, botserver enables seamless tool discovery and invocation within conversational flows.

Tool Format

botserver generates OpenAI-compatible function definitions from BASIC scripts, enabling integration with OpenAI’s function calling API.

Overview

OpenAI’s function calling format allows GPT models to:

  • Discover available functions
  • Understand parameter requirements
  • Generate structured function calls
  • Process function results

Function Structure

An OpenAI-compatible function definition contains:

{
  "name": "function_name",
  "description": "Function description",
  "parameters": {
    "type": "object",
    "properties": {
      "param1": {
        "type": "string",
        "description": "Parameter description"
      },
      "param2": {
        "type": "number",
        "description": "Another parameter"
      }
    },
    "required": ["param1", "param2"]
  }
}

Conversion from BASIC

Source BASIC Script

PARAM product_id AS string LIKE "SKU-12345" DESCRIPTION "Product identifier"
PARAM quantity AS number LIKE 10 DESCRIPTION "Quantity to order"
PARAM urgent AS boolean LIKE true DESCRIPTION "Rush delivery needed"

DESCRIPTION "Create a purchase order for inventory"

' Script implementation
order_id = CREATE_ORDER(product_id, quantity, urgent)
TALK "Order created: " + order_id

Generated Function

{
  "name": "create_purchase_order",
  "description": "Create a purchase order for inventory",
  "parameters": {
    "type": "object",
    "properties": {
      "product_id": {
        "type": "string",
        "description": "Product identifier"
      },
      "quantity": {
        "type": "number",
        "description": "Quantity to order"
      },
      "urgent": {
        "type": "boolean",
        "description": "Rush delivery needed"
      }
    },
    "required": ["product_id", "quantity", "urgent"]
  }
}

Integration with OpenAI API

When using OpenAI’s API, the functions are passed in the request:

{
  "model": "gpt-4o",
  "messages": [...],
  "functions": [
    {
      "name": "create_purchase_order",
      "description": "Create a purchase order for inventory",
      "parameters": {...}
    }
  ],
  "function_call": "auto"
}

Parameter Type Mapping

BASIC Type | OpenAI Type | Description
string | "string" | Text values
number | "number" | Numeric values (integer or float)
boolean | "boolean" | True/false values

Function Calling Flow

  1. User Query: User asks to perform an action
  2. Function Discovery: GPT identifies relevant function
  3. Parameter Extraction: GPT extracts parameters from context
  4. Function Call: GPT generates structured function call
  5. Execution: botserver executes the BASIC script
  6. Result Processing: Output returned to GPT for response

Example Function Calls

Customer Service Function

{
  "name": "check_order_status",
  "description": "Check the status of a customer order",
  "parameters": {
    "type": "object",
    "properties": {
      "order_id": {
        "type": "string",
        "description": "Order reference number"
      },
      "customer_email": {
        "type": "string",
        "description": "Customer email for verification"
      }
    },
    "required": ["order_id", "customer_email"]
  }
}

Data Analysis Function

{
  "name": "generate_sales_report",
  "description": "Generate sales report for specified period",
  "parameters": {
    "type": "object",
    "properties": {
      "start_date": {
        "type": "string",
        "description": "Report start date (YYYY-MM-DD)"
      },
      "end_date": {
        "type": "string",
        "description": "Report end date (YYYY-MM-DD)"
      },
      "region": {
        "type": "string",
        "description": "Sales region to analyze"
      }
    },
    "required": ["start_date", "end_date"]
  }
}

Function Response Handling

When a function is executed:

  1. Script Execution: BASIC script runs with provided parameters
  2. Output Collection: TALK statements and return values collected
  3. Response Format: Results formatted for OpenAI API
  4. Context Update: Function result added to conversation

Differences from MCP Format

Aspect | OpenAI Format | MCP Format
Schema Location | parameters | input_schema
Example Values | Not included | Included in schema
Metadata | Minimal | Extended metadata
Compatibility | OpenAI models only | Multiple providers

Error Handling

Function errors are handled gracefully:

  • Missing parameters return error message
  • Type mismatches caught before execution
  • Script errors logged and returned
  • Timeout protection for long-running scripts

Best Practices

  1. Descriptive Names: Use clear function names
  2. Comprehensive Descriptions: Explain what the function does
  3. Parameter Clarity: Each parameter needs clear description
  4. Error Messages: Provide helpful error feedback
  5. Idempotency: Design functions to be safely retryable

Limitations

Current OpenAI format limitations in botserver:

  • No nested objects in parameters
  • No array parameters
  • No enum constraints
  • All parameters marked as required
  • No custom validation rules

Storage

OpenAI function definitions are stored alongside MCP definitions:

  • Stored in basic_tools table
  • Generated during compilation
  • Cached for performance
  • Updated when script changes

Usage in Conversations

When a user message triggers function calling:

User: "Order 50 units of SKU-12345 urgently"

System: [Identifies create_purchase_order function]
        [Extracts: product_id="SKU-12345", quantity=50, urgent=true]
        [Executes function]

Bot: "Order created: ORD-2024-001. Rush delivery confirmed for 50 units of SKU-12345."

Performance Considerations

  • Functions cached after compilation
  • Parallel function execution supported
  • Rate limiting applied per session
  • Timeout protection (30 seconds default)

Debugging

To debug OpenAI function calls:

  1. Enable debug logging
  2. Check function registration
  3. Verify parameter extraction
  4. Review execution logs
  5. Test with manual invocation

Summary

The OpenAI function format enables seamless integration between BASIC scripts and OpenAI’s GPT models. By automatically generating compatible function definitions, botserver allows natural language interactions to trigger complex business logic implementations.

External APIs

botserver enables integration with external APIs through BASIC scripts, allowing bots to connect with third-party services and extend functionality beyond built-in capabilities.

Overview

External API integration in botserver is achieved through:

  • The GET keyword for HTTP/HTTPS requests
  • LLM function calling for API interactions
  • BASIC script logic for response processing
  • Bot memory for storing API credentials and state

HTTP Requests with GET

The primary method for calling external APIs is the GET keyword:

# Fetch data from an external API
let response = GET "https://api.example.com/data"

# Process the response
let parsed = LLM "Extract the key information from this JSON: " + response
TALK parsed

Supported Protocols

  • HTTP: Basic unencrypted requests
  • HTTPS: Secure encrypted requests (recommended)

API Response Handling

JSON Responses

Most modern APIs return JSON data:

let weather = GET "https://api.weather.com/current?city=Seattle"
# Response: {"temp": 65, "conditions": "cloudy"}

let report = LLM "Create a weather report from: " + weather
TALK report

Text Responses

Plain text responses are used directly:

let quote = GET "https://api.quotes.com/daily"
TALK "Quote of the day: " + quote

Authentication Patterns

API Key in URL

let api_key = GET BOT MEMORY "weather_api_key"
let url = "https://api.weather.com/data?key=" + api_key
let data = GET url

Bearer Token (via Headers)

Currently, botserver’s GET keyword doesn’t support custom headers directly. For APIs requiring Bearer tokens or custom headers, you need to:

  1. Use proxy endpoints that add authentication (see the sketch below)
  2. Or use APIs that support key-in-URL authentication
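As a sketch of the first workaround, assume a hypothetical proxy at proxy.example.com that injects the Authorization header server-side before forwarding; the BASIC script then only needs a plain GET:

# Hypothetical proxy adds "Authorization: Bearer <token>" before forwarding
let response = GET "https://proxy.example.com/forward?target=api.service.com/v1/data"
TALK response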

Common Integration Patterns

Weather Service

PARAM city AS string LIKE "Seattle" DESCRIPTION "City for weather"
DESCRIPTION "Gets current weather for a city"

let api_key = GET BOT MEMORY "openweather_key"
let url = "https://api.openweathermap.org/data/2.5/weather?q=" + city + "&appid=" + api_key

let response = GET url
let weather = LLM "Describe the weather based on: " + response
TALK weather

News API

DESCRIPTION "Fetches latest news headlines"

let api_key = GET BOT MEMORY "newsapi_key"
let url = "https://newsapi.org/v2/top-headlines?country=us&apiKey=" + api_key

let news = GET url
let summary = LLM "Summarize the top 3 news stories from: " + news
TALK summary

Currency Exchange

PARAM amount AS number LIKE 100 DESCRIPTION "Amount to convert"
PARAM from_currency AS string LIKE "USD" DESCRIPTION "Source currency"
PARAM to_currency AS string LIKE "EUR" DESCRIPTION "Target currency"

DESCRIPTION "Converts currency using exchange rates"

let url = "https://api.exchangerate-api.com/v4/latest/" + from_currency
let rates = GET url

' Parse rates and calculate conversion
let rate = PARSE_JSON(rates, "rates." + to_currency)
let converted = amount * rate
TALK amount + " " + from_currency + " = " + converted + " " + to_currency

Error Handling

Network Failures

let response = GET "https://api.example.com/data"

if (response == "") {
    TALK "Unable to reach the service. Please try again later."
} else {
    # Process successful response
    TALK response
}

API Errors

let data = GET "https://api.service.com/endpoint"

if (data CONTAINS "error") {
    TALK "The service returned an error. Please check your request."
} else {
    # Process valid data
}

Rate Limiting Considerations

When integrating with external APIs:

  1. Respect Rate Limits: Most APIs have usage limits
  2. Cache Responses: Use BOT_MEMORY to store frequently accessed data
  3. Batch Requests: Combine multiple data needs into single calls
  4. Handle 429 Errors: Too Many Requests responses

Caching Pattern

# Check cache first
let cached = GET BOT MEMORY "weather_cache"
let cache_time = GET BOT MEMORY "weather_cache_time"

let current_time = NOW()
let age = current_time - cache_time

if (cached != "" && age < 3600) {
    # Use cached data (less than 1 hour old)
    TALK cached
} else {
    # Fetch fresh data
    let fresh = GET "https://api.weather.com/current"
    SET BOT MEMORY "weather_cache", fresh
    SET BOT MEMORY "weather_cache_time", current_time
    TALK fresh
}

Security Best Practices

Credential Storage

# Store API keys in bot memory, not in scripts
let api_key = GET BOT MEMORY "api_key"

# Never hardcode credentials
# BAD: let key = "sk-1234567890abcdef"
# GOOD: let key = GET BOT MEMORY "api_key"

Input Validation

PARAM user_input AS string LIKE "Seattle" DESCRIPTION "User provided input"

# Sanitize before using in URLs
let safe_input = REPLACE(user_input, " ", "%20")
let url = "https://api.example.com/search?q=" + safe_input

Limitations

Current limitations for external API integration:

  1. No POST/PUT/DELETE: Only GET requests supported
  2. No Custom Headers: Cannot set Authorization headers directly
  3. No Request Body: Cannot send JSON payloads
  4. Timeout Fixed: 30-second timeout cannot be configured
  5. No Streaming: Responses fully buffered before processing

Workarounds

For POST Requests

Create a proxy service that:

  1. Accepts GET requests
  2. Converts to POST internally
  3. Returns the response

For Complex APIs

Use the LLM to:

  1. Interpret API responses
  2. Extract relevant data
  3. Format for user consumption

Example: Complete API Integration

PARAM location AS string LIKE "New York" DESCRIPTION "Location to check"
DESCRIPTION "Provides weather and news for a location"

# Weather API
let weather_key = GET BOT MEMORY "weather_api_key"
let weather_url = "https://api.openweathermap.org/data/2.5/weather?q=" + location + "&appid=" + weather_key
let weather = GET weather_url

# News API  
let news_key = GET BOT MEMORY "news_api_key"
let news_url = "https://newsapi.org/v2/everything?q=" + location + "&apiKey=" + news_key
let news = GET news_url

# Present the information
TALK "Here's your local update for " + location + ":"
TALK "Weather: " + weather
TALK "Latest news: " + news

Best Practices

  1. Store Keys Securely: Use BOT_MEMORY for API credentials
  2. Handle Failures Gracefully: Always check for empty responses
  3. Cache When Possible: Reduce API calls and improve response time
  4. Document API Usage: Comment which APIs your tools depend on
  5. Monitor Usage: Track API calls to avoid exceeding limits
  6. Use HTTPS: Always prefer secure connections
  7. Validate Inputs: Sanitize user inputs before including in URLs

Summary

While botserver’s external API capabilities are currently limited to GET requests, creative use of response processing and bot memory for state management enables integration with many third-party services. For more complex API interactions, consider using proxy services or custom integrations.

Creating an LLM REST Server

General Bots offers a remarkably simple way to turn a Large Language Model (LLM) into a fully functional REST API server. With just a few lines of BASIC-like syntax, you can create sophisticated AI-powered applications.

Overview

By defining PARAM declarations and a DESCRIPTION in your .bas file, General Bots automatically:

  1. Creates REST API endpoints callable by the LLM as tools
  2. Generates OpenAI-compatible function calling schemas
  3. Generates MCP (Model Context Protocol) tool definitions
  4. Handles conversation state and context management

Basic Structure

Every LLM-callable tool follows this structure:

PARAM parameter_name AS type LIKE "example" DESCRIPTION "What this parameter is for"

DESCRIPTION "What this tool does. Called when user wants to [action]."

' Your business logic here

Example: Store Chatbot

Here’s how easy it is to create a chatbot for a store:

PARAM operator AS number LIKE 12312312
DESCRIPTION "Operator code."

DESCRIPTION "It is a WebService of GB."

products = FIND "products.csv"

BEGIN SYSTEM PROMPT
  You must act as a chatbot that will assist a store attendant by 
  following these rules: Whenever the attendant places an order, it must 
  include the table and the customer's name. Example: A 400ml Pineapple 
  Caipirinha for Rafael at table 10. Orders are based on the products and 
  sides from this product menu: ${JSON.stringify(products)}.

  For each order placed, return a JSON containing the product name, the 
  table, and a list of sides with their respective ids.
END SYSTEM PROMPT

That’s it! With just this simple BASIC code, you’ve created a fully functional LLM-powered chatbot that can handle complex order processing.

REST API Endpoints

The system automatically generates REST API endpoints for your dialogs.

Starting a Conversation

GET http://localhost:1111/llm-server/dialogs/start?operator=123&userSystemId=999

This returns a Process ID (PID), a number like 24795078551392, which must be passed along the call chain to maintain conversation context.

Talking to the Bot

Once you have the PID, you can interact with the LLM:

GET http://localhost:1111/llm-server/dk/talk?pid=24795078551392&text=add%20soda

This call behaves like talking to the LLM directly, but it can drive anything General Bots can do in a system-to-system conversation mediated by the LLM. The response is JSON (or any format specified in your BEGIN SYSTEM PROMPT).
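For the store example above, a reply to “add soda” might look like the sketch below; the field names follow the system prompt’s instructions and are illustrative:

{
  "product": "Soda",
  "table": 10,
  "sides": []
}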

Example: Enrollment Process API

Creating a REST API server for any business process is equally straightforward:

PARAM name AS string LIKE "João Silva"
DESCRIPTION "Required full name of the individual."

PARAM birthday AS date LIKE "23/09/2001"
DESCRIPTION "Required birth date of the individual in DD/MM/YYYY format."

PARAM email AS string LIKE "joao.silva@example.com"
DESCRIPTION "Required email address for contact purposes."

PARAM personalid AS integer LIKE "12345678900"
DESCRIPTION "Required Personal ID number of the individual (only numbers)."

PARAM address AS string LIKE "Rua das Flores, 123, São Paulo, SP"
DESCRIPTION "Required full address of the individual."

DESCRIPTION "This is the enrollment process, called when the user wants to enroll. Once all information is collected, confirm the details and inform them that their enrollment request has been successfully submitted. Provide a polite and professional tone throughout the interaction."

SAVE "enrollments.csv", id, name, birthday, email, cpf, rg, address

This creates a full-fledged enrollment system with:

  • Data validation
  • User interaction
  • Data storage
  • Automatic REST API endpoint

The system automatically generates a REST API endpoint that is called by LLM as a tool:

GET http://api.pragmatismo.cloud/llm-server/dialogs/enrollment?birthday=...&name=...

Generated Tool Schemas

MCP Format

For each tool, General Bots generates MCP-compatible schemas:

{
  "name": "enrollment",
  "description": "This is the enrollment process...",
  "input_schema": {
    "type": "object",
    "properties": {
      "name": {
        "type": "string",
        "description": "Required full name of the individual.",
        "example": "João Silva"
      },
      "birthday": {
        "type": "string",
        "description": "Required birth date...",
        "example": "23/09/2001"
      }
    },
    "required": ["name", "birthday", "email", "personalid", "address"]
  }
}

OpenAI Format

Also generates OpenAI function calling format:

{
  "type": "function",
  "function": {
    "name": "enrollment",
    "description": "This is the enrollment process...",
    "parameters": {
      "type": "object",
      "properties": {
        "name": {
          "type": "string",
          "description": "Required full name of the individual."
        }
      },
      "required": ["name", "birthday", "email", "personalid", "address"]
    }
  }
}

Parameter Types

| Type | Description | Example |
|------|-------------|---------|
| string | Text values | "John Smith" |
| number | Numeric values | 42, 3.14 |
| integer | Whole numbers | 100 |
| date | Date values | "2024-01-15" |
| boolean | True/false | true |
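A tool exercising each type might declare its parameters as follows (the parameter names and descriptions are illustrative):

PARAM full_name AS string LIKE "John Smith" DESCRIPTION "Customer name"
PARAM discount AS number LIKE 3.14 DESCRIPTION "Discount multiplier"
PARAM quantity AS integer LIKE 100 DESCRIPTION "Units ordered"
PARAM delivery AS date LIKE "2024-01-15" DESCRIPTION "Requested delivery date"
PARAM rush AS boolean LIKE true DESCRIPTION "Whether this is a rush order"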

Advanced: External API Integration

You can combine LLM tools with external API calls:

PARAM location AS string LIKE "Seattle"
DESCRIPTION "City for weather lookup"

DESCRIPTION "Gets current weather for a city"

let api_key = GET BOT MEMORY "openweather_key"
let url = "https://api.openweathermap.org/data/2.5/weather?q=" + location + "&appid=" + api_key

let response = GET url
let weather = LLM "Describe the weather based on: " + response
TALK weather

Best Practices

  1. Clear Descriptions: Write detailed DESCRIPTION text - this is what the LLM uses to decide when to call your tool.

  2. Good Examples: The LIKE clause provides examples that help both the LLM and API consumers understand expected values.

  3. Validation: Add validation logic to handle edge cases:

PARAM email AS string LIKE "user@example.com"
DESCRIPTION "Email address"

IF NOT INSTR(email, "@") > 0 THEN
    TALK "Please provide a valid email address."
    RETURN
END IF

  4. Error Handling: Always handle potential errors gracefully:

result = GET "https://api.example.com/data"
IF result.error THEN
    TALK "Unable to fetch data. Please try again."
    RETURN
END IF

  5. Secure Credentials: Use BOT MEMORY for API keys:

api_key = GET BOT MEMORY "my_api_key"

Deployment

Once your .bas file is saved in the .gbdialog folder, General Bots automatically:

  1. Compiles the tool definition
  2. Generates the REST endpoints
  3. Makes it available to the LLM as a callable tool
  4. Updates when you modify the file

No additional configuration or deployment steps are required!

Chapter 9: Security

This chapter covers advanced security topics for General Bots.

Overview

General Bots implements a comprehensive security model designed for enterprise deployments:

  • Multi-tenant Architecture: Support for multiple organizations with complete data isolation
  • Role-Based Access Control (RBAC): Fine-grained permissions at every level
  • Knowledge Base Security: Folder-level permissions with Qdrant vector search integration
  • SOC 2 Type II Compliance: Enterprise-grade security controls and audit logging

Security Layers

┌─────────────────────────────────────────────────────────────┐
│                    Organization Layer                        │
│  ┌─────────────────────────────────────────────────────┐    │
│  │                    Bot Layer                         │    │
│  │  ┌─────────────────────────────────────────────┐    │    │
│  │  │              App Layer                       │    │    │
│  │  │  ┌─────────────────────────────────────┐    │    │    │
│  │  │  │        Resource Layer                │    │    │    │
│  │  │  │  (KB folders, files, data)          │    │    │    │
│  │  │  └─────────────────────────────────────┘    │    │    │
│  │  └─────────────────────────────────────────────┘    │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘

Key Concepts

Organizations

Organizations are the top-level tenant in General Bots. Each organization has:

  • Its own subscription and billing
  • Isolated user base and permissions
  • Separate bots and knowledge bases
  • Independent quota management

Users can belong to multiple organizations and switch between them.

Roles and Permissions

General Bots uses a role-based model with predefined roles:

| Role | Description |
|------|-------------|
| Global Admin | Full access to all resources |
| Billing Admin | Manage subscriptions and payments |
| User Admin | Manage users, groups, and role assignments |
| Bot Admin | Create and configure bots |
| KB Manager | Manage knowledge bases and permissions |
| App Developer | Create and publish apps (Forms, Sites, Projects) |
| Editor | Edit content and use apps |
| Viewer | Read-only access |

Knowledge Base Security

KB folders can have individual permission settings:

  • Public: Anyone can access
  • Authenticated: Logged-in users only
  • Role-based: Users with specific roles
  • Group-based: Users in specific groups
  • User-based: Named individual users

These permissions are enforced during vector search, ensuring users only see content they’re authorized to access.

Best Practices

  1. Principle of Least Privilege: Assign the minimum permissions necessary
  2. Use Groups: Manage permissions through groups rather than individual users
  3. Regular Audits: Review permissions and access logs periodically
  4. Secure KB by Default: Set restrictive default permissions on sensitive folders
  5. Enable Audit Logging: Track all permission changes and access attempts

Initial Setup & Admin Bootstrap

When General Bots is installed for the first time, it automatically creates an administrator account. This page explains the bootstrap process and how to access your new installation.

Automatic Bootstrap

On first startup, General Bots checks if any admin users exist in the directory service (Zitadel). If no admin is found, it automatically:

  1. Creates an admin user
  2. Generates a secure random password
  3. Creates a default organization (“General Bots”)
  4. Assigns admin roles to the user
  5. Displays credentials in the server console

Console Output

When bootstrap completes, you’ll see output similar to this in your server console:

╔════════════════════════════════════════════════════════════╗
║                                                            ║
║       🤖 GENERAL BOTS - INITIAL SETUP COMPLETE            ║
║                                                            ║
╠════════════════════════════════════════════════════════════╣
║                                                            ║
║  Administrator account has been created:                   ║
║                                                            ║
║  ┌──────────────────────────────────────────────────────┐  ║
║  │                                                      │  ║
║  │  Username:             admin                         │  ║
║  │  Email:                admin@localhost               │  ║
║  │  Password:             xK3$mP9@vL2nQ7&w              │  ║
║  │                                                      │  ║
║  └──────────────────────────────────────────────────────┘  ║
║                                                            ║
║  Organization: General Bots (abc12345)                     ║
║                                                            ║
╠════════════════════════════════════════════════════════════╣
║                                                            ║
║  ⚠️  IMPORTANT: Save these credentials securely!           ║
║      This information will not be shown again.             ║
║                                                            ║
║  To login, navigate to:                                    ║
║      http://localhost:PORT/auth/login                      ║
║                                                            ║
╚════════════════════════════════════════════════════════════╝

Important: Save these credentials immediately! The password is only displayed once during the initial startup.

First Login

  1. Navigate to http://localhost:PORT/auth/login (replace PORT with your configured port)
  2. Enter the username: admin
  3. Enter the password shown in the console
  4. Click “Sign In”

What Gets Created

| Item | Value | Description |
|------|-------|-------------|
| Username | admin | Default administrator username |
| Email | admin@localhost | Default admin email |
| Password | (random) | 14+ character secure password |
| Organization | "General Bots" | Default organization |
| Roles | admin, org_owner, user_manager | Full administrative access |

Password Security

The auto-generated password includes:

  • 4+ lowercase letters (a-z)
  • 4+ uppercase letters (A-Z)
  • 4+ digits (0-9)
  • 2+ special characters (!@#$%&*)
  • Randomly shuffled for unpredictability

After First Login

Once logged in as admin, you should:

  1. Change your password (recommended)
  2. Update admin email to a real email address
  3. Create additional users via Settings → Users
  4. Configure your organization settings

Creating Additional Users

As an admin, you can create users through the Settings UI:

  1. Go to Settings → Users
  2. Click “Add User”
  3. Fill in user details:
    • Username
    • Email
    • First/Last name
    • Role (user, admin, etc.)
  4. The user will be created in the directory service (Zitadel)
  5. The user will automatically belong to your organization

Organization Structure

Organization (e.g., "Acme Corp")
├── Users
│   ├── admin (org_owner, admin)
│   ├── john.doe (user)
│   └── jane.smith (bot_operator)
├── Bots
│   ├── sales-bot
│   ├── support-bot
│   └── hr-bot
└── Drive Storage
    ├── acme-sales-bot.gbai/
    ├── acme-support-bot.gbai/
    └── acme-hr-bot.gbai/

Manual Bootstrap (Recovery)

If you need to manually create an admin (e.g., for recovery), you can use the bootstrap endpoint:

1. Set Bootstrap Secret

Add to your environment variables:

export GB_BOOTSTRAP_SECRET=your-secure-random-secret

2. Access Bootstrap Page

Navigate to: http://localhost:PORT/auth/bootstrap

3. Fill in the Form

  • Bootstrap Secret: The value you set in GB_BOOTSTRAP_SECRET
  • Organization Name: Your company/org name
  • Admin Details: Username, email, password

4. Submit

The admin account will be created and you can login normally.

Note: The manual bootstrap endpoint only works when GB_BOOTSTRAP_SECRET is set and no admin users exist.

Troubleshooting

“Admin user already exists”

This means bootstrap already completed. If you’ve lost the password:

  1. Access Zitadel console directly (usually port 8300)
  2. Use Zitadel’s password reset functionality
  3. Or delete the user in Zitadel and restart General Bots

Bootstrap Not Running

Check that:

  1. Zitadel (directory service) is running and healthy
  2. The Zitadel configuration in your .env is correct
  3. Server logs are free of connection errors

Cannot Connect to Directory Service

# Check if Zitadel is running
curl http://localhost:8300/healthz

# Check logs
cat botserver-stack/logs/directory/zitadel.log

Security Considerations

  1. First-time setup: Bootstrap only runs once when no admins exist
  2. Console only: Credentials are never logged to files, only displayed in console
  3. Secure password: Auto-generated passwords meet enterprise security requirements
  4. No default passwords: Every installation gets a unique password

API Reference

Bootstrap Endpoint

POST /api/auth/bootstrap
Content-Type: application/json

{
  "bootstrap_secret": "your-secret",
  "organization_name": "My Company",
  "first_name": "John",
  "last_name": "Doe",
  "username": "admin",
  "email": "admin@example.com",
  "password": "<your-secure-password>"
}

Response (Success):

{
  "success": true,
  "message": "Admin user 'admin' created successfully...",
  "user_id": "abc123...",
  "organization_id": "org456..."
}

Response (Error - Admin Exists):

{
  "error": "Admin user already exists",
  "details": "Bootstrap can only be used for initial setup"
}

Next Steps

After completing initial setup, continue with the sections below on user authentication and password security.

User Authentication

General Bots uses a directory service component for user authentication and authorization. No passwords are stored internally in General Bots.

Overview

Authentication in General Bots is handled entirely by the directory service, which provides:

  • User identity management
  • OAuth 2.0 / OpenID Connect (OIDC) authentication
  • Single Sign-On (SSO) capabilities
  • Multi-factor authentication (MFA)
  • User and organization management
  • Role-based access control (RBAC)

Architecture

Directory Service Integration

General Bots integrates with the directory service through:

  • DirectoryClient: Client for API communication
  • AuthService: Service layer for authentication operations
  • OIDC Flow: Standard OAuth2/OIDC authentication flow
  • Service Account: For administrative operations

No Internal Password Storage

  • No password_hash columns: Users table only stores directory user IDs
  • No Argon2 hashing: All password operations handled by directory service
  • No password reset logic: Managed through directory service’s built-in flows
  • Session tokens only: General Bots only manages session state

Authentication Flow

Authentication Architecture

The sequence below reconstructs the flow between the browser, General Bots, the directory service, and PostgreSQL (which stores sessions and user references):

  1. Browser sends a login request to General Bots
  2. General Bots redirects to the directory service (OIDC)
  3. Directory service shows the login page
  4. User enters credentials
  5. Directory service returns tokens
  6. General Bots sets a session cookie
  7. User is authenticated

User Sync

User Registration

User Registration Flow

The registration sequence involves the user, General Bots, and the directory service:

  1. User registration request sent to directory service
  2. Directory service creates user account
  3. User ID returned to botserver
  4. General Bots creates local user reference
  5. Session established with General Bots

User Login

User Login Flow

The login sequence involves the browser, General Bots, and the directory service:

  1. User redirected to directory service login page
  2. Credentials validated by directory service
  3. OIDC tokens returned via callback
  4. General Bots validates tokens
  5. Local session created
  6. Session token issued to client

Token Validation

Each incoming request passes through the following validation pipeline:

  1. Client includes session token
  2. General Bots validates local session
  3. Optional: Refresh with directory service if expired
  4. User context loaded from directory service
  5. Request processed with user identity

Directory Service Configuration

Auto-Configuration

During bootstrap, General Bots automatically:

  1. Installs directory service via installer.rs
  2. Configures directory service with PostgreSQL
  3. Creates default organization
  4. Sets up service account
  5. Creates initial admin user
  6. Configures OIDC application

Database Schema

Users Table (Simplified)

| Column | Type | Description |
|--------|------|-------------|
| id | UUID | Internal General Bots ID |
| directory_id | TEXT | User ID in directory service |
| username | TEXT | Cached username |
| email | TEXT | Cached email |
| created_at | TIMESTAMPTZ | First login time |
| updated_at | TIMESTAMPTZ | Last sync with directory |

Note: No password_hash or any password-related fields exist.

User Sessions Table

| Column | Type | Description |
|--------|------|-------------|
| id | UUID | Session ID |
| user_id | UUID | Reference to users table |
| session_token | TEXT | General Bots session token |
| directory_token | TEXT | Cached OIDC token |
| expires_at | TIMESTAMPTZ | Session expiration |
| created_at | TIMESTAMPTZ | Session start |
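A minimal SQL sketch of these two tables, assuming PostgreSQL (column names follow the tables above; constraints and defaults are illustrative):

CREATE TABLE users (
    id           UUID PRIMARY KEY,
    directory_id TEXT NOT NULL UNIQUE,   -- user ID in the directory service
    username     TEXT,                   -- cached from the directory
    email        TEXT,                   -- cached from the directory
    created_at   TIMESTAMPTZ NOT NULL DEFAULT now(),
    updated_at   TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE user_sessions (
    id              UUID PRIMARY KEY,
    user_id         UUID NOT NULL REFERENCES users(id),
    session_token   TEXT NOT NULL,
    directory_token TEXT,                -- cached OIDC token
    expires_at      TIMESTAMPTZ NOT NULL,
    created_at      TIMESTAMPTZ NOT NULL DEFAULT now()
);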

Authentication Endpoints

Login Initiation

GET /auth/login

Redirects to Zitadel login page with OIDC parameters.

OAuth Callback

GET /auth/callback?code=...&state=...

Handles return from Zitadel after successful authentication.

Logout

POST /auth/logout

Terminates local session and optionally triggers Zitadel logout.

Session Validation

GET /auth/validate
Headers: Authorization: Bearer {session_token}

Directory Service Features

User Management

  • Create, update, delete users
  • Password reset flows
  • Email verification
  • Profile management
  • Password policies (managed in Zitadel)
  • Account locking
  • Password recovery

Multi-Factor Authentication

Configured in Zitadel:

  • TOTP (Time-based One-Time Passwords)
  • WebAuthn/FIDO2
  • SMS OTP (if configured)
  • Email OTP

Single Sign-On

  • One login for all applications
  • Session management across services
  • Centralized user directory
  • External IdP integration

Organizations

  • Multi-tenant support
  • Organization-specific policies
  • Delegated administration
  • User isolation

Directory Service Integration

Directory Client Implementation

Located in src/directory/client.rs:

  • Manages API communication
  • Handles token refresh
  • Caches access tokens
  • Provides user operations

AuthService

Located in src/directory/mod.rs:

  • High-level authentication operations
  • Session management
  • User profile caching
  • Group/role management

Security Benefits

Centralized Security

  • Professional identity platform
  • Regular security updates
  • Compliance certifications
  • Audit logging

No Password Liability

  • No password storage risks
  • No hashing implementation errors
  • No password database leaks
  • Reduced compliance burden

Advanced Features

  • Passwordless authentication
  • Adaptive authentication
  • Risk-based access control
  • Session security policies

User Operations

Creating Users

Creating users via Directory Client:

  • Username: john_doe
  • Email: john@example.com
  • First name: John
  • Last name: Doe
  • Password: Set through Directory UI or email flow

Getting User Info

User information is fetched from the Directory service using the directory ID.

Managing Sessions

Sessions are managed locally by General Bots but authenticated through Directory Service:

  • Session creation after Directory auth
  • Local session tokens for performance
  • Periodic validation with Zitadel
  • Session termination on logout

Default Users

During bootstrap, the system creates:

  1. Admin User

    • Username: admin (configurable)
    • Email: admin@localhost
    • Password: Randomly generated (displayed once during setup)
    • Role: Administrator
  2. Regular User

    • Username: user
    • Email: user@default
    • Password: Randomly generated (displayed once during setup)
    • Role: Standard user

Groups and Roles

Organization Management

  • Organizations created in Zitadel
  • Users assigned to organizations
  • Roles defined per organization
  • Permissions inherited from roles

Role-Based Access

  • Admin: Full system access
  • User: Standard bot interaction
  • Custom roles: Defined in Zitadel

Monitoring and Audit

Directory Service Audit Logs

  • All authentication events logged
  • User actions tracked
  • Administrative changes recorded
  • Security events monitored

Session Metrics

General Bots tracks:

  • Active sessions count
  • Session creation rate
  • Failed authentication attempts
  • Token refresh frequency

Troubleshooting

Common Issues

  1. Zitadel Connection Failed

    • Check Zitadel is running (default port 8300)
    • Verify ZITADEL_ISSUER_URL
    • Check network connectivity
  2. Authentication Fails

    • Verify client credentials
    • Check redirect URI configuration
    • Review Zitadel logs
  3. Session Issues

    • Clear browser cookies
    • Check session expiry settings
    • Verify token refresh logic

Best Practices

  1. Use Zitadel UI: Manage users through Zitadel interface
  2. Configure MFA: Enable multi-factor for admin accounts
  3. Regular Updates: Keep Zitadel updated
  4. Monitor Logs: Review authentication logs regularly
  5. Session Timeout: Configure appropriate session duration
  6. Secure Communication: Use HTTPS in production

Migration from Other Systems

When migrating from password-based systems:

  1. Export user data (without passwords)
  2. Import users into Zitadel
  3. Force password reset for all users
  4. Update application to use OIDC flow
  5. Remove password-related code

Summary

General Bots’ integration with the Directory Service provides enterprise-grade authentication without the complexity and risk of managing passwords internally. All authentication operations are delegated to the Directory Service, while General Bots focuses on session management and bot interactions.

Password Security

General Bots delegates all password security to the Directory Service (currently Zitadel; it can be migrated to Keycloak), an enterprise-grade identity management platform. No passwords are ever stored, hashed, or managed within General Bots itself.

Overview

Password security is handled entirely by Zitadel, which provides:

  • Industry-standard password hashing (Argon2/bcrypt)
  • Configurable password policies
  • Password history and rotation
  • Breach detection
  • Self-service password recovery

No Internal Password Management

What General Bots Does NOT Do

  • No password storage: No password or hash columns in database
  • No hashing implementation: No Argon2/bcrypt code in botserver
  • No password validation: All validation done by Zitadel
  • No password reset logic: Handled through Zitadel workflows
  • No password policies: Configured in Zitadel admin console

What General Bots DOES Do

  • Redirects to Zitadel for authentication
  • Stores Zitadel user IDs
  • Manages local session tokens
  • Caches user profile information
  • Validates sessions locally for performance

Zitadel Password Security

Hashing Algorithm

Zitadel uses industry-standard algorithms:

  • Default: Argon2id (recommended)
  • Alternative: bcrypt (for compatibility)
  • Configurable: Parameters can be adjusted
  • Automatic: Rehashing on algorithm updates

Password Policies

Configured in Zitadel admin console:

  • Minimum length (default: 8 characters)
  • Maximum length (configurable)
  • Character requirements (uppercase, lowercase, numbers, symbols)
  • Complexity rules
  • Common password blacklist
  • Password history (prevent reuse)
  • Expiration policies

Password Storage in Zitadel

Zitadel stores:

  • Hashed passwords (never plaintext)
  • Salt per password
  • Algorithm identifier
  • Hash parameters
  • Password history
  • Last changed timestamp

Configuration

Setting Password Policies

Access Zitadel admin console:

  1. Navigate to Settings → Password Complexity
  2. Configure requirements:
    • Min/max length
    • Required character types
    • Expiry settings
  3. Save changes (applies immediately)

Example Policy Configuration

In Zitadel UI or API:

{
  "minLength": 12,
  "maxLength": 128,
  "hasUppercase": true,
  "hasLowercase": true,
  "hasNumber": true,
  "hasSymbol": true,
  "passwordHistory": 5,
  "expiryDays": 90
}

Password Reset Flow

User-Initiated Reset

  1. User clicks “Forgot Password” on Zitadel login
  2. Email sent with reset link
  3. User clicks link (time-limited)
  4. New password entered in Zitadel UI
  5. Password validated against policy
  6. Hash updated in Zitadel database
  7. User can login with new password

Admin-Initiated Reset

  1. Admin accesses Zitadel console
  2. Navigates to user management
  3. Triggers password reset
  4. User receives reset email
  5. Same flow as user-initiated

Security Features

Breach Detection

Zitadel includes:

  • Have I Been Pwned integration
  • Checks passwords against breach databases
  • Warns users of compromised passwords
  • Forces reset if detected in breach

Multi-Factor Authentication

Additional security beyond passwords:

  • TOTP (Google Authenticator, etc.)
  • WebAuthn/FIDO2 keys
  • SMS OTP (if configured)
  • Email verification codes

Account Protection

  • Account lockout after failed attempts
  • CAPTCHA after threshold
  • IP-based rate limiting
  • Suspicious activity detection
  • Passwordless options available

Integration Points

Bootstrap Process

During setup, General Bots:

  1. Installs Directory Service (Zitadel)
  2. Configures database connection
  3. Creates admin account with randomly generated password
  4. Password is displayed once during initial setup

Authentication Flow

  1. User enters credentials in Directory Service UI
  2. Directory Service validates password
  3. OIDC tokens issued
  4. General Bots receives tokens
  5. No password ever touches General Bots

Session Management

After Directory Service authentication:

  • General Bots creates local session
  • Session token generated (not password-related)
  • User ID linked to Directory Service ID
  • No password data stored

Default Credentials

Initial Admin Account

Created during bootstrap:

  • Username: admin
  • Password: Randomly generated
  • Displayed once during initial setup
  • Should be stored securely or changed immediately

Initial User Account

Created during bootstrap:

  • Username: user
  • Password: Randomly generated
  • Displayed once during initial setup
  • Must be changed on first login

Best Practices

For Administrators

  1. Secure Initial Passwords: Store or change randomly generated passwords immediately
  2. Configure Policies: Set appropriate password requirements
  3. Enable MFA: Require for admin accounts
  4. Monitor Logs: Review authentication attempts
  5. Update Regularly: Keep Zitadel updated
  6. Test Recovery: Verify password reset works through Directory Service

For Developers

  1. Never Touch Passwords: Let Zitadel handle everything
  2. Use OIDC Flow: Standard OAuth2/OpenID Connect
  3. Validate Tokens: Check with Zitadel when needed
  4. Cache Carefully: Don’t cache sensitive data
  5. Log Safely: Never log authentication details

For Users

  1. Use Strong Passwords: Follow policy requirements
  2. Enable MFA: Add extra security layer
  3. Unique Passwords: Don’t reuse across services
  4. Regular Updates: Change periodically if required
  5. Report Issues: Alert admins of problems

Compliance

Zitadel’s password handling helps meet:

  • GDPR: Data protection requirements
  • NIST 800-63B: Modern password guidelines
  • OWASP: Security best practices
  • PCI DSS: Payment card standards
  • HIPAA: Healthcare requirements
  • SOC 2: Security controls

Troubleshooting

Common Password Issues

  1. Password Reset Not Working

    • Check email configuration
    • Verify SMTP settings in Zitadel
    • Check spam folders
  2. Policy Not Enforced

    • Review Zitadel configuration
    • Check policy is active
    • Verify user’s organization settings
  3. Account Locked

    • Check lockout policy
    • Admin can unlock via console
    • Wait for timeout period
  4. MFA Issues

    • Verify time sync for TOTP
    • Check backup codes
    • Admin can reset MFA

Security Benefits

Delegated Security

  • Professional Implementation: Security experts maintain Zitadel
  • Regular Updates: Security patches applied by Zitadel team
  • Compliance: Certifications maintained by Zitadel
  • No Liability: Password breaches not botserver’s responsibility

Reduced Attack Surface

  • No password code to exploit
  • No hashing vulnerabilities
  • No timing attacks possible
  • No password database to breach

Advanced Features

Available through Zitadel:

  • Passwordless authentication
  • Biometric support
  • Hardware key support
  • Risk-based authentication
  • Adaptive security

Migration Guide

From Internal Passwords

If migrating from a system with internal passwords:

  1. Export Users: Username and email only (no passwords)
  2. Import to Zitadel: Create accounts
  3. Force Reset: All users must set new passwords
  4. Remove Old Code: Delete password-related code
  5. Update Docs: Reflect new authentication flow

Password Policy Migration

  1. Document existing policy
  2. Configure equivalent in Zitadel
  3. Test with sample accounts
  4. Communicate changes to users
  5. Provide support during transition

Summary

General Bots achieves enterprise-grade password security by not handling passwords at all. The Directory Service provides professional identity management with all the security features needed for production deployments. This separation of concerns allows General Bots to focus on bot functionality while delegating security to a specialized platform.

API Endpoints

This chapter provides a comprehensive reference for the API endpoints exposed by General Bots. The platform offers RESTful endpoints for authentication, session management, user operations, and bot interactions, as well as WebSocket connections for real-time communication.

Authentication Endpoints

Authentication in General Bots is delegated to the Directory Service (Zitadel), which implements industry-standard OAuth2 and OpenID Connect protocols. The authentication endpoints serve as integration points with this external identity provider.

Login

POST /api/auth/login
Content-Type: application/json

{
  "email": "user@example.com",
  "password": "<your-password>",
  "remember": true
}

Response (Success):

{
  "success": true,
  "user_id": "abc123...",
  "access_token": "eyJ...",
  "refresh_token": "eyJ...",
  "expires_in": 3600,
  "requires_2fa": false,
  "redirect": "/"
}

Response (2FA Required):

{
  "success": false,
  "requires_2fa": true,
  "session_token": "temp-session-token"
}

Logout

POST /api/auth/logout
Authorization: Bearer <access_token>

Response:

{
  "success": true,
  "message": "Logged out successfully"
}

Get Current User

GET /api/auth/me
Authorization: Bearer <access_token>

Response:

{
  "id": "user-uuid",
  "username": "johndoe",
  "email": "john@example.com",
  "first_name": "John",
  "last_name": "Doe",
  "display_name": "John Doe",
  "roles": ["user", "bot_operator"],
  "organization_id": "org-uuid"
}

Refresh Token

POST /api/auth/refresh
Content-Type: application/json

{
  "refresh_token": "eyJ..."
}

Response:

{
  "success": true,
  "access_token": "eyJ...",
  "refresh_token": "eyJ...",
  "expires_in": 3600
}

Bootstrap Admin (First-Time Setup)

This endpoint is only available when GB_BOOTSTRAP_SECRET is set and no admin users exist.

POST /api/auth/bootstrap
Content-Type: application/json

{
  "bootstrap_secret": "your-secret",
  "organization_name": "My Company",
  "first_name": "John",
  "last_name": "Doe",
  "username": "admin",
  "email": "admin@example.com",
  "password": "<your-secure-password>"
}

Response (Success):

{
  "success": true,
  "message": "Admin user 'admin' created successfully...",
  "user_id": "abc123...",
  "organization_id": "org456..."
}

Anonymous Session (Chat)

Anonymous users can obtain a session for chat without authentication:

GET /api/auth?bot_name=default

Response:

{
  "user_id": "anonymous-uuid",
  "session_id": "session-uuid",
  "status": "authenticated"
}

OAuth Endpoints

OAuth Login

The login process begins when a client application directs the user to the /auth/login endpoint using a GET request. This endpoint does not require any request body or authentication headers, as its purpose is to initiate the OAuth2 flow. Upon receiving this request, the server generates appropriate OAuth2 parameters and redirects the user’s browser to the Zitadel login page, where they can enter their credentials securely within the identity provider’s domain.

OAuth Callback

After successful authentication with the Directory Service, the user’s browser is redirected back to /auth/callback with authorization parameters. This GET endpoint expects two query parameters: a code parameter containing the authorization code issued by the Directory Service, and a state parameter that serves as a CSRF protection mechanism to ensure the callback corresponds to a legitimate login attempt.

When the callback is processed successfully, the server exchanges the authorization code for access tokens, creates a local session, sets a session cookie in the response, and redirects the user to the main application interface. This seamless flow means users typically don’t notice the redirect chain happening in the background.

Logout

To terminate a session, clients send a POST request to /auth/logout. This endpoint requires the current session token in the Authorization header using the Bearer scheme. The server invalidates the session both locally and with the Directory Service, returning a JSON response confirming successful logout. After logout, the session token becomes invalid and cannot be used for further requests.

Session Validation

The /auth/validate endpoint allows clients to verify whether their current session token remains valid without performing any other operation. By sending a GET request with the session token in the Authorization header, clients receive a JSON response indicating whether the token is valid, the associated user identifier, and the session’s expiration timestamp. This endpoint is particularly useful for single-page applications that need to check session status before making other API calls.
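A successful validation response might look like this sketch (field names are assumed from the description above):

{
  "valid": true,
  "user_id": "user-uuid",
  "expires_at": "2024-06-01T18:00:00Z"
}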

Session Management

Session management endpoints provide control over the user’s active sessions and their associations with bots.

Current Session Information

Clients can retrieve information about their current session by sending a GET request to /api/session. The response includes the session identifier, the user’s identifier, the currently selected bot identifier if any, and timestamps indicating when the session was created and when it will expire. This information helps applications understand the current authentication context and present appropriate interface elements.
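A sketch of the request and response, with field names assumed from the description above:

GET /api/session
Authorization: Bearer <session_token>

{
  "session_id": "session-uuid",
  "user_id": "user-uuid",
  "bot_id": "bot-uuid",
  "created_at": "2024-06-01T10:00:00Z",
  "expires_at": "2024-06-01T18:00:00Z"
}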

Creating Bot Sessions

When a user wants to interact with a specific bot, the application creates a bot session by sending a POST request to /api/session/create. The request body contains a JSON object with the target bot’s identifier. If the user has permission to access the requested bot, the server creates a new session linking the user to that bot and returns the session details including its identifier, the associated bot identifier, and the session’s active status.
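A sketch of the exchange, with field names assumed from the description above:

POST /api/session/create
Authorization: Bearer <session_token>
Content-Type: application/json

{
  "bot_id": "bot-uuid"
}

Response:

{
  "session_id": "session-uuid",
  "bot_id": "bot-uuid",
  "active": true
}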

This separation between authentication sessions and bot sessions allows users to maintain their login while switching between different bots without requiring re-authentication.

Terminating Sessions

To end a specific session, clients send a DELETE request to /api/session/:id, where the path parameter identifies the session to terminate. The server validates that the requester has permission to terminate the specified session, typically by verifying they own it, and then invalidates it. The response confirms whether the termination succeeded.

User Management

User endpoints provide access to profile information and allow limited profile modifications.

Retrieving User Information

The /api/users/me endpoint responds to GET requests with the current user’s profile information. This includes their unique identifier, username, email address, and account creation timestamp. Since user data is managed in the Directory Service, this endpoint essentially proxies information from that system into a format convenient for the application.

Profile Updates

Users can update certain profile fields by sending a PUT request to /api/users/me with a JSON body containing the fields to modify. Supported fields typically include email address, first name, and last name. It’s important to note that these updates are actually propagated to the Directory Service, which serves as the authoritative source for user information. The endpoint validates the requested changes and forwards them to Zitadel for persistence.
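A sketch of a profile update request (the fields shown are those listed above as typically supported):

PUT /api/users/me
Authorization: Bearer <session_token>
Content-Type: application/json

{
  "email": "john.new@example.com",
  "first_name": "John",
  "last_name": "Doe"
}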

Bot Interaction

Real-time communication with bots occurs primarily through WebSocket connections, though REST endpoints exist for bot discovery.

WebSocket Communication

The primary channel for bot interaction is the WebSocket endpoint at /ws. After establishing a connection, clients send JSON-formatted messages containing a message type, the content of the message, and the session identifier. The server processes these messages, routes them to the appropriate bot, and sends responses back through the same WebSocket connection.

This real-time bidirectional communication enables responsive conversational experiences without the overhead of repeated HTTP connections. The WebSocket connection maintains state throughout the conversation, allowing for context-aware responses.
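A client message over /ws might look like the following sketch; the exact field names depend on the protocol version and are illustrative:

{
  "type": "message",
  "content": "What is my order status?",
  "session_id": "session-uuid"
}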

Bot Discovery

Users discover available bots by sending a GET request to /api/bots. The response contains an array of bot objects, each including the bot’s identifier, display name, description, and current operational status. Only bots that the authenticated user has permission to access appear in this list, ensuring users see a curated view appropriate to their organizational role and permissions.
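A sketch of the discovery response, with fields assumed from the description above:

GET /api/bots
Authorization: Bearer <session_token>

[
  {
    "id": "bot-uuid",
    "name": "support-bot",
    "description": "Customer support assistant",
    "status": "active"
  }
]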

Administrative Endpoints

Administrative endpoints provide system management capabilities for users with appropriate privileges. The system status endpoint at /api/admin/system/status returns health information about the various system components. The metrics endpoint at /api/admin/system/metrics provides operational statistics useful for monitoring and capacity planning. Both endpoints require administrative privileges, which are validated against the user’s roles in the Directory Service.

Group Management

Group management endpoints support the organization’s permission structure. The /api/groups/create endpoint accepts POST requests to establish new groups. The /api/groups/list endpoint returns all groups visible to the requesting user. Individual group membership can be queried through /api/groups/:id/members. These endpoints work in conjunction with the Directory Service to maintain consistent group definitions across the platform.

Rate Limiting

To ensure fair resource allocation and protect against abuse, all API endpoints implement rate limiting. Public endpoints, including the health check, allow 60 requests per hour from unauthenticated clients. Authenticated users can make up to 1000 requests per hour across all endpoints. Administrative users receive a higher limit of 5000 requests per hour to accommodate their management responsibilities.

Rate limit information is communicated through response headers. The X-RateLimit-Limit header indicates the maximum requests allowed in the current window, X-RateLimit-Remaining shows how many requests remain, and X-RateLimit-Reset provides a Unix timestamp indicating when the limit resets. Applications should monitor these headers and implement appropriate backoff strategies when approaching limits.
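For example, a response to an authenticated user's request might carry headers like these (the values are illustrative):

X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 997
X-RateLimit-Reset: 1717243200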

Error Handling

All API endpoints return errors in a consistent JSON format. The response body contains an error object with a machine-readable code, a human-readable message, and an optional details object providing additional context. Common error codes include UNAUTHORIZED for missing or invalid authentication, FORBIDDEN when the user lacks required permissions, NOT_FOUND for requests targeting non-existent resources, RATE_LIMITED when request quotas are exceeded, and SERVER_ERROR for internal failures.
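A rate-limiting error might therefore look like this sketch (the code/message/details structure follows the description above; the values are illustrative):

{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Request quota exceeded. Retry after the reset time.",
    "details": {
      "retry_after_seconds": 120
    }
  }
}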

Clients should implement error handling that examines the error code to determine appropriate recovery actions. Authentication errors might prompt a re-login flow, while rate limiting errors should trigger request throttling.

Cross-Origin Resource Sharing

The API supports Cross-Origin Resource Sharing (CORS) to enable browser-based applications hosted on different domains. In development environments, the server accepts requests from any origin. Production deployments should configure specific allowed origins to prevent unauthorized cross-domain access. The allowed methods include GET, POST, PUT, DELETE, and OPTIONS, with Content-Type and Authorization as permitted headers.

Health Monitoring

The /health endpoint provides a simple way to verify the server is operational. Unlike other endpoints, this one requires no authentication, making it suitable for external monitoring systems and load balancer health checks. The response includes a status indicator and a timestamp, providing basic confirmation that the server can process requests.
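A sketch of the response (fields assumed from the description above):

GET /health

{
  "status": "ok",
  "timestamp": "2024-06-01T12:00:00Z"
}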

Directory User Management Endpoints

These endpoints require admin privileges and interact with the Zitadel directory service.

List Users

GET /api/directory/users/list?page=1&per_page=20&organization_id=org-uuid
Authorization: Bearer <access_token>

Response:

{
  "users": [
    {
      "id": "user-uuid",
      "username": "johndoe",
      "email": "john@example.com",
      "first_name": "John",
      "last_name": "Doe",
      "state": "active",
      "organization_id": "org-uuid",
      "roles": ["user"]
    }
  ],
  "total": 1,
  "page": 1,
  "per_page": 20
}

Create User

POST /api/directory/users/create
Authorization: Bearer <access_token>
Content-Type: application/json

{
  "username": "newuser",
  "email": "newuser@example.com",
  "first_name": "New",
  "last_name": "User",
  "organization_id": "org-uuid",
  "roles": ["user"]
}

Response:

{
  "success": true,
  "message": "User newuser created successfully",
  "user_id": "new-user-uuid"
}

Assign User to Organization

POST /api/directory/users/:user_id/organization
Authorization: Bearer <access_token>
Content-Type: application/json

{
  "organization_id": "org-uuid",
  "roles": ["user", "bot_operator"]
}

Update User Roles

PUT /api/directory/users/:user_id/organization/:org_id/roles
Authorization: Bearer <access_token>
Content-Type: application/json

{
  "roles": ["admin", "org_owner"]
}

Delete User

DELETE /api/directory/users/:user_id/delete
Authorization: Bearer <access_token>

Group Management Endpoints

List Groups

GET /api/directory/groups/list?page=1&per_page=20
Authorization: Bearer <access_token>

Create Group

POST /api/directory/groups/create
Authorization: Bearer <access_token>
Content-Type: application/json

{
  "name": "Marketing Team",
  "description": "Marketing department users"
}

Add Member to Group

POST /api/directory/groups/:group_id/members/add
Authorization: Bearer <access_token>
Content-Type: application/json

{
  "user_id": "user-uuid",
  "roles": ["member"]
}

Implementation Status

The current implementation provides full support for WebSocket communication, administrative endpoints, group management, and health checking. OAuth authentication flows through the Directory Service are functional but continue to evolve. Session management endpoints work for basic scenarios with ongoing enhancements planned. Some user profile endpoints and direct REST messaging capabilities remain under development, with batch operations planned for future releases.

Security Considerations

Several security practices should guide API usage. With the exception of the health endpoint, all API calls require valid authentication. Administrative operations additionally verify that the requester holds appropriate roles within the Directory Service. Session tokens must be treated as secrets, stored securely on clients, and never logged or exposed. Production deployments must use HTTPS to encrypt all API traffic. Applications performing state-changing operations should implement CSRF protection through the state parameter and appropriate token validation.

Effective API integration follows several patterns. Always include the session token in the Authorization header for authenticated requests. Implement graceful handling of token expiration by detecting authentication errors and prompting re-login when necessary. Use exponential backoff for retry logic, starting with short delays and increasing them progressively for repeated failures. Cache responses where appropriate to reduce server load and improve application responsiveness. Prefer WebSocket connections for conversational interactions where real-time response is important. Monitor rate limit headers proactively to avoid hitting limits during normal operation.

Bot Authentication

This section describes how General Bots handles bot authentication through its session-based architecture. Unlike traditional systems where bots might have independent credentials, General Bots implements a model where bots operate within the context of authenticated user sessions.

Overview

Bot authentication in General Bots follows a fundamentally different approach from conventional bot platforms. Rather than assigning credentials directly to bots, the system ties all bot operations to user sessions. When a user authenticates through the Directory Service, they gain access to interact with bots based on their organizational membership and assigned permissions. This design eliminates the complexity of managing separate bot credentials while maintaining robust security through user-based access control.

The key principle underlying this architecture is that bots are resources accessed by users, not independent actors with their own identities. This approach simplifies security management and creates a clear audit trail linking all bot activities to specific authenticated users.

Bot Registration

When the system bootstraps, bots are registered in the database through an automated discovery process. The system scans the templates/ directory for any folder ending in .gbai and creates corresponding entries in the database.

Database Storage

Each bot entry in the bots table contains a UUID primary key that uniquely identifies the bot, the bot’s display name, an organization association that determines which users can access it, and timestamps tracking when the bot was created and last modified. This minimal schema reflects the philosophy that bots themselves don’t require authentication credentials—they simply need to be identifiable and associable with organizations.

Configuration Management

Bot-specific settings are stored separately in the bot_configuration table, which maintains key-value pairs loaded from the bot’s config.csv file. This table holds runtime parameters, feature flags, LLM configuration, and any other settings that control the bot’s behavior. By separating configuration from the core bot record, administrators can update settings without affecting the bot’s fundamental identity or registration status.

Session-Based Bot Access

The session-based access model forms the foundation of how users interact with bots. When a user wants to communicate with a bot, they must first authenticate through the Directory Service using standard OAuth2/OIDC flows. Once authenticated, the user can select from available bots based on their permissions, and the system creates a session that links that specific user to their chosen bot.

Session Structure

The user_sessions table maintains the critical relationship between users and bots. Each session record contains a unique identifier, references to both the user and the selected bot, a session token for subsequent requests, and an expiration timestamp. All operations within that session are automatically scoped to the associated bot, preventing any accidental or intentional cross-bot data access.

This session structure means that when a user sends a message or requests information, the system automatically knows which bot should handle the request and which data stores should be queried. The session token serves as proof of both user authentication and bot selection, streamlining the authorization process for each subsequent request.
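Illustratively, a session row could be modeled as follows; the field names are assumptions, and only the columns named above are represented.

```rust
use chrono::{DateTime, Utc};
use uuid::Uuid;

// Hypothetical mapping of a `user_sessions` row.
struct UserSession {
    id: Uuid,                  // unique session identifier
    user_id: Uuid,             // the authenticated user
    bot_id: Uuid,              // the bot selected for this session
    session_token: String,     // presented on subsequent requests
    expires_at: DateTime<Utc>, // session expiration timestamp
}

impl UserSession {
    // Every request validated against this record is implicitly
    // scoped to `bot_id`, which is what prevents cross-bot access.
    fn is_expired(&self) -> bool {
        Utc::now() >= self.expires_at
    }
}
```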

Data Isolation

General Bots implements strict data isolation between bots to ensure that information from one bot cannot leak to another. Each bot maintains its own isolated storage for message history, memories, knowledge bases, configuration, and drive bucket contents.

Cross-Bot Protection

The isolation model works at multiple levels. Sessions are locked to a single bot for their entire duration, meaning the system cannot accidentally route requests to the wrong bot. All database queries include the bot identifier as a filter condition, ensuring that even if a bug existed in the application logic, the database layer would prevent cross-bot data access. Storage buckets in the drive system are segregated by bot, with each bot’s files residing in a dedicated bucket that other bots cannot access.

This defense-in-depth approach means that data isolation doesn’t depend on any single mechanism being perfect. Multiple independent safeguards work together to maintain separation between bots.
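As a sketch of the query-level safeguard, every data access can carry the session's bot identifier as a bind parameter. The example below assumes sqlx with the postgres and uuid features; the table and column names are hypothetical.

```rust
use uuid::Uuid;

// Load a user's message history, always filtered by the session's bot.
async fn load_messages(
    pool: &sqlx::PgPool,
    bot_id: Uuid,
    user_id: Uuid,
) -> Result<Vec<(Uuid, String)>, sqlx::Error> {
    sqlx::query_as(
        "SELECT id, content FROM messages WHERE bot_id = $1 AND user_id = $2",
    )
    .bind(bot_id) // the session's bot scopes every query
    .bind(user_id)
    .fetch_all(pool)
    .await
}
```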

Bot Discovery and Selection

Users access bots through a discovery process that respects organizational boundaries and permission assignments. The available bots for any given user depend on their organization membership, any direct bot assignments they’ve received, whether specific bots are marked as publicly available, and their role-based access permissions.

When starting a new conversation, users are presented with a list of bots they’re authorized to access. After selecting a bot, the system creates a new session linking the user to that bot, loads the bot’s context including its configuration and any persistent memories, and the conversation begins with the bot’s welcome message or startup script.

Bot Lifecycle

Understanding the bot lifecycle helps administrators manage their bot deployments effectively. Bots move through several states from creation to active operation.

Creation Process

During the bootstrap process, the system discovers bot templates and registers them in the database. For each template found, the system creates a bot record with generated identifiers, loads configuration from the bot’s config.csv file, uploads the bot’s resources to the drive storage system, and indexes any knowledge base documents into the vector database. This automated process means that deploying a new bot is as simple as adding its folder to the templates directory and restarting the server.
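The discovery step can be sketched as a simple directory scan; registration, resource upload, and indexing would then follow for each folder found.

```rust
use std::fs;
use std::path::PathBuf;

// Collect every folder under templates/ whose name ends in ".gbai".
fn discover_bot_templates(templates_dir: &str) -> std::io::Result<Vec<PathBuf>> {
    let mut found = Vec::new();
    for entry in fs::read_dir(templates_dir)? {
        let path = entry?.path();
        let is_gbai = path
            .file_name()
            .and_then(|name| name.to_str())
            .map_or(false, |name| name.ends_with(".gbai"));
        if path.is_dir() && is_gbai {
            found.push(path);
        }
    }
    Ok(found)
}
```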

Activation Requirements

A bot becomes active and available for user access when its registration is complete, its configuration passes validation, all required resources are available in storage, and no critical errors occurred during initialization. If any of these conditions aren’t met, the bot remains in an inactive state and won’t appear in users’ available bot lists.

Updating Bots

Bot updates follow a similar automated process. Changes to configuration files are detected and applied, modified scripts are reloaded, and knowledge base updates trigger reindexing. Importantly, none of these updates require any authentication changes because bots don’t have their own credentials to manage.

Permission Levels

Bot access is controlled through a hierarchy of visibility settings that administrators configure per bot. At the most open level, public bots can be accessed by anyone with a valid user account. Organization-level bots restrict access to members of the bot’s associated organization. Private bots limit access to specifically assigned users. Admin-level bots require administrative privileges to access.

These permission levels work in conjunction with the Directory Service’s group and role system, allowing fine-grained control over who can access which bots within an organization.
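The hierarchy can be sketched as an enum plus a check that combines organization membership, direct assignment, and role information supplied by the Directory Service; the names here are illustrative.

```rust
// The four visibility levels described above.
enum BotVisibility {
    Public,       // any authenticated user
    Organization, // members of the bot's organization
    Private,      // specifically assigned users only
    Admin,        // administrative privileges required
}

// Decide access from facts the Directory Service already knows.
fn can_access(
    visibility: &BotVisibility,
    same_org: bool,
    directly_assigned: bool,
    is_admin: bool,
) -> bool {
    match visibility {
        BotVisibility::Public => true,
        BotVisibility::Organization => same_org,
        BotVisibility::Private => directly_assigned,
        BotVisibility::Admin => is_admin,
    }
}
```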

Configuration Settings

Bot identity and access configuration are specified in the bot’s config.csv file. The identity settings include the bot’s display name and its organization association. Access configuration specifies the visibility level, which roles are permitted to access the bot, and operational limits like maximum concurrent sessions.

For example, a customer service bot might be configured with organization-level access, allowing any authenticated member of the organization to interact with it, while an HR bot might restrict access to members of the HR role group.

Security Considerations

The design decision not to give bots their own credentials has significant security benefits. Bots cannot authenticate independently, so there's no possibility of a bot's credentials being compromised or misused. Every bot operation requires a valid user context, creating a complete audit trail. There's no mechanism for unauthorized bot-to-bot communication because bots can't initiate actions without a user session.

Preventing Bot Impersonation

Because bots have no credentials, they cannot be impersonated through stolen credentials. An attacker would need to compromise an actual user account to interact with a bot, and even then, their actions would be logged against that user account. This makes detecting and investigating security incidents straightforward—every bot interaction traces back to a specific authenticated user.

API Integration

All programmatic access to bots follows the same user-authenticated model as interactive access. API requests must include a valid user session token in the Authorization header, along with the target bot identifier in the request body or URL.

There are no separate bot API keys or service accounts for bot access. This uniformity simplifies the security model and ensures that API access receives the same level of auditing and access control as interactive access through the web interface.

Multi-Bot Scenarios

Users who need to work with multiple bots can do so through several mechanisms. They can end their current bot session and start a new one with a different bot, with their conversation context switching to the new bot while history from each bot remains preserved separately. For users who need simultaneous access to multiple bots, the system supports concurrent sessions with different session identifiers, separate conversation contexts, and fully isolated data access.

This flexibility allows power users to leverage multiple bots for different tasks without the complexity of managing separate credentials or authentication contexts.

Monitoring and Auditing

Administrators can monitor bot access patterns through built-in metrics and logging capabilities. Authentication metrics track sessions per bot, user engagement levels, access attempts, and permission denials. Audit logging captures session creation events, bot selection actions, configuration changes, and any access violations.

These monitoring capabilities support both operational oversight and compliance requirements, providing the visibility needed to understand how bots are being used across the organization.

Best Practices

Successful bot deployment follows several established patterns. Organizing bots by organization groups them logically and simplifies permission management. Configuring appropriate access levels ensures that sensitive bots aren’t accidentally exposed to unauthorized users. Monitoring usage patterns helps identify both popular bots that might need additional resources and underutilized bots that might need better documentation or training. Regular permission audits ensure that access levels remain appropriate as organizational roles change. Maintaining documentation for each bot helps users understand what each bot can do and when to use it. Testing data isolation periodically verifies that the security boundaries between bots remain intact.

Troubleshooting Common Issues

When users report that a bot isn’t accessible, several common causes should be investigated. The user might not be a member of the bot’s organization, they might lack sufficient permissions for the bot’s access level, the bot might not have completed its activation process, or there might be a configuration error preventing the bot from loading properly.

Session-related issues typically stem from expired sessions requiring re-authentication, invalid bot identifiers in API requests, concurrent session limits being exceeded, or database connectivity problems preventing session validation.

Implementation Notes

Bot authentication is not implemented as a separate module but is integrated throughout the session management, user authentication, and database query systems. This integration reflects the fundamental design principle that bot access is a function of user authentication rather than an independent system.

Future versions might consider enhancements such as bot-specific API tokens for automated workflows, service accounts for scheduled bot operations, controlled bot-to-bot communication for complex scenarios, and webhook authentication for external system integration. However, any such features would be implemented as extensions of the user-session model rather than as independent bot credentials.

Summary

The bot authentication model in General Bots achieves security through simplicity. By tying all bot access to authenticated user sessions, the system eliminates an entire class of credential management problems while maintaining complete auditability of all bot interactions. This design allows organizations to focus on building useful bots rather than managing complex authentication infrastructure, while still meeting enterprise security requirements.

General Bots Security Features Guide

Overview

This document provides a comprehensive overview of all security features and configurations available in General Bots, designed for security experts and enterprise deployments. Understanding these features enables organizations to deploy General Bots with confidence in regulated environments.

Feature Flags

Core Security Features

Security features are configured through Cargo.toml or via build flags at compile time. A basic build with desktop UI uses cargo build --features desktop. A full security-enabled build uses cargo build --features "desktop,vectordb,email". A server-only build without desktop UI uses cargo build --no-default-features --features "vectordb,email".

Available Features

The desktop feature provides the Tauri desktop UI with a sandboxed runtime and controlled system access, and is enabled by default. The vectordb feature enables Qdrant integration for AI-powered threat detection and semantic search, and must be explicitly enabled. The email feature provides IMAP and SMTP support, which requires secure credential storage, and must also be explicitly enabled.

Enterprise Security Features

Enterprise-ready security features include built-in encryption for data at rest via the aes-gcm library, comprehensive audit logging capabilities, role-based access control implemented through the Directory Service, multi-factor authentication available via the Directory Service, and SAML/OIDC single sign-on support also through the Directory Service.

Authentication and Authorization

Directory Service Integration

General Bots uses the Directory Service as the primary identity provider. Currently this is Zitadel, though it can be migrated to Keycloak or other OIDC providers. The integration provides OAuth2 and OIDC authentication, JWT token validation, user and group management, permission management, and session handling.

Password Security

Password hashing uses the Argon2id algorithm, which is memory-hard and GPU-resistant. The configuration uses 19456 KB of memory, 2 iterations, parallelism of 1, and a random 32-byte salt. This configuration provides strong protection against both online and offline attacks while maintaining reasonable authentication performance.
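A minimal sketch of this configuration with the argon2 0.5 crate, using the parameters stated above; error handling is simplified for brevity.

```rust
use argon2::{
    password_hash::{rand_core::OsRng, PasswordHasher, SaltString},
    Algorithm, Argon2, Params, Version,
};

// Hash a password with Argon2id: 19456 KiB memory, 2 iterations,
// parallelism 1, and a freshly generated random salt.
fn hash_password(password: &str) -> Result<String, argon2::password_hash::Error> {
    let params = Params::new(19_456, 2, 1, None).expect("valid Argon2 parameters");
    let hasher = Argon2::new(Algorithm::Argon2id, Version::V0x13, params);
    let salt = SaltString::generate(&mut OsRng);
    Ok(hasher.hash_password(password.as_bytes(), &salt)?.to_string())
}
```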

Token Management

Access tokens use JWT format with RS256 signing for verifiable authentication. Refresh tokens consist of secure random 256-bit values for session renewal. Session tokens use UUID v4 format with cache storage for fast validation. Token rotation happens automatically when tokens approach expiry, ensuring continuous secure access without user interruption.
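The two random token formats can be sketched as follows, assuming the uuid and rand 0.9 crates; hex encoding of the refresh token is an assumption for illustration.

```rust
use rand::RngCore;
use uuid::Uuid;

// Session tokens are UUID v4 values.
fn new_session_token() -> String {
    Uuid::new_v4().to_string()
}

// Refresh tokens are 256 bits of secure randomness, hex-encoded here.
fn new_refresh_token() -> String {
    let mut bytes = [0u8; 32];
    rand::rng().fill_bytes(&mut bytes); // thread-local CSPRNG (rand 0.9)
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}
```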

Encryption and Cryptography

Cryptographic Libraries

The platform uses well-vetted cryptographic libraries for all security operations. The aes-gcm library version 0.10 provides authenticated encryption using AES-256-GCM. The argon2 library version 0.5 handles password hashing with Argon2id. The sha2 library version 0.10.9 provides cryptographic hashing with SHA-256. The hmac library version 0.12.1 enables message authentication using HMAC-SHA256. The rand library version 0.9.2 provides cryptographic random number generation using ChaCha20.

Data Encryption

Encryption at rest protects stored data throughout the system. Database encryption applies column-level encryption to sensitive fields. File storage encryption uses AES-256-GCM for all uploaded files. Configuration encryption protects secrets using a master key.

Encryption in transit protects data during transmission. All external communications use TLS 1.3 for strong protection. Service-to-service communication uses mutual TLS (mTLS) for bidirectional authentication. Certificate pinning applies to critical services to prevent man-in-the-middle attacks.
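A sketch of field encryption with the aes-gcm 0.10 crate; generating a fresh key inline is illustrative only, since in practice the key derives from the configured master key.

```rust
use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm,
};

// Encrypt a value with AES-256-GCM, prepending the 96-bit nonce so the
// ciphertext is self-contained for storage.
fn encrypt_field(plaintext: &[u8]) -> Result<Vec<u8>, aes_gcm::Error> {
    let key = Aes256Gcm::generate_key(OsRng); // illustrative: use the master key in practice
    let cipher = Aes256Gcm::new(&key);
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng); // must be unique per message
    let mut out = nonce.to_vec();
    out.extend(cipher.encrypt(&nonce, plaintext)?);
    Ok(out)
}
```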

Network Security

API Security

Rate limiting through Caddy protects against abuse. Per-IP limits default to 100 requests per minute. Per-user limits default to 1000 requests per hour. These limits are configured in the Caddyfile and can be adjusted for specific deployment requirements.

CORS configuration through Caddy controls cross-origin requests. Origins use a strict whitelist approach. Credentials are enabled for authenticated requests. HTTP methods are explicitly allowed rather than using wildcards.

Input validation protects against injection attacks. Schema validation applies to all inputs before processing. SQL injection prevention uses PostgreSQL prepared statements exclusively. XSS protection applies output encoding to all user-generated content. Path traversal prevention validates all file paths against allowed directories.

WebSocket Security

WebSocket connections require authentication before establishment. Message size limits default to 10MB to prevent resource exhaustion. Heartbeat and ping-pong mechanisms validate connection health. Suspicious activity triggers automatic disconnection to protect the system.

Data Protection

Database Security

PostgreSQL security features provide comprehensive database protection. Row-level security (RLS) restricts access to specific rows based on user context. Column encryption protects personally identifiable information. Audit logging records all database access. Connection pooling limits resource consumption. Prepared statements prevent SQL injection. SSL/TLS connections are enforced for all database communication.
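As an illustration of the RLS layer, the policy below restricts rows to the bot bound to the current session. The table, setting, and policy names are assumptions, but the mechanism is standard PostgreSQL.

```rust
// Enable row-level security and add a bot-isolation policy via sqlx.
async fn enable_bot_rls(pool: &sqlx::PgPool) -> Result<(), sqlx::Error> {
    sqlx::query("ALTER TABLE messages ENABLE ROW LEVEL SECURITY")
        .execute(pool)
        .await?;
    // Rows are visible only when bot_id matches the session-local setting
    // that the application sets after validating the user's session.
    sqlx::query(
        "CREATE POLICY bot_isolation ON messages \
         USING (bot_id = current_setting('app.current_bot_id')::uuid)",
    )
    .execute(pool)
    .await?;
    Ok(())
}
```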

File Storage Security

Drive configuration provides secure object storage. Bucket encryption uses AES-256 for all stored objects. Policy-based access control restricts file access. Versioning enables recovery from accidental changes. Support for immutable objects prevents tampering. TLS encryption protects data in transit.

Local storage follows security best practices. Directory permissions are set to 700 for restricted access. File permissions are set to 600 for owner-only access. Temporary files undergo secure deletion to prevent data leakage.

Memory Protection

Memory protection measures prevent sensitive data exposure. Zeroization clears sensitive data from memory after use. Logging configurations exclude secrets from log output. Secure random generation uses cryptographic sources. Protected memory pages safeguard cryptographic keys during operation.

Audit and Compliance

Log Security

Structured logging configuration ensures comprehensive audit trails. Log level uses INFO in production and DEBUG in development. Format uses JSON for machine parsing and analysis. Rotation occurs daily with 30-day retention by default. Sensitive data is automatically redacted from log output.

Audit Events

The system automatically logs security-relevant events including authentication attempts both successful and failed, authorization failures when users attempt unauthorized actions, data access operations for both reads and writes, configuration changes by administrators, administrative actions across the system, API calls with relevant parameters, and security violations when detected.

Compliance Support

GDPR compliance features include data deletion capabilities and data export for portability. SOC2 compliance is supported through comprehensive audit trails and access controls. HIPAA compliance can be achieved with encryption and access logging configuration. PCI DSS requirements are addressed by storing no credit card data and supporting tokenization for payment processing.

Security Configuration

Environment Variables

Required security settings include BOTSERVER_JWT_SECRET as a 256-bit hex string for token signing, BOTSERVER_ENCRYPTION_KEY as a 256-bit hex string for data encryption, and DATABASE_ENCRYPTION_KEY as a 256-bit hex string for database field encryption.

Directory service configuration requires ZITADEL_DOMAIN pointing to your Zitadel instance, ZITADEL_CLIENT_ID with your application client ID, and ZITADEL_CLIENT_SECRET with your application secret.

Drive configuration requires MINIO_ENDPOINT for the storage server address, MINIO_ACCESS_KEY and MINIO_SECRET_KEY for authentication, and MINIO_USE_SSL set to true for encrypted connections.

Cache configuration requires CACHE_URL pointing to the Redis-compatible server and CACHE_PASSWORD for authentication.

Optional security enhancements include BOTSERVER_ENABLE_AUDIT to enable comprehensive audit logging, BOTSERVER_REQUIRE_MFA to enforce multi-factor authentication, BOTSERVER_SESSION_TIMEOUT to set session duration in seconds, BOTSERVER_MAX_LOGIN_ATTEMPTS to limit failed login attempts, and BOTSERVER_LOCKOUT_DURATION to set account lockout time in seconds.

Network security settings include BOTSERVER_ALLOWED_ORIGINS for CORS whitelist, BOTSERVER_RATE_LIMIT_PER_IP for per-IP request limits, BOTSERVER_RATE_LIMIT_PER_USER for per-user request limits, and BOTSERVER_MAX_UPLOAD_SIZE for maximum file upload size in bytes.
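A startup check over the required variables can be sketched as follows; the 64-hex-character test assumes the 256-bit keys are supplied as hex strings, as described above.

```rust
use std::env;

// Fail fast if a required security setting is missing or malformed.
fn check_required_env() -> Result<(), String> {
    for name in [
        "BOTSERVER_JWT_SECRET",
        "BOTSERVER_ENCRYPTION_KEY",
        "DATABASE_ENCRYPTION_KEY",
    ] {
        let value = env::var(name).map_err(|_| format!("{name} is not set"))?;
        if value.len() != 64 || !value.chars().all(|c| c.is_ascii_hexdigit()) {
            return Err(format!("{name} must be a 256-bit hex string"));
        }
    }
    Ok(())
}
```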

Database Configuration

PostgreSQL security settings should be added to postgresql.conf to enable SSL with ssl set to on, specify certificate files with ssl_cert_file and ssl_key_file, configure strong ciphers with ssl_ciphers, enable server cipher preference with ssl_prefer_server_ciphers, and set the ECDH curve with ssl_ecdh_curve. The database connection string should include sslmode=require to enforce encrypted connections.

Caddy Configuration

Caddy provides secure reverse proxy functionality with automatic HTTPS. Global options should disable the admin interface and enable automatic HTTPS. TLS configuration should enforce TLS 1.3 only with strong cipher suites. Security headers should include Strict-Transport-Security, X-Frame-Options, X-Content-Type-Options, X-XSS-Protection, Referrer-Policy, and Content-Security-Policy. Rate limiting should be configured per remote host. The reverse proxy should forward appropriate headers including X-Real-IP, X-Forwarded-For, and X-Forwarded-Proto. Access logging should output to files in JSON format for analysis.

Best Practices

Development Practices

Dependency management requires regular security updates. Run cargo audit to check for known vulnerabilities. Run cargo update to apply security patches. Use cargo audit --deny warnings in CI to prevent vulnerable dependencies.

Code quality is enforced through Cargo.toml lints. Unsafe code is prohibited in application code. Unwrap calls are forbidden in production code paths. Panic macros are not allowed. Complete error handling is required for all fallible operations.

Security testing validates protection mechanisms. Run the security test suite with cargo test --features security_tests. Fuzzing for input validation uses cargo fuzz run api_fuzzer to find edge cases.

Deployment Practices

Container security for LXC deployments requires disabling privileged mode with security.privileged set to false, enabling isolated ID mapping with security.idmap.isolated set to true, and disabling nesting with security.nesting set to false. Applications should run as non-root users within containers.

Container security profiles should specify resource limits including CPU and memory caps. Root device configuration should use appropriate storage pools. Security settings should prevent privilege escalation.

Network policies should restrict traffic appropriately. Ingress should only be allowed from the Caddy proxy. Egress should be limited to PostgreSQL, Drive, Qdrant, and Cache. All other traffic should be blocked. Internal communication between components should use isolated networks.

Monitoring Practices

Security metrics to track include failed authentication rate, unusual API access patterns, resource usage anomalies, and geographic access patterns for detecting account compromise.

Alerting thresholds should trigger warnings at 5 or more failed logins, lock accounts at 10 or more failed logins, alert on unusual geographic access patterns, and issue critical alerts for any privilege escalation attempts.

Incident response capabilities include automatic session termination when threats are detected, account lockout for repeated failures, and comprehensive logging for forensic analysis.

Security Checklist

Before deploying General Bots in production, verify that all environment variables are set with strong random values, TLS is properly configured with valid certificates, database connections use SSL, file storage uses encryption, audit logging is enabled, rate limiting is configured appropriately, security headers are set in the reverse proxy, monitoring and alerting are configured, backup and recovery procedures are tested, and incident response procedures are documented.

See Also

The Security Policy chapter provides organizational security policies and procedures. The Password Security chapter details password requirements and implementation. The User Authentication chapter covers authentication flows and configuration. The Compliance Requirements chapter addresses regulatory compliance in detail.

General Bots Security Policy

Overview

This comprehensive security policy establishes the framework for protecting General Bots systems, data, and operations. It covers information security, access control, data protection, incident response, and ongoing maintenance procedures. All personnel, contractors, and third parties with access to General Bots systems must understand and comply with this policy.

1. Information Security Policy

1.1 Purpose and Scope

This Information Security Policy applies to all users, systems, and data within the General Bots infrastructure. It establishes the standards for protecting confidential information, maintaining system integrity, and ensuring business continuity across all operations.

1.2 Information Classification

We classify information into categories to ensure proper protection and appropriate resource allocation. Unclassified information can be made public without implications for the company, including marketing materials and public documentation. Employee Confidential information encompasses personal employee data such as medical records, salary information, performance reviews, and contact details. Company Confidential information includes business-critical assets such as contracts, source code, business plans, passwords for critical IT systems, client contact records, financial accounts, and strategic plans. Client Confidential information covers client personally identifiable information, passwords to client systems, client business plans, new product information, and market-sensitive information.

1.3 Security Objectives

Our security framework aims to reduce the risk of IT problems through proactive measures and continuous monitoring. We plan for problems and establish procedures to deal with them effectively when they occur. Our systems are designed to keep working even when something goes wrong through redundancy and failover capabilities. We protect company, client, and employee data through encryption, access controls, and monitoring. We keep valuable company information such as plans and designs confidential through strict access controls. We meet our legal obligations under the General Data Protection Regulation and other applicable laws. We fulfill our professional obligations towards our clients and customers through transparent practices and reliable service.

1.4 Roles and Responsibilities

Rodrigo Rodriguez serves as the director with overall responsibility for IT security strategy and policy approval. Pragmatismo Data Center functions as the IT partner organization we use to help with planning and technical support. The Data Protection Officer advises on data protection laws and best practices, reporting directly to senior management. All employees are responsible for following security policies and reporting security incidents promptly. System administrators are responsible for implementing and maintaining security controls according to this policy. Department heads are responsible for ensuring their teams comply with security policies and complete required training.

1.5 Review Process

We review this policy annually, with the next review scheduled for the date indicated in the document control section. Questions, suggestions, or feedback should be directed to security@pragmatismo.com.br for consideration during the review process or for immediate clarification.

2. Access Control Policy

2.1 Access Management Principles

Our access management follows four core principles. The Least Privilege principle ensures users receive only the minimum access rights necessary to perform their job functions. The Need-to-Know principle restricts access to confidential information to those who require it for their specific duties. Separation of Duties divides critical functions among multiple people to prevent fraud and error. Regular Reviews conducted quarterly ensure access rights remain appropriate as roles and responsibilities evolve.

2.2 User Account Management

Account creation follows a controlled process where new accounts are created only upon approval from the user’s manager. Default accounts are disabled immediately after system installation to prevent unauthorized access. Each user has a unique account; shared accounts are strictly prohibited to maintain accountability.

Account modification requires manager approval for any access changes. Privilege escalation requires security team approval in addition to manager approval. All changes are logged and reviewed monthly to detect anomalies.

Account termination procedures ensure accounts are disabled within 2 hours of employment termination. Access is revoked immediately for terminated employees without exception. Contractor accounts expire automatically at contract end. All company devices and access credentials must be returned before departure.

2.3 Access Review Procedures

Monthly reviews examine privileged account usage patterns, check for inactive accounts that have been dormant for more than 30 days, and verify that administrative access justifications remain valid.

Quarterly reviews require department heads to review all team member access, remove unnecessary permissions, and document review results along with any actions taken.

Annual reviews conduct a comprehensive examination of all user accounts, validate role-based access assignments against current organizational structure, and audit system administrator privileges for appropriateness.

3. Password Policy

3.1 Password Requirements

Password complexity requirements mandate a minimum of 12 characters for standard users and 16 characters for administrative accounts. Passwords must include uppercase letters, lowercase letters, numbers, and special characters. Passwords cannot contain the username or common dictionary words.

Password lifetime requirements specify 90-day rotation for standard accounts, 60-day rotation for administrative accounts, and 180-day rotation for service accounts with documented exceptions approved by the security team.

Password history settings ensure the system remembers the last 12 passwords, and users cannot reuse any of these previous passwords when setting a new one.
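A sketch of enforcing the complexity rules above; the dictionary-word check is omitted because it needs a word list, and the username rule is reduced to a substring test.

```rust
// Check length, character-class, and username rules from section 3.1.
fn meets_complexity(password: &str, username: &str, is_admin: bool) -> bool {
    let min_len = if is_admin { 16 } else { 12 };
    password.chars().count() >= min_len
        && password.chars().any(|c| c.is_uppercase())
        && password.chars().any(|c| c.is_lowercase())
        && password.chars().any(|c| c.is_ascii_digit())
        && password.chars().any(|c| !c.is_alphanumeric())
        && !password
            .to_lowercase()
            .contains(&username.to_lowercase())
}
```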

3.2 Password Storage and Transmission

All passwords are hashed using the Argon2id algorithm, which provides strong resistance against both CPU and GPU-based attacks. Passwords are never stored in plaintext under any circumstances. Passwords are never transmitted via email or unencrypted channels. Password managers are recommended for secure storage of credentials.

3.3 Multi-Factor Authentication

Multi-factor authentication is required for all administrative accounts, remote access connections, access to confidential data, and financial system access.

Acceptable MFA methods include Time-based One-Time Passwords (TOTP) as the preferred method, hardware tokens such as YubiKey, SMS codes only as a backup method due to SIM-swapping risks, and biometric authentication where available and appropriate.

4. Data Protection Policy

4.1 Data Encryption

Encryption at rest protects stored data across all systems. Databases use AES-256-GCM encryption for sensitive fields. File storage applies AES-256-GCM encryption to all uploaded files. Backups are encrypted before transmission and storage. Mobile devices require full-disk encryption.

Encryption in transit protects data during transmission. All external communications use TLS 1.3. Service-to-service communication uses mutual TLS (mTLS). Remote access requires VPN connections. Certificate pinning applies to critical services to prevent man-in-the-middle attacks.

4.2 Data Retention and Disposal

Retention periods define how long different data types are kept. User data is retained as long as the account is active plus 30 days after closure. Audit logs are retained for 7 years to meet compliance requirements. Full backups are retained for 90 days while incremental backups are retained for 30 days. Email is retained for 2 years unless a legal hold applies.

Secure disposal ensures data cannot be recovered after deletion. Digital data undergoes secure deletion with multiple overwrites. Physical media is destroyed through shredding or degaussing. Certificates of destruction are maintained for 3 years as proof of proper disposal.

4.3 Data Privacy and GDPR Compliance

We classify and process only information necessary for the completion of our duties. We limit access to personal data to only those who need it for processing. Our classification system ensures information is protected properly and that we allocate security resources appropriately based on sensitivity levels.

User rights under GDPR are fully supported. Users have the right to access their personal data upon request. Users have the right to correction of inaccurate data. Users have the right to deletion, also known as the right to be forgotten. Users have the right to data portability in machine-readable formats. Users have the right to restrict processing of their data.

Data breach notification follows strict timelines. Breach assessment must be completed within 24 hours of discovery. Notification to authorities occurs within 72 hours if required by regulation. User notification happens without undue delay when their data is affected. All breaches are documented regardless of whether notification is required.

5. Incident Response Plan

5.1 Incident Classification

Incidents are classified into four severity levels to guide response priorities and resource allocation.

Critical incidents (P1) include active data breaches with confirmed data exfiltration, ransomware infections affecting production systems, complete system outages affecting all users, and compromise of administrative credentials. These require immediate response with all available resources.

High priority incidents (P2) include suspected data breaches under investigation, malware infections on non-critical systems, unauthorized access attempts that were detected, and partial system outages affecting critical services.

Medium priority incidents (P3) include failed security controls requiring attention, policy violations without immediate risk, minor system vulnerabilities discovered, and isolated user account compromises.

Low priority incidents (P4) include security alerts requiring investigation, policy clarification needs, security awareness issues, and minor configuration issues.

5.2 Incident Response Procedures

Detection and reporting occur within the first 15 minutes. Security incidents are detected via monitoring systems or reported by users. Initial assessment determines severity level. The incident is logged in the tracking system. The security team is notified immediately for P1 and P2 incidents, or within 1 hour for P3 and P4 incidents.

Containment occurs from 15 minutes to 2 hours after detection. Affected systems are isolated from the network. Compromised accounts are disabled. Evidence is preserved for investigation. Temporary security controls are implemented. Management and stakeholders are notified.

Investigation occurs from 2 to 24 hours after containment. Logs and forensic evidence are gathered. Attack vectors and scope are analyzed. Root cause is identified. Findings are documented. A determination is made whether external authorities need notification.

Eradication typically takes 1-3 days. Malware and unauthorized access are removed. Vulnerabilities are patched. Compromised credentials are reset. Additional security controls are applied. Systems are verified to be clean.

Recovery typically takes 1-5 days. Systems are restored from clean backups if needed. Systems gradually return to production. Enhanced monitoring watches for re-infection. System functionality is validated. User communication and support is provided.

Post-incident review occurs within 1 week. The complete incident timeline is documented. Response effectiveness is analyzed. Lessons learned are identified. Security controls are updated. Detection capabilities are improved. Incident response procedures are updated based on findings.

5.3 Contact Information

Internal contacts for security matters include the Security Team at security@pragmatismo.com.br, IT Support at support@pragmatismo.com.br, and Management through Rodrigo Rodriguez.

External contacts should be maintained in a separate secure document and include local law enforcement authorities, legal counsel, the relevant Data Protection Authority, and the cyber insurance provider.

5.4 Communication Plan

Internal communication follows escalation timelines. The security team and management are notified immediately. Affected department heads are notified within 2 hours. All staff are notified within 4 hours if the impact is widespread. Daily updates continue during active incidents.

External communication follows regulatory requirements. Customers are notified within 24 hours if their data is affected. Partners are notified within 12 hours if systems are shared. Authorities are notified within 72 hours per GDPR requirements. Public and media communication occurs only through the designated spokesperson.

6. Backup and Recovery Procedures

6.1 Backup Schedule

Full backups run weekly on Sundays at 2:00 AM and include all databases, file storage, and configurations. Full backups are retained for 12 weeks and stored in a geographically separate location.

Incremental backups run daily at 2:00 AM and include only changed files and database transactions since the last backup. Incremental backups are retained for 30 days and stored both locally and replicated off-site.

Continuous backups capture database transaction logs every 15 minutes and critical configuration changes immediately. These are retained for 7 days and enable point-in-time recovery to any moment within that window.

6.2 Backup Verification

Automated testing runs continuously. Daily tests verify backup completion. Weekly tests restore sample files. Monthly tests perform full database restoration to an isolated environment.

Manual testing occurs on a scheduled basis. Quarterly tests conduct full disaster recovery drills. Bi-annual tests perform complete system restoration to an alternate site. Annual tests execute a full business continuity exercise with stakeholders.

6.3 Recovery Procedures

Recovery Time Objectives (RTO) define maximum acceptable downtime. Critical systems must recover within 4 hours. Important systems must recover within 24 hours. Non-critical systems must recover within 72 hours.

Recovery Point Objectives (RPO) define maximum acceptable data loss. Critical data has an RPO of 15 minutes. Important data has an RPO of 24 hours. Non-critical data has an RPO of 1 week.

Recovery steps follow a systematic process. First, assess damage and determine recovery scope. Second, verify backup integrity before beginning restoration. Third, restore to an isolated environment first for validation. Fourth, validate data integrity and completeness. Fifth, test system functionality thoroughly. Sixth, switch users to recovered systems. Seventh, monitor for issues during the transition period. Eighth, document the recovery process and timing for future reference.

7. Change Management Procedures

7.1 Change Categories

Standard changes are pre-approved routine modifications. These include security patches applied within 48 hours of release and user account modifications. Standard changes require only manager sign-off without additional approval.

Normal changes are non-emergency modifications requiring testing. These include software updates, new features, and infrastructure modifications. Normal changes require Change Advisory Board approval before implementation.

Emergency changes address critical security issues or outages. These include critical security patches, system outage fixes, and active threat mitigation. Emergency changes receive expedited approval from the Security Director.

7.2 Change Request Process

The change process follows eight steps. Submission requires completing the change request form with full details. Risk assessment evaluates potential security impact. Approval is obtained based on change type requirements. Testing validates the change in a non-production environment. Scheduling places the change during an appropriate maintenance window. Implementation executes the change with a rollback plan ready. Verification confirms the change was successful. Documentation updates configuration records to reflect the change.

7.3 Change Testing Requirements

Test cases must cover functionality validation, security control verification, performance impact assessment, user acceptance testing, and rollback procedure verification.

Test environments progress through stages. Development supports individual developer testing. Staging handles integration and security testing. Pre-production hosts user acceptance testing. Production uses phased rollout with enhanced monitoring.

8. Security Incident Procedures

8.1 Reporting Security Incidents

Incidents can be reported through several channels. Email reports go to security@pragmatismo.com.br. Phone reports use the security hotline. Web reports use the internal incident reporting portal. In-person reports can be made directly to the IT department.

Reportable events include suspicious emails or phishing attempts, lost or stolen devices, unauthorized access or unusual system behavior, malware alerts, data leaks or exposures, policy violations, and any security concerns or vulnerabilities discovered.

Timing requirements specify immediate reporting for critical incidents, reporting within 1 hour for high-priority incidents, and same business day reporting for medium and low priority incidents.

8.2 Employee Response to Incidents

When an incident occurs, employees should report immediately to the security team, preserve evidence by not deleting suspicious emails, disconnect their device from the network if it may be compromised, document what happened while details are fresh, and follow instructions from the security team.

Employees should avoid trying to fix the problem themselves, deleting or modifying potential evidence, discussing the incident on social media, blaming others, or ignoring suspicious activity hoping it will resolve itself.

9. Data Breach Response Procedures

9.1 Immediate Response

Within the first 24 hours, the response team must contain the breach to stop ongoing data exposure, assess the situation to determine scope and data affected, notify the security team and management, preserve logs and forensic data as evidence, and begin documenting the incident timeline.

9.2 Investigation Phase

During the 1-3 day investigation phase, forensic specialists conduct detailed analysis of the breach. The scope determination identifies all affected systems and data. Root cause analysis determines how the breach occurred. Impact analysis assesses damage and ongoing risks. Legal review consults with the legal team on notification obligations.

9.3 Notification Requirements

Internal notification follows escalation timelines. Management is notified immediately. Legal is notified within 2 hours. PR and Communications are notified within 4 hours. Affected departments are notified within 8 hours.

External notification follows regulatory requirements. Data Protection Authorities must be notified within 72 hours per GDPR requirements. Affected individuals must be notified without undue delay. Business partners must be notified within 24 hours if their data is affected. Law enforcement is notified as required by jurisdiction.

9.4 Remediation and Prevention

Following a breach, the organization applies security patches and fixes to close vulnerabilities. Compromised credentials are reset across all affected systems. Monitoring and detection capabilities are enhanced to catch similar attacks. Security controls are updated based on lessons learned. Additional security training is provided to affected teams. Policies are reviewed and updated to address gaps. All lessons learned are implemented to prevent recurrence.

10. Regular Maintenance Tasks

10.1 Weekly Tasks

Security updates are reviewed and critical security patches are applied. Antivirus and antimalware signatures are updated. Security alerts and events are reviewed. Backup completion status is checked. System resource usage is monitored for anomalies.

Automated processes run continuously including vulnerability scans, log analysis and correlation, backup integrity checks, and certificate expiration monitoring.

10.2 Monthly Tasks

Access reviews examine new user accounts created during the month, audit privileged account usage, check for inactive accounts dormant for more than 30 days, review failed login attempts for patterns, and validate group memberships remain appropriate.

System maintenance applies non-critical patches, reviews system performance metrics, updates system documentation, tests disaster recovery procedures, and reviews incident reports from the month.

10.3 Quarterly Tasks

Compliance audits review security policy compliance, audit access controls and permissions, verify encryption implementations, check backup and recovery processes, and validate security configurations against baselines.

Security assessments conduct internal vulnerability assessments, run phishing simulation exercises, deliver security awareness training, review third-party security posture, and update risk assessments.

10.4 Annual Tasks

Penetration testing engages a certified firm for external penetration testing, conducts internal network penetration testing, performs application security testing, executes social engineering assessments, and remediates all findings within 90 days.

Disaster recovery testing conducts a full disaster recovery drill, tests alternate site failover, executes a business continuity exercise, updates recovery procedures based on results, and documents lessons learned.

Policy and documentation work includes annual policy review and updates, security training for all staff, updating security documentation, reviewing vendor security agreements, and strategic security planning for the coming year.

10.5 Bi-Annual Tasks

Disaster recovery testing at the semi-annual level includes complete system restoration to an alternate site, database recovery to a specific point-in-time, application functionality verification, network failover testing, and communication system testing.

Business continuity testing includes testing emergency communication procedures, verifying contact information is current, reviewing and updating the business continuity plan, testing backup data center capabilities, and validating recovery time objectives are achievable.

11. Employees Joining and Leaving

We provide comprehensive training to new staff and ongoing support for existing staff to implement this policy. Initial training covers an introduction to IT security including risks, basic security measures, company policies, and where to get help. Each employee completes appropriate security awareness training. Training covers how to use company systems and security software properly. Staff can request a security health check on their computer, tablet, or phone. Access to systems and resources is granted based on job role requirements. Appropriate security tools are assigned including VPN access, password manager, and MFA devices.

The onboarding security checklist ensures all steps are completed. Background checks are completed where applicable. The security policy acknowledgment is signed. Security training is completed. NDA and confidentiality agreements are signed. User accounts are created with appropriate permissions. MFA is configured for all accounts. Company devices are issued and configured. VPN access is configured if needed. A password manager account is created. Emergency contact information is collected.

When people leave a project or the company, we promptly revoke their access privileges to all systems.

The offboarding security checklist ensures thorough access removal. All user accounts are disabled within 2 hours of departure. VPN and remote access are revoked. The former employee is removed from all groups and distribution lists. Company devices including laptops, phones, and tokens are collected. Access cards and keys are collected. Any shared account passwords the person knew are reset. The person is removed from third-party systems such as GitHub and AWS. Ownership of documents and files is transferred. An exit interview covers ongoing security obligations. Documentation confirms all access revocation is completed.

12. Data Protection Officer Responsibilities

The company ensures the Data Protection Officer is given all appropriate resources to carry out their tasks and maintain their expert knowledge. The DPO reports directly to the highest level of management and must not carry out any other tasks that could result in a conflict of interest.

The DPO’s duties include monitoring compliance with GDPR and other privacy regulations, advising on data protection impact assessments, cooperating with supervisory authorities, acting as the contact point for data subjects exercising their rights, maintaining records of processing activities, providing data protection training to staff, conducting privacy audits, and reviewing privacy policies and procedures for adequacy.

13. Technical Documentation Requirements

13.1 Network Architecture Documentation

Required network documentation includes network topology diagrams showing both logical and physical layouts, IP address allocation schemes, firewall rules and security zone definitions, VPN configurations, DMZ architecture, network device inventory, VLAN configurations, and routing protocols and tables.

This documentation must be updated within 48 hours of any network change to remain accurate.

13.2 System Configuration Documentation

Required system documentation includes server inventory with roles and specifications, operating system versions and patch levels, installed software and versions, service configurations, database schemas and configurations, application architecture diagrams, API documentation, and integration points and dependencies.

This documentation must be updated within 24 hours of configuration changes.

13.3 Security Controls Documentation

Security control documentation covers access control lists, security group configurations, intrusion detection and prevention rules, data loss prevention policies, endpoint protection configurations, email security settings, web filtering rules, and security monitoring dashboards.

This documentation is reviewed monthly with a comprehensive review conducted quarterly.

13.4 Encryption Standards Documentation

Encryption documentation specifies encryption algorithms in use such as AES-256-GCM and TLS 1.3, key management procedures, certificate inventory and renewal schedule, data classification and encryption requirements, encryption at rest implementations, encryption in transit configurations, and cryptographic library versions.

This documentation must be updated immediately upon any encryption-related change.

13.5 Logging and Monitoring Documentation

Logging documentation covers log sources and types collected, log retention periods, log storage locations and capacity, log analysis tools and procedures, alert thresholds and escalation paths, monitoring dashboards and reports, and SIEM configuration and rules.

This documentation is reviewed quarterly with an annual comprehensive audit.

14. Compliance Records Management

14.1 Risk Assessment Reports

Risk assessments are conducted annually for comprehensive organizational assessment, quarterly for targeted assessments of new systems and services, and ad-hoc after significant incidents or changes.

Risk assessment reports contain identified assets and their value to the organization, threat identification and analysis, vulnerability assessment, risk likelihood and impact ratings, risk treatment plans, residual risk acceptance decisions, and review and approval signatures.

Risk assessment records are retained for 7 years.

14.2 Audit Logs

Log types collected include authentication and authorization events, administrative actions, data access operations including reads, writes, and deletes, configuration changes, security events and alerts, system errors and failures, and network traffic logs.

Retention periods vary by log type. Security logs are retained for 7 years. System logs are retained for 1 year. Application logs are retained for 90 days. Network logs are retained for 30 days.

Log protection requirements specify that logs are read-only after creation, encrypted in transit and at rest, backed up daily, and monitored for tampering.

14.3 Training Records

Training requirements include new hire security orientation within the first week of employment, annual security awareness training for all staff, role-specific security training as applicable to job function, phishing simulation exercises quarterly, and incident response training for the security team annually.

Training documentation includes training completion dates, training content and version delivered, assessment scores if applicable, certificates of completion, and refresher training schedules.

Training records are retained for the duration of employment plus 3 years.

14.4 Incident Reports

Incident reports must include the detection date and time, incident classification and severity, systems and data affected, timeline of events, response actions taken, root cause analysis, lessons learned, and corrective actions implemented.

Reports are distributed internally to management, the security team, and affected departments. External distribution follows regulatory and contractual requirements.

Incident reports are retained for 7 years.

14.5 Access Review Records

Review documentation includes the date of review, reviewer name and title, list of accounts reviewed, access changes made, justification for access granted, exceptions and approvals, and follow-up actions required.

Review schedules specify quarterly reviews for standard users, monthly reviews for privileged users, and bi-annual reviews for service accounts.

Access review records are retained for 3 years.

15. Compliance Framework

15.1 Applicable Regulations

GDPR compliance requires data protection impact assessments for high-risk processing, privacy by design and by default in all systems, user consent management, data subject rights fulfillment, and breach notification procedures.

SOC 2 compliance requires security controls documentation, availability monitoring, confidentiality protection measures, privacy practices documentation, and annual audit compliance verification.

ISO 27001 compliance requires an information security management system, risk assessment and treatment processes, security controls implementation, continuous improvement processes, and regular internal audits.

15.2 Compliance Monitoring

Automated monitoring tracks security control effectiveness, policy compliance through scanning, configuration drift detection, vulnerability management status, and patch compliance levels.

Manual reviews include quarterly compliance assessments, annual third-party audits, internal audit programs, management review meetings, and regulatory requirement updates.

16. Third-Party Security

16.1 Vendor Security Assessment

Pre-contract assessment requires security questionnaire completion, security certification review for SOC 2 and ISO 27001, data processing agreement execution, security requirements in the contract, and incident notification requirements.

Ongoing monitoring includes annual security re-assessment, review of security incidents involving the vendor, audit report review, performance measurement against SLAs, and security scorecard maintenance.

16.2 Data Sharing with Third Parties

Data sharing requirements include having a data processing agreement in place, sharing only the minimum necessary data, encryption for all data in transit, access controls and monitoring, and the right to audit vendor security practices.

The approval process requires security team review, legal review of agreements, privacy impact assessment, management approval for sensitive data sharing, and documentation in the vendor register.

17. Vulnerability Management

17.1 Vulnerability Identification

Vulnerabilities are identified through multiple sources including automated vulnerability scanning conducted weekly, annual penetration testing by external firms, security research and advisories from vendors and researchers, bug bounty program submissions, internal security testing, and third-party security assessments.

17.2 Vulnerability Remediation

Response times are based on severity. Critical vulnerabilities must be remediated within 24 hours. High severity vulnerabilities must be remediated within 7 days. Medium severity vulnerabilities must be remediated within 30 days. Low severity vulnerabilities must be remediated within 90 days or formally accepted as risk.

The remediation process follows a structured approach. First, the vulnerability is confirmed and documented. Second, impact and exploitability are assessed. Third, a remediation plan is developed. Fourth, the patch or fix is tested in non-production. Fifth, the change management process is followed. Sixth, the fix is deployed to production. Seventh, verification testing confirms the fix is effective. Eighth, documentation is updated.

17.3 Reporting a Vulnerability

External security researchers can report vulnerabilities by email to security@pragmatismo.com.br. A PGP key is available on the website for encrypted communication. Initial response is provided within 48 hours. A bug bounty program provides rewards for qualifying vulnerabilities.

Internal staff should report vulnerabilities via the internal security portal or email the security team directly for critical issues. Reports should include a description of the vulnerability, affected systems, and steps to reproduce the issue. Response is provided within 24 hours.

18. Security Metrics and KPIs

18.1 Key Performance Indicators

Security metrics track operational effectiveness:

| Metric | Target |
|---|---|
| Mean time to detect (MTTD) an incident | < 15 minutes |
| Mean time to respond (MTTR) to an incident | < 4 hours |
| Systems running the latest patches | > 95% |
| Failed login attempts per day | < 100 (baseline) |
| Security training completion rate | 100% |
| Vulnerabilities remediated within SLA | > 90% |
| Backup success rate | 100% |
| Access reviews completed on schedule | 100% |

Reporting occurs at multiple intervals. Weekly reports cover security incidents and critical metrics. Monthly reports provide a comprehensive security dashboard. Quarterly reports analyze metrics trends. Annual reports assess overall security posture.

19. Policy Enforcement

19.1 Policy Violations

Types of violations include unauthorized access attempts, password sharing, installation of unauthorized software, data exfiltration or leakage, policy non-compliance, and failure to report incidents.

Consequences follow progressive discipline. First offense results in a warning and mandatory retraining. Second offense results in a written warning and management review. Third offense results in suspension or termination. Severe violations result in immediate termination and potential legal action.

19.2 Exception Process

Exception requests require written justification, a completed risk assessment, identification of compensating controls, time-limited approval with a maximum of 90 days, approval from both management and the security team, and regular review while the exception remains active.

20. Document Control

This document is owned by Rodrigo Rodriguez, Security Director. The last update date and next review date are indicated in the document header. The current version is 2.0 with approved status.

The change history shows Version 1.0 as the initial policy creation and Version 2.0 as the comprehensive expansion with detailed procedures.

Distribution includes all employees via the internal portal, availability to clients upon request, and a summary published on the company website.

Approval authority, approval date, and next review date are recorded in the document management system.

Contact Information

The Security Team can be reached by email at security@pragmatismo.com.br, by phone at the emergency hotline maintained in internal systems, or through the internal security portal.

Specific inquiries should be directed to appropriate addresses. Security incidents go to security@pragmatismo.com.br. Privacy concerns go to privacy@pragmatismo.com.br. Compliance questions go to compliance@pragmatismo.com.br. General IT support requests go to support@pragmatismo.com.br.

Compliance Requirements Checklist

Overview

This document provides a comprehensive checklist for security and compliance requirements across multiple frameworks (GDPR, SOC 2, ISO 27001, HIPAA, LGPD) using the actual components deployed in General Bots.

Component Stack

| Component | Purpose | License |
|---|---|---|
| Caddy | Reverse proxy, TLS termination, web server | Apache 2.0 |
| PostgreSQL | Relational database | PostgreSQL License |
| General Bots Directory | Identity and access management (Zitadel/Keycloak) | Apache 2.0 |
| Drive | S3-compatible object storage | AGPLv3 |
| Stalwart | Mail server (SMTP/IMAP) | AGPLv3 |
| Qdrant | Vector database | Apache 2.0 |
| Cache (Valkey) | In-memory cache (Redis-compatible) | BSD 3-Clause |
| LiveKit | Video conferencing | Apache 2.0 |
| Ubuntu | Operating system | Various |

Compliance Requirements Matrix

Legend

  • ✅ = Implemented and configured
  • ⚠️ = Partially implemented, needs configuration
  • ⬜ = Not yet implemented
  • 🔄 = Automated process
  • 📝 = Manual process required

Network & Web Server (Caddy)

| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | TLS 1.3 Configuration | Caddy | All | Automatic TLS 1.3 with modern ciphers |
| ✅ | Access Logging | Caddy | All | JSON format logs to /var/log/caddy/access.log |
| ✅ | Rate Limiting | Caddy | ISO 27001 | Per-IP rate limiting in Caddyfile |
| ⚠️ | WAF Rules | Caddy | HIPAA | Consider Caddy security plugins or external WAF |
| ✅ | Security Headers | Caddy | All | HSTS, CSP, X-Frame-Options, X-Content-Type-Options |
| ✅ | Reverse Proxy Security | Caddy | All | Secure forwarding with real IP preservation |
| ✅ | Certificate Management | Caddy | All | Automatic Let’s Encrypt with auto-renewal |
| 🔄 | HTTPS Redirect | Caddy | All | Automatic HTTP to HTTPS redirect |

Configuration File: /etc/caddy/Caddyfile

app.example.com {
    tls {
        # TLS 1.3 cipher suites are fixed by Go and cannot be customized,
        # so only the protocol floor is pinned here.
        protocols tls1.3
    }
    header {
        Strict-Transport-Security "max-age=31536000"
        X-Frame-Options "SAMEORIGIN"
        X-Content-Type-Options "nosniff"
        Content-Security-Policy "default-src 'self'"
    }
    # rate_limit requires a Caddy build that includes a rate-limit plugin
    rate_limit {
        zone static {
            key {remote_host}
            events 100
            window 1m
        }
    }
    reverse_proxy localhost:3000
}
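
To confirm the headers are actually served after a reload, a quick check from any shell:

# Verify the security headers on the example domain above
curl -sI https://app.example.com | grep -iE 'strict-transport-security|x-frame-options|content-security-policy'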

Identity & Access Management (General Bots Directory)

| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | MFA Implementation | Directory | All | TOTP/SMS/Hardware token support |
| ✅ | RBAC Configuration | Directory | All | Role-based access control with custom roles |
| ✅ | Password Policy | Directory | All | Min 12 chars, complexity requirements, history |
| ✅ | OAuth2/OIDC Setup | Directory | ISO 27001 | OAuth 2.0 and OpenID Connect flows |
| ✅ | Audit Logging | Directory | All | Comprehensive user activity logs |
| ✅ | Session Management | Directory | All | Configurable timeouts and invalidation |
| ✅ | SSO Support | Directory | Enterprise | SAML and OIDC SSO integration |
| ⚠️ | Password Rotation | Directory | HIPAA | Configure 90-day rotation policy |
| 📝 | Access Reviews | Directory | All | Quarterly manual review of user permissions |

Configuration: Directory Admin Console (http://localhost:9000)

Key Settings:

  • Password min length: 12 characters
  • MFA: Required for admins
  • Session timeout: 8 hours
  • Idle timeout: 30 minutes

Database (PostgreSQL)

| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | Encryption at Rest | PostgreSQL | All | File-system level encryption (LUKS) |
| ✅ | Encryption in Transit | PostgreSQL | All | TLS/SSL connections enforced |
| ✅ | Access Control | PostgreSQL | All | Role-based database permissions |
| ✅ | Audit Logging | PostgreSQL | All | pgAudit extension for detailed logging |
| ✅ | Connection Pooling | PostgreSQL | All | Built-in connection management |
| ⚠️ | Row-Level Security | PostgreSQL | HIPAA | Configure RLS policies for sensitive tables |
| ⚠️ | Column Encryption | PostgreSQL | GDPR | Encrypt PII columns with pgcrypto |
| 🔄 | Automated Backups | PostgreSQL | All | Daily backups via pg_dump/pg_basebackup |
| ✅ | Point-in-Time Recovery | PostgreSQL | HIPAA | WAL archiving enabled |

Configuration: Installed and configured automatically via installer.rs

# Enable SSL
ssl = on
ssl_cert_file = '/path/to/server.crt'
ssl_key_file = '/path/to/server.key'
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'

# Enable audit logging
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'write, ddl'
pgaudit.log_catalog = off

# Connection settings
max_connections = 100
password_encryption = scram-sha-256

# Logging
log_connections = on
log_disconnections = on
log_duration = on
log_statement = 'all'  # verbose; consider 'ddl' or 'mod' in production
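
The two ⚠️ rows above (row-level security and column encryption) are deployment-specific. A minimal SQL sketch, assuming a hypothetical customer_notes table with a tenant_id column; adapt names to your schema:

-- Hypothetical table and key source, shown only to illustrate the two controls
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Row-level security: each tenant sees only its own rows
ALTER TABLE customer_notes ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON customer_notes
    USING (tenant_id = current_setting('app.tenant_id')::uuid);

-- Column encryption with pgcrypto; keep the key in a secrets store, not in SQL
UPDATE customer_notes
   SET email_encrypted = pgp_sym_encrypt(email, 'key-from-secrets-store');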

Object Storage (Drive)

| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | Encryption at Rest | Drive | All | Server-side encryption (SSE-S3) |
| ✅ | Encryption in Transit | Drive | All | TLS for all connections |
| ✅ | Bucket Policies | Drive | All | Fine-grained access control policies |
| ✅ | Object Versioning | Drive | HIPAA | Version control for data recovery |
| ✅ | Access Logging | Drive | All | Detailed audit logs for all operations |
| ⚠️ | Lifecycle Rules | Drive | LGPD | Configure data retention and auto-deletion |
| ✅ | Immutable Objects | Drive | Compliance | WORM (Write-Once-Read-Many) support |
| 🔄 | Replication | Drive | HIPAA | Multi-site replication for DR |
| ✅ | IAM Integration | Drive | All | Integration with Directory Service via OIDC |

Configuration: /conf/drive/config.env

Bucket Policy Example:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": ["arn:aws:iam::*:user/app-user"]},
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::bucket-name/*"]
    }
  ]
}
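
The ⚠️ Lifecycle Rules row can be addressed with a standard S3 lifecycle configuration; the prefix and retention period below are illustrative only:

{
  "Rules": [
    {
      "ID": "lgpd-retention",
      "Status": "Enabled",
      "Filter": { "Prefix": "personal-data/" },
      "Expiration": { "Days": 365 }
    }
  ]
}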

Email Server (Stalwart)

| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | DKIM Signing | Stalwart | All | Domain key authentication |
| ✅ | SPF Records | Stalwart | All | Sender Policy Framework |
| ✅ | DMARC Policy | Stalwart | All | Domain-based message authentication |
| ✅ | Mail Encryption | Stalwart | All | TLS for SMTP/IMAP (STARTTLS + implicit) |
| ✅ | Content Filtering | Stalwart | All | Spam and malware filtering |
| ⚠️ | Mail Archiving | Stalwart | HIPAA | Configure long-term email archiving |
| ✅ | Sieve Filtering | Stalwart | All | Server-side mail filtering |
| ✅ | Authentication | Stalwart | All | OIDC integration with Directory Service |
| 📝 | Retention Policy | Stalwart | GDPR/LGPD | Define and implement email retention |

Configuration: /conf/mail/config.toml

[server.listener."smtp"]
bind = ["0.0.0.0:25"]
protocol = "smtp"

[server.listener."smtp-submission"]
bind = ["0.0.0.0:587"]
protocol = "smtp"
tls.implicit = false

[server.listener."smtp-submissions"]
bind = ["0.0.0.0:465"]
protocol = "smtp"
tls.implicit = true

[authentication]
mechanisms = ["plain", "login"]
directory = "oidc"

[directory."oidc"]
type = "oidc"
issuer = "http://localhost:9000"

DNS Records:

; SPF Record
example.com. IN TXT "v=spf1 ip4:203.0.113.0/24 -all"

; DKIM Record
default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCS..."

; DMARC Record
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
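
Publication of the three records can be verified from any machine with dig; the record names match the examples above:

# Check SPF, DKIM, and DMARC publication
dig TXT example.com +short
dig TXT default._domainkey.example.com +short
dig TXT _dmarc.example.com +short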

Cache (Valkey)

| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | Authentication | Valkey | All | Password-protected access |
| ✅ | TLS Support | Valkey | All | Encrypted connections |
| ✅ | Access Control | Valkey | All | ACL-based permissions |
| ⚠️ | Persistence | Valkey | Data Recovery | RDB/AOF for data persistence |
| ✅ | Memory Limits | Valkey | All | Maxmemory policies configured |
| 📝 | Data Expiration | Valkey | GDPR | Set TTL for cached personal data |

Configuration: /etc/valkey/valkey.conf

# Authentication
requirepass <your-secure-password>

# TLS
tls-port 6380
tls-cert-file /path/to/cert.pem
tls-key-file /path/to/key.pem
tls-protocols "TLSv1.3"

# ACL
aclfile /etc/valkey/users.acl

# Memory management
maxmemory 2gb
maxmemory-policy allkeys-lru

# Persistence
save 900 1
save 300 10
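
For the 📝 Data Expiration row, a TTL can be set whenever personal data is cached; a quick sketch with valkey-cli (the key name is illustrative):

# Cache a profile for 24 hours; EX sets the TTL in seconds
valkey-cli SET user:42:profile "<serialized profile>" EX 86400
# Confirm the remaining lifetime
valkey-cli TTL user:42:profile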

Vector Database (Qdrant)

| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | API Authentication | Qdrant | All | API key authentication |
| ✅ | TLS Support | Qdrant | All | HTTPS enabled |
| ✅ | Access Control | Qdrant | All | Collection-level permissions |
| ⚠️ | Data Encryption | Qdrant | HIPAA | File-system level encryption |
| 🔄 | Backup Support | Qdrant | All | Snapshot-based backups |
| 📝 | Data Retention | Qdrant | GDPR | Implement collection cleanup policies |

Configuration: /etc/qdrant/config.yaml

service:
  host: 0.0.0.0
  http_port: 6333
  grpc_port: 6334
  # API keys are configured under the service section
  api_key: "your-secure-api-key"
  read_only_api_key: "read-only-key"

storage:
  storage_path: /var/lib/qdrant/storage
  snapshots_path: /var/lib/qdrant/snapshots

telemetry:
  enabled: false

Operating System (Ubuntu)

| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ⚠️ | System Hardening | Ubuntu | All | Apply CIS Ubuntu Linux benchmarks |
| ✅ | Automatic Updates | Ubuntu | All | Unattended-upgrades for security patches |
| ⚠️ | Audit Daemon | Ubuntu | All | Configure auditd for system events |
| ✅ | Firewall Rules | Ubuntu | All | UFW configured with restrictive rules |
| ⚠️ | Disk Encryption | Ubuntu | All | LUKS full-disk encryption |
| ⚠️ | AppArmor | Ubuntu | All | Enable mandatory access control |
| 📝 | User Management | Ubuntu | All | Disable root login, use sudo |
| 📝 | SSH Hardening | Ubuntu | All | Key-based auth only, disable password auth |
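
The SSH Hardening row maps to a handful of sshd settings; a minimal sketch of /etc/ssh/sshd_config (restart sshd after editing):

# /etc/ssh/sshd_config — key-based authentication only
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
MaxAuthTries 3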

Firewall Configuration:

# UFW firewall rules
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp    # SSH
ufw allow 80/tcp    # HTTP
ufw allow 443/tcp   # HTTPS
ufw allow 25/tcp    # SMTP
ufw allow 587/tcp   # SMTP submission
ufw allow 993/tcp   # IMAPS
ufw enable

Automatic Updates:

# /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";

Audit Rules: /etc/audit/rules.d/audit.rules

# Monitor authentication
-w /var/log/auth.log -p wa -k auth_log
-w /etc/passwd -p wa -k user_modification
-w /etc/group -p wa -k group_modification

# Monitor network
-a always,exit -F arch=b64 -S connect -k network_connect

# Monitor file access
-w /etc/shadow -p wa -k shadow_modification

Cross-Component Requirements

Monitoring & Logging

| Status | Requirement | Implementation | Standard |
|---|---|---|---|
| ✅ | Centralized Logging | All logs to /var/log/ with rotation | All |
| ⚠️ | Log Aggregation | ELK Stack or similar SIEM | ISO 27001 |
| ✅ | Health Monitoring | Prometheus + Grafana | All |
| 📝 | Alert Configuration | Set up alerts for security events | All |
| ✅ | Metrics Collection | Component-level metrics | All |

Backup & Recovery

| Status | Requirement | Implementation | Standard |
|---|---|---|---|
| 🔄 | Automated Backups | Daily automated backups | All |
| ✅ | Backup Encryption | AES-256 encrypted backups | All |
| ✅ | Off-site Storage | Drive replication to secondary site | HIPAA |
| 📝 | Backup Testing | Quarterly restore tests | All |
| ✅ | Retention Policy | 90 days for full backups, 30 days for incrementals | All |

Backup Script: /usr/local/bin/backup-system.sh

#!/bin/bash
set -euo pipefail

BACKUP_DATE=$(date +%Y%m%d_%H%M%S)

# PostgreSQL backup, encrypted; the passphrase file path is an assumption —
# point it at a root-only file so cron runs non-interactively
pg_dump -h localhost -U postgres generalbots | \
  gzip | \
  openssl enc -aes-256-cbc -salt -pbkdf2 -pass file:/etc/backup.pass \
    -out /backup/pg_${BACKUP_DATE}.sql.gz.enc

# Drive backup (assumes an mc alias named "drive")
mc mirror drive/generalbots /backup/drive_${BACKUP_DATE}/

# Qdrant snapshot
curl -X POST "http://localhost:6333/collections/botserver/snapshots"
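
A matching restore sketch, assuming the same passphrase file (the dump filename is illustrative):

# Decrypt, decompress, and restore a PostgreSQL dump
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:/etc/backup.pass \
  -in /backup/pg_20250101_020000.sql.gz.enc | gunzip | \
  psql -h localhost -U postgres generalbots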

Network Security

| Status | Requirement | Implementation | Standard |
|---|---|---|---|
| ✅ | Network Segmentation | Component isolation via firewall | All |
| ✅ | Internal TLS | TLS between all components | ISO 27001 |
| ⚠️ | VPN Access | WireGuard VPN for admin access | All |
| ✅ | Rate Limiting | Caddy rate limiting | All |
| 📝 | DDoS Protection | CloudFlare or similar | Production |

Compliance-Specific Requirements

GDPR

| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Data Encryption | AES-256 at rest, TLS 1.3 in transit |
| ✅ | Right to Access | API endpoints for data export |
| ✅ | Right to Deletion | Data deletion workflows implemented |
| ✅ | Right to Portability | JSON export functionality |
| ✅ | Consent Management | Zitadel consent flows |
| 📝 | Data Processing Records | Document all data processing activities |
| ✅ | Breach Notification | Incident response plan includes 72h notification |

SOC 2

| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Access Controls | RBAC via Zitadel |
| ✅ | Audit Logging | Comprehensive logging across all components |
| ✅ | Change Management | Version control and deployment procedures |
| ✅ | Monitoring | Real-time monitoring with Prometheus |
| 📝 | Risk Assessment | Annual risk assessment required |
| ✅ | Encryption | Data encrypted at rest and in transit |

ISO 27001

| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Asset Inventory | Documented component list |
| ✅ | Access Control | Zitadel RBAC |
| ✅ | Cryptography | Modern encryption standards |
| 📝 | Physical Security | Data center security documentation |
| ✅ | Operations Security | Automated patching and monitoring |
| 📝 | Incident Management | Documented incident response procedures |
| 📝 | Business Continuity | DR plan and testing |

HIPAA

| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Encryption | PHI encrypted at rest and in transit |
| ✅ | Access Controls | Role-based access with MFA |
| ✅ | Audit Controls | Comprehensive audit logging |
| ⚠️ | Integrity Controls | Checksums and versioning |
| ✅ | Transmission Security | TLS 1.3 for all communications |
| 📝 | Business Associate Agreements | Required for third-party vendors |
| ⚠️ | Email Archiving | Stalwart archiving configuration needed |

LGPD (Brazilian GDPR)

| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Data Encryption | Same as GDPR |
| ✅ | User Rights | Same as GDPR |
| ✅ | Consent | Zitadel consent management |
| 📝 | Data Protection Officer | Designate DPO |
| ⚠️ | Data Retention | Configure lifecycle policies in Drive |
| ✅ | Breach Notification | Same incident response as GDPR |

Implementation Priority

High Priority (Critical for Production)

  1. ✅ TLS 1.3 everywhere (Caddy, PostgreSQL, Drive, Stalwart)
  2. ✅ MFA for all admin accounts (Zitadel)
  3. ✅ Firewall configuration (UFW)
  4. ✅ Automated security updates (unattended-upgrades)
  5. 🔄 Automated encrypted backups

Medium Priority (Required for Compliance)

  1. ⚠️ Disk encryption (LUKS)
  2. ⚠️ Audit daemon (auditd)
  3. ⚠️ WAF rules (Caddy plugins or external)
  4. 📝 Access reviews (quarterly)
  5. ⚠️ Email archiving (Stalwart)

Lower Priority (Enhanced Security)

  1. ⚠️ VPN access (WireGuard)
  2. ⚠️ Log aggregation (ELK Stack)
  3. ⚠️ AppArmor/SELinux
  4. 📝 CIS hardening
  5. 📝 Penetration testing

Verification Checklist

Weekly Tasks

  • Review security logs (Caddy, PostgreSQL, Zitadel)
  • Check backup completion status
  • Review failed authentication attempts
  • Apply pending security patches

Monthly Tasks

  • Access review for privileged accounts
  • Review audit logs for anomalies
  • Test backup restoration
  • Update vulnerability database

Quarterly Tasks

  • Full access review for all users
  • Compliance check (run automated checks)
  • Security configuration audit
  • Disaster recovery drill

Annual Tasks

  • Penetration testing
  • Full compliance audit
  • Risk assessment update
  • Security policy review
  • Business continuity test

Quick Start Implementation

# 1. Enable firewall
sudo ufw enable
sudo ufw allow 22,80,443,25,587,993/tcp

# 2. Configure automatic updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

# 3. Enable PostgreSQL SSL
sudo -u postgres psql -c "ALTER SYSTEM SET ssl = 'on';"
sudo systemctl restart postgresql

# 4. Set Drive encryption
mc admin config set drive/ server-side-encryption-s3 on

# 5. Configure Zitadel MFA
# Via web console: Settings > Security > MFA > Require for admins

# 6. Enable Caddy security headers
# Add to Caddyfile (see Network & Web Server section)

# 7. Set up daily backups
sudo crontab -e
# Add: 0 2 * * * /usr/local/bin/backup-system.sh

Support & Resources

  • Internal Security Team: security@pragmatismo.com.br
  • Compliance Officer: compliance@pragmatismo.com.br
  • Documentation: https://docs.pragmatismo.com.br
  • Component Documentation: See “Component Security Documentation” in security-features.md

Document Control

  • Version: 1.0
  • Last Updated: 2024-01-15
  • Next Review: 2024-07-15
  • Owner: Security Team
  • Approved By: CTO

Role-Based Access Control (RBAC) Overview

General Bots implements a comprehensive Role-Based Access Control (RBAC) system designed as a secure, enterprise-grade alternative to Microsoft 365 / Google Workspace permission models. This system provides fine-grained access control across all suite applications.

Why RBAC?

RBAC is the industry standard for enterprise access control, used by:

  • Microsoft Azure Active Directory
  • Google Workspace Admin
  • AWS IAM
  • Kubernetes
  • All major enterprise platforms

Benefits Over Direct Permissions

| Approach | Pros | Cons |
|---|---|---|
| Direct User Permissions | Simple for small teams | Unmanageable at scale, audit nightmare |
| RBAC (Roles) | Scalable, auditable, principle of least privilege | Initial setup complexity |
| RBAC + Groups | Best of both worlds, mirrors org structure | Requires planning |

Core Concepts

1. Users

Individual accounts that authenticate to the system. Users can be:

  • Internal employees - Full organization members
  • External guests - Partners, contractors, clients
  • Service accounts - For API integrations

2. Roles

Named collections of permissions. Roles define what actions can be performed.

┌─────────────────────────────────────────────────────────┐
│                     ROLE: Standard User                  │
├─────────────────────────────────────────────────────────┤
│ Permissions:                                             │
│   ✓ mail.read, mail.send                                │
│   ✓ calendar.read, calendar.write                       │
│   ✓ drive.read, drive.write, drive.share                │
│   ✓ docs.read, docs.write, docs.collaborate             │
│   ✓ meet.join, meet.create                              │
│   ✓ chat.read, chat.write                               │
│   ✓ tasks.read, tasks.write                             │
│   ✗ users.manage (NOT included)                         │
│   ✗ settings.organization (NOT included)                │
└─────────────────────────────────────────────────────────┘

3. Groups

Collections of users, typically mirroring organizational structure:

  • Departments (IT, HR, Finance, Sales)
  • Teams (Project Alpha, Support Team)
  • Access levels (Managers, External Contractors)

4. Permissions

Granular capabilities following the pattern: resource.action

mail.read          → Can read emails
mail.send          → Can send emails  
mail.admin         → Full mail administration

drive.read         → Can view files
drive.write        → Can upload/edit files
drive.share        → Can share with others
drive.share_external → Can share outside organization
drive.admin        → Full drive administration

Permission Inheritance

                    ┌──────────────┐
                    │  Permission  │
                    │  mail.send   │
                    └──────┬───────┘
                           │
              ┌────────────┴────────────┐
              │                         │
              ▼                         ▼
       ┌─────────────┐          ┌─────────────┐
       │    Role     │          │    Role     │
       │ Standard    │          │   Guest     │
       │   User      │          │   User      │
       └──────┬──────┘          └─────────────┘
              │                        
    ┌─────────┴─────────┐              
    │                   │              
    ▼                   ▼              
┌─────────┐      ┌─────────────┐      
│  Group  │      │   Direct    │      
│  Sales  │      │ Assignment  │      
│  Team   │      │             │      
└────┬────┘      └──────┬──────┘      
     │                  │              
     ▼                  ▼              
┌─────────┐      ┌─────────────┐      
│  User   │      │    User     │      
│  Alice  │      │    Bob      │      
└─────────┘      └─────────────┘      

Users inherit permissions from:

  1. Direct role assignments - Roles assigned directly to the user
  2. Group memberships - Roles assigned to groups the user belongs to

Comparison with Office 365 / Google Workspace

Office 365 Equivalent Roles

| General Bots Role | Office 365 Equivalent |
|---|---|
| global_admin | Global Administrator |
| billing_admin | Billing Administrator |
| user_admin | User Administrator |
| exchange_admin | Exchange Administrator |
| sharepoint_admin | SharePoint Administrator |
| teams_admin | Teams Administrator |
| security_admin | Security Administrator |
| compliance_admin | Compliance Administrator |
| helpdesk_admin | Helpdesk Administrator |
| reports_reader | Reports Reader |

Google Workspace Equivalent Roles

| General Bots Role | Google Workspace Equivalent |
|---|---|
| global_admin | Super Admin |
| user_admin | User Management Admin |
| groups_admin | Groups Admin |
| sharepoint_admin | Drive & Docs Admin |
| exchange_admin | Gmail Admin |
| teams_admin | Meet & Chat Admin |

Built-in System Roles

Administrative Roles

| Role | Description | Typical Use |
|---|---|---|
| Global Administrator | Full system control | IT Director, CTO |
| Billing Administrator | Subscription & payments | Finance team |
| Compliance Administrator | Audit, DLP, retention | Legal, Compliance |
| Security Administrator | Threats, access policies | Security team |
| User Administrator | User & group management | HR, IT Helpdesk |
| Groups Administrator | Group management only | Team leads |
| Helpdesk Administrator | Password resets, support | IT Support |

Service-Specific Admin Roles

| Role | Manages |
|---|---|
| Mail Administrator | Mailboxes, mail flow, distribution lists |
| Drive Administrator | File storage, sharing policies, quotas |
| Meet & Chat Administrator | Video meetings, chat settings |
| Knowledge Administrator | Knowledge base, document libraries |

End-User Roles

| Role | Description | Best For |
|---|---|---|
| Power User | Full productivity + automation | Developers, analysts |
| Standard User | Normal productivity access | Regular employees |
| Guest User | Limited external access | Partners, contractors |
| Viewer | Read-only access | Auditors, observers |

Permission Categories

Permissions are organized into logical categories:

Administration (admin)

  • org.* - Organization settings
  • users.* - User management
  • groups.* - Group management
  • roles.* - Role management
  • dns.* - DNS and domains

Compliance (compliance)

  • audit.* - Audit logs
  • compliance.* - Compliance policies
  • dlp.* - Data loss prevention
  • retention.* - Data retention
  • ediscovery.* - Legal discovery

Security (security)

  • security.* - Security settings
  • threats.* - Threat management
  • secrets.* - API keys and secrets

Productivity Apps

| Category | Permissions |
|---|---|
| mail | read, send, delete, organize, delegate, admin |
| calendar | read, write, share, delegate, rooms |
| drive | read, write, delete, share, sync, admin |
| docs | read, write, comment, share, templates |
| sheet | read, write, share, macros, connections |
| slides | read, write, share, present |
| meet | join, create, host, record, webinar |
| chat | read, write, channels, external |
| tasks | read, write, assign, projects, automation |

AI & Bots (ai)

  • bots.* - Bot configuration
  • ai.* - AI assistant features
  • kb.* - Knowledge base
  • conversations.* - Bot conversations
  • attendant.* - Human handoff

Automation (automation)

  • autotask.* - Automated tasks
  • workflows.* - Workflow definitions
  • intents.* - AI intent management

Best Practices

1. Use Groups for Department Access

Group: Sales Team
  └── Role: Standard User
  └── Role: CRM Access (custom)
  
Group: IT Department  
  └── Role: Standard User
  └── Role: Helpdesk Administrator

2. Principle of Least Privilege

Start with the minimum permissions and add as needed:

  • New employees → Standard User
  • After training → Add specific permissions
  • Temporary access → Set expiration dates

3. Use Time-Limited Assignments

# Role expires in 30 days
expires_at: 2025-08-14T00:00:00Z

4. Regular Access Reviews

  • Quarterly review of admin roles
  • Monthly review of external access
  • Automated alerts for unused permissions

5. Audit Everything

All permission changes are logged:

  • Who made the change
  • What was changed
  • When it happened
  • Why (if documented)

Migration from Other Platforms

From Microsoft 365

  1. Export Azure AD groups and roles
  2. Map to equivalent General Bots roles
  3. Import users and create groups
  4. Assign group-role mappings
  5. Verify with test accounts

From Google Workspace

  1. Export Google Admin directory
  2. Map organizational units to groups
  3. Map admin roles to equivalent roles
  4. Import and test

See Migration Guide for detailed instructions.

API Reference

List All Roles

GET /api/rbac/roles

Assign Role to User

POST /api/rbac/users/{user_id}/roles/{role_id}
Content-Type: application/json

{
  "expires_at": "2025-12-31T23:59:59Z"
}

The expires_at field is optional; omit it for a permanent assignment.

Add User to Group

POST /api/rbac/users/{user_id}/groups/{group_id}

Get User’s Effective Permissions

GET /api/rbac/users/{user_id}/permissions

Response:

{
  "user_id": "uuid",
  "direct_roles": [...],
  "group_roles": [...],
  "groups": [...],
  "effective_permissions": [
    "mail.read",
    "mail.send",
    "drive.read",
    ...
  ]
}

Permissions Matrix

This chapter documents the permission system in General Bots, explaining how role-based access control governs what users can do within the platform. Understanding this permission model is essential for administrators configuring access policies and developers building applications that respect security boundaries.

Understanding the Permission Model

General Bots implements a role-based access control (RBAC) system that integrates with Zitadel, the platform’s Directory Service. The permission architecture consists of three interconnected layers that work together to determine what any given user can do.

At the highest level, realms establish permission boundaries that typically correspond to organizations. Within each realm, groups collect users who share common access needs. Permissions represent specific actions that can be granted to groups, and users inherit the combined permissions of all groups to which they belong.

This layered approach provides flexibility while maintaining manageability. Rather than assigning permissions directly to individual users, administrators create groups with appropriate permission sets and then add users to those groups. When a user’s responsibilities change, their access can be adjusted simply by modifying their group memberships.

User Context and System Context

APIs in General Bots operate in one of two security contexts, each with distinct characteristics and use cases.

User context operations execute on behalf of an authenticated user, using their OAuth token for authorization. When an API operates in user context, it sees and modifies only resources that belong to or are shared with that user. Reading files, sending messages, accessing calendars, managing tasks, and viewing email all occur in user context. The principle of least privilege applies naturally here—users can only access what they own or what has been explicitly shared with them.

System context operations execute on behalf of the bot or system itself, using a service account token. These operations have broader access because they serve cross-cutting concerns that don’t belong to any individual user. Bot-initiated messages, scheduled task execution, system monitoring, cross-user analytics, and backup operations all require system context to function properly.

The distinction between contexts ensures that normal user operations remain appropriately scoped while still allowing the system to perform necessary administrative functions.

File Operations

The drive system provides file storage and management capabilities with granular permission controls. Listing files through the API shows different results depending on context—in user context, only the user’s files appear, while system context reveals all files within the bot’s storage. Similarly, file uploads target the user’s folder in user context but can write to any location in the bot’s storage when operating in system context.

File deletion and sharing follow the same pattern. Users can delete and share their own files, while system context permits these operations on any file. The corresponding permissions are files:read for viewing and downloading, files:write for uploading and modifying, files:delete for removal, and files:share for granting access to others.

Communication Capabilities

Email functionality provides access to messaging through the organization’s mail system. Reading inbox contents and drafts requires the email:read permission and operates strictly in user context—there’s no meaningful system context for reading another user’s email. Sending messages requires email:send and can operate in either context, with user context messages appearing to come from the user and system context messages appearing to come from the bot.

Meeting integration enables video conferencing coordination. Viewing room information uses meet:read, with user context showing only visible rooms and system context revealing all rooms. Creating meetings requires meet:create, where user context establishes the creator as organizer while system context creates bot-organized meetings. Joining requires meet:join and inviting others requires meet:invite, with system context allowing invitations to any meeting regardless of ownership.

Calendar operations manage scheduling and appointments. Reading events with calendar:read shows user events in user context or bot calendar events in system context. Creating events requires calendar:write and targets the appropriate calendar based on context. Booking appointments with calendar:book makes the user an attendee in user context or establishes the bot as organizer in system context.

Task management follows similar patterns. The tasks:read permission shows user tasks in user context or all tasks in system context. Creating and modifying tasks with tasks:write assigns tasks appropriately based on context. Completing tasks with tasks:complete allows users to mark their own tasks complete or, in system context, to complete any task.

Administrative Functions

Administrative endpoints provide system management capabilities reserved for privileged users. Managing users requires admin:users, managing bot configurations requires admin:bots, modifying system configuration requires admin:config, and accessing monitoring data requires admin:monitor. All administrative operations execute in system context and require explicit administrative privileges.

These elevated permissions should be granted sparingly, typically only to IT staff responsible for system operation. The audit system tracks all administrative actions to maintain accountability.

Permission Definitions

The permission system defines specific capabilities organized by functional area. Core permissions govern fundamental platform features: chat:read allows viewing conversation history, chat:write enables sending messages, and the file permissions control document management as described above.

Communication permissions extend to the various messaging channels: email read and send capabilities, meeting room operations, and calendar management. Productivity permissions cover task management operations.

Administrative permissions form a separate category with broader impact: admin:users for user management, admin:groups for group administration, admin:bots for bot configuration, admin:config for system settings, admin:monitor for accessing operational metrics, and admin:backup for data protection operations.

Default Group Configuration

General Bots creates several default groups during initialization, each designed for common organizational roles.

The Administrators group receives all permissions, including the complete set of administrative capabilities. Members of this group can perform any operation in the system. This group should contain only trusted IT personnel responsible for platform operation.

The Managers group provides access to productivity features plus basic monitoring capabilities. Managers can fully utilize chat, files including sharing, email, meetings, calendar, and tasks. They can also view monitoring data to understand system usage but cannot modify system configuration or manage users.

The Users group establishes standard access for regular employees. Users can participate in chat, work with files without sharing capabilities, read and send email, view and join meetings, manage their calendars, and handle their tasks. This permission set enables full participation in daily work without administrative capabilities.

The Guests group provides minimal access for anonymous or temporary users. Guests can only participate in chat, without access to any other system features. This restricted access suits scenarios where external parties need limited interaction with bots.

Permission Configuration

Configuring permissions involves coordinating settings between Zitadel and the General Bots configuration.

In Zitadel, administrators access the admin console and navigate to Organization settings, then to Roles. Here they create roles that correspond to the permissions defined in General Bots. These roles are then assigned to groups, and users are added to appropriate groups based on their organizational responsibilities.

The config.csv file for each bot can map Zitadel roles to General Bots permissions. The permission mapping entries define which local permissions correspond to each Zitadel role. The default anonymous permission setting establishes what capabilities unauthenticated users receive.
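
The exact key names in config.csv vary by deployment, so the sketch below only illustrates the shape of a role-to-permission mapping, assuming a simple name,value layout (all keys here are hypothetical):

name,value
role-map-zitadel-admin,"admin:users,admin:bots,admin:config"
role-map-zitadel-member,"chat:read,chat:write,files:read,email:read,email:send"
anonymous-permissions,"chat:read,chat:write"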

Anonymous Access Considerations

The chat interface supports anonymous users who haven’t authenticated, though with significant restrictions. Anonymous users can chat with the default bot only, using a session that exists solely on the server. They cannot access conversation history, the drive, email, tasks, meetings, or any settings. Essentially, anonymous access provides a preview of bot capabilities without exposing organizational resources.

Organizations can customize the default anonymous permissions if they want to provide different capabilities to unauthenticated users, though most deployments restrict anonymous access to basic chat functionality.

Permission Checking in Scripts

BASIC scripts can query user roles to implement conditional logic based on permissions. By retrieving the role from the session, scripts can present different options or perform different actions depending on the user’s access level.

For example, a script might offer administrative functions only to users with the admin role, provide reporting features to managers, and present standard assistance to regular users. This capability allows bots to adapt their behavior to each user’s organizational context.
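
A minimal BASIC sketch of this pattern; the session accessor name is assumed here and should be adapted to your runtime:

' Branch bot behavior on the user's role (accessor name is illustrative)
role = GET "session.role"

IF role = "admin" THEN
    TALK "Admin options: user management, monitoring, configuration."
ELSE IF role = "manager" THEN
    TALK "This week's report is ready. Say 'report' to view it."
ELSE
    TALK "How can I help you today?"
END IF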

Audit Trail

All permission checks are logged, creating a comprehensive audit trail of access attempts. Administrators can query these logs through the admin API to review permission-related events. Each log entry captures the timestamp, user identifier, attempted action, accessed resource, result indicating whether access was allowed or denied, and when denied, the reason for denial.

This audit capability supports security reviews, compliance requirements, and troubleshooting access issues. Organizations with regulatory obligations can demonstrate that appropriate access controls are in place and functioning correctly.

For deeper understanding of the authentication and authorization system, the User Authentication chapter explains the login and session management processes. The User Context vs System Context chapter provides detailed exploration of how context affects API behavior. The Security Policy chapter establishes guidelines for secure platform operation. The API Endpoints chapter documents the full API surface including permission requirements for each endpoint.

RBAC Configuration Guide

This guide covers how to configure Role-Based Access Control (RBAC) in General Bots, including role management, group setup, permission assignment, and best practices for enterprise deployments.

Overview

General Bots RBAC provides:

  • Hierarchical Roles: Roles inherit permissions from parent roles
  • Group-based Access: Organize users into groups for easier management
  • Permission Inheritance: Permissions flow down through the hierarchy
  • Resource-level Control: Fine-grained access to bots, apps, and knowledge bases

Default Roles

General Bots includes predefined system roles that cannot be deleted:

| Role | Hierarchy Level | Description |
|---|---|---|
| Owner | 100 | Full organization control, including deletion |
| Admin | 90 | Manage all resources except organization deletion |
| Manager | 70 | Create and manage bots, KB, and apps |
| Member | 50 | Standard access to organization resources |
| Viewer | 30 | Read-only access to bots and knowledge bases |
| Guest | 10 | Limited public access only |

Role Hierarchy

Higher-level roles automatically inherit permissions from lower-level roles:

Owner (100)
  └── Admin (90)
        └── Manager (70)
              └── Member (50)
                    └── Viewer (30)
                          └── Guest (10)

An Admin can manage any role below them (Manager, Member, Viewer, Guest) but cannot modify Owner permissions.

Configuring Roles

Creating a Custom Role

Custom roles can extend the default hierarchy:

  1. Navigate to Settings → Access Control → Roles
  2. Click Create Role
  3. Configure:
    • Name: Internal identifier (lowercase, no spaces)
    • Display Name: Human-readable name
    • Hierarchy Level: Position in hierarchy (1-99)
    • Parent Roles: Roles to inherit from
    • Permissions: Additional permissions

Permission Format

Permissions use a colon-separated format:

resource:action
resource:action:scope

Examples:

| Permission | Description |
|---|---|
| bot:create | Create new bots |
| bot:view | View bot details |
| bot:edit | Edit bot configuration |
| bot:delete | Delete bots |
| bot:* | All bot permissions |
| kb:read | Read knowledge base content |
| kb:write | Write to knowledge bases |
| kb:admin | Administer KB settings |
| app:create | Create apps (Forms, Sites) |
| app:view | View apps |
| app:edit | Edit apps |
| org:manage | Manage organization settings |
| org:billing | Access billing information |
| org:members | Manage organization members |
| * | Wildcard - all permissions |

Role Configuration Example

name: content_editor
display_name: Content Editor
hierarchy_level: 55
parent_roles:
  - member
permissions:
  - kb:read
  - kb:write
  - bot:view
  - app:view
  - app:edit

Configuring Groups

Groups provide an additional layer of organization for users.

Default Groups

| Group | Description |
|---|---|
| everyone | All authenticated users |
| developers | Users who create bots and apps |
| content_managers | Users who manage knowledge bases |
| support | Support team with analytics access |

Creating Groups

  1. Navigate to Settings → Access Control → Groups
  2. Click Create Group
  3. Configure:
    • Name: Internal identifier
    • Display Name: Human-readable name
    • Parent Group: Optional hierarchy
    • Permissions: Group-specific permissions

Group Hierarchy

Groups can have parent-child relationships:

everyone
├── developers
│     └── senior_developers
├── content_managers
│     └── kb_admins
└── support
      └── tier2_support

Child groups inherit permissions from parent groups.

Assigning Permissions

To Users

Assign roles directly to users:

User: john@company.com
Roles: manager, content_editor
Groups: developers, content_managers

To Bots

Control who can access each bot:

bot_id: my-support-bot
visibility: organization
allowed_roles:
  - member
  - viewer
allowed_groups:
  - support
denied_users: []

To Apps

Control app access (Forms, Sites, Dashboards):

app_id: customer-feedback-form
app_type: form
visibility: public
allowed_roles: []
submission_requires_auth: false

To Knowledge Base Folders

See KB Permissions Guide for detailed folder configuration.

Permission Inheritance Resolution

When checking if a user has permission, the system evaluates:

  1. Direct User Permissions: Explicitly assigned to the user
  2. Role Permissions: From all assigned roles (including inherited)
  3. Group Permissions: From all group memberships (including inherited)

Resolution Example

User: alice@company.com
Direct Permissions: analytics:export
Roles: manager (inherits from member, viewer)
Groups: content_managers (inherits from everyone)

Effective Permissions:
├── analytics:export (direct)
├── org:members:view (from manager)
├── bot:create (from manager)
├── bot:edit (from manager)
├── bot:view (from member, inherited by manager)
├── kb:read (from member)
├── kb:write (from content_managers)
├── kb:admin (from content_managers)
└── basic:access (from everyone)

Wildcard Permissions

Use wildcards for broad access:

| Pattern | Matches |
|---|---|
| * | All permissions |
| bot:* | All bot permissions |
| kb:* | All knowledge base permissions |
| app:* | All app permissions |
| org:* | All organization permissions |

Configuration via API

List Roles

GET /api/settings/rbac/roles
Authorization: Bearer <token>

Create Role

POST /api/settings/rbac/roles
Authorization: Bearer <token>
Content-Type: application/json

{
  "name": "custom_role",
  "display_name": "Custom Role",
  "description": "A custom role for specific needs",
  "hierarchy_level": 45,
  "parent_roles": ["member"],
  "permissions": ["kb:read", "kb:write"]
}

Assign Role to User

POST /api/settings/rbac/users/{user_id}/roles/{role_id}
Authorization: Bearer <token>
Content-Type: application/json

{
  "expires_at": "2025-12-31T23:59:59Z"
}

Add User to Group

POST /api/settings/rbac/users/{user_id}/groups/{group_id}
Authorization: Bearer <token>

Get Effective Permissions

GET /api/settings/rbac/users/{user_id}/permissions
Authorization: Bearer <token>

Response:

{
  "user_id": "550e8400-e29b-41d4-a716-446655440000",
  "permissions": [
    "bot:view",
    "bot:create",
    "kb:read",
    "kb:write"
  ],
  "sources": [
    {
      "permission": "bot:view",
      "source_type": "role",
      "source_name": "member"
    },
    {
      "permission": "kb:write",
      "source_type": "group",
      "source_name": "content_managers"
    }
  ]
}

Audit Logging

All permission changes are logged for compliance:

| Event | Logged Data |
|---|---|
| Role Assignment | Actor, target user, role, timestamp |
| Role Revocation | Actor, target user, role, timestamp |
| Group Addition | Actor, target user, group, timestamp |
| Group Removal | Actor, target user, group, timestamp |
| Permission Grant | Actor, target, permission, timestamp |
| Access Denied | Actor, resource, required permission |

Access audit logs at Settings → Security → Audit Log.

Best Practices

1. Use Groups Over Direct Assignment

Instead of assigning roles to individual users, create groups:

✓ Create "Sales Team" group with viewer + CRM permissions
✓ Add users to the group
✗ Assign roles individually to 50 users

2. Follow Least Privilege

Start with minimal permissions and add as needed:

✓ New users get "viewer" role by default
✓ Promote to "member" after onboarding
✗ Give everyone "admin" for convenience

3. Use Role Expiration

For temporary access, set expiration dates:

POST /api/settings/rbac/users/{user_id}/roles/{role_id}
{
  "expires_at": "2025-03-01T00:00:00Z"
}

4. Regular Permission Reviews

Schedule quarterly reviews:

  1. Export current permissions
  2. Review access patterns in audit logs
  3. Remove unused permissions
  4. Update role definitions as needed

5. Document Custom Roles

Maintain documentation for custom roles:

## Custom Role: Project Lead

**Purpose**: Lead project teams with limited admin access

**Permissions**:
- All member permissions
- bot:create, bot:edit
- app:create, app:edit
- org:members:view

**Assigned To**: Project leads and tech leads
**Created**: 2025-01-15
**Last Review**: 2025-01-21

Troubleshooting

User Cannot Access Resource

  1. Check user’s effective permissions:

    GET /api/settings/rbac/users/{user_id}/permissions
    
  2. Verify resource permissions:

    GET /api/bots/{bot_id}/access
    
  3. Check audit log for denied access attempts

Permission Not Working After Assignment

  1. Clear the user's session cache.
  2. Have the user log out and back in.
  3. Check whether a deny rule blocks the permission.

Role Changes Not Reflected

  1. The permission cache has a 5-minute TTL; changes can take up to 5 minutes to propagate.
  2. Force a refresh by having the user log out and back in.
  3. Check whether the role assignment has an expiration date.

Organization Management Guide

This guide covers how to manage organizations in General Bots, including creating organizations, managing members, configuring settings, and switching between organizations.

Overview

Organizations are the top-level tenant in General Bots, providing:

  • Complete Data Isolation: Each organization has separate bots, knowledge bases, and data
  • Independent Billing: Separate subscriptions and quotas per organization
  • Member Management: Users can belong to multiple organizations
  • Custom Settings: Branding, security policies, and preferences per organization

Creating an Organization

Via UI

  1. Click your profile avatar in the top-right corner
  2. Click Create Organization
  3. Fill in the organization details:
    • Name: Display name for the organization
    • Slug: URL-friendly identifier (auto-generated from name)
    • Description: Optional description
  4. Click Create

Via API

POST /api/organizations
Authorization: Bearer <token>
Content-Type: application/json

{
  "name": "Acme Corporation",
  "description": "Main organization for Acme Corp"
}

Response:

{
  "organization": {
    "id": "550e8400-e29b-41d4-a716-446655440000",
    "name": "Acme Corporation",
    "slug": "acme-corporation",
    "description": "Main organization for Acme Corp",
    "plan_id": "free",
    "owner_id": "user-uuid",
    "created_at": "2025-01-21T10:00:00Z"
  },
  "roles": [...],
  "groups": [...],
  "owner_member": {...},
  "owner_role": {...}
}

When you create an organization, you automatically become the owner with full permissions.

Organization Structure

Each organization includes:

Organization
├── Settings
│   ├── General (name, logo, website)
│   ├── Security (2FA, SSO, IP whitelist)
│   ├── Branding (colors, custom CSS)
│   └── Billing (plan, quotas)
├── Members
│   ├── Users
│   ├── Roles
│   └── Groups
├── Bots
│   └── [bot configurations]
├── Apps
│   ├── Forms
│   ├── Sites
│   └── Projects
└── Knowledge Bases
    └── [.gbkb packages]

Managing Members

Inviting Members

  1. Navigate to Settings → Members
  2. Click Invite Member
  3. Enter email address and select role
  4. Click Send Invitation

Or via the API:

POST /api/organizations/{org_id}/invitations
Authorization: Bearer <token>
Content-Type: application/json

{
  "email": "newuser@company.com",
  "role": "member"
}

Member Roles

When inviting members, assign an appropriate role:

| Role | Typical Use |
|---|---|
| Admin | Department heads, IT managers |
| Manager | Team leads, project managers |
| Member | Regular employees |
| Viewer | Stakeholders, external reviewers |

Accepting Invitations

Invited users receive an email with a link to accept the invitation:

  1. Click the invitation link
  2. Sign in or create an account
  3. Review organization details
  4. Click Accept Invitation

Removing Members

  1. Navigate to Settings → Members
  2. Find the member in the list
  3. Click the menu icon (⋮)
  4. Select Remove from Organization

Or via the API:

DELETE /api/organizations/{org_id}/members/{user_id}
Authorization: Bearer <token>

Organization Settings

General Settings

| Setting | Description |
|---|---|
| Name | Display name |
| Slug | URL identifier |
| Description | Organization description |
| Logo URL | Logo image URL |
| Website | Organization website |

Security Settings

| Setting | Description | Default |
|---|---|---|
| Require 2FA | Require two-factor authentication | false |
| Allowed Email Domains | Restrict sign-ups to specific domains | [] |
| SSO Enabled | Enable Single Sign-On | false |
| SSO Provider | SSO provider configuration | null |
| IP Whitelist | Allowed IP addresses | [] |
| Audit Log Retention | Days to keep audit logs | 90 |

Custom Branding

| Setting | Description |
|---|---|
| Primary Color | Main brand color |
| Secondary Color | Accent color |
| Logo URL | Custom logo |
| Favicon URL | Browser favicon |
| Custom CSS | Additional styling |

Configuration Example

settings:
  allow_public_bots: false
  require_2fa: true
  allowed_email_domains:
    - company.com
    - subsidiary.com
  default_user_role: member
  max_members: 100
  sso_enabled: true
  sso_provider: okta
  audit_log_retention_days: 365
  ip_whitelist:
    - 10.0.0.0/8
    - 192.168.1.0/24
  custom_branding:
    primary_color: "#0066cc"
    secondary_color: "#004499"
    logo_url: "https://cdn.company.com/logo.svg"

Switching Organizations

Users who belong to multiple organizations can switch between them.

Via UI

  1. Click your profile avatar or the organization name
  2. A dropdown shows all your organizations
  3. Click the organization to switch to

Via API

POST /api/user/switch-organization
Authorization: Bearer <token>
Content-Type: application/json

{
  "org_id": "target-organization-uuid"
}

The response includes a new session token scoped to the selected organization.

Organization Switcher Component

The organization switcher displays:

  • Current organization name and logo
  • User’s role in current organization
  • List of other organizations
  • Quick actions (create new, manage current)

<div class="org-selector">
  <div class="selected-org">
    <div class="org-avatar">AC</div>
    <div class="org-info">
      <span class="org-name">Acme Corporation</span>
      <span class="org-role">Admin</span>
    </div>
  </div>
  <div class="org-dropdown">
    <div class="org-dropdown-item">
      <div class="org-avatar">XY</div>
      <span>XYZ Partners</span>
    </div>
    <div class="org-dropdown-actions">
      <button>+ Create Organization</button>
    </div>
  </div>
</div>

Quotas and Limits

Each organization has quotas based on their plan:

| Quota | Free | Pro | Enterprise |
|---|---|---|---|
| Members | 5 | 50 | Unlimited |
| Bots | 2 | 20 | Unlimited |
| Storage | 1 GB | 50 GB | 1 TB |
| API Calls/month | 10,000 | 500,000 | Unlimited |
| Messages/month | 1,000 | 100,000 | Unlimited |

Checking Usage

GET /api/organizations/{org_id}/usage
Authorization: Bearer <token>

Response:

{
  "organization_id": "org-uuid",
  "period": "2025-01",
  "quotas": {
    "members": { "used": 12, "limit": 50, "percent": 24 },
    "storage_mb": { "used": 5120, "limit": 51200, "percent": 10 },
    "api_calls": { "used": 45000, "limit": 500000, "percent": 9 },
    "messages": { "used": 8500, "limit": 100000, "percent": 8.5 }
  }
}

Deleting an Organization

Only the organization owner can delete an organization.

  1. Navigate to Settings → General
  2. Scroll to Danger Zone
  3. Click Delete Organization
  4. Type the organization name to confirm
  5. Click Delete Permanently

Or via the API:

DELETE /api/organizations/{org_id}
Authorization: Bearer <token>
Content-Type: application/json

{
  "confirmation": "organization-name"
}

Deletion is permanent and removes:

  • All organization data
  • All bots and configurations
  • All knowledge bases
  • All apps (Forms, Sites, Projects)
  • All member associations
  • All billing data

Multi-Organization Patterns

Separate Environments

Use organizations for dev/staging/production:

Acme Corp - Development
Acme Corp - Staging
Acme Corp - Production

Department Isolation

Separate organizations per department:

Acme Corp - Sales
Acme Corp - Support
Acme Corp - Engineering

Client Projects

Agencies can create organizations per client:

Client A - Project X
Client B - Project Y
Internal - Agency Tools

API Reference

List User’s Organizations

GET /api/user/organizations
Authorization: Bearer <token>

Get Organization Details

GET /api/organizations/{org_id}
Authorization: Bearer <token>

Update Organization

PATCH /api/organizations/{org_id}
Authorization: Bearer <token>
Content-Type: application/json

{
  "name": "New Name",
  "description": "Updated description"
}

List Organization Members

GET /api/organizations/{org_id}/members
Authorization: Bearer <token>

Update Organization Settings

PUT /api/organizations/{org_id}/settings
Authorization: Bearer <token>
Content-Type: application/json

{
  "require_2fa": true,
  "allowed_email_domains": ["company.com"]
}

Best Practices

1. Clear Naming Conventions

Use consistent naming for multiple organizations:

[Company] - [Environment/Purpose]
Acme Corp - Production
Acme Corp - Development

2. Appropriate Role Assignment

Don’t over-assign admin roles:

  • One or two owners maximum
  • Admins for department heads
  • Managers for team leads
  • Members for everyone else

3. Regular Member Audits

Review members quarterly:

  • Remove departed employees
  • Adjust roles as responsibilities change
  • Check for inactive accounts

4. Security Configuration

For enterprise organizations:

  • Enable 2FA requirement
  • Configure SSO if available
  • Set IP whitelist for office networks
  • Increase audit log retention

5. Quota Monitoring

Set up alerts for quota usage:

  • 80% warning for planning
  • 90% critical for immediate action
  • Monitor trends over time

Knowledge Base Permissions Guide

This guide explains how to configure folder-level permissions in General Bots Knowledge Bases (.gbkb), enabling fine-grained access control that integrates with Qdrant vector search.

Overview

Knowledge Base permissions allow you to:

  • Control access to specific folders within a KB
  • Filter search results based on user permissions
  • Integrate with RBAC roles and groups
  • Support public, authenticated, and restricted content

Permission File Format

Each .gbkb can include a kb.permissions.yaml file at its root:

version: 1
default_access: authenticated

folders:
  public:
    access: all
    index_visibility: all
    
  sales:
    access: role_based
    roles: [sales_team, management]
    index_visibility: role_based
    
  hr:
    access: group_based
    groups: [hr_department]
    index_visibility: group_based
    
  executive:
    access: user_based
    users: [ceo@company.com, cfo@company.com]
    index_visibility: user_based
    
  internal:
    access: authenticated
    index_visibility: authenticated

inheritance: true

Configuration Options

Version

version: 1

Schema version for forward compatibility. Currently only version 1 is supported.

Default Access

default_access: authenticated

Access level for folders without explicit configuration:

| Value | Description |
|---|---|
| all | Anyone can access, including anonymous users |
| authenticated | Only logged-in users (default) |
| role_based | Requires specific roles |
| group_based | Requires group membership |
| user_based | Specific users only |
| none | No access allowed |

Folder Permissions

Each folder entry supports these options:

folders:
  folder_name:
    access: <access_level>
    roles: [role1, role2]           # For role_based access
    groups: [group1, group2]        # For group_based access
    users: [email1, email2, uuid1]  # For user_based access
    index_visibility: <level>       # Search result visibility
    inherit_parent: true|false      # Override inheritance

Access Levels

| Level | Description | Requirements |
|---|---|---|
| all | Public access | None |
| authenticated | Logged-in users | Valid session |
| role_based | Role membership | User has any listed role |
| group_based | Group membership | User is in any listed group |
| user_based | Named users | User ID or email matches |
| none | Blocked | No one can access |

Index Visibility

Controls whether content appears in search results:

folders:
  confidential:
    access: role_based
    roles: [management]
    index_visibility: role_based  # Only management sees in search
    
  semi_public:
    access: role_based
    roles: [employees]
    index_visibility: all  # Everyone sees titles, only employees can open

Use index_visibility to:

  • Show content exists without revealing details
  • Hide sensitive content from search entirely
  • Create “teaser” content that requires authentication

Inheritance

inheritance: true

When enabled, subfolders inherit parent permissions unless explicitly configured:

documents/
├── public/           # access: all
│   └── guides/       # inherits: all
├── internal/         # access: authenticated
│   ├── policies/     # inherits: authenticated
│   └── hr/           # explicit: group_based (overrides)
└── restricted/       # access: user_based
    └── legal/        # inherits: user_based

Disable inheritance per folder:

folders:
  parent:
    access: role_based
    roles: [managers]
    
  parent/child:
    access: authenticated
    inherit_parent: false  # Does NOT inherit role_based

Qdrant Integration

When documents are indexed, permission metadata is stored with each vector:

{
  "id": "doc-123",
  "vector": [...],
  "payload": {
    "content": "Document text...",
    "folder": "sales/reports",
    "access_level": "role_based",
    "allowed_roles": ["sales_team", "management"],
    "allowed_groups": [],
    "allowed_users": [],
    "is_public": false,
    "requires_auth": true
  }
}

Search Filtering

When a user searches, the system automatically adds permission filters:

For anonymous users:

{
  "must": [
    { "key": "is_public", "match": { "value": true } }
  ]
}

For authenticated users:

{
  "should": [
    { "key": "is_public", "match": { "value": true } },
    { "key": "access_level", "match": { "value": "authenticated" } },
    { "key": "allowed_roles", "match": { "any": ["sales_team"] } },
    { "key": "allowed_groups", "match": { "any": ["sales_department"] } },
    { "key": "allowed_users", "match": { "any": ["user-uuid", "user@email.com"] } }
  ],
  "min_should": { "min_count": 1 }
}

Complete Example

Directory Structure

my-kb.gbkb/
├── kb.permissions.yaml
├── public/
│   ├── faq.md
│   └── getting-started.md
├── products/
│   ├── catalog.md
│   └── pricing.md
├── internal/
│   ├── processes/
│   │   └── onboarding.md
│   └── policies/
│       └── code-of-conduct.md
├── hr/
│   ├── benefits.md
│   └── salary-bands.md
└── executive/
    ├── board-minutes.md
    └── financials.md

Permission Configuration

version: 1
default_access: authenticated
inheritance: true

folders:
  public:
    access: all
    index_visibility: all
    
  products:
    access: all
    index_visibility: all
    
  products/pricing:
    access: role_based
    roles: [sales_team, account_managers]
    index_visibility: authenticated
    inherit_parent: false
    
  internal:
    access: authenticated
    index_visibility: authenticated
    
  internal/policies:
    access: authenticated
    index_visibility: all
    
  hr:
    access: group_based
    groups: [hr_department, management]
    index_visibility: group_based
    
  executive:
    access: user_based
    users:
      - ceo@company.com
      - cfo@company.com
      - coo@company.com
    index_visibility: none

Access Matrix

Assuming "Sales" holds the sales_team role, "HR" belongs to the hr_department group, and "Executive" is one of the listed users, the configuration above yields:

| Folder | Anonymous | Authenticated | Sales | HR | Executive |
|---|---|---|---|---|---|
| public | ✓ | ✓ | ✓ | ✓ | ✓ |
| products | ✓ | ✓ | ✓ | ✓ | ✓ |
| products/pricing | ✗ | ✗ | ✓ | ✗ | ✗ |
| internal | ✗ | ✓ | ✓ | ✓ | ✓ |
| hr | ✗ | ✗ | ✗ | ✓ | ✗ |
| executive | ✗ | ✗ | ✗ | ✗ | ✓ |

API Usage

Check Folder Access

GET /api/kb/{kb_id}/folders/{path}/access
Authorization: Bearer <token>

Response:

{
  "allowed": true,
  "reason": "Role matched: sales_team",
  "matched_rule": "roles: [sales_team, management]",
  "index_visible": true
}

Get Folder Permissions

GET /api/kb/{kb_id}/folders/{path}/permissions
Authorization: Bearer <token>

Response:

{
  "folder": "products/pricing",
  "access": "role_based",
  "roles": ["sales_team", "account_managers"],
  "groups": [],
  "users": [],
  "index_visibility": "authenticated",
  "inherit_parent": false,
  "effective_access": "role_based"
}

Update Folder Permissions

PUT /api/kb/{kb_id}/folders/{path}/permissions
Authorization: Bearer <token>
Content-Type: application/json

{
  "access": "group_based",
  "groups": ["premium_customers"],
  "index_visibility": "all"
}

Best Practices

1. Start Restrictive

Default to authenticated access and open up as needed:

default_access: authenticated

folders:
  public:
    access: all  # Explicitly mark public content

2. Use Groups Over Users

Prefer group-based access for easier management:

# ✓ Good - easy to maintain
folders:
  hr:
    access: group_based
    groups: [hr_team]

# ✗ Avoid - hard to maintain
folders:
  hr:
    access: user_based
    users: [alice@co.com, bob@co.com, carol@co.com]

3. Document Sensitive Folders

Add comments explaining access decisions:

folders:
  # Financial data - SOC 2 requires strict access
  financials:
    access: user_based
    users: [cfo@company.com, controller@company.com]
    index_visibility: none

4. Regular Permission Audits

Export and review permissions quarterly:

GET /api/kb/{kb_id}/permissions/export

5. Test with Different Users

Verify access works correctly:

  1. Test anonymous access
  2. Test basic authenticated user
  3. Test each role/group combination
  4. Verify search results match expectations
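
A quick way to run these checks is the folder access endpoint described above. A minimal sketch, assuming a separate token has been issued for each test persona (the sales-user token here is illustrative):

GET /api/kb/{kb_id}/folders/hr/access
Authorization: Bearer <sales-user-token>

With the example configuration in this guide, this request should return "allowed": false, since hr is group-based and the sales user is not in hr_department.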

Troubleshooting

Document Not Appearing in Search

  1. Check the index_visibility setting
  2. Verify the user has the required role/group
  3. Re-index the document after permission changes

Access Denied Despite Correct Role

  1. Check if folder has inherit_parent: false
  2. Verify role name matches exactly (case-sensitive)
  3. Check for deny rules at parent level

Inheritance Not Working

  1. Confirm inheritance: true at root level
  2. Check for inherit_parent: false on subfolder
  3. Verify parent folder has explicit permissions

SOC 2 Type II Compliance

This guide covers General Bots’ SOC 2 Type II compliance implementation, including security controls, audit logging, evidence collection, and compliance reporting.

Overview

SOC 2 Type II is a security framework developed by the American Institute of CPAs (AICPA) that evaluates how organizations manage customer data based on five Trust Service Criteria:

  1. Security - Protection against unauthorized access
  2. Availability - System accessibility as agreed
  3. Processing Integrity - Accurate and timely processing
  4. Confidentiality - Protection of confidential information
  5. Privacy - Personal information handling

General Bots implements controls across all five criteria to ensure enterprise-grade security.

Trust Service Criteria Implementation

Security (Common Criteria)

| Control | Implementation |
|---|---|
| CC1.1 - Integrity & Ethics | Code of conduct, security policies |
| CC2.1 - Communication | Security awareness training |
| CC3.1 - Risk Assessment | Quarterly risk assessments |
| CC4.1 - Monitoring | Continuous security monitoring |
| CC5.1 - Control Activities | Access controls, encryption |
| CC6.1 - Logical Access | RBAC, MFA, session management |
| CC7.1 - System Operations | Change management, incident response |
| CC8.1 - Change Management | Documented change procedures |
| CC9.1 - Risk Mitigation | Vendor management, BCP |

Availability

| Control | Implementation |
|---|---|
| A1.1 - Capacity Management | Auto-scaling, resource monitoring |
| A1.2 - Recovery Operations | Automated backups, disaster recovery |
| A1.3 - Recovery Testing | Quarterly DR tests |

Processing Integrity

| Control | Implementation |
|---|---|
| PI1.1 - Processing Accuracy | Input validation, data integrity checks |
| PI1.2 - Processing Completeness | Transaction logging, audit trails |
| PI1.3 - Processing Timeliness | SLA monitoring, performance metrics |

Confidentiality

| Control | Implementation |
|---|---|
| C1.1 - Confidential Information | Data classification, encryption at rest |
| C1.2 - Disposal | Secure deletion, data retention policies |

Privacy

| Control | Implementation |
|---|---|
| P1.1 - Notice | Privacy policy, cookie consent |
| P2.1 - Choice and Consent | Opt-in/opt-out mechanisms |
| P3.1 - Collection | Data minimization |
| P4.1 - Use and Retention | Purpose limitation, retention schedules |
| P5.1 - Access | Data export (GDPR Article 15) |
| P6.1 - Disclosure | Third-party data sharing controls |
| P7.1 - Quality | Data accuracy verification |
| P8.1 - Monitoring | Privacy impact assessments |

Audit Logging

Event Categories

General Bots logs the following security-relevant events:

| Category | Events Logged |
|---|---|
| Authentication | Login, logout, MFA events, password changes |
| Authorization | Permission grants, role assignments, access denials |
| Data Access | Read operations on sensitive data |
| Data Modification | Create, update, delete operations |
| Administrative | Configuration changes, user management |
| Security | Failed auth attempts, suspicious activity |

Log Structure

{
  "id": "uuid",
  "timestamp": "2025-01-21T10:30:00Z",
  "organization_id": "org-uuid",
  "actor_id": "user-uuid",
  "actor_email": "user@company.com",
  "actor_ip": "192.168.1.100",
  "action": "role_assign",
  "resource_type": "role",
  "resource_id": "role-uuid",
  "resource_name": "admin",
  "details": {
    "description": "Assigned role 'admin' to user",
    "before_state": null,
    "after_state": {"role": "admin"},
    "changes": [{"field": "role", "old_value": null, "new_value": "admin"}]
  },
  "result": "success",
  "metadata": {}
}

Log Retention

| Log Type | Retention Period | Storage |
|---|---|---|
| Security Events | 7 years | Immutable storage |
| Access Logs | 2 years | Standard storage |
| Application Logs | 90 days | Standard storage |
| Debug Logs | 30 days | Ephemeral storage |

Accessing Audit Logs

GET /api/compliance/audit-logs
Authorization: Bearer <token>

Query parameters:

| Parameter | Description |
|---|---|
| organization_id | Filter by organization |
| actor_id | Filter by user |
| action | Filter by action type |
| resource_type | Filter by resource type |
| start_date | Start of date range |
| end_date | End of date range |
| page | Page number |
| per_page | Results per page |
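
Filters combine as ordinary query parameters. A hedged example, assuming dates use the same ISO 8601 format as the log timestamps:

GET /api/compliance/audit-logs?action=role_assign&start_date=2025-01-01&end_date=2025-01-31&per_page=50
Authorization: Bearer <token>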

Security Controls

Access Control

Multi-Factor Authentication (MFA)

  • TOTP-based authentication
  • Hardware security key support (FIDO2/WebAuthn)
  • SMS backup codes (optional)

Session Management

  • Configurable session timeout (default: 8 hours)
  • Concurrent session limits
  • Session invalidation on password change
  • IP-based session binding (optional)

Password Policy

  • Minimum 12 characters
  • Complexity requirements
  • Password history (last 10)
  • Account lockout after 5 failed attempts

Encryption

Data at Rest

  • AES-256 encryption for all stored data
  • Encrypted database columns for PII
  • Encrypted file storage (MinIO with server-side encryption)

Data in Transit

  • TLS 1.3 for all connections
  • Perfect Forward Secrecy
  • HSTS with preloading
  • Certificate pinning (mobile apps)

Network Security

  • Web Application Firewall (WAF)
  • DDoS protection
  • Rate limiting per endpoint
  • IP allowlisting (enterprise)

Compliance Reporting

Generating Compliance Reports

POST /api/compliance/reports
Authorization: Bearer <token>
Content-Type: application/json

{
  "report_type": "soc2",
  "period_start": "2025-01-01",
  "period_end": "2025-03-31",
  "criteria": ["security", "availability", "confidentiality"]
}

Report Types

| Type | Description | Frequency |
|---|---|---|
| soc2 | Full SOC 2 compliance report | Quarterly |
| access_review | User access review | Monthly |
| vulnerability | Vulnerability assessment | Weekly |
| incident | Security incident report | As needed |
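
The same endpoint covers the other report types. A sketch for a monthly access review (whether criteria may be omitted for non-SOC 2 report types is an assumption here):

POST /api/compliance/reports
Authorization: Bearer <token>
Content-Type: application/json

{
  "report_type": "access_review",
  "period_start": "2025-01-01",
  "period_end": "2025-01-31"
}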

Evidence Collection

The compliance module automatically collects evidence for audit:

User Access Evidence

  • Current user list with roles
  • Permission assignment history
  • Access review sign-offs

Change Management Evidence

  • Deployment logs
  • Configuration change records
  • Approval workflows

Security Evidence

  • Vulnerability scan results
  • Penetration test reports
  • Security training completion

Exporting Evidence

GET /api/compliance/evidence/export
Authorization: Bearer <token>

Query parameters:

| Parameter | Description |
|---|---|
| criteria | SOC 2 criteria (CC6.1, A1.1, etc.) |
| period_start | Evidence period start |
| period_end | Evidence period end |
| format | Export format (json, csv, pdf) |

Incident Response

Incident Classification

| Severity | Description | Response Time |
|---|---|---|
| Critical | Data breach, system compromise | 15 minutes |
| High | Service outage, failed controls | 1 hour |
| Medium | Suspicious activity, minor issues | 4 hours |
| Low | Informational, potential risk | 24 hours |

Incident Response Process

  1. Detection - Automated monitoring or manual report
  2. Triage - Classify severity, assign responder
  3. Containment - Isolate affected systems
  4. Eradication - Remove threat
  5. Recovery - Restore services
  6. Lessons Learned - Post-incident review

Incident Logging

POST /api/compliance/incidents
Authorization: Bearer <token>
Content-Type: application/json

{
  "title": "Suspicious login activity detected",
  "severity": "medium",
  "description": "Multiple failed login attempts from unusual location",
  "affected_systems": ["authentication"],
  "detected_at": "2025-01-21T10:00:00Z",
  "detected_by": "automated_monitoring"
}

Vendor Management

Third-Party Risk Assessment

All vendors handling customer data undergo:

  • Security questionnaire
  • SOC 2 report review (if available)
  • Contract review for security requirements
  • Annual reassessment

Key Vendors

| Vendor | Service | SOC 2 | Data Access |
|---|---|---|---|
| PostgreSQL | Database | N/A (self-hosted) | Full |
| MinIO | Object Storage | N/A (self-hosted) | Full |
| Qdrant | Vector DB | N/A (self-hosted) | Full |
| Redis | Caching | N/A (self-hosted) | Session data |

Business Continuity

Recovery Objectives

| Metric | Target | Current |
|---|---|---|
| RTO (Recovery Time Objective) | 4 hours | 2 hours |
| RPO (Recovery Point Objective) | 1 hour | 15 minutes |
| MTTR (Mean Time to Recovery) | 2 hours | 45 minutes |

Backup Strategy

| Data Type | Frequency | Retention | Location |
|---|---|---|---|
| Database | Every 15 minutes | 30 days | Off-site |
| Files | Hourly | 90 days | Off-site |
| Configuration | On change | Forever | Git |
| Logs | Daily | Per retention policy | Off-site |

Disaster Recovery

  • Multi-region deployment capability
  • Automated failover
  • Quarterly DR testing
  • Documented recovery procedures

Configuration

Enabling SOC 2 Features

Add to your .env:

SOC2_COMPLIANCE_ENABLED=true
SOC2_AUDIT_LOG_RETENTION_DAYS=2555
SOC2_EVIDENCE_COLLECTION=true
SOC2_INCIDENT_AUTO_CREATE=true
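
The 2555-day retention matches the seven-year immutable storage requirement for security events listed under Log Retention (7 × 365 = 2555 days).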

Compliance Dashboard

Access the compliance dashboard at:

/admin/compliance

Features:

  • Real-time compliance status
  • Control effectiveness metrics
  • Open findings and remediation
  • Upcoming audit timeline

API Reference

Get Compliance Status

GET /api/compliance/status
Authorization: Bearer <token>

Response:

{
  "overall_status": "compliant",
  "last_assessment": "2025-01-15T00:00:00Z",
  "criteria": {
    "security": {"status": "compliant", "controls_passed": 45, "controls_total": 45},
    "availability": {"status": "compliant", "controls_passed": 12, "controls_total": 12},
    "confidentiality": {"status": "compliant", "controls_passed": 8, "controls_total": 8}
  },
  "open_findings": 0,
  "next_audit": "2025-04-01"
}

List Control Evidence

GET /api/compliance/controls/{control_id}/evidence
Authorization: Bearer <token>

Create Finding

POST /api/compliance/findings
Authorization: Bearer <token>
Content-Type: application/json

{
  "control_id": "CC6.1",
  "title": "MFA not enforced for admin accounts",
  "severity": "high",
  "description": "Admin accounts can bypass MFA requirement",
  "remediation_plan": "Update policy to require MFA for all admin roles",
  "due_date": "2025-02-01"
}

Best Practices

For Administrators

  1. Enable all logging - Ensure comprehensive audit trails
  2. Regular access reviews - Monthly review of user permissions
  3. Monitor dashboards - Daily check of compliance status
  4. Document exceptions - Record all policy exceptions with justification
  5. Test controls - Quarterly verification of control effectiveness

For Developers

  1. Follow secure coding standards - No hardcoded secrets, input validation
  2. Use security modules - SafeCommand, sql_guard, error_sanitizer
  3. Log security events - Use audit logging for sensitive operations
  4. Handle errors properly - Never expose internal details

For Organizations

  1. Assign compliance owner - Dedicated person for SOC 2
  2. Schedule regular audits - Annual Type II assessment
  3. Train employees - Security awareness program
  4. Maintain documentation - Keep policies current
  5. Plan for incidents - Test incident response procedures

Security Matrix Reference

Endpoint Security Checklist

Chapter 10: Configuration & Deployment

This chapter covers bot configuration and deployment options. Each bot’s behavior is controlled by a simple CSV configuration file in its .gbot package.

Configuration System

botserver uses a straightforward name-value CSV format for configuration:

name,value
setting_name,setting_value
another_setting,another_value

File Location

mybot.gbai/
└── mybot.gbot/
    └── config.csv

Configuration Categories

Server Settings

  • Web server binding and ports
  • Site generation paths
  • Service endpoints

LLM Configuration

  • Model paths (local GGUF files)
  • Service URLs
  • Cache settings
  • Server parameters (when embedded)

Prompt Management

  • Context compaction levels
  • History retention
  • Token management

Email Integration

  • SMTP server settings
  • Authentication credentials
  • Sender configuration

Theme Customization

  • Color schemes
  • Logo URLs
  • Bot titles

Custom Database

  • External database connections
  • Authentication details

Key Features

Simple Format

  • Plain CSV with name-value pairs
  • No complex syntax
  • Human-readable

Flexible Structure

  • Empty rows for visual grouping
  • Optional settings with defaults
  • Extensible for custom needs

Local-First

  • Designed for local LLM models
  • Self-hosted services
  • No cloud dependency by default

Example Configurations

Minimal Setup

Just the essentials to run a bot:

name,value
llm-url,http://localhost:8081
llm-model,../../../../data/llm/model.gguf

Production Setup

Full configuration with all services:

name,value
,
server_host,0.0.0.0
server_port,8080
,
llm-url,http://localhost:8081
llm-model,../../../../data/llm/production-model.gguf
llm-cache,true
,
email-server,smtp.company.com
email-from,bot@company.com
,
theme-title,Company Assistant

Configuration Philosophy

  1. Defaults Work: Most settings have sensible defaults
  2. Local First: Assumes local services, not cloud APIs
  3. Simple Values: All values are strings, parsed as needed
  4. No Magic: What you see is what you get


config.csv Format

The config.csv file is the heart of bot configuration in General Bots. Located in each bot’s .gbot package, it uses a simple, human-readable format that anyone can edit.

Why CSV?

We chose CSV because:

  • No syntax errors - Just name,value pairs
  • Spreadsheet compatible - Edit in Excel, Google Sheets, or any text editor
  • Human readable - No brackets, no indentation wars
  • Git friendly - Clean diffs, easy merges

Basic Format

name,value
server-port,8080
llm-model,../../../../data/llm/model.gguf

That’s it. No quotes, no special characters, just names and values.

Visual Organization

Use empty rows to group related settings:

name,value

# Server settings
server-host,0.0.0.0
server-port,8080

# LLM settings (see Configuration Management for details)
llm-url,http://localhost:8081
llm-model,model.gguf

# Email settings
email-from,bot@example.com
email-server,smtp.example.com

Key Points

  • Case matters: server-port not Server-Port
  • No spaces: Around commas or in names
  • Paths: Can be relative or absolute
  • Booleans: Use true or false
  • Numbers: Just write them directly

Quick Example

A complete working configuration:

name,value
server-port,8080
llm-url,http://localhost:8081
llm-model,../../../../data/llm/DeepSeek-R3-Distill-Qwen-1.5B-Q3_K_M.gguf
episodic-memory-threshold,4

Four lines. Bot configured. That’s the General Bots way.

LLM Configuration

Basic LLM settings in config.csv:

  • llm-url - Where your LLM server is (local or cloud)
  • llm-model - Which model to use
  • llm-key - API key if using cloud services like Groq

For detailed LLM configuration including GPU settings, cache, performance tuning, and hardware-specific recommendations, see Configuration Management.

Where to Find Settings

For the complete list of available settings and detailed explanations, see Configuration Management.

Philosophy

Configuration should be boring. You should spend time on your bot’s personality and capabilities, not fighting with config files. CSV keeps it simple so you can focus on what matters.

Configuration Parameters

Complete reference of all available parameters in config.csv.

Server Parameters

Web Server

| Parameter | Description | Default | Type |
|---|---|---|---|
| server-host | Server bind address | 0.0.0.0 | IP address |
| server-port | Server listen port | 8080 | Number (1-65535) |
| sites-root | Generated sites directory | /tmp | Path |

MCP Server

| Parameter | Description | Default | Type |
|---|---|---|---|
| mcp-server | Enable MCP protocol server | false | Boolean |

LLM Parameters

Core LLM Settings

| Parameter | Description | Default | Type |
|---|---|---|---|
| llm-key | API key for LLM service | none | String |
| llm-url | LLM service endpoint | http://localhost:8081 | URL |
| llm-model | Model path or identifier | Required | Path/String |
| llm-models | Available model aliases for routing | default | Semicolon-separated |

LLM Cache

| Parameter | Description | Default | Type |
|---|---|---|---|
| llm-cache | Enable response caching | false | Boolean |
| llm-cache-ttl | Cache time-to-live | 3600 | Seconds |
| llm-cache-semantic | Semantic similarity cache | true | Boolean |
| llm-cache-threshold | Similarity threshold | 0.95 | Float (0-1) |

Embedded LLM Server

| Parameter | Description | Default | Type |
|---|---|---|---|
| llm-server | Run embedded server | false | Boolean |
| llm-server-path | Server binary path | botserver-stack/bin/llm/build/bin | Path |
| llm-server-host | Server bind address | 0.0.0.0 | IP address |
| llm-server-port | Server port | 8081 | Number |
| llm-server-gpu-layers | GPU offload layers | 0 | Number |
| llm-server-n-moe | MoE experts count | 0 | Number |
| llm-server-ctx-size | Context size | 4096 | Tokens |
| llm-server-n-predict | Max predictions | 1024 | Tokens |
| llm-server-parallel | Parallel requests | 6 | Number |
| llm-server-cont-batching | Continuous batching | true | Boolean |
| llm-server-mlock | Lock in memory | false | Boolean |
| llm-server-no-mmap | Disable mmap | false | Boolean |
| llm-server-reasoning-format | Reasoning output format for llama.cpp | none | String |

Hardware-Specific LLM Tuning

For RTX 3090 (24GB VRAM)

You can run impressive models with proper configuration:

  • DeepSeek-R3-Distill-Qwen-7B: Set llm-server-gpu-layers to 35-40
  • Qwen2.5-32B-Instruct (Q4_K_M): Fits with llm-server-gpu-layers at 40-45
  • DeepSeek-V3 (with MoE): Set llm-server-n-moe to 2-4 to run even 120B models! MoE only loads active experts
  • Optimization: Use llm-server-ctx-size of 8192 for longer contexts

For RTX 4070/4070Ti (12-16GB VRAM)

Mid-range cards work great with quantized models:

  • Qwen2.5-14B (Q4_K_M): Set llm-server-gpu-layers to 25-30
  • DeepSeek-R3-Distill-Llama-8B: Fully fits with layers at 32
  • Tips: Keep llm-server-ctx-size at 4096 to save VRAM

For CPU-Only (No GPU)

Modern CPUs can still run capable models:

  • DeepSeek-R3-Distill-Qwen-1.5B: Fast on CPU, great for testing
  • Phi-3-mini (3.8B): Excellent CPU performance
  • Settings: Set llm-server-mlock to true to prevent swapping
  • Parallel: Increase llm-server-parallel to CPU cores minus 2

Recommended Models Across Hardware

  • Best Overall: DeepSeek-R3-Distill series (1.5B to 70B)
  • Best Small: Qwen2.5-3B-Instruct-Q5_K_M
  • Best Medium: DeepSeek-R3-Distill-Qwen-14B-Q4_K_M
  • Best Large: DeepSeek-V3, Qwen2.5-32B, or GPT2-120B-GGUF (with MoE enabled)

Pro Tip: The llm-server-n-moe parameter is magic for large models - it enables Mixture of Experts, letting you run 120B+ models on consumer hardware by only loading the experts needed for each token!

Local vs Cloud: A Practical Note

General Bots excels at local deployment - you own your hardware, your data stays private, and there are no recurring costs. However, if you need cloud inference:

Groq is the speed champion - They use custom LPU (Language Processing Unit) chips instead of GPUs, delivering 10x faster inference than traditional cloud providers. Their hardware is purpose-built for transformers, avoiding the general-purpose overhead of NVIDIA GPUs.

This isn’t about market competition - it’s about architecture. NVIDIA GPUs are designed for many tasks, while Groq’s chips do one thing incredibly well: transformer inference. If speed matters and you’re using cloud, Groq is currently the fastest option available.

For local deployment, stick with General Bots and the configurations above. For cloud bursts or when you need extreme speed, consider Groq’s API with these settings:

llm-url,https://api.groq.com/openai/v1
llm-key,your-groq-api-key
llm-model,mixtral-8x7b-32768

Embedding Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| embedding-url | Embedding service endpoint | http://localhost:8082 | URL |
| embedding-model | Embedding model path | Required for KB | Path |

Email Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| email-from | Sender address | Required for email | Email |
| email-server | SMTP hostname | Required for email | Hostname |
| email-port | SMTP port | 587 | Number |
| email-user | SMTP username | Required for email | String |
| email-pass | SMTP password | Required for email | String |
| email-read-pixel | Enable read tracking pixel in HTML emails | false | Boolean |

Email Read Tracking

When email-read-pixel is enabled, a 1x1 transparent tracking pixel is automatically injected into HTML emails sent via the API. This allows you to:

  • Track when emails are opened
  • See how many times an email was opened
  • Get the approximate location (IP) and device (user agent) of the reader

API Endpoints for tracking:

| Endpoint | Method | Description |
|---|---|---|
| /api/email/tracking/pixel/{tracking_id} | GET | Serves the tracking pixel (called by email client) |
| /api/email/tracking/status/{tracking_id} | GET | Get read status for a specific email |
| /api/email/tracking/list | GET | List all sent emails with tracking status |
| /api/email/tracking/stats | GET | Get overall tracking statistics |

Example configuration:

email-read-pixel,true
server-url,https://yourdomain.com

Note: The server-url parameter is used to generate the tracking pixel URL. Make sure it’s accessible from the recipient’s email client.

Privacy considerations: Email tracking should be used responsibly. Consider disclosing tracking in your email footer for transparency.
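
To check whether a particular email was opened, query the status endpoint with its tracking ID. The response fields below are illustrative, not an exact schema:

GET /api/email/tracking/status/{tracking_id}
Authorization: Bearer <token>

{
  "tracking_id": "...",
  "opened": true,
  "open_count": 2
}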

Theme Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| theme-color1 | Primary color | Not set | Hex color |
| theme-color2 | Secondary color | Not set | Hex color |
| theme-logo | Logo URL | Not set | URL |
| theme-title | Bot display title | Not set | String |
| bot-name | Bot display name | Not set | String |
| welcome-message | Initial greeting message | Not set | String |

Custom Database Parameters

These parameters configure external database connections for use with BASIC keywords like MariaDB/MySQL connections.

| Parameter | Description | Default | Type |
|---|---|---|---|
| custom-server | Database server hostname | localhost | Hostname |
| custom-port | Database port | 5432 | Number |
| custom-database | Database name | Not set | String |
| custom-username | Database user | Not set | String |
| custom-password | Database password | Not set | String |

Website Crawling Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| website-expires | Cache expiration for crawled content | 1d | Duration |
| website-max-depth | Maximum crawl depth | 3 | Number |
| website-max-pages | Maximum pages to crawl | 100 | Number |
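
Example configuration for a small documentation site (values illustrative; the duration format follows the 1d style of the default):

website-expires,12h
website-max-depth,2
website-max-pages,50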

Image Generator Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| image-generator-model | Diffusion model path | Not set | Path |
| image-generator-steps | Inference steps | 4 | Number |
| image-generator-width | Output width | 512 | Pixels |
| image-generator-height | Output height | 512 | Pixels |
| image-generator-gpu-layers | GPU offload layers | 20 | Number |
| image-generator-batch-size | Batch size | 1 | Number |

Video Generator Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| video-generator-model | Video model path | Not set | Path |
| video-generator-frames | Frames to generate | 24 | Number |
| video-generator-fps | Frames per second | 8 | Number |
| video-generator-width | Output width | 320 | Pixels |
| video-generator-height | Output height | 576 | Pixels |
| video-generator-gpu-layers | GPU offload layers | 15 | Number |
| video-generator-batch-size | Batch size | 1 | Number |

BotModels Service Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| botmodels-enabled | Enable BotModels service | true | Boolean |
| botmodels-host | BotModels bind address | 0.0.0.0 | IP address |
| botmodels-port | BotModels port | 8085 | Number |

Generator Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| default-generator | Default content generator | all | String |

Teams Channel Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| teams-app-id | Microsoft Teams App ID | Not set | String |
| teams-app-password | Microsoft Teams App Password | Not set | String |
| teams-tenant-id | Microsoft Teams Tenant ID | Not set | String |
| teams-bot-id | Microsoft Teams Bot ID | Not set | String |
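
Example: Teams Configuration

A sketch with placeholder credentials from your bot registration:

teams-app-id,00000000-0000-0000-0000-000000000000
teams-app-password,your-app-password
teams-tenant-id,11111111-1111-1111-1111-111111111111
teams-bot-id,your-bot-id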

SMS Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| sms-provider | SMS provider (twilio, aws, vonage, messagebird, custom) | Not set | String |
| sms-fallback-provider | Fallback provider if primary fails | Not set | String |

Twilio Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| twilio-account-sid | Twilio Account SID | Not set | String |
| twilio-auth-token | Twilio Auth Token | Not set | String |
| twilio-phone-number | Twilio phone number (E.164 format) | Not set | String |
| twilio-messaging-service-sid | Messaging Service SID for routing | Not set | String |
| twilio-status-callback | Webhook URL for delivery status | Not set | URL |

AWS SNS Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| aws-access-key-id | AWS Access Key ID | Not set | String |
| aws-secret-access-key | AWS Secret Access Key | Not set | String |
| aws-region | AWS Region (e.g., us-east-1) | Not set | String |
| aws-sns-sender-id | Sender ID (alphanumeric) | Not set | String |
| aws-sns-message-type | Promotional or Transactional | Transactional | String |

Vonage (Nexmo) Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| vonage-api-key | Vonage API Key | Not set | String |
| vonage-api-secret | Vonage API Secret | Not set | String |
| vonage-from | Sender number or alphanumeric ID | Not set | String |
| vonage-callback-url | Delivery receipt webhook | Not set | URL |

MessageBird Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| messagebird-access-key | MessageBird Access Key | Not set | String |
| messagebird-originator | Sender number or name | Not set | String |
| messagebird-report-url | Status report webhook | Not set | URL |

Custom Provider Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| sms-custom-url | API endpoint URL | Not set | URL |
| sms-custom-method | HTTP method (POST, GET) | POST | String |
| sms-custom-auth-header | Authorization header value | Not set | String |
| sms-custom-body-template | JSON body with {{to}}, {{message}} placeholders | Not set | String |
| sms-custom-from | Sender number for custom provider | Not set | String |

Example: Twilio Configuration

sms-provider,twilio
twilio-account-sid,ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
twilio-auth-token,your_auth_token
twilio-phone-number,+15551234567

Example: AWS SNS Configuration

sms-provider,aws
aws-access-key-id,AKIAIOSFODNN7EXAMPLE
aws-secret-access-key,wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws-region,us-east-1
aws-sns-message-type,Transactional

See SMS Provider Configuration for detailed setup instructions.

WhatsApp Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| whatsapp-api-key | Access token from Meta Business | Not set | String |
| whatsapp-phone-number-id | Phone number ID from WhatsApp Business | Not set | String |
| whatsapp-verify-token | Token for webhook verification | Not set | String |
| whatsapp-business-account-id | WhatsApp Business Account ID | Not set | String |
| whatsapp-api-version | Graph API version | v17.0 | String |

Example: WhatsApp Configuration

whatsapp-api-key,EAABs...your_access_token
whatsapp-phone-number-id,123456789012345
whatsapp-verify-token,my-secret-verify-token
whatsapp-business-account-id,987654321098765

See WhatsApp Channel Configuration for detailed setup instructions.

Multi-Agent Parameters

Agent-to-Agent (A2A) Communication

| Parameter | Description | Default | Type |
|---|---|---|---|
| a2a-enabled | Enable agent-to-agent communication | true | Boolean |
| a2a-timeout | Default delegation timeout | 30 | Seconds |
| a2a-max-hops | Maximum delegation chain depth | 5 | Number |
| a2a-retry-count | Retry attempts on failure | 3 | Number |
| a2a-queue-size | Maximum pending messages | 100 | Number |
| a2a-protocol-version | A2A protocol version | 1.0 | String |
| a2a-persist-messages | Persist A2A messages to database | false | Boolean |

Bot Reflection

| Parameter | Description | Default | Type |
|---|---|---|---|
| bot-reflection-enabled | Enable bot self-analysis | true | Boolean |
| bot-reflection-interval | Messages between reflections | 10 | Number |
| bot-reflection-prompt | Custom reflection prompt | (none) | String |
| bot-reflection-types | Reflection types to perform | ConversationQuality | Semicolon-separated |
| bot-improvement-auto-apply | Auto-apply suggested improvements | false | Boolean |
| bot-improvement-threshold | Score threshold for improvements (0-10) | 6.0 | Float |

Reflection Types

Available values for bot-reflection-types:

  • ConversationQuality - Analyze conversation quality and user satisfaction
  • ResponseAccuracy - Analyze response accuracy and relevance
  • ToolUsage - Analyze tool usage effectiveness
  • KnowledgeRetrieval - Analyze knowledge retrieval performance
  • Performance - Analyze overall bot performance

Example:

bot-reflection-enabled,true
bot-reflection-interval,10
bot-reflection-types,ConversationQuality;ResponseAccuracy;ToolUsage
bot-improvement-auto-apply,false
bot-improvement-threshold,7.0

Memory Parameters

User Memory (Cross-Bot)

| Parameter | Description | Default | Type |
|---|---|---|---|
| user-memory-enabled | Enable user-level memory | true | Boolean |
| user-memory-max-keys | Maximum keys per user | 1000 | Number |
| user-memory-default-ttl | Default time-to-live (0 = no expiry) | 0 | Seconds |

Episodic Memory (Context Compaction)

| Parameter | Description | Default | Type |
|---|---|---|---|
| episodic-memory-enabled | Enable episodic memory system | true | Boolean |
| episodic-memory-threshold | Exchanges before compaction triggers | 4 | Number |
| episodic-memory-history | Recent exchanges to keep in full | 2 | Number |
| episodic-memory-model | Model for summarization | fast | String |
| episodic-memory-max-episodes | Maximum episodes per user | 100 | Number |
| episodic-memory-retention-days | Days to retain episodes | 365 | Number |
| episodic-memory-auto-summarize | Enable automatic summarization | true | Boolean |

Episodic memory automatically manages conversation context to stay within LLM token limits. When conversation exchanges exceed episodic-memory-threshold, older messages are summarized and only the last episodic-memory-history exchanges are kept in full. See Chapter 03 - Episodic Memory for details.
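
For example, a long-running support bot might compact later and keep more recent exchanges verbatim (values illustrative):

episodic-memory-threshold,6
episodic-memory-history,3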

Model Routing Parameters

These parameters configure multi-model routing for different task types. Requires multiple llama.cpp server instances.

| Parameter | Description | Default | Type |
|---|---|---|---|
| llm-models | Available model aliases | default | Semicolon-separated |
| model-routing-strategy | Routing strategy (manual/auto/load-balanced/fallback) | auto | String |
| model-default | Default model alias | default | String |
| model-fast | Model for fast/simple tasks | (configured) | Path/String |
| model-quality | Model for quality/complex tasks | (configured) | Path/String |
| model-code | Model for code generation | (configured) | Path/String |
| model-fallback-enabled | Enable automatic fallback | true | Boolean |
| model-fallback-order | Order to try on failure | quality,fast,local | Comma-separated |

Multi-Model Example

llm-models,default;fast;quality;code
llm-url,http://localhost:8081
model-routing-strategy,auto
model-default,fast
model-fallback-enabled,true
model-fallback-order,quality,fast

Hybrid RAG Search Parameters

General Bots uses hybrid search combining dense (embedding) and sparse (BM25 keyword) search for optimal retrieval. The BM25 implementation is powered by Tantivy, a full-text search engine library similar to Apache Lucene.

| Parameter | Description | Default | Type |
|---|---|---|---|
| rag-hybrid-enabled | Enable hybrid dense+sparse search | true | Boolean |
| rag-dense-weight | Weight for semantic results | 0.7 | Float (0-1) |
| rag-sparse-weight | Weight for keyword results | 0.3 | Float (0-1) |
| rag-reranker-enabled | Enable LLM reranking | false | Boolean |
| rag-reranker-model | Model for reranking | cross-encoder/ms-marco-MiniLM-L-6-v2 | String |
| rag-reranker-top-n | Candidates for reranking | 20 | Number |
| rag-max-results | Maximum results to return | 10 | Number |
| rag-min-score | Minimum relevance score threshold | 0.0 | Float (0-1) |
| rag-rrf-k | RRF smoothing constant | 60 | Number |
| rag-cache-enabled | Enable search result caching | true | Boolean |
| rag-cache-ttl | Cache time-to-live | 3600 | Seconds |
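
To make the weights and the rag-rrf-k constant concrete, here is a minimal Python sketch of weighted reciprocal rank fusion. It assumes each retriever returns a ranked list of document IDs with ranks starting at 1; the function is illustrative, not botserver's internal API.

def rrf_fuse(dense_ids, sparse_ids, dense_weight=0.7, sparse_weight=0.3, k=60):
    # Each document accumulates weight / (k + rank) from every list it appears in
    scores = {}
    for weight, ranked in ((dense_weight, dense_ids), (sparse_weight, sparse_ids)):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + weight / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# A document ranked well by both retrievers beats one that tops only a single list
print(rrf_fuse(["a", "b", "c"], ["b", "a", "d"]))  # ['a', 'b', 'c', 'd']

Larger k values flatten the differences between ranks, so individual retrievers dominate less; the weights shift the balance between semantic and keyword results.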

BM25 Sparse Search (Tantivy)

BM25 is a keyword-based ranking algorithm that excels at finding exact term matches. It’s powered by Tantivy when the vectordb feature is enabled.

| Parameter | Description | Default | Type |
|---|---|---|---|
| bm25-enabled | Enable/disable BM25 sparse search | true | Boolean |
| bm25-k1 | Term frequency saturation (0.5-3.0 typical) | 1.2 | Float |
| bm25-b | Document length normalization (0.0-1.0) | 0.75 | Float |
| bm25-stemming | Apply word stemming (running→run) | true | Boolean |
| bm25-stopwords | Filter common words (the, a, is) | true | Boolean |

Switching Search Modes

Hybrid Search (Default - Best for most use cases)

bm25-enabled,true
rag-dense-weight,0.7
rag-sparse-weight,0.3

Uses both semantic understanding AND keyword matching. Best for general queries.

Dense Only (Semantic Search)

bm25-enabled,false
rag-dense-weight,1.0
rag-sparse-weight,0.0

Uses only embedding-based search. Faster, good for conceptual/semantic queries where exact words don’t matter.

Sparse Only (Keyword Search)

bm25-enabled,true
rag-dense-weight,0.0
rag-sparse-weight,1.0

Uses only BM25 keyword matching. Good for exact term searches, technical documentation, or when embeddings aren’t available.

BM25 Parameter Tuning

The k1 and b parameters control BM25 behavior:

  • bm25-k1 (Term Saturation): Controls how much additional term occurrences contribute to the score

    • Lower values (0.5-1.0): Diminishing returns for repeated terms
    • Higher values (1.5-2.0): More weight to documents with many term occurrences
    • Default 1.2 works well for most content
  • bm25-b (Length Normalization): Controls document length penalty

    • 0.0: No length penalty (long documents scored equally)
    • 1.0: Full length normalization (strongly penalizes long documents)
    • Default 0.75 balances length fairness

Tuning for specific content:

# For short documents (tweets, titles)
bm25-b,0.3

# For long documents (articles, manuals)
bm25-b,0.9

# For code search (exact matches important)
bm25-k1,1.5
bm25-stemming,false

Code Sandbox Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| sandbox-enabled | Enable code sandbox | true | Boolean |
| sandbox-runtime | Isolation backend (lxc/docker/firecracker/process) | lxc | String |
| sandbox-timeout | Maximum execution time | 30 | Seconds |
| sandbox-memory-mb | Memory limit in megabytes | 256 | MB |
| sandbox-cpu-percent | CPU usage limit | 50 | Percent |
| sandbox-network | Allow network access | false | Boolean |
| sandbox-python-packages | Pre-installed Python packages | (none) | Comma-separated |
| sandbox-allowed-paths | Accessible filesystem paths | /data,/tmp | Comma-separated |

Example: Python Sandbox

sandbox-enabled,true
sandbox-runtime,lxc
sandbox-timeout,60
sandbox-memory-mb,512
sandbox-cpu-percent,75
sandbox-network,false
sandbox-python-packages,numpy,pandas,requests,matplotlib
sandbox-allowed-paths,/data,/tmp,/uploads

SSE Streaming Parameters

| Parameter | Description | Default | Type |
|---|---|---|---|
| sse-enabled | Enable Server-Sent Events | true | Boolean |
| sse-heartbeat | Heartbeat interval | 30 | Seconds |
| sse-max-connections | Maximum concurrent connections | 1000 | Number |

Parameter Types

Boolean

Values: true or false (case-sensitive)

Number

Integer values, must be within valid ranges:

  • Ports: 1-65535
  • Tokens: Positive integers
  • Percentages: 0-100

Float

Decimal values:

  • Thresholds: 0.0 to 1.0
  • Weights: 0.0 to 1.0

Path

File system paths:

  • Relative: ../../../../data/model.gguf
  • Absolute: /opt/models/model.gguf

URL

Valid URLs:

  • HTTP: http://localhost:8081
  • HTTPS: https://api.example.com

String

Any text value (no quotes needed in CSV)

Email

Valid email format: user@domain.com

Hex Color

HTML color codes: #RRGGBB format

Semicolon-separated

Multiple values separated by semicolons: value1;value2;value3

Comma-separated

Multiple values separated by commas: value1,value2,value3

Required vs Optional

Always Required

  • None - all parameters have defaults or are optional

Required for Features

  • LLM: llm-model must be set
  • Email: email-from, email-server, email-user
  • Embeddings: embedding-model for knowledge base
  • Custom DB: custom-database if using external database

Configuration Precedence

  1. Built-in defaults (hardcoded)
  2. config.csv values (override defaults)
  3. Environment variables (if implemented, override config)

Special Values

  • none - Explicitly no value (for llm-key)
  • Empty string - Unset/use default
  • false - Feature disabled
  • true - Feature enabled

Performance Tuning

For Local Models

llm-server-ctx-size,8192
llm-server-n-predict,2048
llm-server-parallel,4
llm-cache,true
llm-cache-ttl,7200

For Production

llm-server-cont-batching,true
llm-cache-semantic,true
llm-cache-threshold,0.90
llm-server-parallel,8
sse-max-connections,5000

For Low Memory

llm-server-ctx-size,2048
llm-server-n-predict,512
llm-server-mlock,false
llm-server-no-mmap,false
llm-cache,false
sandbox-memory-mb,128

For Multi-Agent Systems

a2a-enabled,true
a2a-timeout,30
a2a-max-hops,5
a2a-retry-count,3
a2a-persist-messages,true
bot-reflection-enabled,true
bot-reflection-interval,10
user-memory-enabled,true

For Hybrid RAG

rag-hybrid-enabled,true
rag-dense-weight,0.7
rag-sparse-weight,0.3
rag-reranker-enabled,true
rag-max-results,10
rag-min-score,0.3
rag-cache-enabled,true
bm25-enabled,true
bm25-k1,1.2
bm25-b,0.75

For Dense-Only Search (Faster)

bm25-enabled,false
rag-dense-weight,1.0
rag-sparse-weight,0.0
rag-max-results,10

For Code Execution

sandbox-enabled,true
sandbox-runtime,lxc
sandbox-timeout,30
sandbox-memory-mb,512
sandbox-network,false
sandbox-python-packages,numpy,pandas,requests

Validation Rules

  1. Paths: Model files must exist
  2. URLs: Must be valid format
  3. Ports: Must be 1-65535
  4. Emails: Must contain @ and domain
  5. Colors: Must be valid hex format
  6. Booleans: Exactly true or false
  7. Weights: Must sum to 1.0 (e.g., rag-dense-weight + rag-sparse-weight)

LLM Configuration

Configuration for Language Model integration in botserver, supporting both local GGUF models and external API services.

Local Model Configuration

botserver is designed to work with local GGUF models by default. The minimal configuration requires only a few settings in your config.csv:

llm-key,none
llm-url,http://localhost:8081
llm-model,../../../../data/llm/DeepSeek-R3-Distill-Qwen-1.5B-Q3_K_M.gguf

Model Path

The llm-model parameter accepts relative paths like ../../../../data/llm/model.gguf, absolute paths like /opt/models/model.gguf, or model names when using external APIs like gpt-5.

Supported Model Formats

botserver supports GGUF quantized models for CPU and GPU inference. Quantization levels include Q3_K_M, Q4_K_M, and Q5_K_M for reduced memory usage with acceptable quality trade-offs, while F16 and F32 provide full precision for maximum quality.

LLM Server Configuration

Running Embedded Server

botserver can run its own LLM server for local inference:

llm-server,true
llm-server-path,botserver-stack/bin/llm/build/bin
llm-server-host,0.0.0.0
llm-server-port,8081

Server Performance Parameters

Fine-tune server performance based on your hardware capabilities:

llm-server-gpu-layers,0
llm-server-ctx-size,4096
llm-server-n-predict,1024
llm-server-parallel,6
llm-server-cont-batching,true

| Parameter | Description | Impact |
|---|---|---|
| llm-server-gpu-layers | Layers to offload to GPU | 0 = CPU only, higher = more GPU |
| llm-server-ctx-size | Context window size | More context = more memory |
| llm-server-n-predict | Max tokens to generate | Limits response length |
| llm-server-parallel | Concurrent requests | Higher = more throughput |
| llm-server-cont-batching | Continuous batching | Improves multi-user performance |

Memory Management

Memory settings control how the model interacts with system RAM:

llm-server-mlock,false
llm-server-no-mmap,false

The mlock option locks the model in RAM to prevent swapping, which improves performance but requires sufficient memory. The no-mmap option disables memory mapping and loads the entire model into RAM, using more memory but potentially improving access patterns.

Cache Configuration

Basic Cache Settings

Caching reduces repeated LLM calls for identical inputs, significantly improving response times and reducing API costs:

llm-cache,false
llm-cache-ttl,3600

Semantic Cache

Semantic caching matches similar queries, not just identical ones, providing cache hits even when users phrase questions differently:

llm-cache-semantic,true
llm-cache-threshold,0.95

The threshold parameter controls how similar queries must be to trigger a cache hit. A value of 0.95 requires 95% similarity. Lower thresholds produce more cache hits but may return less accurate cached responses.

External API Configuration

Groq and OpenAI-Compatible APIs

For cloud inference, Groq offers the fastest performance among major providers:

llm-key,gsk-your-groq-api-key
llm-url,https://api.groq.com/openai/v1
llm-model,mixtral-8x7b-32768

Local API Servers

When running your own inference server or using another local service:

llm-key,none
llm-url,http://localhost:8081
llm-model,local-model-name

Configuration Examples

Minimal Local Setup

The simplest configuration for getting started with local models:

name,value
llm-url,http://localhost:8081
llm-model,../../../../data/llm/model.gguf

High-Performance Local

Optimized for maximum throughput on capable hardware:

name,value
llm-server,true
llm-server-gpu-layers,32
llm-server-ctx-size,8192
llm-server-parallel,8
llm-server-cont-batching,true
llm-cache,true
llm-cache-semantic,true

Low-Resource Setup

Configured for systems with limited RAM or CPU:

name,value
llm-server-ctx-size,2048
llm-server-n-predict,512
llm-server-parallel,2
llm-cache,false
llm-server-mlock,false

External API

Using a cloud provider for inference:

name,value
llm-key,sk-...
llm-url,https://api.anthropic.com
llm-model,claude-sonnet-4.5
llm-cache,true
llm-cache-ttl,7200

Performance Tuning

For Responsiveness

When response speed is the priority, decrease llm-server-ctx-size and llm-server-n-predict to reduce processing time. Enable both llm-cache and llm-cache-semantic to serve repeated queries instantly.
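
A config.csv sketch matching that guidance (values illustrative):

llm-server-ctx-size,2048
llm-server-n-predict,512
llm-cache,true
llm-cache-semantic,true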

For Quality

When output quality matters most, increase llm-server-ctx-size and llm-server-n-predict to give the model more context and generation headroom. Use higher quantization models like Q5_K_M or F16 for better accuracy. Either disable semantic cache entirely or raise the threshold to avoid returning imprecise cached responses.

For Multiple Users

Supporting concurrent users requires enabling llm-server-cont-batching and increasing llm-server-parallel to handle multiple requests simultaneously. Enable caching to reduce redundant inference calls. If available, GPU offloading significantly improves throughput under load.

Model Selection Guidelines

Small Models (1-3B parameters)

Small models like DeepSeek-R3-Distill-Qwen-1.5B deliver fast responses with low memory usage. They work well for simple tasks, quick interactions, and resource-constrained environments.

Medium Models (7-13B parameters)

Medium-sized models such as Llama-2-7B and Mistral-7B provide balanced performance suitable for general-purpose applications. They require moderate memory but handle a wide range of tasks competently.

Large Models (30B+ parameters)

Large models like Llama-2-70B and Mixtral-8x7B offer the best quality for complex reasoning tasks. They require substantial memory and compute resources but excel at nuanced understanding and generation.

Troubleshooting

Model Won’t Load

If the model fails to load, first verify the file path exists and is accessible. Check that your system has sufficient RAM for the model size. Ensure the GGUF file version is compatible with your llama.cpp build.

Slow Responses

Slow generation typically indicates resource constraints. Reduce context size, enable caching to avoid redundant inference, use GPU offloading if hardware permits, or switch to a smaller quantized model.

Out of Memory

Memory errors require reducing resource consumption. Lower llm-server-ctx-size and llm-server-parallel values. Switch to more aggressively quantized models (Q3 instead of Q5). Disable llm-server-mlock to allow the OS to manage memory more flexibly.

Connection Refused

Connection errors usually indicate server configuration issues. Verify llm-server is set to true if expecting botserver to run the server. Check that the configured port is not already in use by another process. Ensure firewall rules allow connections on the specified port.

Best Practices

Start with smaller models and scale up only as needed, since larger models consume more resources without always providing proportionally better results. Enable caching for any production deployment to reduce costs and improve response times. Monitor RAM usage during operation to catch memory pressure before it causes problems. Test model responses thoroughly before deploying to production to ensure quality meets requirements. Document which models you’re using and their performance characteristics. Track changes to your config.csv in version control to maintain a history of configuration adjustments.

Configuration Management

Configuration in General Bots is designed to be simple and transparent. Each bot uses a config.csv file for settings, with additional environment variables for system-level configuration.

The config.csv File

Located in your bot’s .gbot package, this file controls all bot-specific settings using simple name-value pairs.

File Format

name,value
setting_name,setting_value
another_setting,another_value

  • Empty rows are used for visual grouping
  • No quotes needed for string values
  • Case-sensitive names
  • Comments not supported (keep it simple)

Core Configuration Sections

Server Configuration

server-host,0.0.0.0
server-port,8080
sites-root,/tmp

| Name | Description | Default | Example |
|---|---|---|---|
| server-host | Bind address for the web server | 0.0.0.0 | 0.0.0.0 |
| server-port | Port for the web interface | 8080 | 8080 |
| sites-root | Directory for generated sites | /tmp | /tmp |

LLM Configuration - Overview

For detailed LLM configuration, see the tables below. The basic settings are:

llm-key,none
llm-url,http://localhost:8081
llm-model,../../../../data/llm/DeepSeek-R3-Distill-Qwen-1.5B-Q3_K_M.gguf

Core LLM Settings

| Name | Description | Default | Example |
|---|---|---|---|
| llm-key | API key for LLM service | none | gsk-... for Groq |
| llm-url | LLM service endpoint | http://localhost:8081 | https://api.groq.com/openai/v1 |
| llm-model | Model path or name | Required | mixtral-8x7b-32768 |

Model Path Options

  • Local GGUF: ../../../../data/llm/model.gguf
  • Absolute path: /opt/models/model.gguf
  • Cloud model name: mixtral-8x7b-32768 (for Groq)

Supported Formats

  • GGUF: Quantized models (Q3_K_M, Q4_K_M, Q5_K_M, Q8_0)
  • API Models: Any Groq or OpenAI-compatible model

LLM Cache Settings

llm-cache,false
llm-cache-ttl,3600
llm-cache-semantic,true
llm-cache-threshold,0.95

| Name | Description | Default | Type |
|---|---|---|---|
| llm-cache | Enable response caching | false | Boolean |
| llm-cache-ttl | Cache time-to-live in seconds | 3600 | Number |
| llm-cache-semantic | Use semantic similarity | true | Boolean |
| llm-cache-threshold | Similarity threshold for cache hits | 0.95 | Float |

Cache Strategy Tips:

  • Enable for production to reduce API costs
  • Semantic cache finds similar (not just identical) queries
  • Lower threshold (0.90) = more hits but less precision
  • Higher threshold (0.98) = fewer hits but exact matches

Context Management

episodic-memory-threshold,4
episodic-memory-history,2

| Name | Description | Default | Range |
|---|---|---|---|
| episodic-memory-threshold | Messages before compaction | 4 | 1-10 |
| episodic-memory-history | Messages to keep in history | Not set | 1-20 |

Embedding Configuration

embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf

| Name | Description | Default | Type |
|---|---|---|---|
| embedding-url | Embedding service endpoint | http://localhost:8082 | URL |
| embedding-model | Path to embedding model | Required for KB | Path |

LLM Server Settings (When Self-Hosting)

llm-server,true
llm-server-path,botserver-stack/bin/llm/build/bin
llm-server-host,0.0.0.0
llm-server-port,8081
llm-server-gpu-layers,0
llm-server-n-moe,0
llm-server-ctx-size,4096
llm-server-n-predict,1024
llm-server-parallel,6
llm-server-cont-batching,true
llm-server-mlock,false
llm-server-no-mmap,false

Performance Parameters

| Parameter | Description | Default | Impact |
|---|---|---|---|
| llm-server-gpu-layers | Layers to offload to GPU | 0 | 0 = CPU only, higher = more GPU usage |
| llm-server-n-moe | MoE experts count | 0 | Enables 120B+ models on consumer GPUs |
| llm-server-ctx-size | Context window (tokens) | 4096 | More context = more memory |
| llm-server-n-predict | Max output tokens | 1024 | Limits response length |
| llm-server-parallel | Concurrent requests | 6 | Higher = more throughput |
| llm-server-cont-batching | Continuous batching | true | Better multi-user performance |
| llm-server-mlock | Lock model in RAM | false | Prevents swapping to disk |
| llm-server-no-mmap | Disable memory mapping | false | Uses more RAM but may be faster |

Hardware-Specific Settings

RTX 3090 (24GB VRAM)

  • Set llm-server-gpu-layers to 35-45 for 7B-32B models
  • Enable llm-server-n-moe 2-4 for 120B+ models
  • Can run DeepSeek-V3 with proper MoE settings

RTX 4070/Ti (12-16GB)

  • Set llm-server-gpu-layers to 25-30 for 7B-14B models
  • Keep llm-server-ctx-size at 4096 to save VRAM

CPU-Only Setup

  • Keep llm-server-gpu-layers at 0
  • Enable llm-server-mlock to prevent swapping
  • Set llm-server-parallel to CPU cores minus 2

Email Configuration

email-from,from@domain.com
email-server,mail.domain.com
email-port,587
email-user,user@domain.com
email-pass,password

All email parameters are required if you want to send emails from your bot.

Custom Database (Optional)

custom-server,localhost
custom-port,5432
custom-database,mycustomdb
custom-username,dbuser
custom-password,dbpass

Configuration Examples

Minimal Configuration

name,value
server-port,8080
llm-url,http://localhost:8081
llm-model,../../../../data/llm/model.gguf

Production Configuration (Groq Cloud)

name,value
,
server-host,0.0.0.0
server-port,443
sites-root,/var/www/sites
,
# Groq is 10x faster than traditional cloud providers
llm-key,gsk-your-groq-api-key
llm-url,https://api.groq.com/openai/v1
llm-model,mixtral-8x7b-32768
,
llm-cache,true
llm-cache-ttl,7200
llm-cache-semantic,true
llm-cache-threshold,0.95
,
episodic-memory-threshold,6
,
email-from,bot@company.com
email-server,smtp.company.com
email-port,587
email-user,bot@company.com
email-pass,secure-password

Local Development (Self-Hosted)

name,value
,
server-port,3000
,
# Run your own LLM server
llm-server,true
llm-server-gpu-layers,35
llm-server-ctx-size,8192
llm-server-n-predict,2048
llm-model,../../../../data/llm/DeepSeek-R3-Distill-Qwen-7B-Q4_K_M.gguf
,
# Disable cache for development
llm-cache,false
episodic-memory-threshold,2

Configuration Priority

Settings are applied in this order (later overrides earlier):

  1. Default values in code
  2. config.csv settings

Best Practices

  1. Keep it Simple: Only configure what you need to change
  2. Use Groups: Empty rows make the file readable
  3. Test Locally: Verify settings before production
  4. Secure Secrets: Use environment variables for passwords in production
  5. Document Changes: Comment significant changes in version control

Validation

The system validates configuration on startup:

  • Missing required values cause clear error messages
  • Invalid URLs or paths are detected early
  • Port conflicts are reported
  • Model file existence is verified

Hot Reload

Some settings support hot reload without restart:

  • Cache settings
  • Context parameters
  • Email configuration

Others require restart:

  • Server ports
  • LLM model changes
  • Database connections

Troubleshooting

Common Issues

Port Already in Use

  • Change server-port to an available port
  • Check for other services on the same port

Model Not Found

  • Verify the path in llm-model is correct
  • Ensure the GGUF file exists
  • Use absolute paths if relative paths fail

LLM Server Won’t Start

  • Check llm-server-gpu-layers doesn’t exceed your GPU capability
  • Reduce llm-server-ctx-size if out of memory
  • Set llm-server-gpu-layers to 0 for CPU-only
  • Verify model file exists at the specified path
  • Check available VRAM with nvidia-smi (if using GPU)

Cache Not Working

  • Ensure llm-cache is set to true
  • Check llm-cache-threshold isn’t too high (0.95 is usually good)
  • Verify Valkey/Redis is running

Quick Model Recommendations

Best Models by Hardware

24GB+ VRAM (RTX 3090, 4090)

  • DeepSeek-V3 (with MoE enabled)
  • Qwen2.5-32B-Instruct-Q4_K_M
  • DeepSeek-R3-Distill-Qwen-14B (runs fast with room to spare)

12-16GB VRAM (RTX 4070, 4070Ti)

  • DeepSeek-R3-Distill-Llama-8B
  • Qwen2.5-14B-Q4_K_M
  • Mistral-7B-Instruct-Q5_K_M

8GB VRAM or CPU-Only

  • DeepSeek-R3-Distill-Qwen-1.5B
  • Phi-3-mini-4k-instruct
  • Qwen2.5-3B-Instruct-Q5_K_M

Cloud API (Fastest)

  • Groq: mixtral-8x7b-32768
  • Groq: llama-3.1-70b-versatile

Summary

General Bots configuration is intentionally simple - a CSV file with name-value pairs. No complex YAML, no nested JSON, just straightforward settings that anyone can edit. Start with minimal configuration and add settings as needed.

For LLM configuration, the key decision is local vs cloud:

  • Local: Full control, no recurring costs, complete privacy
  • Cloud (Groq): 10x faster inference, pay-per-use, no hardware needed

Drive Integration

Multimodal Configuration

General Bots integrates with botmodels—a Python service for multimodal AI tasks—to enable image generation, video creation, audio synthesis, and vision capabilities directly from BASIC scripts.


Architecture

┌─────────────┐     HTTPS      ┌─────────────┐
│  botserver  │ ────────────▶  │  botmodels  │
│   (Rust)    │                │  (Python)   │
└─────────────┘                └─────────────┘
      │                              │
      │ BASIC Keywords               │ AI Models
      │ - IMAGE                      │ - Stable Diffusion
      │ - VIDEO                      │ - Zeroscope
      │ - AUDIO                      │ - TTS/Whisper
      │ - SEE                        │ - BLIP2

When a BASIC script calls a multimodal keyword, botserver forwards the request to botmodels, which runs the appropriate AI model and returns the generated content.

Configuration

Add these settings to your bot’s config.csv file to enable multimodal capabilities.

BotModels Service

| Key | Default | Description |
|---|---|---|
| botmodels-enabled | false | Enable botmodels integration |
| botmodels-host | 0.0.0.0 | Host address for botmodels service |
| botmodels-port | 8085 | Port for botmodels service |
| botmodels-api-key | | API key for authentication |
| botmodels-https | false | Use HTTPS for connection |

Image Generation

| Key | Default | Description |
|---|---|---|
| image-generator-model | | Path to image generation model |
| image-generator-steps | 4 | Inference steps (more = higher quality, slower) |
| image-generator-width | 512 | Output image width in pixels |
| image-generator-height | 512 | Output image height in pixels |
| image-generator-gpu-layers | 20 | Layers to offload to GPU |
| image-generator-batch-size | 1 | Batch size for generation |

Video Generation

| Key | Default | Description |
|---|---|---|
| video-generator-model | | Path to video generation model |
| video-generator-frames | 24 | Number of frames to generate |
| video-generator-fps | 8 | Output frames per second |
| video-generator-width | 320 | Output video width in pixels |
| video-generator-height | 576 | Output video height in pixels |
| video-generator-gpu-layers | 15 | Layers to offload to GPU |
| video-generator-batch-size | 1 | Batch size for generation |

Example Configuration

key,value
botmodels-enabled,true
botmodels-host,0.0.0.0
botmodels-port,8085
botmodels-api-key,your-secret-key
botmodels-https,false
image-generator-model,../../../../data/diffusion/sd_turbo_f16.gguf
image-generator-steps,4
image-generator-width,512
image-generator-height,512
image-generator-gpu-layers,20
video-generator-model,../../../../data/diffusion/zeroscope_v2_576w
video-generator-frames,24
video-generator-fps,8

BASIC Keywords

Once configured, these keywords become available in your scripts.

IMAGE

Generate an image from a text prompt:

file = IMAGE "a sunset over mountains with purple clouds"
SEND FILE TO user, file

The keyword returns a path to the generated image file.

VIDEO

Generate a video from a text prompt:

file = VIDEO "a rocket launching into space"
SEND FILE TO user, file

Video generation is more resource-intensive than image generation. Expect longer processing times.

AUDIO

Generate speech audio from text:

file = AUDIO "Hello, welcome to our service!"
SEND FILE TO user, file

SEE

Analyze an image or video and get a description:

' Describe an image
caption = SEE "/path/to/image.jpg"
TALK caption

' Describe a video
description = SEE "/path/to/video.mp4"
TALK description

The SEE keyword uses vision models to understand visual content and return natural language descriptions.

Starting BotModels

Before using multimodal features, start the botmodels service:

cd botmodels
python -m uvicorn src.main:app --host 0.0.0.0 --port 8085

For production with HTTPS:

python -m uvicorn src.main:app \
    --host 0.0.0.0 \
    --port 8085 \
    --ssl-keyfile key.pem \
    --ssl-certfile cert.pem

BotModels API Endpoints

The botmodels service exposes these REST endpoints:

| Endpoint | Method | Description |
|---|---|---|
| /api/image/generate | POST | Generate image from prompt |
| /api/video/generate | POST | Generate video from prompt |
| /api/speech/generate | POST | Generate speech from text |
| /api/speech/totext | POST | Transcribe audio to text |
| /api/vision/describe | POST | Describe an image |
| /api/vision/describe_video | POST | Describe a video |
| /api/vision/vqa | POST | Visual question answering |
| /api/health | GET | Health check |

All endpoints except /api/health require the X-API-Key header for authentication.
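For a quick smoke test outside of BASIC, you can call an endpoint directly with curl. A minimal sketch, assuming a JSON body with a prompt field - the exact request schema is not documented here, so verify it against the botmodels source:

# Hypothetical request body - field names are illustrative
curl -X POST http://localhost:8085/api/image/generate \
  -H "X-API-Key: your-secret-key" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a sunset over mountains with purple clouds"}'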

Model Paths

Configure model paths relative to the botmodels service directory. Typical layout:

data/
├── diffusion/
│   ├── sd_turbo_f16.gguf          # Stable Diffusion
│   └── zeroscope_v2_576w/         # Zeroscope video
├── tts/
│   └── model.onnx                 # Text-to-speech
├── whisper/
│   └── model.bin                  # Speech-to-text
└── vision/
    └── blip2/                     # Vision model

GPU Acceleration

Both image and video generation benefit significantly from GPU acceleration. Configure GPU layers based on your hardware:

| GPU VRAM | Recommended GPU Layers |
|---|---|
| 4GB | 8-12 |
| 8GB | 15-20 |
| 12GB+ | 25-35 |

Lower GPU layers if you experience out-of-memory errors.
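For example, if generation fails with out-of-memory errors, a config.csv override like the following trades speed for stability (the values are illustrative starting points, not tuned recommendations):

key,value
image-generator-gpu-layers,10
video-generator-gpu-layers,8
video-generator-batch-size,1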

Troubleshooting

“BotModels is not enabled”

Set botmodels-enabled=true in your config.csv.

Connection refused

Verify botmodels service is running and check host/port configuration. Test connectivity:

curl http://localhost:8085/api/health

Authentication failed

Ensure botmodels-api-key in config.csv matches the API_KEY environment variable in botmodels.

Model not found

Verify model paths are correct and models are downloaded to the expected locations.

Out of memory

Reduce gpu-layers or batch-size. Video generation is particularly memory-intensive.

Security Considerations

Use HTTPS in production. Set botmodels-https=true and configure SSL certificates on the botmodels service.

Use strong API keys. Generate cryptographically random keys for the botmodels-api-key setting.

Restrict network access. Limit botmodels service access to trusted hosts only.

Consider GPU isolation. Run botmodels on a dedicated GPU server if sharing resources with other services.

Performance Tips

Image generation runs fastest with SD Turbo models and 4-8 inference steps. More steps improve quality but increase generation time linearly.

Video generation is the most resource-intensive operation. Keep frame counts low (24-48) for reasonable response times.

Batch processing improves throughput when generating multiple items. Increase batch-size if you have sufficient GPU memory.

Cache generated content when appropriate. If multiple users request similar content, consider storing results.


Secrets Management

General Bots uses a layered approach to configuration and secrets management. The goal is to keep .env minimal - containing only Vault connection info - while all sensitive data is stored securely in Vault.

Configuration Layers

┌─────────────────────────────────────────────────────────────────────────────┐
│                         Configuration Hierarchy                              │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────┐     ┌─────────────┐     ┌─────────────┐     ┌───────────┐ │
│  │    .env     │     │   Zitadel   │     │   Vault     │     │config.csv │ │
│  │(Vault ONLY) │     │  (Identity) │     │  (Secrets)  │     │(Bot Config)│ │
│  └──────┬──────┘     └──────┬──────┘     └──────┬──────┘     └─────┬─────┘ │
│         │                   │                   │                   │       │
│         ▼                   ▼                   ▼                   ▼       │
│  • VAULT_ADDR        • User accounts     • Directory URL       • Bot params │
│  • VAULT_TOKEN       • Organizations     • Database creds      • LLM config │
│                      • Projects          • API keys            • Features   │
│                      • Applications      • Drive credentials   • Behavior   │
│                      • MFA settings      • Encryption keys                 │
│                      • SSO/OAuth         • ALL service secrets             │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

What Goes Where?

.env (Vault Connection ONLY)

The .env file should contain ONLY Vault connection info:

# .env - ONLY Vault connection
# Everything else comes from Vault!

VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=hvs.your-root-token

That’s it. Two variables only.

Why so minimal?

  • .env files can be accidentally committed to git
  • Environment variables may appear in logs
  • Reduces attack surface if server is compromised
  • Single point of secret management (Vault)
  • Easy rotation - change in Vault, not in files

Zitadel (Identity & Access)

Zitadel manages user-facing identity:

| What | Example |
|---|---|
| User accounts | john@example.com |
| Organizations | Acme Corp |
| Projects | Production Bot |
| Applications | Web UI, Mobile App |
| MFA settings | TOTP, SMS, WebAuthn |
| SSO providers | Google, Microsoft |
| User metadata | Department, Role |

Not stored in Zitadel:

  • Service passwords
  • API keys
  • Encryption keys

Vault (Service Secrets)

Vault manages machine-to-machine secrets:

| Path | Contents |
|---|---|
| gbo/drive | MinIO access key and secret |
| gbo/tables | PostgreSQL username and password |
| gbo/cache | Redis password |
| gbo/llm | OpenAI, Anthropic, Groq API keys |
| gbo/encryption | Master encryption key, data keys |
| gbo/email | SMTP credentials |
| gbo/meet | LiveKit API key and secret |
| gbo/alm | Forgejo admin password, runner token |

config.csv (Bot Configuration)

The bot’s config.csv contains non-sensitive configuration:

# Bot behavior - NOT secrets
llm-provider,anthropic
llm-model,claude-sonnet-4.5
llm-temperature,0.7
llm-max-tokens,4096

# Feature flags
feature-voice-enabled,true
feature-file-upload,true

# Vault references for sensitive values
llm-api-key,vault:gbo/llm/openai_key

Note: Most service credentials (database, drive, cache) are fetched automatically from Vault at startup. You only need vault: references in config.csv for bot-specific secrets like LLM API keys.

How Secrets Flow

At Startup

1. botserver starts
2. Reads .env for VAULT_ADDR and VAULT_TOKEN (only 2 variables)
3. Connects to Vault
4. Fetches ALL service credentials:
   - gbo/directory → Zitadel URL, client_id, client_secret
   - gbo/tables → Database host, port, username, password
   - gbo/drive → MinIO endpoint, accesskey, secret
   - gbo/cache → Redis host, port, password
   - gbo/llm → API keys for all providers
   - gbo/encryption → Master encryption keys
5. Connects to all services using Vault credentials
6. Reads config.csv for bot configuration
7. For keys referencing Vault (vault:path/key):
   - Fetches from Vault automatically
8. System ready

At Runtime

1. User sends message
2. Bot processes, needs LLM
3. Reads config.csv: llm-api-key = vault:gbo/llm/openai_key
4. Fetches from Vault (cached for performance)
5. Calls OpenAI API
6. Returns response
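The same steps can be reproduced by hand with the Vault CLI and curl - illustrative only, since botserver performs this internally and caches the key:

# Resolve vault:gbo/llm/openai_key manually
OPENAI_KEY=$(vault kv get -field=openai_key gbo/llm)

# Call the provider with the fetched key (model name is illustrative)
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'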

Setting Up Vault

Initial Setup

When you run ./botserver install secrets, it:

  1. Downloads and installs Vault
  2. Initializes with a single unseal key
  3. Creates initial secret paths
  4. Outputs root token to conf/vault/init.json
# Check Vault status
./botserver status secrets

# View init credentials (protect this file!)
cat botserver-stack/conf/vault/init.json

Storing Secrets

Use the Vault CLI or API:

# Directory (Zitadel) - includes URL, no longer in .env
vault kv put gbo/directory \
  url=https://localhost:9000 \
  project_id=your-project-id \
  client_id=your-client-id \
  client_secret=your-client-secret

# Database - includes host/port, no longer in .env
vault kv put gbo/tables \
  host=localhost \
  port=5432 \
  database=botserver \
  username=gbuser \
  password=secure-password

# Drive (MinIO)
vault kv put gbo/drive \
  endpoint=https://localhost:9000 \
  accesskey=minioadmin \
  secret=minioadmin123

# Cache (Redis)
vault kv put gbo/cache \
  host=localhost \
  port=6379 \
  password=redis-secret

# LLM API keys
vault kv put gbo/llm \
  openai_key=sk-xxxxx \
  anthropic_key=sk-ant-xxxxx \
  groq_key=gsk_xxxxx \
  deepseek_key=sk-xxxxx

# Encryption keys
vault kv put gbo/encryption \
  master_key=your-32-byte-key

# Vector database (Qdrant)
vault kv put gbo/vectordb \
  url=https://localhost:6334 \
  api_key=optional-api-key

# Observability (InfluxDB)
vault kv put gbo/observability \
  url=http://localhost:8086 \
  org=pragmatismo \
  bucket=metrics \
  token=your-influx-token

Automatic Management

Secrets are managed automatically - you don’t need a UI for day-to-day operations:

| Action | How It Works |
|---|---|
| Service startup | Fetches credentials from Vault |
| Key rotation | Update in Vault, services reload |
| New bot deployment | Inherits organization secrets |
| LLM provider change | Update config.csv, key fetched automatically |

Emergency Access

For emergency situations (lost credentials, key rotation), admins can:

  1. Access Vault UI: https://localhost:8200/ui
  2. Use Vault CLI: vault kv get gbo/llm
  3. Check init.json: Contains unseal key and root token
# Emergency: unseal Vault after restart
UNSEAL_KEY=$(cat botserver-stack/conf/vault/init.json | jq -r '.unseal_keys_b64[0]')
vault operator unseal $UNSEAL_KEY

Vault Auto-Unseal Options

When Vault restarts (server reboot, container restart), it starts in a sealed state and cannot serve secrets until unsealed. This section covers 4 local options for auto-unseal without depending on big tech cloud providers.

Comparison Table

| Option | Security | Cost | Complexity | Best For |
|---|---|---|---|---|
| Secrets File | ⭐⭐⭐ Medium | Free | Low | Development, small production |
| TPM | ⭐⭐⭐⭐ High | Free (if hardware has TPM) | Medium | Servers with TPM 2.0 |
| HSM | ⭐⭐⭐⭐⭐ Highest | $500-$2000+ | High | Enterprise, compliance |
| Transit (2nd Vault) | ⭐⭐⭐⭐ High | Free | Medium | Multi-server setups |

Option 1: Secrets File (Default)

Store unseal keys in a separate file with restricted permissions. This is the default for botserver.

How it works:

  • Unseal keys stored in /opt/gbo/secrets/vault-unseal-keys
  • File has chmod 600 (root only)
  • botserver reads this file at startup to auto-unseal
  • Keys are never logged or exposed

Setup:

# Create secrets directory
mkdir -p /opt/gbo/secrets
chmod 700 /opt/gbo/secrets

# After vault init, save unseal keys (replace with your actual keys)
cat > /opt/gbo/secrets/vault-unseal-keys << 'EOF'
VAULT_UNSEAL_KEY_1=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
VAULT_UNSEAL_KEY_2=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
VAULT_UNSEAL_KEY_3=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF

chmod 600 /opt/gbo/secrets/vault-unseal-keys
chown root:root /opt/gbo/secrets/vault-unseal-keys

In your .env:

VAULT_ADDR=http://10.16.164.168:8200
VAULT_TOKEN=<root-token>
VAULT_UNSEAL_KEYS_FILE=/opt/gbo/secrets/vault-unseal-keys

Security considerations:

  • ✅ Separate from .env (which might be in git, logs)
  • ✅ Only root can read
  • ⚠️ Anyone with root access can unseal
  • ⚠️ Backup this file securely (encrypted)
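A minimal sketch of the auto-unseal step itself, assuming the key file format shown above (botserver does the equivalent internally at startup):

#!/bin/bash
# Unseal Vault using keys from the restricted secrets file
set -euo pipefail
source /opt/gbo/secrets/vault-unseal-keys

for key in "$VAULT_UNSEAL_KEY_1" "$VAULT_UNSEAL_KEY_2" "$VAULT_UNSEAL_KEY_3"; do
  vault operator unseal "$key"
done

vault status   # should now report Sealed: false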

Option 2: TPM (Trusted Platform Module)

Use server’s TPM hardware chip to store unseal keys. Keys never leave the hardware.

Requirements:

  • TPM 2.0 chip (most modern servers have this)
  • Linux with tpm2-tools installed

Check if your server has TPM:

# Check for TPM device
ls -la /dev/tpm*

# Install TPM tools
apt install tpm2-tools

# Check TPM status
tpm2_getcap properties-fixed

Setup Vault with TPM seal:

# /opt/gbo/conf/vault/config.hcl
seal "pkcs11" {
  lib            = "/usr/lib/x86_64-linux-gnu/libtpm2_pkcs11.so"
  slot           = "0"
  pin            = "userpin"
  key_label      = "vault-unseal"
  hmac_key_label = "vault-hmac"
}

storage "file" {
  path = "/opt/gbo/data/vault"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}

Cost: Free (hardware already in server)

Security considerations:

  • ✅ Keys never leave TPM hardware
  • ✅ Cannot be extracted even with root access
  • ✅ Tied to physical server
  • ⚠️ If server dies, keys are lost (need backup strategy)

Option 3: HSM (Hardware Security Module)

Dedicated hardware device for cryptographic operations. Highest security for enterprise/compliance.

Popular HSM Options:

| Device | Price | Form Factor | Best For |
|---|---|---|---|
| YubiHSM 2 | ~$650 | USB stick | Small business, startups |
| Nitrokey HSM 2 | ~$109 | USB stick | Budget-conscious |
| Thales Luna | $5,000-$20,000 | PCIe/Network | Enterprise |
| AWS CloudHSM | ~$1.50/hr | Cloud | Hybrid setups |
| SoftHSM | Free | Software | Testing only |

Setup with YubiHSM 2:

# Install YubiHSM connector
apt install yubihsm-connector yubihsm-shell

# Start connector
systemctl enable yubihsm-connector
systemctl start yubihsm-connector
# /opt/gbo/conf/vault/config.hcl
seal "pkcs11" {
  lib         = "/usr/lib/x86_64-linux-gnu/libyubihsm_pkcs11.so"
  slot        = "0"
  pin         = "0001password"
  key_label   = "vault-unseal-key"
  mechanism   = "0x1085"  # CKM_SHA256_HMAC
}

Setup with Nitrokey HSM 2 (Budget Option):

# Install OpenSC
apt install opensc

# Initialize Nitrokey
sc-hsm-tool --initialize --so-pin 3537363231383830 --pin 648219

# Create key for Vault
pkcs11-tool --module /usr/lib/opensc-pkcs11.so --login --pin 648219 \
  --keypairgen --key-type EC:secp256k1 --label vault-key
# /opt/gbo/conf/vault/config.hcl
seal "pkcs11" {
  lib       = "/usr/lib/opensc-pkcs11.so"
  slot      = "0"
  pin       = "648219"
  key_label = "vault-key"
}

Security considerations:

  • ✅ FIPS 140-2 certified (compliance)
  • ✅ Tamper-resistant hardware
  • ✅ Keys cannot be extracted
  • ✅ Audit logging built-in
  • ⚠️ Higher cost
  • ⚠️ Physical device management

Option 4: Transit Auto-Unseal (Second Vault)

Use a second “unsealer” Vault instance to unseal the main one. Both can be local.

Architecture:

┌─────────────────┐      unseals      ┌─────────────────┐
│  Unsealer Vault │ ───────────────► │   Main Vault    │
│  (minimal data) │                   │ (all secrets)   │
│  manual unseal  │                   │  auto-unseal    │
└─────────────────┘                   └─────────────────┘

Setup Unsealer Vault:

# Create separate container for unsealer
botserver install vault --container --tenant unsealer

# Initialize unsealer (manual unseal - use Shamir)
lxc exec unsealer-vault -- /opt/gbo/bin/vault operator init \
  -key-shares=5 -key-threshold=3

# Enable transit secrets engine
lxc exec unsealer-vault -- /opt/gbo/bin/vault secrets enable transit

# Create auto-unseal key
lxc exec unsealer-vault -- /opt/gbo/bin/vault write -f transit/keys/autounseal

Configure Main Vault to use Transit:

# /opt/gbo/conf/vault/config.hcl (main vault)
seal "transit" {
  address         = "http://unsealer-vault-ip:8200"
  token           = "unsealer-vault-token"
  disable_renewal = "false"

  key_name   = "autounseal"
  mount_path = "transit/"
}

Security considerations:

  • ✅ No cloud dependency
  • ✅ Separation of concerns
  • ✅ Unsealer can be on separate network
  • ⚠️ Still need to unseal the unsealer manually (or use TPM/HSM for it)
  • ⚠️ More infrastructure to manage

Recommendation by Use Case

| Scenario | Recommended Option |
|---|---|
| Development/Testing | Secrets File |
| Single Server Production | TPM (if available) or Secrets File |
| Compliance Required (PCI, HIPAA) | HSM (YubiHSM 2 or Nitrokey) |
| Multi-Server Cluster | Transit Auto-Unseal |
| Enterprise (budget available) | Thales Luna HSM |
| Budget-Conscious Production | Nitrokey HSM 2 (~$109) |

Quick Cost Summary

| Solution | One-Time Cost | Monthly Cost |
|---|---|---|
| Secrets File | $0 | $0 |
| TPM | $0 (built-in) | $0 |
| Nitrokey HSM 2 | ~$109 | $0 |
| YubiHSM 2 | ~$650 | $0 |
| Thales Luna (Network) | $15,000+ | Support contract |
| AWS CloudHSM | $0 | ~$1,100/month |
| Azure Dedicated HSM | $0 | ~$4,500/month |

Note: All 4 options work completely locally without internet and without depending on AWS, Azure, or Google Cloud. You maintain full control of your keys.

Migrating from Environment Variables

If you’re currently using environment variables:

Before (Old Way)

# .env - TOO MANY SECRETS!
DATABASE_URL=postgres://user:password@localhost/db
DIRECTORY_URL=https://localhost:9000
DIRECTORY_CLIENT_ID=your-client-id
DIRECTORY_CLIENT_SECRET=your-client-secret
REDIS_PASSWORD=redis-secret
OPENAI_API_KEY=sk-xxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxx
DRIVE_ACCESSKEY=minio
DRIVE_SECRET=minio123
ENCRYPTION_KEY=super-secret-key

After (With Vault)

# .env - ONLY VAULT CONNECTION
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=hvs.xxxxx
# EVERYTHING in Vault
vault kv put gbo/directory \
  url=https://localhost:9000 \
  project_id=12345 \
  client_id=xxx \
  client_secret=xxx

vault kv put gbo/tables \
  host=localhost \
  port=5432 \
  database=botserver \
  username=user \
  password=password

vault kv put gbo/cache \
  host=localhost \
  port=6379 \
  password=redis-secret

vault kv put gbo/llm \
  openai_key=sk-xxxxx \
  anthropic_key=sk-ant-xxxxx

vault kv put gbo/drive \
  endpoint=https://localhost:9000 \
  accesskey=minio \
  secret=minio123

vault kv put gbo/encryption \
  master_key=super-secret-key

Migration Script

#!/bin/bash
# migrate-to-vault.sh

# Read existing .env
source .env

# Parse DATABASE_URL if present
if [ -n "$DATABASE_URL" ]; then
  # postgres://user:pass@host:port/db
  DB_USER=$(echo $DATABASE_URL | sed -n 's|postgres://\([^:]*\):.*|\1|p')
  DB_PASS=$(echo $DATABASE_URL | sed -n 's|postgres://[^:]*:\([^@]*\)@.*|\1|p')
  DB_HOST=$(echo $DATABASE_URL | sed -n 's|.*@\([^:]*\):.*|\1|p')
  DB_PORT=$(echo $DATABASE_URL | sed -n 's|.*:\([0-9]*\)/.*|\1|p')
  DB_NAME=$(echo $DATABASE_URL | sed -n 's|.*/\(.*\)|\1|p')
fi

# Store everything in Vault
vault kv put gbo/directory \
  url="${DIRECTORY_URL:-https://localhost:9000}" \
  project_id="${DIRECTORY_PROJECT_ID:-}" \
  client_id="${ZITADEL_CLIENT_ID:-}" \
  client_secret="${ZITADEL_CLIENT_SECRET:-}"

vault kv put gbo/tables \
  host="${DB_HOST:-localhost}" \
  port="${DB_PORT:-5432}" \
  database="${DB_NAME:-botserver}" \
  username="${DB_USER:-gbuser}" \
  password="${DB_PASS:-}"

vault kv put gbo/cache \
  host="${REDIS_HOST:-localhost}" \
  port="${REDIS_PORT:-6379}" \
  password="${REDIS_PASSWORD:-}"

vault kv put gbo/llm \
  openai_key="${OPENAI_API_KEY:-}" \
  anthropic_key="${ANTHROPIC_API_KEY:-}" \
  groq_key="${GROQ_API_KEY:-}" \
  deepseek_key="${DEEPSEEK_API_KEY:-}"

vault kv put gbo/drive \
  endpoint="${DRIVE_ENDPOINT:-https://localhost:9000}" \
  accesskey="${DRIVE_ACCESSKEY:-}" \
  secret="${DRIVE_SECRET:-}"

vault kv put gbo/encryption \
  master_key="${ENCRYPTION_KEY:-}"

# Clean up .env - ONLY Vault connection
cat > .env << EOF
# General Bots - Vault Connection Only
# All other secrets are stored in Vault

VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=$VAULT_TOKEN
EOF

echo "Migration complete!"
echo ".env now contains only Vault connection."
echo "All secrets moved to Vault."

Using Vault References in config.csv

Reference Vault secrets in your bot’s config.csv:

# Direct value (non-sensitive)
llm-provider,anthropic
llm-model,claude-sonnet-4.5
llm-temperature,0.7

# Vault reference (sensitive)
llm-api-key,vault:gbo/llm/openai_key

# Multiple keys from same path
drive-accesskey,vault:gbo/drive/accesskey
drive-secret,vault:gbo/drive/secret

# Per-bot secrets (for multi-tenant)
custom-api-key,vault:gbo/bots/mybot/api_key

Syntax

vault:<path>/<key>
  • path: Vault KV path (e.g., gbo/llm)
  • key: Specific key within the secret (e.g., openai_key)
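To confirm that a reference points at a real secret, read the same path and key with the Vault CLI:

# Should print the value referenced by llm-api-key,vault:gbo/llm/openai_key
vault kv get -field=openai_key gbo/llm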

Security Best Practices

1. Protect init.json

# Set restrictive permissions
chmod 600 botserver-stack/conf/vault/init.json

# Consider encrypting or moving off-server
gpg -c init.json
scp init.json.gpg secure-backup-server:
rm init.json

2. Use Token Policies

Create limited tokens for applications:

# gbo-readonly.hcl
path "gbo/*" {
  capabilities = ["read", "list"]
}
vault policy write gbo-readonly gbo-readonly.hcl
vault token create -policy=gbo-readonly -ttl=24h

3. Enable Audit Logging

vault audit enable file file_path=/opt/gbo/logs/vault-audit.log

4. Rotate Secrets Regularly

# Rotate LLM keys
vault kv put gbo/llm \
  openai_key=sk-new-key \
  anthropic_key=sk-ant-new-key

# botserver will pick up new keys automatically (cache TTL)

5. Backup Vault Data

# Snapshot Vault data
vault operator raft snapshot save backup.snap

# Or backup the data directory
tar -czf vault-backup.tar.gz botserver-stack/data/vault/

No UI Needed

You don’t need to expose a UI for secrets management because:

  1. Automatic at runtime: Secrets are fetched automatically
  2. config.csv for changes: Update bot config, not secrets
  3. Vault UI for emergencies: Available at https://localhost:8200/ui
  4. CLI for automation: Scripts can manage secrets

When Admins Need Access

| Situation | Solution |
|---|---|
| Add new LLM provider | vault kv put gbo/llm new_key=xxx |
| Rotate compromised key | Update in Vault, services auto-reload |
| Check what's stored | vault kv get gbo/llm or Vault UI |
| Debug connection issues | Check Vault logs and service logs |
| Disaster recovery | Use init.json to unseal and recover |

Relationship Summary

┌─────────────────────────────────────────────────────────────────┐
│                           .env                                  │
│              VAULT_ADDR + VAULT_TOKEN (only!)                   │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                          Vault                                  │
│    "Give me all service credentials and connection info"        │
│                                                                 │
│  gbo/directory → Zitadel URL, credentials                       │
│  gbo/tables    → Database connection + credentials              │
│  gbo/drive     → MinIO endpoint + credentials                   │
│  gbo/cache     → Redis connection + password                    │
│  gbo/llm       → All LLM API keys                               │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                       botserver                                 │
│         Connects to all services using Vault secrets            │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                        User Request                             │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                        Zitadel                                  │
│              "Who is this user? Are they allowed?"              │
│              (Credentials from Vault at startup)                │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                       config.csv                                │
│              "What LLM should I use? What model?"               │
│              (Non-sensitive bot configuration)                   │
└─────────────────────────────┬───────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│                      LLM Provider                               │
│              (API key from Vault at startup)                    │
└─────────────────────────────────────────────────────────────────┘

Vault Paths Reference

| Path | Contents |
|---|---|
| gbo/directory | url, project_id, client_id, client_secret |
| gbo/tables | host, port, database, username, password |
| gbo/drive | endpoint, accesskey, secret |
| gbo/cache | host, port, password |
| gbo/llm | openai_key, anthropic_key, groq_key, deepseek_key, mistral_key |
| gbo/encryption | master_key, data_key |
| gbo/email | host, username, password |
| gbo/meet | url, api_key, api_secret |
| gbo/alm | url, admin_password, runner_token |
| gbo/vectordb | url, api_key |
| gbo/observability | url, org, bucket, token |


System Limits Reference

This document provides a comprehensive reference for all system limits, rate limits, package sizes, and configurable parameters in General Bots. Each limit includes the config.csv key, default value, and the source code location where it’s enforced.


Quick Reference

| Category | Limit | Default | Config Key |
|---|---|---|---|
| Package Size | Total package | 100 MB | package-max-size |
| Package Size | Single file | 10 MB | user-file-limit |
| Package Size | File count | 1,000 | user-file-count |
| Package Size | Script size | 1 MB | script-max-size |
| Session | Message history | 50 | session-message-history |
| Session | Variable storage | 1 MB | session-variable-limit |
| Session | Concurrent sessions | 1,000 | session-max-concurrent |
| Session | Rate limit | 60/min | session-rate-limit |
| Session | Timeout | 30 min | session-timeout |
| Knowledge Base | Collections | 50 | kb-max-collections |
| Knowledge Base | Document size | 50 MB | kb-doc-max-size |
| File Upload | Per file | 10 MB | upload-max-size |
| File Upload | Attachment | 25 MB | attachment-max-size |
| API | Rate limit | 100/min | api-rate-limit |
| Loop Safety | Max iterations | 100,000 | loop-max-iterations |
| GOTO Safety | Max iterations | 1,000,000 | goto-max-iterations |

Package Size Limits

Controls the size and composition of .gbai packages.

Total Package Size

| Property | Value |
|---|---|
| Config Key | package-max-size |
| Default | 104,857,600 (100 MB) |
| Unit | Bytes |
| Source | botserver/src/core/package_manager/mod.rs |

To raise the limit, override it in config.csv:

name,value
package-max-size,209715200

Single Document Size

| Property | Value |
|---|---|
| Config Key | user-file-limit |
| Default | 10,485,760 (10 MB) |
| Unit | Bytes |
| Source | botserver/src/core/package_manager/mod.rs |

File Count Per Package

| Property | Value |
|---|---|
| Config Key | user-file-count |
| Default | 1,000 |
| Unit | Files |
| Source | botserver/src/core/package_manager/mod.rs |

Script File Size

| Property | Value |
|---|---|
| Config Key | script-max-size |
| Default | 1,048,576 (1 MB) |
| Unit | Bytes |
| Source | botserver/src/basic/compiler/mod.rs |

Session Limits

Controls resource usage per user session.

Message History

| Property | Value |
|---|---|
| Config Key | session-message-history |
| Default | 50 |
| Unit | Messages |
| Source | botserver/src/core/session/mod.rs |
| Notes | Messages kept in LLM context window |

Variable Storage

| Property | Value |
|---|---|
| Config Key | session-variable-limit |
| Default | 1,048,576 (1 MB) |
| Unit | Bytes |
| Source | botserver/src/core/session/mod.rs |
| Notes | Total size of all session variables |

Concurrent Sessions

| Property | Value |
|---|---|
| Config Key | session-max-concurrent |
| Default | 1,000 |
| Unit | Sessions |
| Source | botserver/src/core/session/mod.rs |
| Notes | Per server instance |

Session Rate Limit

| Property | Value |
|---|---|
| Config Key | session-rate-limit |
| Default | 60 |
| Unit | Messages per minute |
| Source | botserver/src/core/session/mod.rs |

Session Timeout

| Property | Value |
|---|---|
| Config Key | session-timeout |
| Default | 1,800 (30 minutes) |
| Unit | Seconds |
| Source | botserver/src/core/session/mod.rs |

Knowledge Base Limits

Controls document ingestion and vector storage.

Maximum Collections

| Property | Value |
|---|---|
| Config Key | kb-max-collections |
| Default | 50 |
| Unit | Collections |
| Source | botserver/src/basic/keywords/kb.rs |

Document Size by Type

| File Type | Max Size | Config Key |
|---|---|---|
| PDF | 50 MB | kb-pdf-max-size |
| Word (.docx) | 25 MB | kb-word-max-size |
| Excel (.xlsx) | 25 MB | kb-excel-max-size |
| Text/Markdown | 10 MB | kb-text-max-size |
| Images | 10 MB | kb-image-max-size |

RAG Parameters

| Config Key | Default | Description |
|---|---|---|
| rag-top-k | 10 | Number of chunks to retrieve |
| rag-chunk-size | 512 | Tokens per chunk |
| rag-chunk-overlap | 50 | Overlap between chunks |
| rag-hybrid-enabled | true | Enable hybrid search |
| rag-rerank-enabled | false | Enable reranking |

File Upload Limits

Controls file upload operations.

Standard Upload

| Property | Value |
|---|---|
| Config Key | upload-max-size |
| Default | 10,485,760 (10 MB) |
| Unit | Bytes |
| Source | botserver/src/api/upload.rs |

Email Attachment

| Property | Value |
|---|---|
| Config Key | attachment-max-size |
| Default | 26,214,400 (25 MB) |
| Unit | Bytes |
| Source | botserver/src/basic/keywords/send_mail.rs |

Archive Extraction

| Limit | Default | Config Key |
|---|---|---|
| Archive size | 100 MB | extract-archive-max-size |
| Extracted size | 500 MB | extract-output-max-size |
| Files in archive | 10,000 | extract-max-files |
| Path depth | 10 | extract-max-depth |

API Rate Limits

Controls API request rates.

General API

| Property | Value |
|---|---|
| Config Key | api-rate-limit |
| Default | 100 |
| Unit | Requests per minute |
| Source | botserver/src/api/middleware/rate_limit.rs |

Endpoint-Specific Limits

| Endpoint Category | Limit | Config Key |
|---|---|---|
| Standard endpoints | 100/min | api-rate-limit |
| Compliance scans | 5/hour | api-scan-rate-limit |
| Report generation | 10/hour | api-report-rate-limit |
| LLM inference | 20/min | llm-rate-limit |
| Embedding | 100/min | embedding-rate-limit |

Rate Limit Headers

All API responses include rate limit headers:

| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed |
| X-RateLimit-Remaining | Requests remaining in window |
| X-RateLimit-Reset | Unix timestamp when limit resets |
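You can inspect these headers with curl -i against any endpoint (the path and port below are illustrative - substitute a real endpoint from your deployment):

curl -i http://localhost:3000/api/some-endpoint 2>/dev/null | grep -i x-ratelimit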

Loop Safety Limits

Prevents infinite loops in BASIC scripts.

WHILE/DO Loops

| Property | Value |
|---|---|
| Config Key | loop-max-iterations |
| Default | 100,000 |
| Unit | Iterations |
| Source | botserver/src/basic/keywords/procedures.rs |

GOTO State Machine

| Property | Value |
|---|---|
| Config Key | goto-max-iterations |
| Default | 1,000,000 |
| Unit | Iterations |
| Source | botserver/src/basic/compiler/goto_transform.rs |

Sandbox Limits

Controls code execution sandbox resources.

Memory Limit

| Property | Value |
|---|---|
| Config Key | sandbox-memory-mb |
| Default | 256 |
| Unit | Megabytes |
| Source | botserver/src/basic/keywords/code_sandbox.rs |

CPU Limit

| Property | Value |
|---|---|
| Config Key | sandbox-cpu-percent |
| Default | 50 |
| Unit | Percent |
| Source | botserver/src/basic/keywords/code_sandbox.rs |

Execution Timeout

| Property | Value |
|---|---|
| Config Key | sandbox-timeout |
| Default | 30 |
| Unit | Seconds |
| Source | botserver/src/basic/keywords/code_sandbox.rs |

Communication Limits

WhatsApp

| Limit | Default | Config Key |
|---|---|---|
| Messages per second | 10 | whatsapp-rate-limit |
| Broadcast recipients | 1,000 | whatsapp-broadcast-max |
| Template message size | 1,024 | whatsapp-template-max-size |

Email

| Limit | Default | Config Key |
|---|---|---|
| Recipients per email | 50 | email-max-recipients |
| Emails per hour | 100 | email-rate-limit |
| Attachment size | 25 MB | email-attachment-max-size |

Delegate to Bot

| Property | Value |
|---|---|
| Config Key | delegate-message-max-size |
| Default | 1,048,576 (1 MB) |
| Unit | Bytes |
| Source | botserver/src/basic/keywords/delegate_to_bot.rs |

| Property | Value |
|---|---|
| Config Key | delegate-timeout |
| Default | 300 |
| Unit | Seconds |
| Source | botserver/src/basic/keywords/delegate_to_bot.rs |

Storage Limits

User Storage Quota

| Property | Value |
|---|---|
| Config Key | user-storage-quota |
| Default | 104,857,600 (100 MB) |
| Unit | Bytes |
| Source | botserver/src/basic/keywords/drive.rs |

Download Link Expiry

| Property | Value |
|---|---|
| Config Key | download-link-expiry |
| Default | 86,400 (24 hours) |
| Unit | Seconds |
| Source | botserver/src/basic/keywords/download.rs |

LLM Limits

Token Limits

| Config Key | Default | Description |
|---|---|---|
| llm-max-tokens | 4,096 | Max output tokens |
| llm-context-window | 8,192 | Context window size |
| llm-temperature | 0.7 | Default temperature |

Tokens Per Minute (TPM)

| Property | Value |
|---|---|
| Config Key | llm-tpm-limit |
| Default | 20,000 |
| Unit | Tokens per minute |
| Source | botcoder/src/main.rs |
| Env Var | LLM_TPM |

A2A Protocol Limits

Maximum Hops

| Property | Value |
|---|---|
| Config Key | a2a-max-hops |
| Default | 5 |
| Unit | Hops |
| Source | botserver/src/basic/keywords/a2a_protocol.rs |
| Notes | Prevents infinite delegation chains |

Video/Audio Limits

Player Limits

| Config Key | Default | Description |
|---|---|---|
| player-max-file-size-mb | 100 | Max video file size |
| player-default-volume | 80 | Default volume (0-100) |
| player-preload | metadata | Preload strategy |

Configuring Limits

Via config.csv

Add entries to your bot’s config.csv file:

name,value
package-max-size,209715200
session-rate-limit,120
api-rate-limit,200
llm-max-tokens,8192

Via Environment Variables

Some limits can be set via environment variables (overrides config.csv):

| Environment Variable | Config Key |
|---|---|
| LLM_TPM | llm-tpm-limit |
| SESSION_TIMEOUT | session-timeout |
| API_RATE_LIMIT | api-rate-limit |
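For example, to raise the token budget and session timeout for a single run without editing config.csv:

# Environment variables take precedence over config.csv
LLM_TPM=40000 SESSION_TIMEOUT=3600 ./botserver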

Via API

Update limits programmatically:

SET CONFIG "session-rate-limit" TO "120"
SET CONFIG "api-rate-limit" TO "200"

Monitoring Limits

Viewing Current Limits

config = GET CONFIG "api-rate-limit"
TALK "Current API rate limit: " + config

Rate Limit Errors

When limits are exceeded, the system returns:

| HTTP Status | Error Code | Description |
|---|---|---|
| 429 | RATE_LIMITED | Too many requests |
| 413 | PAYLOAD_TOO_LARGE | File/request too large |
| 507 | INSUFFICIENT_STORAGE | Storage quota exceeded |

Best Practices

  1. Start Conservative: Begin with default limits and increase as needed
  2. Monitor Usage: Track rate limit headers to understand usage patterns
  3. Plan for Scale: Increase limits gradually as traffic grows
  4. Document Changes: Track limit changes in your bot’s changelog
  5. Test Limits: Verify your application handles limit errors gracefully
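A sketch of point 5: retry with a simple backoff when the API answers 429 (the endpoint path is illustrative):

#!/bin/bash
# Retry an API call on 429 Too Many Requests with linear backoff
url="http://localhost:3000/api/some-endpoint"   # illustrative path

for attempt in 1 2 3 4 5; do
  status=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  if [ "$status" != "429" ]; then
    echo "Done with HTTP $status"
    break
  fi
  sleep $((attempt * 2))   # wait longer after each rejection
done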

Drive Integration

The drive component provides S3-compatible object storage for botserver, storing bot packages, documents, and user files.

Overview

botserver uses the drive component as its primary storage backend for bot packages in .gbai directories, knowledge base documents in .gbkb files, configuration files like config.csv, media and attachments, and user-uploaded content.

Configuration

Storage configuration is automatically managed by the Directory service (Zitadel), so you do not need to configure storage credentials manually. During bootstrap, the Directory service provisions storage credentials, distributes them securely to botserver, and handles credential rotation automatically.

Storage Structure

Bucket Organization

Each bot gets its own bucket named after the bot package. The bucket naming convention uses {bot-name}.gbai for bot buckets and botserver-media for the shared media bucket. Each bucket contains the complete bot package structure.

Bot package structure

Features

Automatic Upload

When deploying a bot package, botserver automatically creates a bucket if it doesn’t exist, uploads all package files, maintains the directory structure, and preserves file permissions.

Real-time Synchronization

The bot monitors its bucket for changes. Configuration updates trigger automatic reload, new knowledge base files are indexed immediately, and deleted files are removed from the index automatically.

Drive Monitor

The DriveMonitor service watches for changes in drive storage. It detects configuration updates, triggers bot reloads when changes occur, and syncs the local cache with drive storage.

Bootstrap Integration

During bootstrap, botserver handles installation by downloading and installing the drive binary if not present, receiving credentials from the Directory service, creating data directories, and uploading template files to drive storage.

Knowledge base files are uploaded to drive buckets, indexed for vector search, and cached locally for improved performance.

The BASIC GET keyword can retrieve files from drive:

content = GET "knowledge.gbkb/document.pdf"

This retrieves files from the bot’s bucket in drive storage.

Media Handling

The multimedia handler uses drive for storing uploaded images, serving media files, managing attachments, and processing thumbnails.

Console Integration

The built-in console provides a file browser for drive with paths like /media/ for browsing uploaded media, /files/{bot}/ for browsing bot files, and /download/{bot}/{file} for downloading specific files.

S3-Compatible Client Configuration

botserver uses an S3-compatible client configured for the drive:

let config = S3Config::builder()
    .endpoint_url(&drive_endpoint)
    .region("us-east-1")  // Required but arbitrary for S3-compatible storage
    .force_path_style(true)
    .build();

The force_path_style(true) setting ensures compatibility with S3-compatible storage providers.
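The same path-style convention applies when pointing external tools at the drive. For example, with the MinIO client mc (endpoint and credentials are illustrative):

# Register the drive endpoint and list a bot's bucket
mc alias set gbo http://localhost:9000 minioadmin minioadmin123
mc ls gbo/mybot.gbai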

Deployment Modes

Local Mode

The default mode runs drive on the same machine. The binary is downloaded to {{BIN_PATH}}/drive, data is stored in {{DATA_PATH}}, and logs are written to {{LOGS_PATH}}/drive.log.

Container Mode (LXC)

Drive can run in an LXC container with mapped volumes for persistent storage:

lxc config device add default-drive data disk \
  source=/opt/gbo/data path=/opt/gbo/data

External S3-Compatible Storage

botserver can use existing S3-compatible infrastructure. The Directory service manages the connection and supports providers including MinIO (the default local installation), Backblaze B2, Wasabi, DigitalOcean Spaces, Cloudflare R2, and any other S3-compatible service.

To use external storage, configure it through the Directory service admin console.

Security

Credentials are managed by the Directory service for centralized security. TLS can be enabled for secure communication between components. Bucket policies control access on a per-bot basis, and credential rotation is handled automatically without service interruption.

Monitoring

The drive console runs on port 9001 as an optional management interface. The API endpoint runs on port 9000 for programmatic access. Health checks are available via /health/live and metrics can be scraped from /metrics.
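Both endpoints can be probed with curl, using the paths listed above:

curl http://localhost:9000/health/live   # liveness check
curl http://localhost:9000/metrics       # metrics for scraping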

Troubleshooting

Check Drive Status

The package manager monitors drive status using:

ps -ef | grep drive | grep -v grep

Console Access

The drive console is available at http://localhost:9001 for bucket management, user management, policy configuration, and access log review.

Common Issues

Connection failures typically indicate that drive is not running or ports are not accessible. Access denied errors suggest the Directory service has not yet provisioned credentials. Bucket not found errors occur when bot deployment did not complete successfully. Upload failures often result from insufficient disk space or incorrect permissions.

Debug Logging

Enable trace logging to see drive operations:

RUST_LOG=trace ./botserver

This reveals file retrieval details, bucket operations, and authentication attempts.

Best Practices

Back up the drive data directory regularly to prevent data loss. Monitor disk usage to ensure adequate storage space remains available. Use bucket policies to restrict access appropriately for each bot. Enable object versioning for critical data that may need recovery. Configure lifecycle policies for automatic cleanup of old files that are no longer needed.

See Also

The Storage API chapter provides the complete API reference for drive operations. The Environment Variables appendix covers Directory service configuration options. The LXC Containers documentation explains container deployment in detail.

Chapter 11: Hardware & Scaling

This chapter covers hardware requirements and horizontal scaling strategies for General Bots.

Overview

General Bots is designed from the ground up to scale horizontally. The architecture supports:

  • Multi-tenancy: Complete isolation between organizations
  • Regional sharding: Data locality for compliance and performance
  • Database partitioning: Efficient handling of high-volume tables
  • Stateless services: Easy horizontal pod autoscaling


Key Concepts

Tenant Isolation

Every piece of data in General Bots is associated with a tenant_id. This enables:

  1. Complete data isolation between organizations
  2. Per-tenant resource limits and quotas
  3. Tenant-specific configurations
  4. Easy data export/deletion for compliance

Shard Architecture

┌─────────────────────────────────────────────────────────────┐
│                      Load Balancer                          │
└─────────────────────────┬───────────────────────────────────┘
                          │
        ┌─────────────────┼─────────────────┐
        │                 │                 │
        ▼                 ▼                 ▼
   ┌─────────┐       ┌─────────┐       ┌─────────┐
   │ Region  │       │ Region  │       │ Region  │
   │   USA   │       │   EUR   │       │   APAC  │
   └────┬────┘       └────┬────┘       └────┬────┘
        │                 │                 │
   ┌────┴────┐       ┌────┴────┐       ┌────┴────┐
   │ Shard 1 │       │ Shard 2 │       │ Shard 3 │
   │ Shard 4 │       │ Shard 5 │       │ Shard 6 │
   └─────────┘       └─────────┘       └─────────┘

Database Design Principles

  1. SMALLINT enums instead of VARCHAR for domain values (2 bytes vs 20+ bytes)
  2. Partitioned tables for high-volume data (messages, sessions, analytics)
  3. Composite primary keys including shard_id for distributed queries
  4. Snowflake-like IDs for globally unique, time-sortable identifiers
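A sketch of principles 2 and 3 in PostgreSQL DDL, run through psql (table and column names are illustrative, not the actual botserver schema):

psql "$DATABASE_URL" << 'SQL'
-- High-volume table partitioned by time, keyed by (shard_id, tenant_id, id);
-- PostgreSQL requires the partition column in the primary key
CREATE TABLE messages (
    shard_id   SMALLINT    NOT NULL,
    tenant_id  BIGINT      NOT NULL,
    id         BIGINT      NOT NULL,   -- Snowflake-like, time-sortable
    kind       SMALLINT    NOT NULL,   -- enum stored as SMALLINT, not VARCHAR
    body       TEXT,
    created_at TIMESTAMPTZ NOT NULL,
    PRIMARY KEY (shard_id, tenant_id, id, created_at)
) PARTITION BY RANGE (created_at);

CREATE TABLE messages_2025_01 PARTITION OF messages
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
SQL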

When to Scale

| Users | Sessions/day | Messages/day | Recommended Setup |
|---|---|---|---|
| < 10K | < 100K | < 1M | Single node |
| 10K-100K | 100K-1M | 1M-10M | 2-3 nodes, single region |
| 100K-1M | 1M-10M | 10M-100M | Multi-node, consider sharding |
| 1M-10M | 10M-100M | 100M-1B | Regional shards |
| > 10M | > 100M | > 1B | Global shards with Citus/CockroachDB |

Quick Start

To enable sharding in your deployment:

  1. Configure shard mapping in shard_config table
  2. Set SHARD_ID environment variable per instance
  3. Deploy region-specific instances
  4. Configure load balancer routing rules

See Sharding Architecture for detailed setup instructions.

Beginner’s Guide: Buying Your First SBC

A complete guide for beginners who want to buy a Single Board Computer (SBC) and start automating their home or workplace with General Bots.

What is an SBC?

A Single Board Computer (SBC) is a complete computer on a single circuit board. Unlike a desktop PC, it’s:

  • Small - Credit card to smartphone size
  • Low power - 2-15 watts (vs 200W+ for a PC)
  • Affordable - $15-150 depending on power
  • Quiet - No fans on most models
  • GPIO equipped - Can connect to sensors and actuators

Which SBC Should I Buy?

Decision Flowchart

Hardware Budget Decision Tree

🌟 Best for Beginners: Raspberry Pi 4 Kit (~$80)

What to buy:

  • Raspberry Pi 4 Model B (4GB RAM) - $55
  • Official power supply (USB-C 5V 3A) - $8
  • 32GB microSD card (Class 10) - $8
  • Case with heatsink - $10


💰 Budget Option: Orange Pi Zero 3 Kit (~$35)

What to buy:

  • Orange Pi Zero 3 (1GB RAM) - $20
  • 5V 2A power supply - $5
  • 16GB microSD card - $5
  • Acrylic case - $5

🧠 Best for AI: Orange Pi 5 Kit (~$120)

What to buy:

  • Orange Pi 5 (8GB RAM) - $89
  • 12V 2A power supply - $10
  • 64GB microSD or NVMe SSD - $15
  • Cooling fan case - $10

This board has a 6 TOPS NPU for accelerated AI inference!

What Else Do I Need?

Essential Accessories

| Item | Purpose | Price Range |
|---|---|---|
| microSD Card | Operating system storage | $8-15 |
| Power Supply | Power the board | $8-15 |
| Ethernet Cable | Wired network (faster) | $5 |
| HDMI Cable | Connect to monitor/TV | $5-10 |
| USB Keyboard | Initial setup | $10-20 |

For Display Projects

| Item | Purpose | Price Range |
|---|---|---|
| 3.5" TFT LCD | Small color touchscreen | $15-25 |
| 7" HDMI LCD | Larger display | $40-60 |
| 16x2 LCD | Simple text display | $5-10 |
| 0.96" OLED | Tiny status display | $5-8 |

For Home Automation

| Item | Purpose | Price Range |
|---|---|---|
| Relay Module (4ch) | Control lights, appliances | $5-10 |
| DHT22 Sensor | Temperature & humidity | $5-8 |
| PIR Sensor | Motion detection | $3-5 |
| Buzzer | Alerts and notifications | $2-3 |
| Jumper Wires | Connect components | $3-5 |
| Breadboard | Prototyping | $3-5 |

Sample Shopping Lists

Home Temperature Monitor ($45)

Perfect first project - monitor and log temperature!

| Item | Price |
|---|---|
| Orange Pi Zero 3 (1GB) | $20 |
| 16GB microSD card | $5 |
| 5V 2A power supply | $5 |
| DHT22 temperature sensor | $6 |
| 0.96" OLED display (I2C) | $6 |
| Jumper wires (female-female) | $3 |
| Total | $45 |

Smart Doorbell (~$65)

AI-powered doorbell with notifications!

| Item | Price |
|---|---|
| Raspberry Pi Zero 2 W | $15 |
| Pi Camera Module | $25 |
| Push button | $1 |
| Piezo buzzer | $2 |
| LED (with resistor) | $1 |
| 16GB microSD card | $5 |
| 5V 2.5A power supply | $8 |
| Case | $5 |
| Jumper wires | $3 |
| Total | ~$65 |

Offline AI Assistant (~$195)

Run AI completely offline - no internet needed!

| Item | Price |
|---|---|
| Orange Pi 5 (8GB RAM) | $89 |
| 128GB NVMe SSD | $20 |
| 12V 3A power supply | $12 |
| 7" HDMI touchscreen | $45 |
| USB microphone | $10 |
| Case with fan | $15 |
| Jumper wires | $3 |
| Total | ~$195 |

Voice-Controlled Lights (~$65)

Control your lights by talking!

| Item | Price |
|---|---|
| Raspberry Pi 4 (2GB) | $35 |
| 4-channel relay module | $6 |
| USB microphone | $8 |
| 16GB microSD card | $5 |
| 5V 3A power supply | $8 |
| Jumper wires | $3 |
| Total | ~$65 |

Where to Buy (By Region)

United States

  • Amazon - Fast shipping, good returns
  • Adafruit - Quality accessories, great tutorials
  • SparkFun - Sensors and components
  • Micro Center - If you have one nearby!

Europe

  • The Pi Hut (UK) - Official Pi reseller
  • Pimoroni (UK) - Creative accessories
  • Amazon.de/.fr/.es - Local shipping
  • Conrad (Germany) - Electronics store

Asia

  • AliExpress - Cheapest, 2-4 week shipping
  • Taobao (China) - Even cheaper if you read Chinese
  • Amazon.co.jp (Japan)

South America

  • MercadoLivre (Brazil) - Local marketplace
  • FilipeFlop (Brazil) - Arduino/Pi specialist
  • Amazon.com.br - Limited selection

Tips for AliExpress

  • Check seller ratings (97%+ is good)
  • Read reviews with photos
  • Expect 2-4 weeks shipping
  • Buy from China Direct for best prices
  • Consider “Choice” items for faster shipping

First-Time Setup Guide

Step 1: Flash the OS

  1. Download Raspberry Pi Imager
  2. Insert your microSD card
  3. Select:
    • Device: Your board
    • OS: Raspberry Pi OS Lite (64-bit)
    • Storage: Your microSD
  4. Click EDIT SETTINGS:
    • Set hostname: mybot
    • Enable SSH
    • Set username/password
    • Configure WiFi
  5. Click WRITE

Step 2: First Boot

  1. Insert microSD into your SBC
  2. Connect power
  3. Wait 2 minutes for first boot
  4. Find your device:
    # On your computer
    ping mybot.local
    # or check your router's device list
    

Step 3: Connect via SSH

ssh pi@mybot.local
# Enter your password

Step 4: Install General Bots

# Quick install
curl -fsSL https://get.generalbots.com | bash

# Or use the deploy script
git clone https://github.com/GeneralBots/botserver.git
cd botserver
./scripts/deploy-embedded.sh --local --with-ui

Step 5: Access the Interface

Open in your browser:

http://mybot.local:9000

Common Beginner Mistakes

❌ Wrong Power Supply

Problem: Board keeps rebooting or won’t start

Solution:

  • Raspberry Pi 4/5: Use official 5V 3A USB-C PSU
  • Orange Pi 5: Use 12V 2A, not 5V!
  • Don’t use phone chargers - they can’t supply enough current

❌ Cheap/Slow microSD Card

Problem: Slow boot, random crashes, data corruption

Solution:

  • Buy Class 10 or A1/A2 rated cards
  • Good brands: SanDisk, Samsung, Kingston
  • Avoid no-name cards from AliExpress

❌ No Heatsink/Cooling

Problem: Board throttles or overheats

Solution:

  • Always use heatsinks on the CPU
  • Consider a fan for Pi 4/5 or Orange Pi 5
  • Use a case with ventilation

❌ Connecting to Wrong Voltage

Problem: Fried components, magic smoke

Solution:

  • Raspberry Pi GPIO is 3.3V only!
  • Never connect 5V to GPIO pins
  • Use level shifters for 5V sensors

Getting Help

Community Resources

  • ExplainingComputers - Great SBC reviews
  • Jeff Geerling - Deep Pi tutorials
  • Andreas Spiess - IoT and sensors
  • DroneBot Workshop - Beginner friendly

Books

  • “Getting Started with Raspberry Pi” - Matt Richardson
  • “Make: Electronics” - Charles Platt

Next Steps

Once you have your SBC:

  1. Quick Start Guide - Get GB running in 5 minutes
  2. GPIO Keywords - Control hardware with BASIC
  3. Templates - Ready-made automation projects
  4. Local LLM - Add offline AI capabilities

Happy building! 🤖

Mobile Deployment - Android & HarmonyOS

Deploy General Bots as the primary interface on Android and HarmonyOS devices, transforming them into dedicated AI assistants.

Overview

BotDevice transforms any Android or HarmonyOS device into a dedicated General Bots system, removing manufacturer bloatware and installing GB as the default launcher.

BotDevice Architecture

Supported Platforms

Android

  • AOSP - Pure Android
  • Samsung One UI - Galaxy devices
  • Xiaomi MIUI - Mi, Redmi, Poco
  • OPPO ColorOS - OPPO, OnePlus, Realme
  • Vivo Funtouch/OriginOS
  • Google Pixel

HarmonyOS

  • Huawei - P series, Mate series, Nova
  • Honor - Magic series, X series

Installation Levels

| Level | Requirements | What It Does |
|---|---|---|
| Level 1 | ADB only | Removes bloatware, installs BotDevice as app |
| Level 2 | Root + Magisk | GB boot animation, BotDevice as system app |
| Level 3 | Unlocked bootloader | Full Android replacement with BotDevice |

Quick Installation

Level 1: Debloat + App (No Root)

# Clone botdevice repository
git clone https://github.com/GeneralBots/botdevice.git
cd botdevice/rom

# Connect device via USB (enable USB debugging first)
./install.sh

The interactive installer will:

  1. Detect your device and manufacturer
  2. Remove bloatware automatically
  3. Install BotDevice APK
  4. Optionally set as default launcher

Level 2: Magisk Module (Root Required)

# Generate Magisk module
cd botdevice/rom/scripts
./build-magisk-module.sh

# Copy to device
adb push botdevice-magisk-v1.0.zip /sdcard/

# Install via Magisk app
# Magisk → Modules → + → Select ZIP → Reboot

This adds:

  • Custom boot animation
  • BotDevice as system app (privileged permissions)
  • Debloat via overlay

Level 3: GSI (Full Replacement)

For advanced users with unlocked bootloader. See botdevice/rom/gsi/README.md.

Bloatware Removed

Samsung One UI

  • Bixby, Samsung Pay, Samsung Pass
  • Duplicate apps (Email, Calendar, Browser)
  • AR Zone, Game Launcher
  • Samsung Free, Samsung Global Goals

Huawei EMUI/HarmonyOS

  • AppGallery, HiCloud, HiCar
  • Huawei Browser, Music, Video
  • Petal Maps, Petal Search
  • AI Life, HiSuite

Honor MagicOS

  • Honor Store, MagicRing
  • Honor Browser, Music

Xiaomi MIUI

  • MSA (analytics), Mi Apps
  • GetApps, Mi Cloud
  • Mi Browser, Mi Music

Universal (All Devices)

  • Pre-installed Facebook, Instagram
  • Pre-installed Netflix, Spotify
  • Games like Candy Crush
  • Carrier bloatware

Building from Source

Prerequisites

# Install Rust and Android targets
rustup target add aarch64-linux-android armv7-linux-androideabi

# Set up Android SDK/NDK
export ANDROID_HOME=$HOME/Android/Sdk
export NDK_HOME=$ANDROID_HOME/ndk/25.2.9519653

# Install Tauri CLI
cargo install tauri-cli

# For icons/boot animation
sudo apt install librsvg2-bin imagemagick

Build APK

cd botdevice

# Generate icons from SVG
./scripts/generate-icons.sh

# Initialize Android project
cargo tauri android init

# Build release APK
cargo tauri android build --release

Output: gen/android/app/build/outputs/apk/release/app-release.apk

Development Mode

# Connect device and run
cargo tauri android dev

# Watch logs
adb logcat -s BotDevice:*

Configuration

AndroidManifest.xml

BotDevice is configured as a launcher:

<intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.HOME" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.LAUNCHER" />
</intent-filter>

Permissions

Default capabilities in capabilities/default.json:

  • Internet access
  • Camera (for QR codes, photos)
  • Location (GPS)
  • Storage (files)
  • Notifications

Connecting to Server

Edit the embedded URL in tauri.conf.json:

{
  "build": {
    "frontendDist": "../botui/ui/suite"
  }
}

Or configure botserver URL at runtime:

window.BOTSERVER_URL = "https://your-server.com";

Boot Animation

Create custom boot animation with GB branding:

# Generate animation
cd botdevice/scripts
./create-bootanimation.sh

# Install (requires root)
adb root
adb remount
adb push bootanimation.zip /system/media/
adb reboot

Project Structure

| Path | Description |
|---|---|
| Cargo.toml | Rust/Tauri dependencies |
| tauri.conf.json | Tauri config → botui/ui/suite |
| build.rs | Build script |
| src/lib.rs | Android entry point |
| icons/gb-bot.svg | Source icon |
| icons/icon.png | Main icon (512x512) |
| icons/*/ic_launcher.png | Icons by density |
| scripts/generate-icons.sh | Generate PNGs from SVG |
| scripts/create-bootanimation.sh | Boot animation generator |
| capabilities/default.json | Tauri permissions |
| gen/android/ | Generated Android project |
| rom/install.sh | Interactive installer |
| rom/scripts/debloat.sh | Remove bloatware |
| rom/scripts/build-magisk-module.sh | Magisk module builder |
| rom/gsi/README.md | GSI instructions |

Offline Mode

BotDevice can work offline with local LLM:

  1. Install botserver on the device (see Local LLM)
  2. Configure to use localhost:
    window.BOTSERVER_URL = "http://127.0.0.1:9000";
    
  3. Run llama.cpp with small model (TinyLlama on 4GB+ devices)
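For step 3, a minimal sketch assuming llama.cpp's bundled HTTP server and a TinyLlama GGUF already downloaded to the device (the file name is illustrative):

# Serve a small model locally on the device
./llama-server -m tinyllama-1.1b-chat-q4_k_m.gguf \
  --host 127.0.0.1 --port 8080 --ctx-size 2048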

Use Cases

Dedicated Kiosk

  • Retail product information
  • Hotel check-in
  • Restaurant ordering
  • Museum guides

Enterprise Device

  • Field service assistant
  • Warehouse scanner with AI
  • Delivery driver companion
  • Healthcare bedside terminal

Consumer Device

  • Elder-friendly phone
  • Child-safe device
  • Single-purpose assistant
  • Smart home controller

Troubleshooting

App Won’t Install

# Enable installation from unknown sources
# Settings → Security → Unknown Sources

# Or use ADB
adb install -r botdevice.apk

Debloat Not Working

# Some packages require root
# Use Level 2 (Magisk) for complete removal

# Check which packages failed
adb shell pm list packages | grep <manufacturer>

Boot Loop After GSI

# Boot into recovery
# Wipe data/factory reset
# Reflash stock ROM

WebView Crashes

# Update Android System WebView
adb shell pm enable com.google.android.webview

Supported Hardware

Desktop & Workstation Hardware Guide

A detailed guide crossing high-performance AI models with hardware availability and pricing (prices in BRL).

Important Note: Proprietary models like Claude Opus 4.5, GPT-5.2, and Gemini 3 Pro represent the cutting edge of Cloud AI. For Local AI, we focus on efficiently running models that approximate this power using MoE (Mixture of Experts) technology, specifically GLM-4.7, DeepSeek, and OSS120B-GPT.

AI Model Scaling for Local Hardware

This table maps the top-tier models cited above to their local "runnable" equivalents.

| Citation Model | Real Status | Local Equivalent (GPU) | Size (Params) |
|---|---|---|---|
| Claude Opus 4.5 | API Only | GLM-4.7 (MoE) | ~9B to 16B (Highly Efficient) |
| GPT-5.2 | API Only | DeepSeek-V3 (MoE) | ~236B (Single RTX, High RAM) |
| Gemini 3 Pro | API Only | OSS120B-GPT (MoE) | ~120B (Single RTX) |
| GPT-4o | API Only | DeepSeek-V2-Lite | ~16B (efficient) |

GLM-4-9B Chat (9B parameters):

  • Q4_K_M: bartowski/glm-4-9b-chat-GGUF - 5.7GB file, needs 8GB VRAM
  • Q6_K: Same link - 8.26GB file, needs 10GB VRAM
  • Q8_0: Same link - 9.99GB file, needs 12GB VRAM

DeepSeek-V3 (671B total, 37B active MoE):

  • Q2_K: bartowski/deepseek-ai_DeepSeek-V3-GGUF - ~280GB file, needs 32GB VRAM
  • Q4_K_M: Same link - 409GB file, needs 48GB VRAM (2x RTX 3090)
  • Q6_K: Same link - 551GB file, needs 64GB VRAM (impossible on consumer GPUs)

Mistral Large 2407 (123B parameters):

  • Q2_K: ~50GB file, needs 24GB VRAM
  • Q4_K_M: ~75GB file, needs 32GB VRAM
  • Q6_K: ~95GB file, needs 48GB VRAM

Compatibility Matrix (GPU x Model x Quantization)

How well each GPU runs each listed model, focused on the best achievable performance.

Quantization Legend:

  • Q4_K_M: The “Gold Standard” for home use. Good balance of speed and intelligence.
  • Q5_K_M / Q6_K: High quality, slower, requires more VRAM.
  • Q8_0: Near perfection (FP16 equivalent), but very heavy.
  • Offload CPU: Model fits in system RAM, not VRAM (slow).

GPU | VRAM | System RAM | GLM-4-9B (Q4_K_M: 5.7GB) | DeepSeek-V3 (Q2_K: 280GB) | Mistral Large (Q4_K_M: 75GB)
RTX 3050 | 8 GB | 16 GB | Q8_0 (Perfect) | CPU Offload (Very Slow) | Impossible
RTX 3060 | 12 GB | 32 GB | Q8_0 (Instant) | CPU Offload (Slow) | CPU Offload (Slow)
RTX 4060 Ti | 16 GB | 32 GB | Q8_0 (Overkill) | CPU Offload (Slow) | CPU Offload (Slow)
RTX 3090 | 24 GB | 64 GB | Q8_0 (Dual Models) | CPU Offload (Usable) | Q2_K (Fits!)
2x RTX 3090 | 48 GB | 128 GB | N/A | Q4_K_M (Good) | Q4_K_M (Perfect)
4x RTX 3090 | 96 GB | 256 GB | N/A | Q6_K (Excellent) | Q6_K (Excellent)

Market Pricing & Minimum Specs

Approximate prices in BRL (R$).

GPU | Used Price (OLX/ML) | New Price (ML) | Min System RAM | RAM Cost (Approx.) | Min CPU | GLM-4-9B | DeepSeek-V3 | Mistral Large
RTX 3050 (8GB) | R$ 750 - R$ 950 | R$ 1.400 - R$ 1.600 | 16 GB (DDR4) | R$ 180 (Used) | i5-10400 / Ryzen 3600 | Q8_0 (10GB) | ❌ Too small | ❌ Too small
RTX 3060 (12GB) | R$ 1.100 - R$ 1.400 | R$ 1.800 - R$ 2.400 | 32 GB (DDR4) | R$ 350 (Used Kit) | Ryzen 5 5600X / i5-12400F | Q8_0 (10GB) | ⚠️ CPU offload only | ⚠️ CPU offload only
RTX 4060 Ti (16GB) | R$ 2.000 - R$ 2.300 | R$ 2.800 - R$ 3.200 | 32 GB (DDR5) | R$ 450 (Used Kit) | Ryzen 7 5700X3D / i5-13400F | Q8_0 (10GB) | ⚠️ CPU offload only | ⚠️ CPU offload only
RTX 3070 (8GB) | R$ 1.200 - R$ 1.500 | N/A | 32 GB (DDR4) | R$ 350 (Used Kit) | Ryzen 7 5800X | Q6_K (8GB) | ❌ Too small | ❌ Too small
RTX 3090 (24GB) | R$ 3.500 - R$ 4.500 | R$ 10.000+ (Rare) | 64 GB (DDR4/5) | R$ 700 (Kit 32x2) | Ryzen 9 5900X / i7-12700K | Q8_0 (10GB) | ⚠️ CPU offload (280GB) | Q2_K (24GB)
RTX 4090 (24GB) | R$ 9.000 - R$ 11.000 | R$ 12.000 - R$ 15.000 | 64 GB (DDR5) | R$ 900 (Kit 32x2) | Ryzen 9 7950X / i9-13900K | Q8_0 (10GB) | ⚠️ CPU offload (280GB) | Q2_K (24GB)
RTX 4080 Super (16GB) | R$ 6.000 - R$ 7.000 | R$ 7.500 - R$ 9.000 | 64 GB (DDR5) | R$ 900 (Kit 32x2) | Ryzen 9 7900X | Q8_0 (10GB) | ⚠️ CPU offload only | ⚠️ CPU offload only
2x RTX 3090 (48GB) | R$ 7.000 - R$ 9.000 | N/A | 128 GB (DDR4/5) | R$ 1.400 (Kit 64x2) | Ryzen 9 5950X / i9-12900K | ✅ Multiple models | Q4_K_M (409GB) | Q4_K_M (75GB)

Technical Analysis & DeepSeek Support

To achieve performance similar to GLM 4 or DeepSeek locally, consider these factors:

1. GGUF File Sizes vs VRAM Requirements

GLM-4-9B (9 billion parameters):

  • Q2_K: 3.99GB file → needs 6GB VRAM
  • Q4_K_M: 5.7GB file → needs 8GB VRAM
  • Q6_K: 8.26GB file → needs 10GB VRAM
  • Q8_0: 9.99GB file → needs 12GB VRAM

DeepSeek-V3 (671B total, 37B active MoE):

  • Q2_K: ~280GB file → needs 32GB VRAM (impossible on single consumer GPU)
  • Q4_K_M: 409GB file → needs 48GB VRAM (2x RTX 3090 minimum)
  • Q6_K: 551GB file → needs 64GB VRAM (3x RTX 3090 or data center)

Mistral Large 2407 (123B parameters):

  • Q2_K: ~50GB file → needs 24GB VRAM (RTX 3090/4090)
  • Q4_K_M: ~75GB file → needs 32GB VRAM (2x RTX 3060 or better)
  • Q6_K: ~95GB file → needs 48GB VRAM (2x RTX 3090)
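
As a rule of thumb behind these numbers, a model needs roughly its GGUF file size in VRAM plus headroom for the KV cache and runtime buffers, rounded up to the next common GPU tier. A minimal Rust sketch of that heuristic (the ~20% overhead factor is an illustrative assumption, not a measured constant):

// Rough VRAM estimate: GGUF file size plus headroom for the
// KV cache, activations, and runtime buffers.
fn estimated_vram_gb(gguf_file_gb: f64) -> f64 {
    let overhead = 1.20; // assumed ~20% headroom; tune per model and context size
    (gguf_file_gb * overhead).ceil()
}

fn main() {
    // GLM-4-9B Q4_K_M: 5.7GB file lands in the 8GB-VRAM GPU class
    println!("{} GB minimum", estimated_vram_gb(5.7));
}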

2. Reality Check: DeepSeek-V3 Needs Serious Hardware

DeepSeek-V3 is a 671B parameter MoE model. Even with only 37B active parameters per token, the GGUF files are massive:

  • Minimum viable: Q2_K at 280GB requires 32GB VRAM (impossible on consumer GPUs)
  • Recommended: Q4_K_M at 409GB requires 48GB VRAM (2x RTX 3090 = R$ 8.000+)
  • For most users: Stick to GLM-4-9B or Mistral Large for local AI

GLM-4-9B is the sweet spot:

  • Q8_0 (9.99GB) runs perfectly on RTX 3060 12GB
  • Near-identical performance to much larger models
  • Total system cost under R$ 2.000

3. DeepSeek & MoE (Mixture of Experts) in General Bots

DeepSeek-V2/V3 uses an architecture called MoE (Mixture of Experts). This is highly efficient but requires specific support.

General Bots Offline Component (llama.cpp): The General Bots local LLM component is built on llama.cpp, which fully supports MoE models like DeepSeek and Mixtral efficiently.

  • MoE Efficiency: Only a fraction of parameters are active for each token generation. DeepSeek-V2 might have 236B parameters total, but only uses ~21B per token.
  • Running DeepSeek:
    • On an RTX 3060, you can run DeepSeek-V2-Lite (16B) exceptionally well.
    • It offers performance rivaling much larger dense models.
    • Configuration: Simply select the model in your local-llm setup. The internal llama.cpp engine handles MoE routing automatically. No special flags are strictly required in recent versions, but keeping botserver up to date guarantees the bundled llama.cpp binary supports these architectures.

4. Recommended Configurations

Entry Level (R$ 2.500 total):

  • GPU: RTX 3060 12GB (Used ~R$ 1.300)
  • RAM: 32 GB DDR4 (~R$ 350)
  • Runs: GLM-4-9B Q8_0 (perfect), Mistral-7B, Llama-3-8B
  • File sizes: 10GB models fit comfortably

Prosumer (R$ 5.000 total):

  • GPU: RTX 3090 24GB (Used ~R$ 4.000)
  • RAM: 64 GB DDR4 (~R$ 700)
  • Runs: GLM-4-9B + Mistral Large Q2_K (24GB), multiple models simultaneously
  • File sizes: Up to 50GB models

Enterprise (R$ 10.000+):

  • GPU: 2x RTX 3090 (48GB total VRAM)
  • RAM: 128 GB DDR4/5 (~R$ 1.400)
  • Runs: DeepSeek-V3 Q4_K_M (409GB), Mistral Large Q4_K_M (75GB)
  • File sizes: 400GB+ models with excellent performance

Local LLM with llama.cpp

Sharding Architecture

This document describes how General Bots distributes data across multiple database shards for horizontal scaling.

Overview

Sharding enables General Bots to scale beyond single-database limits by distributing data across multiple database instances. Each shard contains a subset of tenants, and data never crosses shard boundaries during normal operations.

Shard Configuration

Shard Config Table

The shard_config table defines all available shards:

CREATE TABLE shard_config (
    shard_id SMALLINT PRIMARY KEY,
    region_code CHAR(3) NOT NULL,        -- ISO 3166-1 alpha-3: USA, BRA, DEU
    datacenter VARCHAR(32) NOT NULL,      -- e.g., 'us-east-1', 'eu-west-1'
    connection_string TEXT NOT NULL,      -- Encrypted connection string
    is_primary BOOLEAN DEFAULT false,
    is_active BOOLEAN DEFAULT true,
    min_tenant_id BIGINT NOT NULL,
    max_tenant_id BIGINT NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Example Configuration

-- Americas
INSERT INTO shard_config VALUES 
(1, 'USA', 'us-east-1', 'postgresql://shard1.db:5432/gb', true, true, 1, 1000000),
(2, 'USA', 'us-west-2', 'postgresql://shard2.db:5432/gb', false, true, 1000001, 2000000),
(3, 'BRA', 'sa-east-1', 'postgresql://shard3.db:5432/gb', false, true, 2000001, 3000000);

-- Europe
INSERT INTO shard_config VALUES 
(4, 'DEU', 'eu-central-1', 'postgresql://shard4.db:5432/gb', false, true, 3000001, 4000000),
(5, 'GBR', 'eu-west-2', 'postgresql://shard5.db:5432/gb', false, true, 4000001, 5000000);

-- Asia Pacific
INSERT INTO shard_config VALUES 
(6, 'SGP', 'ap-southeast-1', 'postgresql://shard6.db:5432/gb', false, true, 5000001, 6000000),
(7, 'JPN', 'ap-northeast-1', 'postgresql://shard7.db:5432/gb', false, true, 6000001, 7000000);

Tenant-to-Shard Mapping

Mapping Table

CREATE TABLE tenant_shard_map (
    tenant_id BIGINT PRIMARY KEY,
    shard_id SMALLINT NOT NULL REFERENCES shard_config(shard_id),
    region_code CHAR(3) NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Routing Logic

When a request comes in, the system:

  1. Extracts tenant_id from the request context
  2. Looks up shard_id from tenant_shard_map
  3. Routes the query to the appropriate database connection

// Rust routing example
pub fn get_shard_connection(tenant_id: i64) -> Result<DbConnection> {
    let shard_id = SHARD_MAP.get(&tenant_id)
        .ok_or_else(|| Error::TenantNotFound(tenant_id))?;

    CONNECTION_POOLS.get(shard_id)
        .ok_or_else(|| Error::ShardNotAvailable(*shard_id))
}

Data Model Requirements

Every Table Includes Shard Keys

All tables must include tenant_id and shard_id columns:

CREATE TABLE user_sessions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    tenant_id BIGINT NOT NULL,           -- Required for routing
    shard_id SMALLINT NOT NULL,          -- Denormalized for queries
    user_id UUID NOT NULL,
    bot_id UUID NOT NULL,
    -- ... other columns
);

Foreign Keys Within Shard Only

Foreign keys only reference tables within the same shard:

-- Good: Same shard reference
ALTER TABLE message_history 
ADD CONSTRAINT fk_session 
FOREIGN KEY (session_id) REFERENCES user_sessions(id);

-- Bad: Cross-shard reference (never do this)
-- FOREIGN KEY (other_tenant_data) REFERENCES other_shard.table(id)

Snowflake ID Generation

For globally unique, time-sortable IDs across shards:

CREATE OR REPLACE FUNCTION generate_snowflake_id(p_shard_id SMALLINT)
RETURNS BIGINT AS $$
DECLARE
    epoch BIGINT := 1704067200000;  -- 2024-01-01 00:00:00 UTC
    ts BIGINT;
    seq BIGINT;
BEGIN
    -- 41 bits: timestamp (milliseconds since epoch)
    ts := (EXTRACT(EPOCH FROM NOW()) * 1000)::BIGINT - epoch;
    
    -- 10 bits: shard_id (0-1023)
    -- 12 bits: sequence (0-4095)
    seq := nextval('global_seq') & 4095;
    
    RETURN (ts << 22) | ((p_shard_id & 1023) << 12) | seq;
END;
$$ LANGUAGE plpgsql;

ID Structure

 64-bit Snowflake ID
┌─────────────────────────────────────────────────────────────────┐
│  41 bits timestamp  │  10 bits shard  │  12 bits sequence      │
│  (69 years range)   │  (1024 shards)  │  (4096/ms/shard)       │
└─────────────────────────────────────────────────────────────────┘
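
Because the shard is embedded in every ID, the owning shard can be recovered from an ID alone, with no table lookup. A minimal Rust sketch of the inverse of generate_snowflake_id above (illustrative, not the actual botserver code):

// Decode a 64-bit snowflake ID into its components.
// Bit layout mirrors generate_snowflake_id():
// 41 bits timestamp | 10 bits shard | 12 bits sequence.
const EPOCH_MS: i64 = 1_704_067_200_000; // 2024-01-01 00:00:00 UTC

fn decode_snowflake(id: i64) -> (i64, i64, i64) {
    let timestamp_ms = (id >> 22) + EPOCH_MS; // milliseconds since Unix epoch
    let shard_id = (id >> 12) & 0x3FF;        // 10-bit shard (0-1023)
    let sequence = id & 0xFFF;                // 12-bit sequence (0-4095)
    (timestamp_ms, shard_id, sequence)
}

fn main() {
    let (ts, shard, seq) = decode_snowflake(123_456_789_012_345);
    println!("ts={ts} shard={shard} seq={seq}");
}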

Shard Operations

Creating a New Shard

  1. Provision new database instance
  2. Run migrations
  3. Add to shard_config
  4. Update routing configuration
  5. Begin assigning new tenants
# 1. Run migrations on new shard
DATABASE_URL="postgresql://new-shard:5432/gb" diesel migration run

# 2. Add shard config
psql -c "INSERT INTO shard_config VALUES (8, 'AUS', 'ap-southeast-2', '...', false, true, 7000001, 8000000);"

# 3. Reload routing
curl -X POST http://localhost:3000/api/admin/reload-shard-config

Tenant Migration Between Shards

Moving a tenant to a different shard (e.g., for data locality):

-- 1. Set tenant to read-only mode
UPDATE tenants SET settings = settings || '{"read_only": true}' WHERE id = 12345;

-- 2. Export tenant data (from a shell; stock pg_dump cannot filter rows,
--    so use COPY for per-tenant extraction)
-- psql source_db -c "\copy (SELECT * FROM user_sessions WHERE tenant_id = 12345) TO 'tenant_12345_sessions.csv' CSV"
-- psql source_db -c "\copy (SELECT * FROM message_history WHERE tenant_id = 12345) TO 'tenant_12345_messages.csv' CSV"

-- 3. Import to new shard
-- psql target_db -c "\copy user_sessions FROM 'tenant_12345_sessions.csv' CSV"
-- psql target_db -c "\copy message_history FROM 'tenant_12345_messages.csv' CSV"

-- 4. Update routing
UPDATE tenant_shard_map SET shard_id = 5, region_code = 'DEU' WHERE tenant_id = 12345;

-- 5. Remove read-only mode
UPDATE tenants SET settings = settings - 'read_only' WHERE id = 12345;

-- 6. Clean up source shard (after verification)
DELETE FROM user_sessions WHERE tenant_id = 12345;
DELETE FROM message_history WHERE tenant_id = 12345;

Query Patterns

Single-Tenant Queries (Most Common)

-- Efficient: Uses shard routing
SELECT * FROM user_sessions 
WHERE tenant_id = 12345 AND user_id = 'abc-123';

Cross-Shard Queries (Admin Only)

For global analytics, use a federation layer:

-- Using postgres_fdw for cross-shard reads
SELECT shard_id, COUNT(*) as session_count
FROM all_shards.user_sessions
WHERE created_at > NOW() - INTERVAL '1 day'
GROUP BY shard_id;

Scatter-Gather Pattern

For queries that must touch multiple shards:

// Fan out to every shard concurrently, then merge the partial results.
async fn get_global_stats() -> Stats {
    let futures: Vec<_> = SHARDS.iter()
        .map(|shard| get_shard_stats(shard.id))
        .collect();

    let results = futures::future::join_all(futures).await;

    results.into_iter().fold(Stats::default(), |acc, s| acc.merge(s))
}

High Availability

Per-Shard Replication

Each shard should have:

  • 1 Primary (read/write)
  • 1-2 Replicas (read-only, failover)
  • Async replication with < 1s lag
Shard 1 Architecture:
┌─────────────┐
│   Primary   │◄──── Writes
└──────┬──────┘
       │ Streaming Replication
   ┌───┴───┐
   ▼       ▼
┌──────┐ ┌──────┐
│Rep 1 │ │Rep 2 │◄──── Reads
└──────┘ └──────┘

Failover Configuration

# config.csv
shard-1-primary,postgresql://shard1-primary:5432/gb
shard-1-replica-1,postgresql://shard1-replica1:5432/gb
shard-1-replica-2,postgresql://shard1-replica2:5432/gb
shard-1-failover-priority,replica-1,replica-2

Monitoring

Key Metrics Per Shard

Metric | Warning | Critical
Connection pool usage | > 70% | > 90%
Query latency p99 | > 100ms | > 500ms
Replication lag | > 1s | > 10s
Disk usage | > 70% | > 85%
Tenant count | > 80% capacity | > 95% capacity

Shard Health Check

-- Run on each shard
SELECT 
    current_setting('cluster_name') as shard,
    pg_is_in_recovery() as is_replica,
    pg_last_wal_receive_lsn() as wal_position,
    pg_postmaster_start_time() as uptime_since,
    (SELECT count(*) FROM pg_stat_activity) as connections,
    (SELECT count(DISTINCT tenant_id) FROM tenants) as tenant_count;

Best Practices

  1. Shard by tenant, not by table - Keep all tenant data together
  2. Avoid cross-shard transactions - Design for eventual consistency where needed
  3. Pre-allocate tenant ranges - Leave room for growth in each shard
  4. Monitor shard hotspots - Rebalance if one shard gets too busy
  5. Test failover regularly - Ensure replicas can be promoted
  6. Use connection pooling - PgBouncer or similar for each shard
  7. Cache shard routing - Don’t query tenant_shard_map on every request (see the sketch below)
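
For practice 7 in particular, a read-through cache in front of tenant_shard_map is usually enough. A minimal Rust sketch, assuming an in-process HashMap and a hypothetical database lookup (not the actual botserver types):

use std::collections::HashMap;
use std::sync::RwLock;

// Hypothetical lookup against tenant_shard_map:
// SELECT shard_id FROM tenant_shard_map WHERE tenant_id = $1
fn lookup_shard_in_db(tenant_id: i64) -> Option<i16> {
    Some((tenant_id % 7 + 1) as i16) // stand-in for a real query
}

struct ShardCache {
    map: RwLock<HashMap<i64, i16>>,
}

impl ShardCache {
    fn get(&self, tenant_id: i64) -> Option<i16> {
        // Fast path: read lock only.
        if let Some(&shard) = self.map.read().unwrap().get(&tenant_id) {
            return Some(shard);
        }
        // Slow path: hit the database once, then cache the result.
        let shard = lookup_shard_in_db(tenant_id)?;
        self.map.write().unwrap().insert(tenant_id, shard);
        Some(shard)
    }
}

fn main() {
    let cache = ShardCache { map: RwLock::new(HashMap::new()) };
    assert_eq!(cache.get(12345), cache.get(12345)); // second call served from cache
}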

Migration from Single Database

To migrate an existing single-database deployment to sharded:

  1. Add shard_id column to all tables (default to 1)
  2. Deploy shard routing code (disabled)
  3. Set up additional shard databases
  4. Enable routing for new tenants only (see the sketch after this list)
  5. Gradually migrate existing tenants during low-traffic windows
  6. Decommission original database when empty
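
Steps 2 and 4 can be a feature flag around the router, with everything defaulting to shard 1 until routing is enabled. A minimal Rust sketch (flag and threshold names are illustrative):

// Hypothetical feature-flagged router for the single-DB to sharded transition.
// While the flag is off, everything resolves to shard 1 (the original database).
fn resolve_shard(tenant_id: i64, sharding_enabled: bool, new_tenant_floor: i64) -> i16 {
    if !sharding_enabled || tenant_id < new_tenant_floor {
        1 // legacy tenants stay on the original database (shard 1)
    } else {
        // New tenants fall through to the real tenant_shard_map lookup;
        // hard-coded here for illustration only.
        2
    }
}

fn main() {
    assert_eq!(resolve_shard(42, false, 1_000_001), 1);        // routing disabled
    assert_eq!(resolve_shard(42, true, 1_000_001), 1);         // existing tenant
    assert_eq!(resolve_shard(1_500_000, true, 1_000_001), 2);  // new tenant
}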

See Regional Deployment for multi-region considerations.

Database Optimization

This document covers database schema design and optimization strategies for billion-user scale deployments.

Schema Design Principles

Use SMALLINT Enums Instead of VARCHAR

One of the most impactful optimizations is using integer enums instead of string-based status fields.

Before (inefficient):

CREATE TABLE auto_tasks (
    id UUID PRIMARY KEY,
    status VARCHAR(50) NOT NULL DEFAULT 'pending',
    priority VARCHAR(20) NOT NULL DEFAULT 'normal',
    execution_mode VARCHAR(50) NOT NULL DEFAULT 'supervised',
    CONSTRAINT check_status CHECK (status IN ('pending', 'ready', 'running', 'paused', 'waiting_approval', 'completed', 'failed', 'cancelled'))
);

After (optimized):

CREATE TABLE auto_tasks (
    id UUID PRIMARY KEY,
    status SMALLINT NOT NULL DEFAULT 0,        -- 2 bytes
    priority SMALLINT NOT NULL DEFAULT 1,      -- 2 bytes
    execution_mode SMALLINT NOT NULL DEFAULT 1 -- 2 bytes
);

Storage Comparison

Field Type | Storage | Example Values
VARCHAR(50) | 1-51 bytes | ‘waiting_approval’ = 17 bytes
TEXT | 1+ bytes | ‘completed’ = 10 bytes
SMALLINT | 2 bytes | 4 = 2 bytes (always)
INTEGER | 4 bytes | 4 = 4 bytes (always)

Savings per row with 5 enum fields:

  • VARCHAR: ~50 bytes average
  • SMALLINT: 10 bytes fixed
  • Savings: 40 bytes per row = 40GB per billion rows

Enum Value Reference

All domain values in General Bots use SMALLINT. Reference table:

Channel Types

Value | Name | Description
0 | web | Web chat interface
1 | whatsapp | WhatsApp Business
2 | telegram | Telegram Bot
3 | msteams | Microsoft Teams
4 | slack | Slack
5 | email | Email channel
6 | sms | SMS/Text messages
7 | voice | Voice/Phone
8 | instagram | Instagram DM
9 | api | Direct API

Message Role

Value | Name | Description
1 | user | User message
2 | assistant | Bot response
3 | system | System prompt
4 | tool | Tool call/result
9 | episodic | Episodic memory summary
10 | compact | Compacted conversation

Message Type

Value | Name | Description
0 | text | Plain text
1 | image | Image attachment
2 | audio | Audio file
3 | video | Video file
4 | document | Document/PDF
5 | location | GPS location
6 | contact | Contact card
7 | sticker | Sticker
8 | reaction | Message reaction

LLM Provider

Value | Name | Description
0 | openai | OpenAI API
1 | anthropic | Anthropic Claude
2 | azure_openai | Azure OpenAI
3 | azure_claude | Azure Claude
4 | google | Google AI
5 | local | Local llama.cpp
6 | ollama | Ollama
7 | groq | Groq
8 | mistral | Mistral AI
9 | cohere | Cohere

Task Status

Value | Name | Description
0 | pending | Waiting to start
1 | ready | Ready to execute
2 | running | Currently executing
3 | paused | Paused by user
4 | waiting_approval | Needs approval
5 | completed | Successfully finished
6 | failed | Failed with error
7 | cancelled | Cancelled by user
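
In application code, these numeric values are easiest to keep in sync with an integer-backed enum. A minimal Rust sketch using the Task Status values above (illustrative names, not the actual botserver definitions):

// Mirrors the Task Status values above; stored as SMALLINT in PostgreSQL.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(i16)]
enum TaskStatus {
    Pending = 0,
    Ready = 1,
    Running = 2,
    Paused = 3,
    WaitingApproval = 4,
    Completed = 5,
    Failed = 6,
    Cancelled = 7,
}

impl TaskStatus {
    // Value written to the SMALLINT column.
    fn as_i16(self) -> i16 {
        self as i16
    }
    // Statuses below 5 are non-terminal (pending through waiting_approval).
    fn is_active(self) -> bool {
        self.as_i16() < 5
    }
}

fn main() {
    let s = TaskStatus::WaitingApproval;
    println!("{:?} -> {} (active: {})", s, s.as_i16(), s.is_active());
}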

Task Priority

Value | Name | Description
0 | low | Low priority
1 | normal | Normal priority
2 | high | High priority
3 | urgent | Urgent
4 | critical | Critical

Execution Mode

Value | Name | Description
0 | manual | Manual execution only
1 | supervised | Requires approval
2 | autonomous | Fully automatic

Risk Level

Value | Name | Description
0 | none | No risk
1 | low | Low risk
2 | medium | Medium risk
3 | high | High risk
4 | critical | Critical risk

Approval Status

Value | Name | Description
0 | pending | Awaiting decision
1 | approved | Approved
2 | rejected | Rejected
3 | expired | Timed out
4 | skipped | Skipped

Intent Type

Value | Name | Description
0 | unknown | Unclassified
1 | app_create | Create application
2 | todo | Create task/reminder
3 | monitor | Set up monitoring
4 | action | Execute action
5 | schedule | Create schedule
6 | goal | Set goal
7 | tool | Create tool
8 | query | Query/search

Memory Type

Value | Name | Description
0 | short | Short-term
1 | long | Long-term
2 | episodic | Episodic
3 | semantic | Semantic
4 | procedural | Procedural

Sync Status

Value | Name | Description
0 | synced | Fully synced
1 | pending | Sync pending
2 | conflict | Conflict detected
3 | error | Sync error
4 | deleted | Marked for deletion

Indexing Strategies

Composite Indexes for Common Queries

-- Session lookup by user
CREATE INDEX idx_sessions_user ON user_sessions(user_id, created_at DESC);

-- Messages by session (most common query)
CREATE INDEX idx_messages_session ON message_history(session_id, message_index);

-- Active tasks by status and priority
CREATE INDEX idx_tasks_status ON auto_tasks(status, priority) WHERE status < 5;

-- Tenant-scoped queries
CREATE INDEX idx_sessions_tenant ON user_sessions(tenant_id, created_at DESC);

Partial Indexes for Active Records

-- Only index active bots (saves space)
CREATE INDEX idx_bots_active ON bots(tenant_id, is_active) WHERE is_active = true;

-- Only index pending approvals
CREATE INDEX idx_approvals_pending ON task_approvals(task_id, expires_at) WHERE status = 0;

-- Only index unread messages
CREATE INDEX idx_messages_unread ON message_history(user_id, created_at) WHERE is_read = false;

BRIN Indexes for Time-Series Data

-- BRIN index for time-ordered data (much smaller than B-tree)
CREATE INDEX idx_messages_created_brin ON message_history USING BRIN (created_at);
CREATE INDEX idx_analytics_date_brin ON analytics_events USING BRIN (created_at);

Table Partitioning

Partition High-Volume Tables by Time

-- Partitioned messages table
CREATE TABLE message_history (
    id UUID NOT NULL,
    session_id UUID NOT NULL,
    tenant_id BIGINT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL,
    -- other columns...
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

-- Monthly partitions
CREATE TABLE message_history_2025_01 PARTITION OF message_history
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE message_history_2025_02 PARTITION OF message_history
    FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
-- ... continue for each month

-- Default partition for future data
CREATE TABLE message_history_default PARTITION OF message_history DEFAULT;

Automatic Partition Management

-- Function to create next month's partition
CREATE OR REPLACE FUNCTION create_monthly_partition(
    table_name TEXT,
    partition_date DATE
) RETURNS VOID AS $$
DECLARE
    partition_name TEXT;
    start_date DATE;
    end_date DATE;
BEGIN
    partition_name := table_name || '_' || to_char(partition_date, 'YYYY_MM');
    start_date := date_trunc('month', partition_date);
    end_date := start_date + INTERVAL '1 month';
    
    EXECUTE format(
        'CREATE TABLE IF NOT EXISTS %I PARTITION OF %I FOR VALUES FROM (%L) TO (%L)',
        partition_name, table_name, start_date, end_date
    );
END;
$$ LANGUAGE plpgsql;

-- Create partitions for next 3 months
SELECT create_monthly_partition('message_history', NOW() + INTERVAL '1 month');
SELECT create_monthly_partition('message_history', NOW() + INTERVAL '2 months');
SELECT create_monthly_partition('message_history', NOW() + INTERVAL '3 months');

Connection Pooling

PgBouncer Configuration

; pgbouncer.ini
[databases]
gb_shard1 = host=shard1.db port=5432 dbname=gb
gb_shard2 = host=shard2.db port=5432 dbname=gb

[pgbouncer]
listen_port = 6432
listen_addr = *
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt

; Pool settings
pool_mode = transaction
max_client_conn = 10000
default_pool_size = 50
min_pool_size = 10
reserve_pool_size = 25
reserve_pool_timeout = 3

; Timeouts
server_connect_timeout = 3
server_idle_timeout = 600
server_lifetime = 3600
client_idle_timeout = 0

Application Connection Settings

# config.toml
[database]
max_connections = 100
min_connections = 10
connection_timeout_secs = 5
idle_timeout_secs = 300
max_lifetime_secs = 1800

Query Optimization

Use EXPLAIN ANALYZE

EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM message_history
WHERE session_id = 'abc-123'
ORDER BY message_index;

Avoid N+1 Queries

Bad:

-- 1 query for sessions
SELECT * FROM user_sessions WHERE user_id = 'xyz';
-- N queries for messages (one per session)
SELECT * FROM message_history WHERE session_id = ?;

Good:

-- Single query with JOIN
SELECT s.*, m.*
FROM user_sessions s
LEFT JOIN message_history m ON m.session_id = s.id
WHERE s.user_id = 'xyz'
ORDER BY s.created_at DESC, m.message_index;

Use Covering Indexes

-- Index includes all needed columns (no table lookup)
CREATE INDEX idx_sessions_covering ON user_sessions(user_id, created_at DESC)
INCLUDE (title, message_count, last_activity_at);

Vacuum and Maintenance

Aggressive Autovacuum for High-Churn Tables

ALTER TABLE message_history SET (
    autovacuum_vacuum_scale_factor = 0.01,
    autovacuum_analyze_scale_factor = 0.005,
    autovacuum_vacuum_cost_delay = 2
);

ALTER TABLE user_sessions SET (
    autovacuum_vacuum_scale_factor = 0.02,
    autovacuum_analyze_scale_factor = 0.01
);

Regular Maintenance Tasks

-- Weekly: Reindex bloated indexes
REINDEX INDEX CONCURRENTLY idx_messages_session;

-- Monthly: Update statistics
ANALYZE VERBOSE message_history;

-- Quarterly: Cluster heavily-queried tables
CLUSTER message_history USING idx_messages_session;

Monitoring Queries

Table Bloat Check

SELECT
    schemaname || '.' || tablename AS table,
    pg_size_pretty(pg_total_relation_size(schemaname || '.' || tablename)) AS total_size,
    pg_size_pretty(pg_relation_size(schemaname || '.' || tablename)) AS table_size,
    pg_size_pretty(pg_indexes_size(schemaname || '.' || tablename)) AS index_size
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(schemaname || '.' || tablename) DESC
LIMIT 20;

Slow Query Log

# postgresql.conf
log_min_duration_statement = 100  # Log queries > 100ms
log_statement = 'none'
log_lock_waits = on

Index Usage Statistics

SELECT
    schemaname || '.' || relname AS table,
    indexrelname AS index,
    idx_scan AS scans,
    idx_tup_read AS tuples_read,
    idx_tup_fetch AS tuples_fetched,
    pg_size_pretty(pg_relation_size(indexrelid)) AS size
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC
LIMIT 20;

Best Practices Summary

  1. Use SMALLINT for enums - 2 bytes vs 10-50 bytes per field
  2. Partition time-series tables - Monthly partitions for messages/analytics
  3. Create partial indexes - Only index active/relevant rows
  4. Use connection pooling - PgBouncer with transaction mode
  5. Enable aggressive autovacuum - For high-churn tables
  6. Monitor query performance - Log slow queries, check EXPLAIN plans
  7. Use covering indexes - Include frequently-accessed columns
  8. Avoid cross-shard queries - Keep tenant data together
  9. Regular maintenance - Reindex, analyze, cluster as needed
  10. Test at scale - Use production-like data volumes in staging

Chapter 12: Ecosystem & Reference

General Bots supports full white-label customization, allowing you to rebrand the entire platform.

Overview

White-labeling allows you to:

  • Replace “General Bots” branding with your own product name
  • Enable/disable specific apps in the suite
  • Set a default theme for all users
  • Customize logos, colors, and other visual elements
  • Control which APIs are available based on enabled apps

Configuration File

The white-label settings are defined in the .product file located in the root of the botserver directory.

File Location

botserver/
├── .product          # White-label configuration
├── src/
├── Cargo.toml
└── ...

Configuration Format

The .product file uses a simple key=value format:

# Product Configuration File
# Lines starting with # are comments

# Product name (replaces "General Bots" throughout the application)
name=My Custom Platform

# Active apps (comma-separated list)
apps=chat,drive,tasks,calendar

# Default theme
theme=sentient

# Optional customizations
logo=/static/my-logo.svg
favicon=/static/favicon.ico
primary_color=#3b82f6
support_email=support@mycompany.com
docs_url=https://docs.mycompany.com
copyright=© {year} {name}. All rights reserved.
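
Parsing this format takes only a few lines. A minimal Rust sketch of such a loader (hypothetical helper, not the actual botserver parser), honoring the PRODUCT_CONFIG_PATH override described later on this page:

use std::collections::HashMap;
use std::env;
use std::fs;

// Hypothetical loader for the .product key=value format.
// Lines starting with '#' are comments; values may contain '='.
fn load_product_config() -> HashMap<String, String> {
    let path = env::var("PRODUCT_CONFIG_PATH").unwrap_or_else(|_| ".product".into());
    let text = fs::read_to_string(&path).unwrap_or_default();
    text.lines()
        .map(str::trim)
        .filter(|l| !l.is_empty() && !l.starts_with('#'))
        .filter_map(|l| l.split_once('='))
        .map(|(k, v)| (k.trim().to_string(), v.trim().to_string()))
        .collect()
}

fn main() {
    let cfg = load_product_config();
    let name = cfg.get("name").map(String::as_str).unwrap_or("General Bots");
    println!("product name: {name}");
}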

Configuration Options

name

Type: String
Default: General Bots

The product name that replaces “General Bots” throughout the application, including:

  • Page titles
  • Welcome messages
  • Footer text
  • Email templates
  • API responses
name=Acme Bot Platform

apps

Type: Comma-separated list
Default: All apps enabled

Specifies which apps are active in the suite. Only listed apps will:

  • Appear in the navigation menu
  • Have their APIs enabled
  • Be accessible to users

Available apps:

App | Description
chat | Main chat interface
mail | Email client
calendar | Calendar and scheduling
drive | File storage
tasks | Task management
docs | Document editor
paper | Notes and quick documents
sheet | Spreadsheet editor
slides | Presentation editor
meet | Video conferencing
research | Research assistant
sources | Data sources management
analytics | Analytics dashboard
admin | Administration panel
monitoring | System monitoring
settings | User settings

Example - Minimal setup:

apps=chat,drive,tasks

Example - Full productivity suite:

apps=chat,mail,calendar,drive,tasks,docs,sheet,slides,meet

theme

Type: String
Default: sentient

Sets the default theme for new users and the login page.

Available themes:

Theme | Description
sentient | Default neon green theme
dark | Dark mode with blue accents
light | Light mode with blue accents
blue | Blue theme
purple | Purple theme
green | Green theme
orange | Orange theme
cyberpunk | Cyberpunk aesthetic
retrowave | 80s retrowave style
vapordream | Vaporwave aesthetic
y2kglow | Y2K neon style
arcadeflash | Arcade game style
discofever | Disco theme
grungeera | 90s grunge style
jazzage | Jazz age gold
mellowgold | Mellow gold tones
midcenturymod | Mid-century modern
polaroidmemories | Vintage polaroid
saturdaycartoons | Cartoon style
seasidepostcard | Beach/ocean theme
typewriter | Typewriter/monospace
3dbevel | 3D beveled style
xeroxui | Classic Xerox UI
xtreegold | XTree Gold DOS style

theme=dark

logo

Type: URL/Path (optional)
Default: General Bots logo

URL or path to your custom logo. Supports SVG, PNG, or other image formats.

logo=/static/branding/my-logo.svg
logo=https://cdn.mycompany.com/logo.png

favicon

Type: URL/Path (optional)
Default: General Bots favicon

URL or path to your custom favicon.

favicon=/static/branding/favicon.ico

primary_color

Type: Hex color code (optional)
Default: Theme-dependent

Override the primary accent color across the UI.

primary_color=#3b82f6
primary_color=#e11d48

support_email

Type: Email address (optional)
Default: None

Support email displayed in help sections and error messages.

support_email=support@mycompany.com

docs_url

Type: URL (optional)
Default: https://docs.pragmatismo.com.br

URL to your documentation site.

docs_url=https://docs.mycompany.com

copyright

Type: String (optional)
Default: © {year} {name}. All rights reserved.

Copyright text for the footer. Supports placeholders:

  • {year} - Current year
  • {name} - Product name
copyright=© {year} {name} - A product of My Company Inc.
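
Placeholder expansion is plain string substitution. A minimal Rust sketch (illustrative, not the actual implementation):

// Expand the {year} and {name} placeholders in the copyright template.
fn render_copyright(template: &str, name: &str, year: i32) -> String {
    template
        .replace("{year}", &year.to_string())
        .replace("{name}", name)
}

fn main() {
    let s = render_copyright("© {year} {name}. All rights reserved.", "My Custom Platform", 2025);
    println!("{s}"); // © 2025 My Custom Platform. All rights reserved.
}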

API Integration

Product Configuration Endpoint

The current product configuration is available via API:

GET /api/product

Response:

{
  "name": "My Custom Platform",
  "apps": ["chat", "drive", "tasks", "calendar"],
  "theme": "dark",
  "logo": "/static/my-logo.svg",
  "favicon": null,
  "primary_color": "#3b82f6",
  "docs_url": "https://docs.mycompany.com",
  "copyright": "© 2025 My Custom Platform. All rights reserved."
}

App-Gated APIs

When an app is disabled in the .product file, its corresponding APIs return 403 Forbidden:

{
  "error": "app_disabled",
  "message": "The 'calendar' app is not enabled for this installation"
}
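
Server-side, the gate can be a simple set-membership check before an app's routes are served. A minimal Rust sketch of that check (assumed shape, not the actual botserver middleware):

use std::collections::HashSet;

// Returns the JSON error body shown above when the app is disabled.
fn check_app_enabled(enabled_apps: &HashSet<&str>, app: &str) -> Result<(), String> {
    if enabled_apps.contains(app) {
        Ok(())
    } else {
        Err(format!(
            "{{\"error\":\"app_disabled\",\"message\":\"The '{app}' app is not enabled for this installation\"}}"
        ))
    }
}

fn main() {
    let enabled: HashSet<&str> = ["chat", "drive", "tasks"].into();
    assert!(check_app_enabled(&enabled, "chat").is_ok());
    assert!(check_app_enabled(&enabled, "calendar").is_err()); // served as 403 with JSON body
}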

UI Integration

JavaScript Access

The product configuration is available in the frontend:

// Fetch product config
const response = await fetch('/api/product');
const product = await response.json();

console.log(product.name);  // "My Custom Platform"
console.log(product.apps);  // ["chat", "drive", "tasks"]

Conditional App Rendering

The navigation menu automatically hides disabled apps. If you need to check manually:

function isAppEnabled(appName) {
  return window.productConfig?.apps?.includes(appName) ?? false;
}

if (isAppEnabled('calendar')) {
  // Show calendar features
}

Examples

Example 1: Simple Chat Bot

A minimal setup for a chat-only bot:

name=Support Bot
apps=chat
theme=dark
support_email=help@example.com

Example 2: Internal Tools Platform

An internal company platform with productivity tools:

name=Acme Internal Tools
apps=chat,drive,tasks,docs,calendar,meet
theme=light
logo=/static/acme-logo.svg
primary_color=#1e40af
docs_url=https://wiki.acme.internal
copyright=© {year} Acme Corporation - Internal Use Only

Example 3: Customer Service Platform

A customer service focused deployment:

name=ServiceDesk Pro
apps=chat,tasks,analytics,admin,monitoring
theme=blue
support_email=admin@servicedesk.com
docs_url=https://help.servicedesk.com

Example 4: Full Suite

Enable all features (default behavior):

name=Enterprise Suite
apps=chat,mail,calendar,drive,tasks,docs,paper,sheet,slides,meet,research,sources,analytics,admin,monitoring,settings
theme=sentient

Reloading Configuration

The product configuration is loaded at server startup. To apply changes:

  1. Edit the .product file
  2. Restart the server
# Restart the server to apply changes
systemctl restart botserver
# or
docker-compose restart botserver

Environment Variable Override

You can override the .product file location using an environment variable:

export PRODUCT_CONFIG_PATH=/etc/myapp/.product

Best Practices

  1. Version Control: Include the .product file in your deployment configuration (but not in the main repo if it contains sensitive branding)

  2. Minimal Apps: Only enable the apps your users need to reduce complexity and improve performance

  3. Consistent Branding: Ensure your logo, colors, and theme work well together

  4. Documentation: Update your docs_url to point to customized documentation for your users

  5. Testing: Test the UI with your specific app combination to ensure navigation works correctly

Troubleshooting

Apps Not Hiding

If disabled apps still appear:

  1. Clear browser cache
  2. Verify the .product file syntax
  3. Check server logs for configuration errors
  4. Restart the server

API Returns 403

If APIs return “app_disabled” errors:

  1. Check the apps list in .product
  2. Ensure the app name is spelled correctly (lowercase)
  3. Restart the server after changes

Branding Not Updating

If the product name doesn’t change:

  1. Hard refresh the browser (Ctrl+Shift+R)
  2. Clear application cache
  3. Verify the name field in .product
  4. Check for syntax errors (missing = sign)

Migration Overview

This page has been consolidated with Migration Concepts.

Please see the main chapter page for:

  • Migration paradigms and philosophy
  • Architecture translation concepts
  • Assessment and planning frameworks
  • Success metrics

For specific migration guides:

Platform Comparison Matrix

This comprehensive comparison helps organizations evaluate General Bots against major productivity, automation, and AI platforms.

Platform Comparison Summary

Executive Summary

General Bots uniquely combines self-hosted deployment, open source licensing, native AI integration, and powerful BASIC scripting—capabilities that typically require multiple expensive subscriptions across competing platforms.

Complete Platform Comparison

Deployment & Licensing

Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier
Self-hosted | ✅ Full | ❌ Cloud only | ❌ Cloud only | ✅ Available | ❌ Cloud only | ❌ Cloud only | ❌ Cloud only | ❌ Cloud only
Open source | ✅ AGPL | ❌ Proprietary | ❌ Proprietary | ✅ Fair-code | ❌ Proprietary | ❌ Proprietary | ❌ Proprietary | ❌ Proprietary
Data sovereignty | ✅ Your servers | ❌ Microsoft servers | ❌ Google servers | ✅ Self-host option | ❌ AWS/GCP | ❌ Their servers | ❌ Anthropic servers | ❌ Their servers
Per-user licensing | ✅ None | ❌ $12-57/user/mo | ❌ $6-18/user/mo | ⚠️ Cloud version | ❌ $10-15/user/mo | ❌ $20/mo | ❌ $20/mo | ❌ Per-task pricing
Source code access | ✅ Full | ❌ None | ❌ None | ✅ Available | ❌ None | ❌ None | ❌ None | ❌ None
Modify & extend | ✅ Unlimited | ❌ API only | ❌ API only | ✅ Possible | ❌ API only | ❌ None | ❌ None | ❌ None

Productivity Suite

Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier
Email | ✅ Stalwart | ✅ Exchange | ✅ Gmail | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None
Calendar | ✅ CalDAV | ✅ Outlook | ✅ Calendar | ❌ None | ❌ Basic | ❌ None | ❌ None | ❌ None
File storage | ✅ SeaweedFS | ✅ OneDrive | ✅ Drive | ❌ None | ⚠️ Limited | ❌ None | ❌ None | ❌ None
Tasks/Projects | ✅ Full | ✅ Planner | ✅ Tasks | ❌ None | ✅ Strong | ❌ None | ❌ None | ❌ None
Video meetings | ✅ LiveKit | ✅ Teams | ✅ Meet | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None
Team chat | ✅ Multi-channel | ✅ Teams | ✅ Chat | ❌ None | ⚠️ Comments | ❌ None | ❌ None | ❌ None
Document editing | ✅ Available | ✅ Office apps | ✅ Docs/Sheets | ❌ None | ✅ Pages | ❌ None | ❌ None | ❌ None
Identity/SSO | ✅ Zitadel | ✅ Entra ID | ✅ Identity | ❌ None | ⚠️ Basic | ❌ None | ❌ None | ❌ None

AI & Intelligence

Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier
LLM integration | ✅ Any provider | ⚠️ Copilot ($30/user) | ⚠️ Gemini (extra) | ⚠️ Via nodes | ⚠️ Limited | ✅ Built-in | ✅ Built-in | ⚠️ Via connectors
Custom prompts | ✅ Full control | ⚠️ Limited | ⚠️ Limited | ✅ Available | ⚠️ Basic | ⚠️ Limited | ✅ Available | ⚠️ Limited
RAG/Knowledge base | ✅ Built-in | ⚠️ Extra cost | ⚠️ Extra cost | ⚠️ Custom build | ⚠️ Page search | ⚠️ Pro only | ⚠️ Projects | ❌ None
Image generation | ✅ Local SD | ⚠️ Designer | ⚠️ Limited | ⚠️ Via API | ❌ None | ⚠️ Limited | ✅ Available | ⚠️ Via API
Video generation | ✅ Zeroscope | ❌ None | ❌ None | ⚠️ Via API | ❌ None | ❌ None | ❌ None | ⚠️ Via API
Speech-to-text | ✅ Whisper | ⚠️ Extra | ⚠️ Extra | ⚠️ Via API | ❌ None | ❌ None | ❌ None | ⚠️ Via API
Vision/OCR | ✅ BLIP2 | ⚠️ Extra | ⚠️ Extra | ⚠️ Via API | ❌ None | ❌ None | ✅ Available | ⚠️ Via API
Local/offline AI | ✅ Full support | ❌ None | ❌ None | ⚠️ Possible | ❌ None | ❌ None | ❌ None | ❌ None
AI cost | ✅ Bring your key | ❌ $30/user/mo | ❌ $20/user/mo | ⚠️ API costs | ❌ $10/user/mo | ❌ $20/mo | ❌ $20/mo | ⚠️ Per operation

Automation & Integration

Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier
Workflow automation | ✅ BASIC scripts | ⚠️ Power Automate ($) | ⚠️ AppSheet ($) | ✅ Visual builder | ⚠️ Basic | ❌ None | ❌ None | ✅ Visual builder
Scheduled tasks | ✅ Cron + natural | ⚠️ Extra license | ⚠️ Limited | ✅ Available | ❌ None | ❌ None | ❌ None | ✅ Available
Webhooks | ✅ Instant creation | ⚠️ Complex setup | ⚠️ Limited | ✅ Available | ⚠️ Limited | ❌ None | ❌ None | ✅ Available
Custom APIs | ✅ One line | ❌ Azure required | ❌ GCP required | ✅ Possible | ❌ None | ❌ None | ✅ API available | ❌ None
Database access | ✅ Direct SQL | ⚠️ Dataverse ($) | ⚠️ BigQuery ($) | ✅ Multiple DBs | ⚠️ Notion DBs | ❌ None | ❌ None | ⚠️ Limited
REST API calls | ✅ GET/POST/etc | ⚠️ Premium connectors | ⚠️ Limited | ✅ HTTP nodes | ❌ None | ❌ None | ❌ None | ✅ HTTP module
GraphQL | ✅ Native | ❌ None | ❌ None | ✅ Available | ❌ None | ❌ None | ❌ None | ⚠️ Limited
SOAP/Legacy | ✅ Supported | ⚠️ Limited | ❌ None | ✅ Available | ❌ None | ❌ None | ❌ None | ⚠️ Limited
Automation pricing | ✅ Unlimited | ❌ Per-flow fees | ❌ Per-run fees | ⚠️ Execution limits | ❌ None | ❌ None | ❌ None | ❌ Per-task fees

Multi-Channel Communication

Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier
Web chat | ✅ Built-in | ⚠️ Bot Framework | ❌ None | ❌ None | ❌ None | ✅ Web only | ✅ Web only | ❌ None
WhatsApp | ✅ Native | ⚠️ Extra setup | ❌ None | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector
Teams | ✅ Native | ✅ Native | ❌ None | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector
Slack | ✅ Native | ⚠️ Connector | ⚠️ Limited | ⚠️ Via nodes | ⚠️ Integration | ❌ None | ⚠️ Integration | ⚠️ Connector
Telegram | ✅ Native | ❌ None | ❌ None | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector
SMS | ✅ Native | ⚠️ Extra | ❌ None | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector
Email bot | ✅ Native | ⚠️ Complex | ⚠️ Limited | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector
Voice | ✅ LiveKit | ⚠️ Extra | ⚠️ Extra | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None

Developer Experience

Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier
Scripting language | ✅ BASIC (simple) | ⚠️ Power Fx | ⚠️ Apps Script | ✅ JavaScript | ❌ None | ❌ None | ❌ None | ❌ Visual only
No-code option | ✅ Conversational | ⚠️ Power Apps | ⚠️ AppSheet | ✅ Visual builder | ✅ Pages | ✅ Chat | ✅ Chat | ✅ Visual builder
Custom keywords | ✅ Rust extensible | ❌ None | ❌ None | ✅ Custom nodes | ❌ None | ❌ None | ❌ None | ❌ None
API-first | ✅ Full REST | ✅ Graph API | ✅ Workspace API | ✅ REST API | ⚠️ Limited | ⚠️ Limited | ✅ Full API | ⚠️ Limited
Debugging | ✅ Console + logs | ⚠️ Complex | ⚠️ Complex | ✅ Execution logs | ❌ None | ❌ None | ❌ None | ⚠️ Limited
Version control | ✅ File-based | ⚠️ Limited | ⚠️ Limited | ✅ Git support | ⚠️ Page history | ❌ None | ❌ None | ⚠️ Limited

Security & Compliance

Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier
Data residency control | ✅ Your choice | ⚠️ Limited regions | ⚠️ Limited regions | ✅ Self-host | ❌ US/EU only | ❌ No control | ❌ No control | ❌ No control
GDPR compliance | ✅ Self-managed | ✅ Available | ✅ Available | ✅ Self-host | ⚠️ Depends | ⚠️ Limited | ⚠️ Limited | ⚠️ Limited
HIPAA capable | ✅ Self-managed | ⚠️ Extra cost | ⚠️ Extra cost | ✅ Self-host | ❌ No | ❌ No | ❌ No | ❌ No
Audit logs | ✅ Full control | ✅ Available | ✅ Available | ✅ Available | ⚠️ Limited | ❌ Limited | ❌ Limited | ⚠️ Limited
Encryption at rest | ✅ Configurable | ✅ Standard | ✅ Standard | ✅ Configurable | ✅ Standard | ✅ Standard | ✅ Standard | ✅ Standard
SSO/OIDC | ✅ Zitadel | ✅ Entra | ✅ Identity | ⚠️ Enterprise | ⚠️ Business | ❌ Basic | ⚠️ Enterprise | ⚠️ Enterprise
MFA | ✅ Built-in | ✅ Built-in | ✅ Built-in | ⚠️ Configure | ⚠️ Basic | ⚠️ Basic | ⚠️ Basic | ⚠️ Basic

Cost Analysis (100 Users, Annual)

Platform | Base License | AI Features | Automation | Storage | Total Annual
General Bots | $0 | $0 (bring key) | $0 | Included | $3,000-12,000*
Microsoft 365 E3 + Copilot | $43,200 | $36,000 | $12,000+ | Included | $91,200+
Google Workspace Business + Gemini | $21,600 | $24,000 | $6,000+ | Included | $51,600+
n8n Cloud + separate tools | $0-6,000 | API costs | Included | None | $20,000+
Notion Team + AI | $12,000 | $12,000 | None | Limited | $24,000
Multiple point solutions | Varies | Varies | Varies | Varies | $50,000+

*General Bots cost = infrastructure + optional LLM API usage

Feature Availability by Use Case

Customer Service Bot

RequirementGeneral BotsMicrosoftGooglen8nNotionAI Assistants
Knowledge base⚠️ Extra⚠️ Extra⚠️ Build⚠️ Limited⚠️ Limited
WhatsApp channel⚠️ Complex⚠️ Build
Web widget⚠️ Complex
Ticket creation⚠️ Extra⚠️ Extra⚠️ Manual
Human handoff⚠️ Extra⚠️ Build
Analytics⚠️ Extra⚠️ Extra⚠️ Build

Internal Automation

RequirementGeneral BotsMicrosoftGooglen8nNotionAI Assistants
Scheduled reports⚠️ Extra⚠️ Extra
Database sync⚠️ Extra⚠️ Extra
API orchestration⚠️ Premium⚠️ Limited
Document processing⚠️ Extra⚠️ Extra⚠️ Build⚠️ Limited
Email automation
Custom logic⚠️ Limited⚠️ Limited

Team Collaboration

RequirementGeneral BotsMicrosoftGooglen8nNotionAI Assistants
Project management
Team chat⚠️
File sharing⚠️
Video meetings
AI assistant⚠️ Extra⚠️ Extra⚠️ Build⚠️ Extra
Self-hosted

Migration Complexity

From Platform | To General Bots | Effort | Data Portability | Tool Support
Microsoft 365 | Full migration | Medium | Good (APIs) | Scripts provided
Google Workspace | Full migration | Medium | Good (APIs) | Scripts provided
n8n | Automation only | Low | Easy (JSON) | Direct import
Notion | Content migration | Low | Good (Export) | Scripts provided
Zapier/Make | Workflow rebuild | Medium | Manual | Templates available
Custom solution | Varies | Varies | Depends | API compatible

Decision Matrix

Choose General Bots when you need:

  • ✅ Complete data sovereignty and self-hosting
  • ✅ No per-user licensing costs at scale
  • ✅ Native AI without additional subscriptions
  • ✅ Full productivity suite in one platform
  • ✅ Multi-channel chatbot deployment
  • ✅ Powerful automation without limits
  • ✅ Open source transparency and extensibility
  • ✅ Custom integrations and modifications

Consider alternatives when:

  • You require specific certifications only available from large vendors
  • Your organization mandates a particular cloud provider
  • You have no infrastructure or IT capacity for self-hosting
  • You need only a single narrow feature (e.g., just document editing)

Summary

General Bots provides the most comprehensive feature set for organizations seeking:

Advantage | Impact
75-95% cost reduction | Eliminate per-user fees, AI add-ons, automation limits
Complete data control | Self-hosted, your infrastructure, your rules
Unified platform | Email, files, chat, automation, AI in one system
No artificial limits | Unlimited users, workflows, API calls, storage
Full transparency | Open source code, audit everything
Future-proof | No vendor lock-in, standard formats, portable data

The combination of enterprise productivity features, native AI, powerful automation, and self-hosted deployment makes General Bots unique in the market—delivering capabilities that would otherwise require subscriptions to multiple expensive platforms.

Knowledge Base Migration

Converting documents from cloud storage to General Bots knowledge bases.

Overview

Knowledge base migration involves moving documents from various sources into .gbkb packages where they become searchable through General Bots.

Source Systems

Common document sources include SharePoint document libraries, Google Drive folders, OneDrive and Dropbox storage, Confluence spaces, and traditional file servers.

Document Types

General Bots supports a variety of document formats for knowledge base ingestion. These include PDF files, Office documents such as Word, Excel, and PowerPoint, plain text files, Markdown files, and HTML pages.

Migration Process

1. Export

Begin by downloading documents from the source system. Preserve the folder structure to maintain organizational context, and retain metadata where possible for future reference.

2. Organize

Group related documents into logical collections. Create meaningful organizational structures and remove any duplicate documents that would clutter the knowledge base.

3. Import

Place the organized documents in .gbkb folders within your bot package. General Bots indexes these documents automatically, making them searchable for RAG-powered responses.

Considerations

Volume

Large document sets require additional time to index. Consider staging the migration in batches rather than importing everything at once. Monitor disk space throughout the process to ensure adequate storage remains available.

Quality

Before migration, clean up outdated content that no longer reflects current information. Remove duplicate documents to avoid confusing the AI with conflicting information. Fix any broken or corrupted files that would fail during indexing.

Structure

Maintain logical organization within your knowledge base. Use meaningful folder names that describe the content within. Group documents by topic or department to improve retrieval accuracy.

Format Conversion

Some formats require conversion before import. Web pages should be converted to PDF or Markdown for reliable indexing. Database content should be exported to CSV format. Proprietary formats from specialized applications need conversion to standard formats that the indexing system can process.

Testing

After migration, verify the knowledge base functions correctly. Test that search works across the imported documents. Check that users can access all migrated content. Run sample queries to ensure the AI provides accurate responses based on the imported knowledge.

Next Steps

Review the Overview for general migration concepts. See Validation for detailed testing procedures to verify your migration succeeded.

Google Workspace Migration Guide

Migrating from Google Workspace to self-hosted infrastructure.

Overview

Google Workspace (formerly G Suite) provides integrated cloud services that need to be migrated to various self-hosted alternatives.

Service Mapping

Google Workspace Service | Self-Hosted Alternative | Migration Tool
Gmail | Mail server (Stalwart, etc.) | imapsync, Got Your Back
Google Drive | MinIO or Nextcloud | rclone
Google Docs/Sheets/Slides | LibreOffice, OnlyOffice | Export to Office formats
Google Calendar | CalDAV server | ical export/import
Google Meet | Jitsi, LiveKit | No migration (ephemeral)
Google Chat | General Bots | API export

Migration Steps

1. Email Migration (Gmail)

Before beginning email migration, ensure IMAP is enabled in Google Admin, you have app-specific passwords or OAuth configured, and your target mail server is ready to receive data.

The migration process uses imapsync or Got Your Back (GYB) to transfer mailboxes. Migrate labels as folders since Gmail’s labeling system differs from traditional folder structures. Preserve read and unread status to maintain inbox organization.

Consider that Gmail labels don’t map perfectly to folders, which may require some reorganization. Some users may have 15 or more years of email history, making this a time-intensive process. Attachments can consume significant storage space on the target system.

2. File Migration (Google Drive)

Prerequisites include Google Drive API access, service account or OAuth credentials, and proper storage capacity planning on your target system.

Use rclone with the Google Drive backend for the migration process. Export Google Docs to portable formats since they exist as pointers rather than actual files. Maintain the folder structure during transfer to preserve organizational context.

Keep in mind that Google Docs need format conversion to work offline. Shared drives require separate handling from personal drives. Comments and suggestions on documents are typically lost in the conversion process.

3. User Migration (Google Directory)

You’ll need Google Admin SDK access and your target identity provider configured before starting.

Export users via the Admin SDK or GAM tool. Transform the exported data to your target format such as LDIF or JSON. Import the transformed data to your new identity management system.

Note that passwords cannot be exported from Google, so all users will need to set new passwords. Two-factor authentication settings need reconfiguration on the new system. Groups and organizational units need mapping to equivalent structures.

Google-Specific Challenges

Format Conversion

Google’s proprietary formats require conversion to standard formats. Google Docs should be converted to .docx or .odt files. Google Sheets become .xlsx or .ods files. Google Slides convert to .pptx or .odp format. Google Drawings export as .svg or .png images.

API Quotas

Google enforces strict quotas on API usage. The Drive API allows 1,000 queries per 100 seconds. The Gmail API permits 250 quota units per user per second. The Admin SDK has various limits depending on which specific API you’re accessing. Plan your migration to work within these constraints.
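
When scripting your own export against these quotas, client-side pacing is usually enough: cap the request rate below the documented limit and sleep between calls. A minimal Rust sketch (the interval just re-expresses the Drive quota of 1,000 queries per 100 seconds; the fetch call is a placeholder):

use std::thread::sleep;
use std::time::Duration;

fn main() {
    // Drive API quota: 1,000 queries / 100 s -> stay under 10 req/s.
    let min_interval = Duration::from_millis(110); // ~9 req/s leaves margin

    for page in 0..5 {
        // A real exporter would call the Drive API here (placeholder).
        println!("fetched page {page}");
        sleep(min_interval); // pace requests to respect the quota
    }
}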

Data Takeout

Google Takeout provides a user-initiated bulk export option that includes most Google services. However, the resulting ZIP files can be enormous, sometimes exceeding 100GB. This approach is not suitable for organization-wide migration but can help individual users verify their data transferred correctly.

Tools and Utilities

Google Admin Tools

GAM (Google Apps Manager) provides a command-line interface for managing Google Workspace. GAMADV-XTD is an enhanced version with additional capabilities. The Google Admin console offers manual export options for smaller migrations.

Got Your Back (GYB)

GYB is a Python-based Gmail backup tool that supports full mailbox export and can restore to different accounts, making it useful for migration scenarios.

rclone Configuration

rclone supports team drives, handles Google Photos separately from Drive, and can preserve modification times during transfer.

Common Issues

Large Attachments

Gmail allows attachments up to 25MB, but some mail servers have lower limits. You may need to store large attachments separately or adjust your target server’s configuration.

Shared Resources

Shared drives need owner reassignment before migration. Calendar sharing must be recreated on the new system. Document collaboration links will break and need updating.

Google Photos

Google Photos is not part of standard Google Drive storage and needs a separate migration approach. Consider whether you want original quality or compressed versions.

Migration Strategy

Phased Approach

Start with a pilot group to identify issues before the broader migration. Migrate inactive users first to reduce impact if problems occur. Schedule department by department to manage support load. Keep Google active during the transition period for rollback capability.

Hybrid Period

MX records can split email delivery between old and new systems during transition. Users can access both systems simultaneously. Gradual cutover reduces risk compared to a single migration event.

Data Validation

After migration, compare file counts between source and destination. Verify email folders transferred correctly. Check that user access permissions work as expected.

Post-Migration

User Training

Document key differences for users. Explain that real-time collaboration like Google Docs may work differently. Walk through the changed UI and UX in alternative applications. Demonstrate new sharing workflows.

Feature Gaps

Some features may be lost in migration. Smart Compose in Gmail won’t transfer to other mail clients. Google Assistant integration is Google-specific. Automatic photo organization depends on Google’s ML systems. Version history in Docs may not fully transfer.

Maintaining Archive Access

For historical data access, you might keep a reduced Google license for archive purposes, export everything to static storage for reference, or convert all documents to standard formats for long-term preservation.

Cost Factors

Google Workspace Pricing

Business Starter costs $6 per user per month. Business Standard costs $12 per user per month. Business Plus costs $18 per user per month. You’ll need to maintain these subscriptions during the migration period.

Data Export Costs

There are no direct egress fees from Google, but API quotas may extend your timeline. Consider bandwidth costs on your receiving infrastructure.

Timeline Estimates

Migration duration depends on several factors including number of users, data volume per user, available bandwidth, and conversion requirements.

Typical timelines range from 2-3 weeks for small organizations under 50 users, 1-3 months for medium organizations between 50-500 users, and 3-6 months for large organizations with over 500 users.

Best Practices

Inventory your existing environment first by documenting what you have before starting. Test thoroughly by piloting with your IT team before broader rollout. Communicate often to keep users informed throughout the process. Plan for rollback by having contingency plans if issues arise. Archive everything by keeping backups of original data in case you need to reference it later.

Next Steps

Review Common Concepts for general migration principles. See Validation for testing procedures to verify your migration succeeded.

Microsoft 365 Migration Guide

Migrating from Microsoft 365 to self-hosted infrastructure.

Overview

Microsoft 365 (formerly Office 365) includes multiple services that need to be migrated to different self-hosted components.

Service Mapping

Microsoft 365 Service | Self-Hosted Alternative | Migration Tool
Exchange Online | Mail server (Stalwart, etc.) | imapsync
OneDrive | MinIO or Nextcloud | rclone
SharePoint | MinIO + Wiki/CMS | rclone + export tools
Teams | Mattermost, General Bots, etc. | Export/Import APIs
Azure AD | Keycloak, Authentik, Zitadel | LDAP export

Migration Steps

1. Email Migration (Exchange Online)

Before beginning the email migration, ensure IMAP access is enabled in Exchange Online, your target mail server is configured, and you have user credentials or app passwords available.

The migration process uses imapsync for mailbox migration. Migrate in batches to avoid throttling from Microsoft’s servers, and preserve folder structure and flags during transfer.

Keep in mind that Exchange uses proprietary features such as categories that may not transfer cleanly. Calendar and contacts require separate migration using CalDAV and CardDAV protocols. Shared mailboxes require special handling and may need to be migrated individually.

2. File Migration (OneDrive/SharePoint)

Prerequisites include having the OneDrive sync client or API access configured, sufficient storage on the target system, and adequate network bandwidth for the transfer.

Use rclone with the OneDrive backend for the migration process. Maintain folder structure during transfer and preserve timestamps where possible.

Be aware that SharePoint metadata won’t transfer automatically and may need manual recreation. Version history is typically lost during migration. Permissions need to be recreated on the target system.

3. User Migration (Azure AD)

Prepare for user migration by setting up Azure AD Connect or API access, and ensure your target identity provider is ready to receive users.

Export users via PowerShell or Graph API, transform the data to the target format such as LDIF or JSON, then import to your new identity provider.

Important considerations include that passwords cannot be exported from Azure AD, so users will need to reset their passwords. MFA settings require reconfiguration on the new system. Group memberships need mapping to equivalent structures in the target system.

Common Challenges

API Throttling

Microsoft throttles API calls to protect their infrastructure. Plan for a slow, steady migration rather than attempting bulk transfers. Use batch operations where possible and consider running migrations during off-peak hours.

Data Volume

Large organizations may have accumulated terabytes of OneDrive and SharePoint data, years of email history, and thousands of users. Factor this into your timeline and resource planning.

Feature Parity

Some M365 features have no direct equivalent in self-hosted solutions. Power Automate workflows will need to be recreated using different automation tools. SharePoint lists and forms require alternative solutions. Teams channel history may be difficult to preserve in its original format.

Tools and Utilities

PowerShell for Export

The Azure AD PowerShell module handles user export operations. Exchange Online PowerShell provides mailbox information. SharePoint Online PowerShell helps with site inventory and metadata export.

Graph API

The Graph API provides programmatic access to most M365 services and is useful for custom migration scripts. Using it requires app registration and appropriate permissions in your Azure tenant.

Third-Party Tools

Commercial options include BitTitan MigrationWiz and Sharegate, which provide guided migration experiences. Various open-source scripts are available on GitHub for more customized approaches.

Post-Migration

DNS Changes

Update MX records to point to your new email server. Update autodiscover records for email client configuration. Consider keeping a hybrid setup temporarily to catch any missed emails during the transition.

User Communication

Provide new login credentials to all users. Document any changed procedures and differences from the M365 experience. Offer training sessions on the new tools to ensure smooth adoption.

Validation

Verify email delivery works correctly in both directions. Test file access to ensure permissions transferred properly. Confirm authentication works for all migrated users.

Cost Considerations

Subscription Overlap

You may need to maintain M365 subscriptions during the migration period. Consider read-only licenses for archive access if you need to retain access to historical data.

Data Transfer Costs

Factor in egress charges from Microsoft when transferring large amounts of data. Account for bandwidth costs if transferring over the internet rather than dedicated connections.

Timeline Estimates

Small organizations with fewer than 50 users typically complete migration in 1-2 weeks. Medium organizations with 50-500 users usually require 1-2 months. Large organizations with more than 500 users should plan for 2-6 months.

Factors affecting timeline include data volume, network speed, complexity of the existing setup, and user training needs.

Next Steps

Review the Common Concepts guide for general migration principles. See Validation for detailed testing procedures to verify your migration succeeded.

n8n Migration Guide

Migrating workflows and automations from n8n to General Bots.

Overview

n8n is a workflow automation platform with a visual node-based editor. General Bots provides equivalent automation capabilities through BASIC scripting, offering more flexibility and integrated features without execution limits.

Why Migrate

Aspect | n8n | General Bots
Automation | Visual workflows | BASIC scripts (more powerful)
Pricing | Per-execution limits | Unlimited executions
AI Integration | Via API nodes | Native LLM keywords
Chat/Bot | Not included | Full multi-channel
Productivity Suite | Not included | Email, calendar, files, tasks
Knowledge Base | Not included | Built-in RAG
Self-hosting | Available | Available

Workflow Mapping

Triggers

n8n Trigger | General Bots Equivalent
Schedule Trigger | SET SCHEDULE
Webhook | WEBHOOK
Email Trigger (IMAP) | ON "email:received"
Database Trigger | ON "table:tablename:insert"
Manual Trigger | Direct script execution
Cron | SET SCHEDULE "cron expression"

Common Nodes

n8n Node | General Bots Equivalent
HTTP Request | GET, POST, PUT, DELETE
Set | Variable assignment
IF | IF/THEN/ELSE/END IF
Switch | SWITCH/CASE/END SWITCH
Code (JavaScript) | BASIC script
Function | BASIC subroutines
Merge | Array operations
Split In Batches | FOR EACH
Wait | WAIT
Send Email | SEND MAIL
Slack | POST to Slack webhook
Discord | POST to Discord webhook
Google Sheets | GET/POST to Sheets API
Airtable | GET/POST to Airtable API
MySQL/PostgreSQL | FIND, INSERT, UPDATE, DELETE
MongoDB | GET/POST to MongoDB API

Migration Examples

Scheduled Data Sync

n8n workflow:

Schedule Trigger → HTTP Request → IF → Google Sheets

General Bots equivalent:

SET SCHEDULE "every hour"

data = GET "https://api.example.com/data"

IF data.status = "active" THEN
    FOR EACH item IN data.items
        INSERT "synced_data", #{
            id: item.id,
            name: item.name,
            value: item.value,
            synced_at: NOW()
        }
    NEXT item
END IF

TALK "Synced " + LEN(data.items) + " items"

Webhook Processing

n8n workflow:

Webhook → Set → IF → Send Email + Slack

General Bots equivalent:

WEBHOOK "order-received"

order = body
customer_name = order.customer.name
order_total = order.total

IF order_total > 1000 THEN
    SEND MAIL TO "sales@company.com" SUBJECT "Large Order" BODY "Order from " + customer_name + ": $" + order_total
    
    POST "https://hooks.slack.com/services/xxx", #{
        text: "Large order received: $" + order_total
    }
END IF

Multi-Step API Orchestration

n8n workflow:

Webhook → HTTP Request (API 1) → Code → HTTP Request (API 2) → IF → Multiple outputs

General Bots equivalent:

WEBHOOK "process-lead"

lead = body

' Step 1: Enrich lead data
enriched = POST "https://api.clearbit.com/enrich", #{email: lead.email}

' Step 2: Score the lead
WITH lead_data
    .email = lead.email
    .company = enriched.company.name
    .industry = enriched.company.industry
    .size = enriched.company.employees
END WITH

score = SCORE LEAD lead_data

' Step 3: Route based on score
IF score.status = "hot" THEN
    POST "https://api.salesforce.com/leads", lead_data
    SEND MAIL TO "sales@company.com" SUBJECT "Hot Lead" BODY lead_data
ELSEIF score.status = "warm" THEN
    POST "https://api.hubspot.com/contacts", lead_data
ELSE
    INSERT "cold_leads", lead_data
END IF

Error Handling

n8n approach: Error Trigger node

General Bots equivalent:

SET SCHEDULE "every 5 minutes"

TRY
    result = GET "https://api.example.com/health"
    IF result.status <> "healthy" THEN
        THROW "Service unhealthy: " + result.message
    END IF
CATCH
    SEND MAIL TO "ops@company.com" SUBJECT "Alert: Service Down" BODY ERROR_MESSAGE
    POST "https://hooks.slack.com/services/xxx", #{text: "Service alert: " + ERROR_MESSAGE}
END TRY

Exporting n8n Workflows

Export Process

  1. In n8n, select the workflow
  2. Click the three-dot menu → Download
  3. Save the JSON file
  4. Analyze nodes and connections
  5. Translate to BASIC script

JSON Structure Analysis

n8n exports workflows as JSON:

{
  "nodes": [
    {"type": "n8n-nodes-base.httpRequest", "parameters": {...}},
    {"type": "n8n-nodes-base.if", "parameters": {...}}
  ],
  "connections": {...}
}

Map each node type to the equivalent BASIC keyword.
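
As an illustrative sketch, the httpRequest and if nodes in the export above might become (URL and field names are placeholders):

data = GET "https://api.example.com/data"

IF data.status = "ok" THEN
    TALK "Check passed"
ELSE
    TALK "Check failed: " + data.status
END IF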

Feature Comparison

What You Gain

Native AI integration:

USE KB "company-docs"
response = LLM "Analyze this data and provide insights: " + data

Multi-channel chat:

TALK "How can I help you?"
HEAR question
answer = LLM question
TALK answer

Built-in productivity:

CREATE TASK "Follow up with " + customer_name DUE DATEADD(NOW(), 3, "day")
BOOK "Meeting with " + customer_name AT meeting_time
SEND MAIL TO customer_email SUBJECT "Confirmation" BODY message

Knowledge base:

USE KB "product-docs"
USE KB "pricing-info"
answer = LLM customer_question

What Changes

n8n Approach | General Bots Approach
Visual drag-and-drop | Text-based BASIC scripts
Node connections | Sequential code flow
Credentials UI | config.csv settings
Execution history UI | Log files + monitoring
Community nodes | HTTP keywords + custom code

Credentials Migration

n8n Credentials

n8n stores credentials separately. Export and configure in General Bots:

config.csv:

key,value
slack-webhook-url,https://hooks.slack.com/services/xxx
api-key-clearbit,your-api-key
salesforce-token,your-token

Usage in BASIC:

slack_url = GET CONFIG "slack-webhook-url"
POST slack_url, #{text: "Message"}

Migration Checklist

Pre-Migration

  • Export all n8n workflows as JSON
  • Document active schedules and triggers
  • List all credentials and API keys
  • Identify critical workflows for priority migration
  • Set up General Bots environment

Migration

  • Translate workflows to BASIC scripts
  • Configure credentials in config.csv
  • Set up webhooks with same endpoints
  • Configure schedules
  • Test each workflow individually

Post-Migration

  • Run parallel execution for verification
  • Compare outputs between systems
  • Monitor for errors
  • Decommission n8n workflows
  • Document new BASIC scripts

Common Patterns

Batch Processing

n8n: Split In Batches node

General Bots:

items = GET "https://api.example.com/items"
batch_size = 10
total = LEN(items)

FOR i = 0 TO total - 1 STEP batch_size
    batch = SLICE(items, i, i + batch_size)
    FOR EACH item IN batch
        PROCESS_ITEM(item)
    NEXT item
    WAIT 1000  ' Rate limiting
NEXT i

Conditional Branching

n8n: IF node with multiple branches

General Bots:

SWITCH status
    CASE "new"
        HANDLE_NEW()
    CASE "pending"
        HANDLE_PENDING()
    CASE "complete"
        HANDLE_COMPLETE()
    DEFAULT
        HANDLE_UNKNOWN()
END SWITCH

Data Transformation

n8n: Set node or Code node

General Bots:

' Transform data
WITH transformed
    .full_name = data.first_name + " " + data.last_name
    .email = LOWER(data.email)
    .created = NOW()
    .source = "api"
END WITH

Best Practices

Start with simple workflows. Migrate straightforward automations first to build familiarity with BASIC syntax.

Use descriptive variable names. BASIC scripts are more readable than node graphs when well-written.

Add comments. Document your scripts for future maintenance:

' Daily sales report - sends summary to management
' Runs at 6 PM on weekdays
SET SCHEDULE "0 18 * * 1-5"

Leverage native features. Don’t just replicate n8n workflows—take advantage of General Bots’ integrated AI, chat, and productivity features.

Test incrementally. Verify each migrated workflow before moving to the next.

See Also

Notion Migration Guide

Migrating content and workflows from Notion to General Bots.

Overview

Notion is a collaborative workspace combining notes, databases, and project management. General Bots provides equivalent functionality through its knowledge base, task management, and AI-powered conversation features—with the added benefit of self-hosting and native automation.

Why Migrate

Aspect | Notion | General Bots
Hosting | Cloud only | Self-hosted
Pricing | $10-15/user/month | No per-user fees
AI Features | $10/user/month add-on | Native (any LLM)
Automation | Limited | Full BASIC scripting
Chat/Bot | Not included | Multi-channel
API Creation | Not available | Instant webhooks
Data Sovereignty | Their servers | Your infrastructure

Content Mapping

Notion to General Bots

Notion Feature | General Bots Equivalent
Pages | Knowledge base documents (.gbkb)
Databases | Tables (CSV/database)
Kanban boards | Task boards
Calendar | Calendar API
Comments | Conversation history
Templates | Bot templates
Integrations | BASIC scripts + webhooks
Notion AI | LLM keyword

Migration Process

Step 1: Export Notion Content

Navigate to Settings & Members, then Settings, and scroll to Export content. Choose the Markdown & CSV export format and download the ZIP file.

The export includes pages as Markdown files, databases as CSV files, and attachments in folders.

Step 2: Prepare Knowledge Base

Organize exported content for General Bots:

my-bot.gbkb/
├── company-info/
│   ├── about.md
│   ├── policies.md
│   └── procedures.md
├── products/
│   ├── catalog.md
│   └── pricing.md
└── support/
    ├── faq.md
    └── troubleshooting.md

Step 3: Import Documents

Place Markdown files in your .gbkb folder. General Bots automatically indexes them for RAG:

USE KB "company-info"
USE KB "products"
USE KB "support"

TALK "How can I help you?"
HEAR question
answer = LLM question
TALK answer

Step 4: Convert Databases

Transform Notion CSV exports to General Bots tables:

' Import Notion database export
data = READ "notion-export/Projects.csv"

FOR EACH row IN data
    INSERT "projects", #{
        name: row["Name"],
        status: row["Status"],
        due_date: row["Due Date"],
        assignee: row["Assignee"],
        imported_at: NOW()
    }
NEXT row

TALK "Imported " + LEN(data) + " projects"

Step 5: Recreate Task Boards

Convert Notion Kanban boards to General Bots task boards:

' Create project for Notion board
project_id = CREATE PROJECT "Product Roadmap" WITH DESCRIPTION "Migrated from Notion"

' Import tasks
tasks = READ "notion-export/Roadmap.csv"

FOR EACH task IN tasks
    status = SWITCH task["Status"]
        CASE "Not Started" : "todo"
        CASE "In Progress" : "in_progress"
        CASE "Done" : "done"
        DEFAULT : "todo"
    END SWITCH
    
    CREATE TASK task["Name"] IN PROJECT project_id WITH STATUS status
NEXT task

Notion AI to General Bots

Document Summarization

Notion AI lets you highlight text and select “Summarize,” but it only works on Notion content. General Bots provides broader capability:

USE KB "documents"
summary = LLM "Summarize the key points from our Q3 report"
TALK summary

Content Generation

Where Notion AI uses the /ai command for basic prompting, General Bots offers full control:

SET CONTEXT "You are a technical writer. Write clear, concise documentation."

TALK "What would you like me to write?"
HEAR topic

content = LLM "Write comprehensive documentation about: " + topic
WRITE "/docs/" + SLUGIFY(topic) + ".md", content
TALK "Documentation created!"

Q&A on Documents

Notion AI answers questions about a single page’s content; General Bots searches across your entire knowledge base:

' Load entire knowledge base
USE KB "all-docs"
USE KB "wiki"
USE KB "procedures"

' Answer questions across all content
TALK "Ask me anything about our documentation"
HEAR question
answer = LLM question
TALK answer

Automation Migration

Notion Automations (Limited)

Notion has basic automations for status changes, due date reminders, and Slack notifications.

General Bots Equivalent

Status change automation:

ON "table:projects:update"
    IF params.old_status <> params.new_status THEN
        IF params.new_status = "complete" THEN
            SEND MAIL TO params.owner_email SUBJECT "Project Completed" BODY "Your project " + params.name + " is now complete!"
        END IF
    END IF
END ON

Due date reminders:

SET SCHEDULE "every day at 9am"

upcoming = FIND "tasks", "due_date = DATEADD(NOW(), 1, 'day') AND status <> 'done'"

FOR EACH task IN upcoming
    SEND MAIL TO task.assignee_email SUBJECT "Task Due Tomorrow" BODY "Reminder: " + task.name + " is due tomorrow"
NEXT task

Slack notifications:

ON "table:tasks:insert"
    POST "https://hooks.slack.com/services/xxx", #{
        text: "New task created: " + params.name,
        channel: "#tasks"
    }
END ON

Database Migration

Notion Database Properties

Notion Property | General Bots Equivalent
Title | TEXT column
Text | TEXT column
Number | NUMERIC column
Select | TEXT with validation
Multi-select | JSONB array
Date | DATE/TIMESTAMP column
Person | User reference
Files | File path references
Checkbox | BOOLEAN column
URL | TEXT column
Email | TEXT column
Phone | TEXT column
Formula | Computed in BASIC
Relation | Foreign key
Rollup | AGGREGATE queries

Formula Migration

Notion formulas like prop("Price") * prop("Quantity") translate to BASIC calculations:

' Calculate on insert/update
total = price * quantity
INSERT "orders", #{item: item, price: price, quantity: quantity, total: total}

' Or query with calculation
SELECT "*, price * quantity as total FROM orders"

Relation Migration

Notion relations link databases together. General Bots uses foreign keys:

' Create related tables
CREATE TABLE "projects" (id, name, status)
CREATE TABLE "tasks" (id, project_id, name, assignee)

' Query with join
tasks = FIND "tasks", "project_id = '" + project_id + "'"

' Or use JOIN keyword
result = JOIN "projects", "tasks", "projects.id = tasks.project_id"

Template Migration

Notion Templates

Notion templates are pre-filled pages. Convert them to General Bots templates implemented as BASIC scripts.

Meeting notes template:

' meeting-notes.bas
PARAM meeting_title AS string
PARAM attendees AS string
PARAM date AS date

DESCRIPTION "Create meeting notes document"

template = "# " + meeting_title + "

**Date:** " + FORMAT(date, "MMMM d, yyyy") + "
**Attendees:** " + attendees + "

## Agenda
1. 
2. 
3. 

## Discussion Notes


## Action Items
- [ ] 
- [ ] 

## Next Meeting
"

WRITE "/meetings/" + FORMAT(date, "yyyy-MM-dd") + "-" + SLUGIFY(meeting_title) + ".md", template
TALK "Meeting notes created: " + meeting_title

Project Template

' new-project.bas
PARAM project_name AS string
PARAM owner AS string

DESCRIPTION "Create new project with standard structure"

project_id = CREATE PROJECT project_name WITH DESCRIPTION "Created by template"
ADD USER TO PROJECT project_id, owner, "owner"

' Create standard tasks
CREATE TASK "Define requirements" IN PROJECT project_id
CREATE TASK "Create timeline" IN PROJECT project_id
CREATE TASK "Assign resources" IN PROJECT project_id
CREATE TASK "Kickoff meeting" IN PROJECT project_id
CREATE TASK "First milestone review" IN PROJECT project_id

TALK "Project '" + project_name + "' created with 5 starter tasks"

What You Gain

Self-Hosting

Your data stays on your infrastructure. No concerns about Notion’s data practices or service availability.

Native AI Without Extra Cost

Notion charges $10/user/month for AI features. General Bots includes AI at no additional cost—use any LLM provider.

Full Automation

Go beyond Notion’s limited automations with complete BASIC scripting:

SET SCHEDULE "every monday at 9am"

' Generate weekly report
projects = FIND "projects", "status = 'active'"
tasks_completed = AGGREGATE "tasks", "COUNT", "id", "completed_at > DATEADD(NOW(), -7, 'day')"

SET CONTEXT "You are a project manager. Create a concise weekly summary."
report = LLM "Summarize: " + LEN(projects) + " active projects, " + tasks_completed + " tasks completed this week"

SEND MAIL TO "team@company.com" SUBJECT "Weekly Project Summary" BODY report

Multi-Channel Access

Access your knowledge base through any channel:

' Same bot works on web, WhatsApp, Teams, Slack
TALK "How can I help you today?"
HEAR question

USE KB "company-wiki"
answer = LLM question
TALK answer

Custom APIs

Create APIs instantly—something not possible in Notion:

WEBHOOK "project-status"

project = FIND "projects", "id = '" + params.id + "'"
tasks = FIND "tasks", "project_id = '" + params.id + "'"

WITH response
    .project = project
    .task_count = LEN(tasks)
    .completed = LEN(FILTER(tasks, "status = 'done'"))
END WITH

Migration Checklist

Pre-Migration

Before starting, export all Notion content in Markdown & CSV format. Inventory your databases and their properties. Document active integrations. Identify critical templates that need recreation. Set up your General Bots environment.

Migration

During the migration, organize Markdown files into the .gbkb structure. Import database CSVs to tables. Convert formulas to BASIC calculations. Recreate task boards as projects. Migrate templates to BASIC scripts. Set up automations to replace Notion workflows.

Post-Migration

After migration, verify all content is searchable in the knowledge base. Test database queries. Confirm automations work correctly. Train your team on the new interface. Redirect any Notion integrations to General Bots.

Best Practices

Organize your knowledge base thoughtfully by grouping related documents in collections for better RAG results.

Simplify database structures: Notion encourages complex relations, while General Bots works best with cleaner schemas.

Leverage AI for migration by using General Bots’ LLM to help transform and summarize Notion content:

content = READ "notion-export/long-document.md"
summary = LLM "Create a concise summary of this document: " + content
WRITE "/summaries/document-summary.md", summary

Keep templates as scripts since BASIC templates are more powerful than Notion’s static templates.

See Also

Zapier and Make Migration Guide

Migrating workflows from Zapier or Make (formerly Integromat) to General Bots.

Overview

Zapier and Make are visual automation platforms connecting apps through triggers and actions. General Bots provides equivalent automation through BASIC scripting, offering more power and flexibility without per-task pricing.

Why Migrate

Aspect | Zapier/Make | General Bots
Pricing | Per-task/operation | Unlimited executions
Automation | Visual workflows | BASIC scripts
AI Integration | Via paid apps | Native LLM keywords
Chat/Bot | Not included | Multi-channel
Productivity Suite | Not included | Email, calendar, files
Custom Logic | Limited | Full programming
Self-hosting | Not available | Full control
API Creation | Not available | Instant webhooks

Cost Comparison

Zapier Pricing

Plan | Tasks/Month | Cost
Free | 100 | $0
Starter | 750 | $19.99
Professional | 2,000 | $49
Team | 50,000 | $69/user
Company | 100,000+ | Custom

Make Pricing

Plan | Operations/Month | Cost
Free | 1,000 | $0
Core | 10,000 | $9
Pro | 10,000 | $16
Teams | 10,000 | $29/user
Enterprise | Custom | Custom

General Bots

Plan | Operations | Cost
Self-hosted | Unlimited | Infrastructure only

Trigger Mapping

Zapier/Make Trigger | General Bots Equivalent
Schedule | SET SCHEDULE
Webhook | WEBHOOK
New Email | ON "email:received"
New Row (Sheets) | ON "table:name:insert"
Form Submission | ON FORM SUBMIT
New File | ON "file:created"
RSS Feed | Scheduled GET
App-specific | API polling or webhooks

Action Mapping

Zapier/Make Action | General Bots Equivalent
Send Email | SEND MAIL
HTTP Request | GET, POST, PUT, DELETE
Create Row | INSERT
Update Row | UPDATE
Filter | IF/THEN/ELSE
Formatter | String/date functions
Delay | WAIT
Paths | IF branches
Loop | FOR EACH
Code (JS/Python) | BASIC script
Slack Message | POST to Slack webhook
Create Task | CREATE TASK
Send SMS | SMS integration

Migration Examples

Simple Zap: Form to Email

Zapier:

Typeform → Gmail (Send Email)

General Bots:

ON FORM SUBMIT "contact-form"
    name = fields.name
    email = fields.email
    message = fields.message
    
    SEND MAIL TO "support@company.com" SUBJECT "New Contact: " + name BODY "From: " + email + "\n\nMessage:\n" + message
END ON

Multi-Step Zap: Lead Processing

Zapier:

Webhook → Filter → Clearbit Enrich → Salesforce (Create Lead) → Slack (Send Message)

General Bots:

WEBHOOK "new-lead"

lead = body

' Filter
IF lead.email = "" OR NOT CONTAINS(lead.email, "@") THEN
    RETURN #{status: "invalid", reason: "Invalid email"}
END IF

' Enrich
SET HEADER "Authorization", "Bearer " + GET CONFIG "clearbit-key"
enriched = GET "https://person.clearbit.com/v2/people/find?email=" + lead.email

' Create in CRM
WITH salesforce_lead
    .Email = lead.email
    .FirstName = enriched.name.givenName
    .LastName = enriched.name.familyName
    .Company = enriched.employment.name
    .Title = enriched.employment.title
END WITH

SET HEADER "Authorization", "Bearer " + GET CONFIG "salesforce-token"
result = POST "https://yourinstance.salesforce.com/services/data/v52.0/sobjects/Lead", salesforce_lead

' Notify Slack
POST GET CONFIG "slack-webhook", #{
    text: "New lead: " + lead.email + " from " + enriched.employment.name
}

RETURN #{status: "success", salesforce_id: result.id}

Scheduled Sync

Make Scenario:

Schedule → HTTP Request → Iterator → Google Sheets (Add Row)

General Bots:

SET SCHEDULE "every hour"

data = GET "https://api.example.com/new-orders"

FOR EACH order IN data.orders
    INSERT "orders", #{
        order_id: order.id,
        customer: order.customer_name,
        total: order.total,
        status: order.status,
        synced_at: NOW()
    }
NEXT order

TALK "Synced " + LEN(data.orders) + " orders"

Error Handling

Zapier: Error handling path or retry

General Bots:

SET SCHEDULE "every 5 minutes"

TRY
    result = POST "https://api.example.com/sync", data
    IF result.status <> 200 THEN
        THROW "API returned " + result.status
    END IF
CATCH
    ' Log error
    INSERT "error_log", #{
        error: ERROR_MESSAGE,
        timestamp: NOW(),
        data: data
    }
    
    ' Alert
    SEND MAIL TO "ops@company.com" SUBJECT "Sync Error" BODY ERROR_MESSAGE
    POST GET CONFIG "slack-alerts", #{text: "Sync failed: " + ERROR_MESSAGE}
END TRY

Conditional Paths

Zapier Paths:

Trigger → Path A (if condition) → Actions
       → Path B (else) → Actions

General Bots:

WEBHOOK "order-status"

order = body

IF order.total > 1000 THEN
    ' High-value order path
    SEND MAIL TO "vip-team@company.com" SUBJECT "High-Value Order" BODY order
    POST GET CONFIG "slack-vip", #{text: "VIP Order: $" + order.total}
    priority = "high"
    
ELSEIF order.is_rush = true THEN
    ' Rush order path
    SEND MAIL TO "fulfillment@company.com" SUBJECT "RUSH Order" BODY order
    priority = "rush"
    
ELSE
    ' Standard order path
    priority = "normal"
END IF

INSERT "orders", #{
    id: order.id,
    total: order.total,
    priority: priority,
    created: NOW()
}

Data Transformation

Make/Zapier Formatter:

  • Split text
  • Format dates
  • Math operations
  • Lookup tables

General Bots:

' String operations
full_name = first_name + " " + last_name
email_domain = SPLIT(email, "@")[1]
slug = LOWER(REPLACE(title, " ", "-"))

' Date formatting
formatted_date = FORMAT(created_at, "MMMM d, yyyy")
due_date = DATEADD(NOW(), 7, "day")
days_ago = DATEDIFF("day", created_at, NOW())

' Math
subtotal = price * quantity
tax = subtotal * 0.08
total = subtotal + tax
discount = IIF(total > 100, total * 0.1, 0)

' Lookup
status_label = SWITCH status
    CASE "new" : "New Order"
    CASE "processing" : "In Progress"
    CASE "shipped" : "On the Way"
    CASE "delivered" : "Completed"
    DEFAULT : "Unknown"
END SWITCH

App-Specific Migrations

Gmail/Email

Zapier: Gmail trigger/action

General Bots:

' Send email
SEND MAIL TO recipient SUBJECT subject BODY body

' With attachments
SEND MAIL TO recipient SUBJECT subject BODY body ATTACH "/files/report.pdf"

' Process incoming (via Stalwart webhook)
ON "email:received"
    IF CONTAINS(params.subject, "Order") THEN
        PROCESS_ORDER(params)
    END IF
END ON

Slack

Zapier: Slack app

General Bots:

' Simple message
POST "https://hooks.slack.com/services/xxx", #{text: "Hello!"}

' Rich message
WITH slack_message
    .channel = "#general"
    .blocks = [
        #{type: "header", text: #{type: "plain_text", text: "New Order"}},
        #{type: "section", text: #{type: "mrkdwn", text: "*Customer:* " + customer_name}},
        #{type: "section", text: #{type: "mrkdwn", text: "*Total:* $" + total}}
    ]
END WITH
POST GET CONFIG "slack-webhook", slack_message

Google Sheets

Zapier: Google Sheets app

General Bots:

' Read from sheet
SET HEADER "Authorization", "Bearer " + GET CONFIG "google-token"
data = GET "https://sheets.googleapis.com/v4/spreadsheets/{spreadsheetId}/values/Sheet1!A1:D100"

' Append row
POST "https://sheets.googleapis.com/v4/spreadsheets/{spreadsheetId}/values/Sheet1!A1:append?valueInputOption=USER_ENTERED", #{
    values: [[name, email, phone, NOW()]]
}

' Or use General Bots tables directly
INSERT "contacts", #{name: name, email: email, phone: phone}

Airtable

Zapier: Airtable app

General Bots:

SET HEADER "Authorization", "Bearer " + GET CONFIG "airtable-key"

' Read records
records = GET "https://api.airtable.com/v0/{baseId}/{tableName}"

' Create record
POST "https://api.airtable.com/v0/{baseId}/{tableName}", #{
    fields: #{
        Name: name,
        Email: email,
        Status: "New"
    }
}

HubSpot

Zapier: HubSpot app

General Bots:

SET HEADER "Authorization", "Bearer " + GET CONFIG "hubspot-token"

' Create contact
POST "https://api.hubapi.com/crm/v3/objects/contacts", #{
    properties: #{
        email: email,
        firstname: first_name,
        lastname: last_name,
        company: company
    }
}

' Create deal
POST "https://api.hubapi.com/crm/v3/objects/deals", #{
    properties: #{
        dealname: deal_name,
        amount: amount,
        pipeline: "default",
        dealstage: "appointmentscheduled"
    }
}

What You Gain

No Operation Limits

' Process thousands of records without worrying about limits
SET SCHEDULE "every hour"

records = GET "https://api.example.com/all-records"

FOR EACH record IN records
    PROCESS_RECORD(record)  ' No per-operation cost
NEXT record

Native AI Integration

' AI-powered automation
USE KB "company-docs"

incoming_email = params.body
category = LLM "Categorize this email as: support, sales, billing, or other: " + incoming_email

IF category = "support" THEN
    response = LLM "Draft a helpful support response to: " + incoming_email
    SEND MAIL TO params.from SUBJECT "Re: " + params.subject BODY response
END IF

Multi-Channel Chat

' Same automation works across channels
TALK "How can I help you?"
HEAR request

USE KB "help-docs"
answer = LLM request
TALK answer

' Available on Web, WhatsApp, Teams, Slack, Telegram, SMS

Built-in Productivity

' No need for separate calendar, task, email apps
CREATE TASK "Follow up with " + customer_name DUE DATEADD(NOW(), 3, "day")
BOOK "Call with " + customer_name AT meeting_time
SEND MAIL TO customer_email SUBJECT "Confirmation" BODY message

Migration Checklist

Pre-Migration

  • Export Zap/Scenario descriptions
  • Document all triggers and schedules
  • List all connected apps and credentials
  • Identify critical automations
  • Set up General Bots environment

Migration

  • Create BASIC scripts for each workflow
  • Configure credentials in config.csv
  • Set up webhooks with same URLs
  • Configure schedules
  • Test each automation

Post-Migration

  • Run parallel for verification
  • Compare execution results
  • Monitor for errors
  • Disable Zapier/Make automations
  • Cancel subscriptions

Best Practices

Start with simple Zaps. Migrate basic workflows first to learn BASIC syntax.

Combine multiple Zaps. Often several Zaps can become one General Bots script.

Use native features. Don’t replicate Zapier patterns—leverage AI, chat, and productivity features.

Add error handling. BASIC provides better error handling than visual builders.

Document your scripts. Add comments explaining what each script does.

' Daily sales report
' Runs at 6 PM on weekdays
' Aggregates daily orders and sends summary to management
SET SCHEDULE "0 18 * * 1-5"

' ... implementation

See Also

CLI Reference

botserver provides a command-line interface for managing components, secrets, and services.

⚠️ IMPORTANT: All container create commands (botserver install ... --container) must be run from the host system, not from inside a container. The botserver binary manages LXC containers from the host level.

General Usage

botserver <command> [options]

Commands Overview

Command | Description
install | Install a component
remove | Remove a component
list | List all available components
status | Check component status
start | Start all installed components
stop | Stop all components
restart | Restart all components
vault | Manage secrets in HashiCorp Vault
rotate-secret | Rotate credentials for a component
rotate-secrets | Rotate ALL credentials
version | Show version information

Global Options

Option | Description
--container | Use LXC container mode instead of local installation
--tenant <name> | Specify tenant name (default: “default”)
--help, -h | Show help information
--version, -v | Show version

Component Management

Install a Component

botserver install <component> [--container] [--tenant <name>]

⚠️ Run from host: Container install commands must be executed on the host machine, not inside any container.

Examples:

# Install vault locally
botserver install vault

# Install vault in an LXC container with tenant name (run on HOST)
botserver install vault --container --tenant pragmatismo

# Install vector database (run on HOST)
botserver install vector_db --container --tenant pragmatismo

Example: Create Vault and VectorDB containers

This example shows how to create both Vault (secrets management) and VectorDB (Qdrant for embeddings) containers from scratch:

# Run these commands on the HOST system, not inside a container

# Step 1: Install Vault container
botserver install vault --container --tenant mycompany

# Step 2: Install VectorDB (Qdrant) container
botserver install vector_db --container --tenant mycompany

# Step 3: Verify containers are running
lxc list | grep mycompany

# Expected output:
# | mycompany-vault     | RUNNING | 10.x.x.x (eth0) | ... |
# | mycompany-vectordb  | RUNNING | 10.x.x.x (eth0) | ... |

# Step 4: Get container IPs for configuration
lxc list mycompany-vault -c n4 --format csv
lxc list mycompany-vectordb -c n4 --format csv

# Step 5: Test Vault health
curl http://<vault-ip>:8200/v1/sys/health

# Step 6: Test VectorDB health
curl http://<vectordb-ip>:6333/health

Available Components:

Component | Description
vault | HashiCorp Vault - Secrets management
tables | PostgreSQL - Primary database
cache | Valkey - Redis-compatible cache
drive | MinIO - S3-compatible object storage
llm | llama.cpp - Local LLM server
email | Stalwart - Mail server
proxy | Caddy - HTTPS reverse proxy
dns | CoreDNS - DNS server
directory | Zitadel - Identity management
alm | Forgejo - Git repository
alm_ci | Forgejo Runner - CI/CD
meeting | LiveKit - Video conferencing
vector_db | Qdrant - Vector database
timeseries_db | InfluxDB - Time series database
observability | Vector - Log aggregation

Remove a Component

botserver remove <component> [--container] [--tenant <name>]

List Components

botserver list [--container] [--tenant <name>]

Shows all available components and their installation status.

Check Status

botserver status <component> [--container] [--tenant <name>]

Service Control

Start Services

botserver start [--container] [--tenant <name>]

Starts all installed components.

Stop Services

botserver stop

Stops all running components.

Restart Services

botserver restart [--container] [--tenant <name>]

Vault Commands

The vault subcommand manages secrets stored in HashiCorp Vault.

Prerequisites

⚠️ SECURITY WARNING: Never expose VAULT_TOKEN in shell history or scripts. Use a secrets file with restricted permissions (600) or environment injection.

Vault commands require these environment variables:

# Secure method: use a file with restricted permissions
echo "export VAULT_TOKEN=<your-vault-token>" > ~/.vault-token
chmod 600 ~/.vault-token
source ~/.vault-token

export VAULT_ADDR=http://<vault-ip>:8200

Migrate Secrets from .env

Migrates secrets from an existing .env file to Vault.

botserver vault migrate [env_file]

Arguments:

Argument | Description | Default
env_file | Path to .env file | .env

Example:

# Migrate from default .env
botserver vault migrate

# Migrate from specific file
botserver vault migrate /opt/gbo/bin/system/.env

Migrated Secret Paths:

.env Variables | Vault Path
TABLES_* | gbo/tables
CUSTOM_* | gbo/custom
DRIVE_* | gbo/drive
EMAIL_* | gbo/email
STRIPE_* | gbo/stripe
AI_*, LLM_* | gbo/llm

After migration, your .env file only needs:

RUST_LOG=info
VAULT_ADDR=http://<vault-ip>:8200
VAULT_TOKEN=<vault-token>
SERVER_HOST=0.0.0.0
SERVER_PORT=5858

Store Secrets

Store key-value pairs at a Vault path.

botserver vault put <path> <key=value> [key=value...]

Examples:

# Store database credentials
botserver vault put gbo/tables host=localhost port=5432 username=postgres password=secret

# Store email configuration
botserver vault put gbo/email server=mail.example.com user=admin password=secret

# Store API keys
botserver vault put gbo/llm api_key=sk-xxx endpoint=https://api.openai.com

Retrieve Secrets

Get secrets from a Vault path.

botserver vault get <path> [key]

Examples:

# Get all secrets at a path (values are masked)
botserver vault get gbo/tables

# Get a specific key value
botserver vault get gbo/tables password

# Get drive credentials
botserver vault get gbo/drive

Output:

Secrets at gbo/tables:
  host=localhost
  port=5432
  database=botserver
  username=gbuser
  password=67a6...

Note: Sensitive values (password, secret, key, token) are automatically masked in output.

List Secret Paths

Shows all configured secret paths.

botserver vault list

Output:

Configured secret paths:
  gbo/tables           - Database credentials
  gbo/drive            - S3/MinIO credentials
  gbo/cache            - Redis credentials
  gbo/email            - SMTP credentials
  gbo/directory        - Zitadel credentials
  gbo/llm              - AI API keys
  gbo/encryption       - Encryption keys
  gbo/meet             - LiveKit credentials
  gbo/alm              - Forgejo credentials
  gbo/vectordb         - Qdrant credentials
  gbo/observability    - InfluxDB credentials
  gbo/stripe           - Payment credentials
  gbo/custom           - Custom database

Health Check

Check Vault connection status.

botserver vault health

Output (success):

* Vault is healthy
  Address: http://10.16.164.100:8200

Output (failure):

x Vault not configured
  Set VAULT_ADDR and VAULT_TOKEN environment variables

Version Information

Show botserver version and component status.

botserver version [--all]

Options:

Option | Description
--all | Show detailed info: build, components, Vault status

Examples:

# Simple version
botserver version
# Output: botserver 6.1.0

# Detailed version with all components
botserver version --all

Output with --all:

botserver 6.1.0

Build Information:
  rustc: rustc 1.83.0 (90b35a623 2024-11-26)
  target: x86_64
  os: linux

Installed Components:
  * vault (installed)
  * tables (installed)
  * cache (installed)

Available Components: 15

Secrets:
  Vault: connected

Secret Rotation

Rotate credentials for security compliance and breach response.

🔒 SECURITY: Regular credential rotation is a security best practice. Recommended rotation schedule:

  • Production: Every 90 days
  • After employee departure: Immediately
  • After security incident: Immediately

Rotate Single Component

botserver rotate-secret <component>

Available Components:

Component | What Gets Rotated
tables | PostgreSQL password
drive | MinIO access key and secret
cache | Valkey/Redis password
email | SMTP password
directory | Zitadel client secret
encryption | Master encryption key (⚠️ dangerous)
jwt | JWT signing secret (⚠️ invalidates refresh tokens)

Examples:

# Rotate database password
botserver rotate-secret tables

# Output:
# ⚠️  WARNING: You must update PostgreSQL with the new password!
#
# Run this SQL command:
#   ALTER USER postgres WITH PASSWORD 'NewP@ssw0rd...';
#
# Old password: 67a6...
# New password: Xk9m...
# Save to Vault? [y/N]: y
# ✓ Credentials saved to Vault

# Rotate S3/MinIO credentials
botserver rotate-secret drive

# Rotate Redis password
botserver rotate-secret cache

⚠️ WARNING: After rotating, you MUST manually update the service with the new credentials before restarting botserver.

Rotate All Secrets

Rotate all credentials at once. Use for security incidents or compliance requirements.

botserver rotate-secrets --all

Output:

🔐 ROTATING ALL SECRETS
========================

⚠️  CRITICAL WARNING!
This will generate new credentials for ALL components.
You MUST update each service manually after rotation.

Type 'ROTATE ALL' to continue: ROTATE ALL

Generating new credentials...

✓ tables: ALTER USER postgres WITH PASSWORD 'Xk9mP@ss...';
✓ drive: mc admin user add myminio AKIAEXAMPLE... secretkey...
✓ cache: redis-cli CONFIG SET requirepass 'NewRedisP@ss...'
✓ email: new password = SmtpP@ss...
✓ directory: new client_secret = ZitadelSecret...

========================
✓ All secrets rotated and saved to Vault

⚠️  IMPORTANT: Run the commands above to update each service!
⚠️  Then restart botserver: botserver restart

Post-Rotation Checklist

After rotating secrets, follow this checklist:

# 1. Update PostgreSQL
lxc exec pragmatismo-tables -- psql -U postgres -c "ALTER USER postgres WITH PASSWORD 'new-password';"

# 2. Update MinIO (create new user, migrate data, delete old)
lxc exec pragmatismo-drive -- mc admin user add local newkey newsecret
lxc exec pragmatismo-drive -- mc admin policy attach local readwrite --user newkey

# 3. Update Valkey/Redis
lxc exec pragmatismo-cache -- redis-cli CONFIG SET requirepass 'new-password'
lxc exec pragmatismo-cache -- redis-cli CONFIG REWRITE

# 4. Update Zitadel (via admin console)
# Navigate to: Settings > OAuth > Applications > Update Secret

# 5. Restart botserver
botserver restart

# 6. Verify all services
botserver version --all

🔒 ENCRYPTION KEY WARNING: Rotating the encryption key (botserver rotate-secret encryption) will make ALL existing encrypted data unreadable. Only do this if you have re-encryption procedures in place.


Security Considerations

Current Limitations

⚠️ Manual Service Updates Required After rotating credentials, you MUST manually update each service:

  • Database (tables): Run the provided SQL command to update PostgreSQL user password
  • Drive (MinIO): Run the provided mc admin commands to update S3 credentials
  • Cache (Redis): Run the provided redis-cli command to update password
  • Directory (Zitadel): Update client secret via admin console

⚠️ Service Restart Required After rotating JWT secret, you MUST restart botserver:

botserver restart

All users will need to re-login (refresh tokens invalidated). Access tokens (15-minute expiry) will expire naturally.

⚠️ No Automatic Rollback If verification fails, you must manually restore from backups:

# Database: Re-run SQL with old password
# JWT: Restore .env.backup.<timestamp>
# Other: Use backup values shown in rotation output

Available Components for Rotation

Component | Credential Type | Manual Update Required | Service Restart
tables | PostgreSQL password | ✅ Run SQL command | ❌ No
drive | MinIO S3 credentials | ✅ Run mc commands | ❌ No
cache | Redis/Valkey password | ✅ Run redis-cli | ❌ No
email | SMTP password | ✅ Update mail server | ❌ No
directory | Zitadel client secret | ✅ Update via console | ❌ No
encryption | Master encryption key | ⚠️ Re-encrypt all data | ❌ No
jwt | JWT signing secret | ❌ No | ✅ Yes

Best Practices

  1. Test in staging first - Never rotate in production without testing
  2. Schedule during low traffic - Rotate JWT outside peak hours
  3. Have rollback plan ready - Save backup paths shown during rotation
  4. Monitor logs - Check for authentication failures after rotation:
    tail -f /var/log/botserver/app.log | grep -i "authentication\|jwt\|token"
    
  5. Rotate regularly - Every 90 days for production, per security compliance
  6. After JWT rotation - Verify all services are healthy before declaring success

Verification

The rotate-secret command includes automatic verification where possible:

  • Database: Tests PostgreSQL connection with new credentials
  • JWT: Checks health endpoint (requires service to be running)
  • Other: Displays manual verification instructions

If verification fails:

  1. Check the error message for specific failure details
  2. Restore from backup if needed
  3. Re-run rotation after fixing the issue
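
For manual spot checks, something like the following works (the health path and port are assumptions based on the defaults shown in this guide):

# Test the new PostgreSQL password directly
PGPASSWORD='new-password' psql -h localhost -U postgres -c "SELECT 1;"

# Confirm botserver responds after restart
curl -s http://localhost:5858/health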

Complete Setup Example

Here’s a complete workflow to set up Vault and migrate secrets.

⚠️ Run all commands on the HOST system, not inside any container.

# 1. Install Vault in a container (run on HOST)
botserver install vault --container --tenant pragmatismo

# 2. Install Vector DB for embeddings (run on HOST)
botserver install vector_db --container --tenant pragmatismo

# 3. Get Vault container IP
lxc list pragmatismo-vault

# 4. Set environment variables
export VAULT_ADDR=http://<vault-ip>:8200
export VAULT_TOKEN=<root-token-from-init>

# 5. Migrate existing secrets
botserver vault migrate /opt/gbo/bin/system/.env

# 6. Verify migration
botserver vault health
botserver vault get gbo/tables
botserver vault get gbo/drive
botserver vault get gbo/email

# 7. Update .env to use Vault only (SECURE METHOD)
cat > /opt/gbo/bin/system/.env << EOF
RUST_LOG=info
VAULT_ADDR=http://<vault-ip>:8200
SERVER_HOST=0.0.0.0
SERVER_PORT=5858
EOF

# Store token separately with restricted permissions
echo "VAULT_TOKEN=<root-token>" > /opt/gbo/secrets/vault-token
chmod 600 /opt/gbo/secrets/vault-token
chown root:root /opt/gbo/secrets/vault-token

# 8. Restart botserver
botserver restart

Secret Paths Reference

gbo/tables

Database connection credentials.

Key | Description
host | Database server hostname
port | Database port
database | Database name
username | Database user
password | Database password

gbo/drive

S3/MinIO storage credentials.

Key | Description
server | Storage server hostname
port | Storage port
use_ssl | Enable SSL (true/false)
accesskey | Access key ID
secret | Secret access key
org_prefix | Organization prefix for buckets

gbo/email

SMTP email configuration.

Key | Description
from | Sender email address
server | SMTP server hostname
port | SMTP port
username | SMTP username
password | SMTP password
reject_unauthorized | Reject invalid certs

gbo/llm

AI/LLM configuration.

Key | Description
api_key | API key for cloud LLM
model | Model identifier
endpoint | API endpoint URL
local | Use local LLM (true/false)
url | Local LLM server URL
model_path | Path to local model file
embedding_model_path | Path to embedding model
embedding_url | Embedding server URL

gbo/stripe

Payment processing credentials.

Key | Description
secret_key | Stripe secret key
professional_plan_price_id | Professional plan price ID
personal_plan_price_id | Personal plan price ID

gbo/cache

Redis/Valkey credentials.

Key | Description
password | Cache password

gbo/directory

Zitadel identity provider.

Key | Description
url | Zitadel server URL
project_id | Project ID
client_id | OAuth client ID
client_secret | OAuth client secret
masterkey | Master encryption key

gbo/encryption

Encryption keys.

Key | Description
master_key | Master encryption key

Troubleshooting

Vault Connection Issues

# Check if Vault is running
lxc exec pragmatismo-vault -- systemctl status vault

# Check Vault seal status
lxc exec pragmatismo-vault -- vault status

# Unseal Vault if sealed
lxc exec pragmatismo-vault -- vault operator unseal <unseal-key>

Component Installation Fails

# Check logs
tail -f botserver-stack/logs/<component>.log

# Verify container exists
lxc list | grep <tenant>-<component>

# Check container logs
lxc exec <tenant>-<component> -- journalctl -xe

Missing Dependencies

If you see errors like error while loading shared libraries: libpq.so.5, install the runtime dependencies:

# Quick install (recommended) - run on HOST system
curl -fsSL https://raw.githubusercontent.com/GeneralBots/botserver/main/scripts/install-dependencies.sh | sudo bash

# Or manual install (Debian/Ubuntu)
sudo apt-get install -y libpq5 libssl3 liblzma5 zlib1g ca-certificates curl wget

# Or manual install (Fedora/RHEL)
sudo dnf install -y libpq openssl-libs xz-libs zlib ca-certificates curl wget

For development/building from source:

# Install development dependencies
sudo apt-get install -y libpq-dev libssl-dev liblzma-dev

Security Best Practices

🛡️ SECURITY HARDENING GUIDE

🔒 SECURITY NOTES

Token Management

  • NEVER commit tokens or secrets to version control
  • NEVER pass tokens as command-line arguments (visible in ps)
  • NEVER store tokens in shell history (use HISTCONTROL=ignorespace)
  • ALWAYS use environment variables or secure files with chmod 600
  • ROTATE Vault tokens regularly (recommended: every 30 days)
  • ROTATE service credentials regularly (recommended: every 90 days)
# Prevent command from being saved in history (note the leading space)
 export VAULT_TOKEN=s.xxxx

File Permissions

# Secure your secrets directory
chmod 700 /opt/gbo/secrets
chmod 600 /opt/gbo/secrets/*
chown -R root:root /opt/gbo/secrets

Vault Hardening

# Enable audit logging
botserver vault put gbo/audit enabled=true

# Use short-lived tokens in production
# Configure token TTL in Vault policies
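
# For example, mint a short-lived token bound to a policy
# (the 'botserver' policy name is an assumption):
vault token create -policy=botserver -ttl=30m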

Network Security

  • Run Vault behind a firewall
  • Use TLS for Vault connections in production
  • Restrict Vault access to specific container IPs
# Example: Only allow botserver container to reach Vault
iptables -A INPUT -p tcp --dport 8200 -s 10.16.164.33 -j ACCEPT
iptables -A INPUT -p tcp --dport 8200 -j DROP

Credential Rotation Schedule

Component | Rotation Frequency | Command
Vault Token | 30 days | Vault UI or API
Database | 90 days | botserver rotate-secret tables
S3/MinIO | 90 days | botserver rotate-secret drive
Redis | 90 days | botserver rotate-secret cache
Email | 90 days | botserver rotate-secret email
All at once | After incident | botserver rotate-secrets --all

Incident Response

If you suspect a credential breach:

# 1. Immediately rotate ALL secrets
botserver rotate-secrets --all

# 2. Update all services with new credentials (see output)

# 3. Restart all services
botserver restart

# 4. Check for unauthorized access in logs
grep -r "authentication failed" /opt/gbo/logs/

# 5. Review Vault audit logs
vault audit list

Updating Components

botserver’s stack components are regularly updated by their respective maintainers. This guide explains how to check for updates, apply them safely, and verify everything works correctly.

Update Philosophy

botserver uses a conservative update strategy:

  1. Pinned Versions - Each component has a tested version in 3rdparty.toml
  2. Checksum Verification - Downloads are verified with SHA256 hashes
  3. Cached Downloads - Updates are cached in botserver-installers/ for offline use
  4. Rollback Ready - Previous binaries can be restored from cache

Checking for Updates

View Current Versions

Check installed versions:

./botserver version --all

Example output:

botserver Stack Versions:
  vault:     1.15.4
  tables:    17.2.0 (PostgreSQL)
  directory: 2.70.4 (Zitadel)
  drive:     latest (MinIO)
  cache:     8.0.2 (Valkey)
  llm:       b7345 (llama.cpp)
  email:     0.10.7 (Stalwart)
  proxy:     2.9.1 (Caddy)
  dns:       1.11.1 (CoreDNS)
  alm:       10.0.2 (Forgejo)
  meeting:   2.8.2 (LiveKit)

Check Upstream Releases
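
Release pages for each component are linked from the URLs in 3rdparty.toml. One way to query the latest upstream tag from a script, using llama.cpp as an example (requires jq):

curl -s https://api.github.com/repos/ggml-org/llama.cpp/releases/latest | jq -r '.tag_name'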


Updating the Configuration

Component URLs and checksums are defined in 3rdparty.toml. To update a component:

1. Edit 3rdparty.toml

[components.llm]
name = "Llama.cpp Server"
url = "https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-ubuntu-x64.zip"
filename = "llama-b7345-bin-ubuntu-x64.zip"
sha256 = "91b066ecc53c20693a2d39703c12bc7a69c804b0768fee064d47df702f616e52"

2. Get the New Checksum

Most releases publish SHA256 checksums. If not, calculate it:

# Download and calculate checksum
curl -L -o new-release.zip "https://github.com/.../new-release.zip"
sha256sum new-release.zip

3. Update Both Files

Update both configuration files to stay in sync:

  • 3rdparty.toml - Main component registry
  • config/llm_releases.json - LLM-specific builds and checksums

Component Update Procedures

Updating llama.cpp (LLM Server)

The LLM server powers local AI inference. Updates often include performance improvements and new model support.

Step 1: Check the latest release

Visit github.com/ggml-org/llama.cpp/releases

Step 2: Update 3rdparty.toml

[components.llm]
name = "Llama.cpp Server"
url = "https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-ubuntu-x64.zip"
filename = "llama-b7345-bin-ubuntu-x64.zip"
sha256 = "91b066ecc53c20693a2d39703c12bc7a69c804b0768fee064d47df702f616e52"

Step 3: Update config/llm_releases.json

This file contains platform-specific builds:

{
  "llama_cpp": {
    "version": "b7345",
    "base_url": "https://github.com/ggml-org/llama.cpp/releases/download",
    "checksums": {
      "llama-b7345-bin-ubuntu-x64.zip": "sha256:91b066ecc53c20693a2d39703c12bc7a69c804b0768fee064d47df702f616e52",
      "llama-b7345-bin-macos-arm64.zip": "sha256:72ae9b4a4605aa1223d7aabaa5326c66c268b12d13a449fcc06f61099cd02a52"
    }
  }
}

Step 4: Update installer.rs version constant

const LLAMA_CPP_VERSION: &str = "b7345";

Step 5: Apply the update

# Stop LLM service
pkill llama-server

# Remove old binary
rm -rf botserver-stack/bin/llm/*

# Re-run bootstrap (downloads new version)
./botserver bootstrap

# Or manually trigger download
./botserver update llm

Available llama.cpp Builds (b7345)

Platform | Architecture | Variant | Filename
Linux | x64 | CPU | llama-b7345-bin-ubuntu-x64.zip
Linux | x64 | Vulkan | llama-b7345-bin-ubuntu-vulkan-x64.zip
Linux | s390x | CPU | llama-b7345-bin-ubuntu-s390x.zip
macOS | ARM64 | Metal | llama-b7345-bin-macos-arm64.zip
macOS | x64 | CPU | llama-b7345-bin-macos-x64.zip
Windows | x64 | CPU | llama-b7345-bin-win-cpu-x64.zip
Windows | x64 | CUDA 12.4 | llama-b7345-bin-win-cuda-12.4-x64.zip
Windows | x64 | CUDA 13.1 | llama-b7345-bin-win-cuda-13.1-x64.zip
Windows | x64 | Vulkan | llama-b7345-bin-win-vulkan-x64.zip
Windows | ARM64 | CPU | llama-b7345-bin-win-cpu-arm64.zip

Note: Linux releases are transitioning from .zip to .tar.gz format.


Updating PostgreSQL (Tables)

Warning: Database updates require careful planning. Always backup first!

# Backup database
pg_dump $DATABASE_URL > backup-$(date +%Y%m%d).sql

# Update 3rdparty.toml
[components.tables]
url = "https://github.com/theseus-rs/postgresql-binaries/releases/download/17.2.0/postgresql-17.2.0-x86_64-unknown-linux-gnu.tar.gz"
filename = "postgresql-17.2.0-x86_64-unknown-linux-gnu.tar.gz"

# Stop services
./botserver stop

# Apply update
./botserver update tables

# Start services
./botserver start

# Verify
psql $DATABASE_URL -c "SELECT version();"

Updating MinIO (Drive)

MinIO updates are generally safe and backward-compatible.

# Update 3rdparty.toml
[components.drive]
url = "https://dl.min.io/server/minio/release/linux-amd64/minio"
filename = "minio"

# Apply update
./botserver update drive

# Verify
curl http://localhost:9000/minio/health/live

Updating Valkey (Cache)

Valkey requires compilation from source.

# Update 3rdparty.toml
[components.cache]
url = "https://github.com/valkey-io/valkey/archive/refs/tags/8.0.2.tar.gz"
filename = "valkey-8.0.2.tar.gz"

# Stop cache
./botserver stop cache

# Remove old build
rm -rf botserver-stack/bin/cache/*

# Rebuild
./botserver update cache

# Verify
./botserver-stack/bin/cache/valkey-cli ping

Updating Zitadel (Directory)

Warning: Directory service updates may require database migrations.

# Backup Zitadel database
pg_dump -d zitadel > zitadel-backup-$(date +%Y%m%d).sql

# Update 3rdparty.toml
[components.directory]
url = "https://github.com/zitadel/zitadel/releases/download/v2.70.4/zitadel-linux-amd64.tar.gz"
filename = "zitadel-linux-amd64.tar.gz"

# Stop directory
./botserver stop directory

# Apply update
./botserver update directory

# Run migrations (if needed)
./botserver-stack/bin/directory/zitadel setup

# Start
./botserver start directory

Updating Vault (Secrets)

Critical: Vault updates require unsealing after restart.

# Update 3rdparty.toml
[components.vault]
url = "https://releases.hashicorp.com/vault/1.15.4/vault_1.15.4_linux_amd64.zip"
filename = "vault_1.15.4_linux_amd64.zip"

# Stop Vault
./botserver stop vault

# Apply update
./botserver update vault

# Start and unseal
./botserver start vault
./botserver unseal

Platform-Specific Builds

Automatic Detection

botserver automatically detects your platform and downloads the appropriate build:

  1. Operating System - Linux, macOS, Windows
  2. Architecture - x64, ARM64, s390x
  3. GPU Support - CUDA, Vulkan, Metal, ROCm

Manual Override

Force a specific build variant:

# In 3rdparty.toml - use Vulkan build instead of CPU
[components.llm]
url = "https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-ubuntu-vulkan-x64.zip"

GPU Detection

The installer checks for GPU support:

// Linux CUDA detection
if Path::new("/usr/local/cuda").exists() || env::var("CUDA_HOME").is_ok() {
    // Use CUDA build
}

// Vulkan detection
if Path::new("/usr/share/vulkan").exists() || env::var("VULKAN_SDK").is_ok() {
    // Use Vulkan build
}

Offline Updates

Pre-download for Air-Gapped Systems

  1. Download releases on a connected machine:
# Download all components
mkdir offline-updates
cd offline-updates

# LLM
curl -LO https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-ubuntu-x64.zip

# Database
curl -LO https://github.com/theseus-rs/postgresql-binaries/releases/download/17.2.0/postgresql-17.2.0-x86_64-unknown-linux-gnu.tar.gz

# ... other components
  2. Transfer to the air-gapped system
  3. Copy to the cache directory:
cp offline-updates/* /path/to/botserver-installers/
  4. Run bootstrap (uses cached files):
./botserver bootstrap

Verifying Updates

Run Tests

# Run test suite
cargo test

# Integration tests
./botserver test

Health Checks

# Check all services
./botserver status

# Individual service checks
curl -k https://localhost:8081/health  # LLM
curl -k https://localhost:8082/health  # Embedding
curl http://localhost:9000/minio/health/live  # Drive

Security Audit

After updating dependencies:

# Rust dependencies
cargo audit

# Check for known vulnerabilities
cargo audit --deny warnings

Rollback Procedure

If an update causes issues:

Quick Rollback

# Stop services
./botserver stop

# Restore from cache (previous version must exist)
cp botserver-installers/llama-b4547-bin-ubuntu-x64.zip /tmp/
unzip /tmp/llama-b4547-bin-ubuntu-x64.zip -d botserver-stack/bin/llm/

# Restart
./botserver start

Full Rollback

# Restore database from backup
psql $DATABASE_URL < backup-20241210.sql

# Restore old binaries
rm -rf botserver-stack/bin/
tar -xzf botserver-stack-backup.tar.gz

# Restart
./botserver start

Update Schedule Recommendations

Component | Update Frequency | Risk Level
llama.cpp | Weekly/Monthly | Low
MinIO | Monthly | Low
Valkey | Quarterly | Low
Caddy | Monthly | Low
CoreDNS | Quarterly | Low
PostgreSQL | Quarterly | Medium
Zitadel | Quarterly | Medium
Vault | Quarterly | High
Stalwart | Monthly | Medium

Security Updates

Apply security patches immediately for:

  • Vault (secrets management)
  • PostgreSQL (database)
  • Zitadel (authentication)

Automating Updates

Update Script

Create update-components.sh:

#!/bin/bash
set -e

echo "Backing up current state..."
./botserver backup

echo "Stopping services..."
./botserver stop

echo "Updating components..."
for component in llm drive cache; do
    echo "Updating $component..."
    ./botserver update $component
done

echo "Starting services..."
./botserver start

echo "Running health checks..."
./botserver status

echo "Update complete!"

Scheduled Updates

Use cron for automated updates (use with caution):

# Weekly LLM updates (low risk)
0 3 * * 0 /path/to/botserver update llm

# Monthly full updates
0 3 1 * * /path/to/update-components.sh

Troubleshooting Updates

Download Failures

# Clear cache and retry
rm botserver-installers/component-name*
./botserver update component-name

Checksum Mismatch

# Verify checksum manually
sha256sum botserver-installers/llama-b7345-bin-ubuntu-x64.zip
# Compare with 3rdparty.toml

Service Won’t Start

# Check logs
tail -100 botserver-stack/logs/llm.log

# Check permissions
ls -la botserver-stack/bin/llm/

# Make executable
chmod +x botserver-stack/bin/llm/llama-server

Database Migration Errors

# Run migrations manually
./botserver migrate

# Or reset (WARNING: data loss)
./botserver reset tables

See Also

Component Reference

This reference provides detailed information about each component in the botserver stack, including current versions, alternatives, and configuration options.


Core Components

Vault (Secrets Management)

Property | Value
Service | HashiCorp Vault
Current Version | 1.15.4
Default Port | 8200
Binary Path | botserver-stack/bin/vault/vault
Config Path | botserver-stack/conf/vault/
Data Path | botserver-stack/data/vault/
Log File | botserver-stack/logs/vault.log

Download URL:

https://releases.hashicorp.com/vault/1.15.4/vault_1.15.4_linux_amd64.zip

Purpose:

  • Stores all service credentials (database, drive, cache)
  • Manages encryption keys
  • Provides secrets rotation
  • Issues short-lived tokens

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| OpenBao | MPL-2.0 | Fork of Vault, fully open source |
| Infisical | MIT | Modern secrets management |
| SOPS | MPL-2.0 | File-based encryption |
| Doppler | Proprietary | Cloud-based alternative |

PostgreSQL (Tables/Database)

| Property | Value |
| --- | --- |
| Service | PostgreSQL |
| Current Version | 17.2.0 |
| Default Port | 5432 |
| Binary Path | botserver-stack/bin/tables/ |
| Config Path | botserver-stack/conf/tables/ |
| Data Path | botserver-stack/data/tables/ |
| Log File | botserver-stack/logs/postgres.log |

Download URL:

https://github.com/theseus-rs/postgresql-binaries/releases/download/17.2.0/postgresql-17.2.0-x86_64-unknown-linux-gnu.tar.gz

Purpose:

  • Primary relational database
  • Stores bot configurations, users, conversations
  • Supports full-text search
  • Handles transactions and ACID compliance

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| CockroachDB | BSL/CCL | Distributed SQL, PostgreSQL-compatible |
| YugabyteDB | Apache-2.0 | Distributed PostgreSQL |
| Neon | Apache-2.0 | Serverless PostgreSQL |
| Supabase | Apache-2.0 | PostgreSQL with extras |

Zitadel (Directory/Identity)

| Property | Value |
| --- | --- |
| Service | Zitadel |
| Current Version | 2.70.4 |
| Default Port | 8080 |
| Binary Path | botserver-stack/bin/directory/zitadel |
| Config Path | botserver-stack/conf/directory/ |
| Data Path | Uses PostgreSQL |
| Log File | botserver-stack/logs/zitadel.log |

Download URL:

https://github.com/zitadel/zitadel/releases/download/v2.70.4/zitadel-linux-amd64.tar.gz

Purpose:

  • User authentication and authorization
  • OAuth2/OIDC provider
  • Single Sign-On (SSO)
  • Multi-factor authentication
  • Service credential provisioning

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| Keycloak | Apache-2.0 | Java-based, feature-rich |
| Authentik | Custom OSS | Python-based, modern UI |
| Authelia | Apache-2.0 | Lightweight, Nginx integration |
| Ory | Apache-2.0 | Modular identity infrastructure |
| Casdoor | Apache-2.0 | Go-based, UI-focused |

MinIO (Drive/Object Storage)

| Property | Value |
| --- | --- |
| Service | MinIO |
| Current Version | Latest |
| Default Ports | 9000 (API), 9001 (Console) |
| Binary Path | botserver-stack/bin/drive/minio |
| Config Path | botserver-stack/conf/drive/ |
| Data Path | botserver-stack/data/drive/ |
| Log File | botserver-stack/logs/minio.log |

Download URL:

https://dl.min.io/server/minio/release/linux-amd64/minio

Purpose:

  • S3-compatible object storage
  • Stores bot packages (.gbai, .gbkb, etc.)
  • File uploads and downloads
  • Static asset hosting

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| SeaweedFS | Apache-2.0 | Distributed, fast |
| Garage | AGPL-3.0 | Lightweight, geo-distributed |
| Ceph | LGPL-2.1 | Enterprise-grade, complex |
| LakeFS | Apache-2.0 | Git-like versioning for data |

Valkey (Cache)

| Property | Value |
| --- | --- |
| Service | Valkey |
| Current Version | 8.0.2 |
| Default Port | 6379 |
| Binary Path | botserver-stack/bin/cache/valkey-server |
| Config Path | botserver-stack/conf/cache/ |
| Data Path | botserver-stack/data/cache/ |
| Log File | botserver-stack/logs/valkey.log |

Download URL:

https://github.com/valkey-io/valkey/archive/refs/tags/8.0.2.tar.gz

Note: Valkey requires compilation from source. Build dependencies: gcc, make
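
A minimal build from that tarball looks like the following (standard make-based build; the copy destination follows the stack layout used throughout this guide):

# Extract, compile, and install into the stack's cache directory
tar -xzf 8.0.2.tar.gz
cd valkey-8.0.2
make -j$(nproc)
cp src/valkey-server src/valkey-cli ../botserver-stack/bin/cache/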

Purpose:

  • In-memory caching
  • Session storage
  • Rate limiting
  • Pub/Sub messaging
  • Queue management

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| KeyDB | BSD-3 | Multi-threaded Redis fork |
| Dragonfly | BSL | High-performance, Redis-compatible |
| Garnet | MIT | Microsoft’s cache store |
| Skytable | AGPL-3.0 | Modern NoSQL |

llama.cpp (LLM Server)

| Property | Value |
| --- | --- |
| Service | llama.cpp |
| Current Version | b7345 |
| Default Ports | 8081 (LLM), 8082 (Embedding) |
| Binary Path | botserver-stack/bin/llm/llama-server |
| Config Path | botserver-stack/conf/llm/ |
| Data Path | botserver-stack/data/llm/ (models) |
| Log File | botserver-stack/logs/llm.log |

Download URLs by Platform:

| Platform | URL |
| --- | --- |
| Linux x64 | https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-ubuntu-x64.zip |
| Linux x64 Vulkan | https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-ubuntu-vulkan-x64.zip |
| macOS ARM64 | https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-macos-arm64.zip |
| macOS x64 | https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-macos-x64.zip |
| Windows x64 | https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-win-cpu-x64.zip |
| Windows CUDA 12 | https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-win-cuda-12.4-x64.zip |
| Windows CUDA 13 | https://github.com/ggml-org/llama.cpp/releases/download/b7345/llama-b7345-bin-win-cuda-13.1-x64.zip |

SHA256 Checksums:

llama-b7345-bin-ubuntu-x64.zip:        91b066ecc53c20693a2d39703c12bc7a69c804b0768fee064d47df702f616e52
llama-b7345-bin-ubuntu-vulkan-x64.zip: 03f0b3acbead2ddc23267073a8f8e0207937c849d3704c46c61cf167c1001442
llama-b7345-bin-macos-arm64.zip:       72ae9b4a4605aa1223d7aabaa5326c66c268b12d13a449fcc06f61099cd02a52
llama-b7345-bin-macos-x64.zip:         bec6b805cf7533f66b38f29305429f521dcb2be6b25dbce73a18df448ec55cc5
llama-b7345-bin-win-cpu-x64.zip:       ea449082c8e808a289d9a1e8331f90a0379ead4dd288a1b9a2d2c0a7151836cd
llama-b7345-bin-win-cuda-12.4-x64.zip: 7a82aba2662fa7d4477a7a40894de002854bae1ab8b0039888577c9a2ca24cae
llama-b7345-bin-win-cuda-13.1-x64.zip: 06ea715cefb07e9862394e6d1ffa066f4c33add536b1f1aa058723f86ae05572

Purpose:

  • Local LLM inference
  • Text embeddings for semantic search
  • OpenAI-compatible API
  • Supports GGUF model format

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| Ollama | MIT | User-friendly, model management |
| vLLM | Apache-2.0 | High throughput, production-grade |
| Text Generation Inference | Apache-2.0 | HuggingFace’s solution |
| LocalAI | MIT | Drop-in OpenAI replacement |
| LM Studio | Proprietary | Desktop GUI application |

Supporting Components

Stalwart (Email Server)

| Property | Value |
| --- | --- |
| Service | Stalwart Mail Server |
| Current Version | 0.10.7 |
| Default Ports | 25 (SMTP), 993 (IMAPS), 587 (Submission) |
| Binary Path | botserver-stack/bin/email/stalwart-mail |
| Config Path | botserver-stack/conf/email/ |
| Data Path | botserver-stack/data/email/ |
| Log File | botserver-stack/logs/stalwart.log |

Download URL:

https://github.com/stalwartlabs/mail-server/releases/download/v0.10.7/stalwart-mail-x86_64-linux.tar.gz

Purpose:

  • Full email server (SMTP, IMAP, JMAP)
  • Email sending and receiving
  • Spam filtering
  • DKIM/SPF/DMARC support

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| Maddy | GPL-3.0 | Composable mail server |
| Mail-in-a-Box | CC0 | All-in-one solution |
| Postal | MIT | Sending-focused |
| Haraka | MIT | Node.js SMTP |

Caddy (Proxy)

| Property | Value |
| --- | --- |
| Service | Caddy |
| Current Version | 2.9.1 |
| Default Ports | 443 (HTTPS), 80 (HTTP) |
| Binary Path | botserver-stack/bin/proxy/caddy |
| Config Path | botserver-stack/conf/proxy/Caddyfile |
| Data Path | botserver-stack/data/proxy/ |
| Log File | botserver-stack/logs/caddy.log |

Download URL:

https://github.com/caddyserver/caddy/releases/download/v2.9.1/caddy_2.9.1_linux_amd64.tar.gz

Purpose:

  • Automatic HTTPS with Let’s Encrypt
  • Reverse proxy for all services
  • Load balancing
  • HTTP/2 and HTTP/3 support

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| Nginx | BSD-2 | Industry standard |
| Traefik | MIT | Cloud-native, auto-discovery |
| HAProxy | GPL-2.0 | High performance |
| Envoy | Apache-2.0 | Service mesh ready |

CoreDNS (DNS)

| Property | Value |
| --- | --- |
| Service | CoreDNS |
| Current Version | 1.11.1 |
| Default Port | 53 |
| Binary Path | botserver-stack/bin/dns/coredns |
| Config Path | botserver-stack/conf/dns/Corefile |
| Log File | botserver-stack/logs/coredns.log |

Download URL:

https://github.com/coredns/coredns/releases/download/v1.11.1/coredns_1.11.1_linux_amd64.tgz

Purpose:

  • Local DNS resolution
  • Service discovery (*.botserver.local)
  • DNS-based load balancing

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| PowerDNS | GPL-2.0 | Feature-rich, authoritative |
| Unbound | BSD | Validating resolver |
| dnsmasq | GPL-2.0 | Lightweight |

Forgejo (ALM/Git)

| Property | Value |
| --- | --- |
| Service | Forgejo |
| Current Version | 10.0.2 |
| Default Port | 3000 |
| Binary Path | botserver-stack/bin/alm/forgejo |
| Config Path | botserver-stack/conf/alm/ |
| Data Path | botserver-stack/data/alm/ |
| Log File | botserver-stack/logs/forgejo.log |

Download URL:

https://codeberg.org/forgejo/forgejo/releases/download/v10.0.2/forgejo-10.0.2-linux-amd64

Purpose:

  • Git repository hosting
  • Issue tracking
  • CI/CD pipelines
  • Code review

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| Gitea | MIT | Original project |
| GitLab | MIT (CE) | Full DevOps platform |
| Gogs | MIT | Lightweight |
| OneDev | MIT | Built-in CI/CD |

LiveKit (Meeting/Video)

| Property | Value |
| --- | --- |
| Service | LiveKit |
| Current Version | 2.8.2 |
| Default Ports | 7880 (HTTP), 7881 (RTC) |
| Binary Path | botserver-stack/bin/meeting/livekit-server |
| Config Path | botserver-stack/conf/meeting/ |
| Log File | botserver-stack/logs/livekit.log |

Download URL:

https://github.com/livekit/livekit/releases/download/v2.8.2/livekit_2.8.2_linux_amd64.tar.gz

Purpose:

  • Real-time video/audio communication
  • WebRTC infrastructure
  • Screen sharing
  • Recording

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| Jitsi | Apache-2.0 | Full-featured, established |
| BigBlueButton | LGPL-3.0 | Education-focused |
| Janus | GPL-3.0 | WebRTC gateway |
| mediasoup | ISC | Node.js SFU |

Optional Components

Qdrant (Vector Database)

| Property | Value |
| --- | --- |
| Service | Qdrant |
| Current Version | Latest |
| Default Ports | 6333 (HTTP), 6334 (gRPC) |
| Binary Path | botserver-stack/bin/vector_db/qdrant |

Download URL:

https://github.com/qdrant/qdrant/releases/latest/download/qdrant-x86_64-unknown-linux-gnu.tar.gz

Purpose:

  • Vector similarity search
  • Knowledge base embeddings
  • Semantic search

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| Milvus | Apache-2.0 | Distributed, scalable |
| Weaviate | BSD-3 | GraphQL API |
| Chroma | Apache-2.0 | Simple, embedded |
| pgvector | PostgreSQL | PostgreSQL extension |

InfluxDB (Time Series)

| Property | Value |
| --- | --- |
| Service | InfluxDB |
| Current Version | 2.7.5 |
| Default Port | 8086 |
| Binary Path | botserver-stack/bin/timeseries_db/influxd |

Download URL:

https://download.influxdata.com/influxdb/releases/influxdb2-2.7.5-linux-amd64.tar.gz

Purpose:

  • Metrics storage
  • Time-series analytics
  • Monitoring dashboards

Alternatives:

| Alternative | License | Notes |
| --- | --- | --- |
| TimescaleDB | Apache-2.0 | PostgreSQL extension |
| VictoriaMetrics | Apache-2.0 | Prometheus-compatible |
| QuestDB | Apache-2.0 | High-performance SQL |
| Prometheus | Apache-2.0 | Monitoring-focused |

Default LLM Models

DeepSeek R1 Distill Qwen 1.5B

| Property | Value |
| --- | --- |
| Filename | DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_M.gguf |
| Size | ~1.1 GB |
| RAM Required | 4 GB |
| Use Case | Default conversational model |

Download URL:

https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_M.gguf

BGE Small EN v1.5

| Property | Value |
| --- | --- |
| Filename | bge-small-en-v1.5-f32.gguf |
| Size | ~130 MB |
| RAM Required | 512 MB |
| Use Case | Text embeddings for semantic search |

Download URL:

https://huggingface.co/CompendiumLabs/bge-small-en-v1.5-gguf/resolve/main/bge-small-en-v1.5-f32.gguf

Configuration Files Reference

| File | Purpose |
| --- | --- |
| 3rdparty.toml | Component download URLs and checksums |
| config/llm_releases.json | Platform-specific LLM builds |
| botserver-stack/conf/*/ | Per-component configuration |
| .env | Environment variables (generated) |

See Also

Security Auditing

Regular security audits ensure your botserver installation remains protected against known vulnerabilities. This guide covers automated scanning, manual reviews, and best practices.


Rust Dependency Auditing

cargo-audit

botserver uses cargo-audit to scan Rust dependencies for known vulnerabilities.

Install cargo-audit:

cargo install cargo-audit

Run audit:

cd botserver
cargo audit

Expected output (clean):

    Fetching advisory database from `https://github.com/RustSec/advisory-db`
      Loaded 650 security advisories (from ~/.cargo/advisory-db)
    Scanning Cargo.lock for vulnerabilities (425 crate dependencies)

Output with vulnerabilities:

Crate:     openssl
Version:   0.10.38
Title:     `openssl` `X509NameRef::entries` is unsound
Date:      2023-11-23
ID:        RUSTSEC-2023-0072
URL:       https://rustsec.org/advisories/RUSTSEC-2023-0072
Severity:  medium
Solution:  Upgrade to >=0.10.60

Automated CI/CD Auditing

Add to your CI pipeline (.github/workflows/security.yml):

name: Security Audit

on:
  push:
    branches: [main]
  pull_request:
  schedule:
    - cron: '0 0 * * *'  # Daily at midnight

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rustsec/audit-check@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

Strict Auditing

Fail on any warning:

cargo audit --deny warnings

Fail on unmaintained crates:

cargo audit --deny unmaintained

Generate JSON report:

cargo audit --json > audit-report.json

Stack Component Vulnerabilities

CVE Monitoring

Monitor security advisories for each component, for example through each project's GitHub security advisories and release pages.
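
For scripted monitoring, the GitHub releases API can be polled to compare the latest upstream tag against the version pinned in 3rdparty.toml; Vault is used here only as an example:

# Print the latest upstream release tag (requires jq)
curl -s https://api.github.com/repos/hashicorp/vault/releases/latest | jq -r '.tag_name'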

Trivy Container Scanning

If using containers, scan with Trivy:

# Install Trivy
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin

# Scan filesystem
trivy fs --security-checks vuln,config ./botserver-stack/

# Scan specific binary
trivy fs --security-checks vuln ./botserver-stack/bin/vault/

Grype Binary Scanning

Scan binaries for vulnerabilities:

# Install Grype
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

# Scan directory
grype dir:./botserver-stack/bin/

Network Security Audit

Port Scanning

Verify only expected ports are open:

# Local port check
ss -tlnp | grep LISTEN

# Expected ports
# 8200  - Vault
# 5432  - PostgreSQL
# 8080  - Zitadel / API
# 9000  - MinIO API
# 9001  - MinIO Console
# 6379  - Valkey
# 8081  - LLM Server
# 8082  - Embedding Server
# 443   - HTTPS Proxy
# 53    - DNS
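
A short script can flag any listener outside this expected set (a sketch; extend the list if you run optional components such as Qdrant or InfluxDB):

# Warn about listeners on unexpected ports
expected="8200 5432 8080 9000 9001 6379 8081 8082 443 53"
ss -tlnH | awk '{print $4}' | sed 's/.*://' | sort -un | while read -r port; do
    case " $expected " in
        *" $port "*) ;;  # expected listener
        *) echo "UNEXPECTED listener on port $port" ;;
    esac
done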

External port scan:

nmap -sT -p- localhost

TLS Certificate Audit

Check certificate validity:

# Check expiration
openssl x509 -in botserver-stack/conf/system/certificates/api/server.crt -noout -dates

# Check certificate chain
openssl verify -CAfile botserver-stack/conf/system/certificates/ca/ca.crt \
    botserver-stack/conf/system/certificates/api/server.crt

Firewall Rules

Ensure proper firewall configuration:

# UFW (Ubuntu)
sudo ufw status verbose

# iptables
sudo iptables -L -n -v

Recommended rules:

# Allow only necessary ports
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 443/tcp   # HTTPS
sudo ufw allow 8080/tcp  # API (if exposed)

Secrets Audit

Vault Health Check

# Check Vault seal status
curl -s http://localhost:8200/v1/sys/seal-status | jq

# List enabled auth methods
VAULT_ADDR=http://localhost:8200 vault auth list

# Audit enabled secrets engines
VAULT_ADDR=http://localhost:8200 vault secrets list

Environment Variable Audit

Check for leaked secrets:

# Search for hardcoded secrets
grep -r "password" --include="*.toml" --include="*.json" --include="*.csv" .
grep -r "secret" --include="*.toml" --include="*.json" --include="*.csv" .
grep -r "api_key" --include="*.toml" --include="*.json" --include="*.csv" .

# Check .env file permissions
ls -la .env
# Should be: -rw------- (600)

Rotate Secrets

Regular rotation schedule:

# Generate new database password
./botserver rotate-secret tables

# Generate new drive credentials
./botserver rotate-secret drive

# Rotate all secrets
./botserver rotate-secrets --all

Code Security Analysis

Static Analysis with Clippy

# Run Clippy with all lints
cargo clippy -- -W clippy::all -W clippy::pedantic -W clippy::nursery

# Security-focused lints
cargo clippy -- -W clippy::unwrap_used -W clippy::expect_used

SAST with Semgrep

# Install Semgrep
pip install semgrep

# Run Rust security rules
semgrep --config p/rust .

# Run all security rules
semgrep --config p/security-audit .

Dependency Review

Check for outdated dependencies:

# List outdated crates
cargo outdated

# Check for yanked crates
cargo audit --deny yanked

Database Security

PostgreSQL Audit

# Check authentication settings
cat botserver-stack/conf/tables/pg_hba.conf

# Verify SSL is enabled
psql $DATABASE_URL -c "SHOW ssl;"

# Check user permissions
psql $DATABASE_URL -c "SELECT * FROM pg_roles WHERE rolname NOT LIKE 'pg_%';"

Connection Security

Ensure encrypted connections:

-- Check current connections
SELECT datname, usename, ssl, client_addr 
FROM pg_stat_ssl 
JOIN pg_stat_activity ON pg_stat_ssl.pid = pg_stat_activity.pid;

Compliance Checks

OWASP Top 10

| Risk | Mitigation | Status Check |
| --- | --- | --- |
| Injection | Parameterized queries | grep -r "raw_sql" src/ |
| Broken Auth | Zitadel handles auth | Check Zitadel config |
| Sensitive Data | Vault encryption | vault status |
| XXE | No XML parsing | N/A |
| Broken Access | RBAC via Zitadel | Check permissions |
| Security Misconfig | Audit configs | Review conf/ |
| XSS | Template escaping | Askama auto-escapes |
| Insecure Deserialization | Serde validation | Code review |
| Vulnerable Components | cargo audit | Automated |
| Logging | Structured logs | Check log config |

SOC 2 Checklist

  • Access controls documented
  • Encryption at rest enabled
  • Encryption in transit (TLS)
  • Audit logging enabled
  • Backup procedures documented
  • Incident response plan
  • Vulnerability management process

Audit Schedule

| Audit Type | Frequency | Tool |
| --- | --- | --- |
| Dependency vulnerabilities | Daily (CI) | cargo-audit |
| Container scanning | Weekly | Trivy |
| Secret rotation | Monthly | Vault |
| Port scanning | Monthly | nmap |
| Full security review | Quarterly | Manual |
| Penetration testing | Annually | External |

Automated Security Script

Create security-audit.sh:

#!/bin/bash
set -e

echo "=== botserver Security Audit ==="
echo "Date: $(date)"
echo

echo "--- Rust Dependency Audit ---"
cargo audit --deny warnings || echo "WARN: Vulnerabilities found"

echo
echo "--- Checking for Hardcoded Secrets ---"
if grep -r "password.*=" --include="*.rs" src/ 2>/dev/null | grep -v "fn\|let\|//"; then
    echo "WARN: Potential hardcoded passwords found"
fi

echo
echo "--- Port Scan ---"
ss -tlnp | grep LISTEN

echo
echo "--- Certificate Expiry ---"
for cert in botserver-stack/conf/system/certificates/*/server.crt; do
    if [ -f "$cert" ]; then
        expiry=$(openssl x509 -in "$cert" -noout -enddate 2>/dev/null | cut -d= -f2)
        echo "$cert: $expiry"
    fi
done

echo
echo "--- Vault Status ---"
curl -s http://localhost:8200/v1/sys/seal-status 2>/dev/null | jq -r '.sealed' || echo "Vault not running"

echo
echo "=== Audit Complete ==="

Run periodically:

chmod +x security-audit.sh
./security-audit.sh > audit-$(date +%Y%m%d).log

Reporting Vulnerabilities

If you discover a security vulnerability in botserver:

  1. Do NOT create a public GitHub issue
  2. Email security@generalbots.ai with details
  3. Include steps to reproduce
  4. Allow 90 days for fix before disclosure

See Also

Backup and Recovery

Protecting your botserver data requires regular backups of databases, configurations, and file storage. This guide covers backup strategies, procedures, and disaster recovery.


What to Backup

| Component | Data Location | Priority | Method |
| --- | --- | --- | --- |
| PostgreSQL | botserver-stack/data/tables/ | Critical | pg_dump |
| Vault | botserver-stack/data/vault/ | Critical | Vault snapshot |
| MinIO | botserver-stack/data/drive/ | Critical | mc mirror |
| Configurations | botserver-stack/conf/ | High | File copy |
| Bot Packages | S3 buckets (*.gbai) | High | mc mirror |
| Models | botserver-stack/data/llm/ | Medium | File copy |
| Logs | botserver-stack/logs/ | Low | Optional |

Quick Backup Commands

# Full backup (all components)
./botserver backup

# Backup specific component
./botserver backup tables
./botserver backup drive
./botserver backup vault

# Backup to specific location
./botserver backup --output /mnt/backup/$(date +%Y%m%d)

Database Backup (PostgreSQL)

Full Database Dump

# Using pg_dump
pg_dump $DATABASE_URL > backup-$(date +%Y%m%d-%H%M%S).sql

# Compressed backup
pg_dump $DATABASE_URL | gzip > backup-$(date +%Y%m%d).sql.gz

# Custom format (faster restore)
pg_dump -Fc $DATABASE_URL > backup-$(date +%Y%m%d).dump

Incremental Backups with WAL

Enable WAL archiving in postgresql.conf:

wal_level = replica
archive_mode = on
archive_command = 'cp %p /backup/wal/%f'
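
Archived WAL segments are only useful together with a periodic base backup to replay them onto. pg_basebackup, which ships with PostgreSQL, takes one (the target directory is illustrative):

# Take a compressed base backup for point-in-time recovery
mkdir -p /backup/base
pg_basebackup -D /backup/base/$(date +%Y%m%d) -Ft -z -P -d "$DATABASE_URL"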

Automated Database Backup Script

#!/bin/bash
# backup-database.sh

BACKUP_DIR="/backup/postgres"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d-%H%M%S)

mkdir -p $BACKUP_DIR

# Create backup
pg_dump -Fc $DATABASE_URL > "$BACKUP_DIR/botserver-$DATE.dump"

# Remove old backups
find $BACKUP_DIR -name "*.dump" -mtime +$RETENTION_DAYS -delete

echo "Backup complete: botserver-$DATE.dump"

Database Restore

# From SQL dump
psql $DATABASE_URL < backup.sql

# From custom format (faster)
pg_restore -d $DATABASE_URL backup.dump

# Drop and recreate (clean restore)
pg_restore -c -d $DATABASE_URL backup.dump

Vault Backup

Snapshot Method

# Create Vault snapshot
VAULT_ADDR=http://localhost:8200 vault operator raft snapshot save vault-backup-$(date +%Y%m%d).snap

File-Based Backup

# Stop Vault first
./botserver stop vault

# Copy data directory
tar -czvf vault-data-$(date +%Y%m%d).tar.gz botserver-stack/data/vault/

# Copy unseal keys (store securely!)
cp botserver-stack/conf/vault/init.json /secure/location/

Vault Restore

# Stop Vault
./botserver stop vault

# Restore data
rm -rf botserver-stack/data/vault/*
tar -xzvf vault-data-backup.tar.gz -C botserver-stack/data/

# Start and unseal
./botserver start vault
./botserver unseal

Warning: Keep init.json (unseal keys and root token) in a secure, separate location!
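
One way to do that is to encrypt the unseal material before moving it, mirroring the gpg approach used for certificates later in this guide:

# Encrypt unseal keys and root token before moving them off the host
gpg --symmetric --cipher-algo AES256 botserver-stack/conf/vault/init.json
mv botserver-stack/conf/vault/init.json.gpg /secure/location/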


Object Storage Backup (MinIO)

Using MinIO Client (mc)

# Configure mc
mc alias set local http://localhost:9000 $DRIVE_ACCESS_KEY $DRIVE_SECRET_KEY

# Backup all buckets
mc mirror local/ /backup/minio/

# Backup specific bot
mc mirror local/mybot.gbai /backup/bots/mybot.gbai/

Sync to Remote Storage

# Backup to S3
mc mirror local/ s3/botserver-backup/

# Backup to Backblaze B2
mc mirror local/ b2/botserver-backup/

# Backup to another MinIO
mc mirror local/ remote/botserver-backup/

Restore from Backup

# Restore all buckets
mc mirror /backup/minio/ local/

# Restore specific bucket
mc mirror /backup/bots/mybot.gbai/ local/mybot.gbai/

Configuration Backup

Full Configuration Backup

# Backup all configs
tar -czvf config-backup-$(date +%Y%m%d).tar.gz \
    botserver-stack/conf/ \
    3rdparty.toml \
    .env

# Exclude certificates (backup separately with encryption)
tar -czvf config-backup-$(date +%Y%m%d).tar.gz \
    --exclude='certificates' \
    botserver-stack/conf/

Certificate Backup (Encrypted)

# Backup certificates with encryption
tar -cz botserver-stack/conf/system/certificates/ | \
    gpg --symmetric --cipher-algo AES256 > certs-backup.tar.gz.gpg

Restore Configuration

# Restore configs
tar -xzvf config-backup.tar.gz

# Restore encrypted certificates
gpg --decrypt certs-backup.tar.gz.gpg | tar -xz

Full System Backup

Complete Backup Script

#!/bin/bash
# full-backup.sh

set -e

BACKUP_DIR="/backup/botserver/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"

echo "Starting full backup to $BACKUP_DIR"

# 1. Database
echo "Backing up database..."
pg_dump -Fc $DATABASE_URL > "$BACKUP_DIR/database.dump"

# 2. Vault snapshot
echo "Backing up Vault..."
VAULT_ADDR=http://localhost:8200 vault operator raft snapshot save "$BACKUP_DIR/vault.snap" 2>/dev/null || \
    tar -czvf "$BACKUP_DIR/vault-data.tar.gz" botserver-stack/data/vault/

# 3. Object storage
echo "Backing up drive..."
mc mirror local/ "$BACKUP_DIR/drive/" --quiet

# 4. Configurations
echo "Backing up configurations..."
tar -czvf "$BACKUP_DIR/config.tar.gz" \
    botserver-stack/conf/ \
    3rdparty.toml \
    .env \
    config/

# 5. Models (optional, large files)
if [ "$1" == "--include-models" ]; then
    echo "Backing up models..."
    tar -czvf "$BACKUP_DIR/models.tar.gz" botserver-stack/data/llm/
fi

# Create manifest
echo "Creating manifest..."
cat > "$BACKUP_DIR/manifest.txt" << EOF
botserver Backup
Date: $(date)
Host: $(hostname)

Contents:
- database.dump: PostgreSQL database
- vault.snap: Vault secrets
- drive/: Object storage contents
- config.tar.gz: Configuration files
EOF

echo "Backup complete: $BACKUP_DIR"
du -sh "$BACKUP_DIR"
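
Make the script executable and run it directly; pass --include-models when the model files should be captured as well:

chmod +x full-backup.sh
./full-backup.sh --include-models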

Scheduled Backups

Add to crontab:

# Daily database backup at 2 AM
0 2 * * * /opt/botserver/scripts/backup-database.sh

# Weekly full backup on Sunday at 3 AM
0 3 * * 0 /opt/botserver/scripts/full-backup.sh

# Monthly backup with models
0 4 1 * * /opt/botserver/scripts/full-backup.sh --include-models

Disaster Recovery

Recovery Procedure

  1. Install fresh botserver

    ./botserver --skip-bootstrap
    
  2. Restore configurations

    tar -xzvf config-backup.tar.gz
    
  3. Restore Vault

    tar -xzvf vault-data.tar.gz
    ./botserver start vault
    ./botserver unseal
    
  4. Restore database

    ./botserver start tables
    pg_restore -d $DATABASE_URL database.dump
    
  5. Restore object storage

    ./botserver start drive
    mc mirror /backup/drive/ local/
    
  6. Start remaining services

    ./botserver start
    
  7. Verify

    ./botserver status
    ./botserver test
    

Recovery Time Objectives

| Scenario | RTO Target | Method |
| --- | --- | --- |
| Single component failure | < 15 min | Restart/restore component |
| Database corruption | < 1 hour | pg_restore from backup |
| Full server failure | < 4 hours | Full restore procedure |
| Data center failure | < 24 hours | Geo-replicated restore |

Backup Verification

Test Restore Regularly

# Restore to test environment
./botserver --test-restore /backup/latest/

# Verify database integrity
pg_restore --list database.dump
psql $DATABASE_URL -c "SELECT COUNT(*) FROM bots;"

# Verify drive contents
mc ls local/

Backup Integrity Checks

# Verify backup file integrity
sha256sum /backup/*/database.dump > /backup/checksums.txt

# Verify on restore
sha256sum -c /backup/checksums.txt

Cloud Backup Integration

AWS S3

# Configure AWS CLI
aws configure

# Sync backups to S3
aws s3 sync /backup/botserver/ s3://my-backup-bucket/botserver/

# Enable versioning for point-in-time recovery
aws s3api put-bucket-versioning \
    --bucket my-backup-bucket \
    --versioning-configuration Status=Enabled

Backblaze B2

# Configure rclone
rclone config

# Sync backups
rclone sync /backup/botserver/ b2:my-backup-bucket/botserver/

Encrypted Remote Backup

# Encrypt before upload
tar -cz /backup/botserver/ | \
    gpg --symmetric --cipher-algo AES256 | \
    aws s3 cp - s3://my-backup-bucket/botserver-$(date +%Y%m%d).tar.gz.gpg

Retention Policy

| Backup Type | Retention | Storage |
| --- | --- | --- |
| Hourly snapshots | 24 hours | Local |
| Daily backups | 30 days | Local + Remote |
| Weekly backups | 12 weeks | Remote |
| Monthly backups | 12 months | Remote (cold) |
| Yearly backups | 7 years | Archive |

Cleanup Script

#!/bin/bash
# cleanup-backups.sh

BACKUP_DIR="/backup/botserver"

# Remove daily backups older than 30 days
find $BACKUP_DIR/daily -mtime +30 -delete

# Remove weekly backups older than 12 weeks
find $BACKUP_DIR/weekly -mtime +84 -delete

# Remove monthly backups older than 12 months
find $BACKUP_DIR/monthly -mtime +365 -delete

See Also

Troubleshooting

This guide covers common issues you may encounter with botserver and their solutions.


Quick Diagnostics

Check Overall Status

# View all service status
./botserver status

# Check specific service
./botserver status llm
./botserver status tables
./botserver status vault

View Logs

# All logs
tail -f botserver-stack/logs/*.log

# Specific service
tail -100 botserver-stack/logs/llm.log
tail -100 botserver-stack/logs/postgres.log
tail -100 botserver-stack/logs/vault.log

# With filtering
grep -i error botserver-stack/logs/*.log
grep -i "failed\|error\|panic" botserver-stack/logs/*.log

System Resources

# Memory usage
free -h

# Disk space
df -h botserver-stack/

# Process list
ps aux | grep -E "llama|postgres|minio|vault|valkey"

# Open ports
ss -tlnp | grep LISTEN

Startup Issues

Bootstrap Fails

Symptom: ./botserver fails during initial setup

Common Causes & Solutions:

  1. Port already in use

    # Find what's using the port
    lsof -i :9000
    lsof -i :5432
    
    # Kill conflicting process
    kill -9 <PID>
    
    # Or change port in config
    
  2. Insufficient disk space

    # Check available space
    df -h
    
    # Clean up old installers
    rm -rf botserver-installers/*.old
    
    # Clean logs
    rm -f botserver-stack/logs/*.log.old
    
  3. Download failure

    # Clear cache and retry
    rm -rf botserver-installers/component-name*
    ./botserver bootstrap
    
    # Manual download
    curl -L -o botserver-installers/file.zip "URL"
    
  4. Permission denied

    # Fix permissions
    chmod +x botserver
    chmod -R u+rwX botserver-stack/
    

Vault Won’t Start

Symptom: Vault fails to initialize or unseal

Solutions:

  1. First-time setup failed

    # Reset Vault completely
    rm -rf botserver-stack/data/vault/*
    rm -f botserver-stack/conf/vault/init.json
    ./botserver bootstrap
    
  2. Vault is sealed

    # Check seal status
    curl http://localhost:8200/v1/sys/seal-status
    
    # Unseal manually
    ./botserver unseal
    
  3. Lost unseal keys

    # Check init.json exists
    cat botserver-stack/conf/vault/init.json
    
    # If lost, must reset Vault (DATA LOSS)
    ./botserver reset vault
    

Database Won’t Start

Symptom: PostgreSQL fails to start

Solutions:

  1. Corrupted data directory

    # Check PostgreSQL logs
    tail -50 botserver-stack/logs/postgres.log
    
    # Try recovery
    ./botserver-stack/bin/tables/bin/pg_resetwal -f botserver-stack/data/tables/
    
  2. Port conflict

    # Check if another PostgreSQL is running
    lsof -i :5432
    
    # Stop system PostgreSQL
    sudo systemctl stop postgresql
    
  3. Incorrect permissions

    chmod 700 botserver-stack/data/tables/
    

Service Issues

LLM Server Not Responding

Symptom: Requests to port 8081/8082 fail

Solutions:

  1. Check if running

    pgrep llama-server
    curl -k https://localhost:8081/health
    
  2. Model not found

    # Verify model exists
    ls -la botserver-stack/data/llm/
    
    # Re-download model
    ./botserver update llm
    
  3. Out of memory

    # Check memory usage
    free -h
    
    # Use smaller model or reduce context
    # Edit config.csv:
    # llm-server-ctx-size,2048
    
  4. GPU issues

    # Check CUDA
    nvidia-smi
    
    # Fall back to CPU
    # Edit config.csv:
    # llm-server-gpu-layers,0
    
  5. Restart LLM server

    pkill llama-server
    ./botserver start llm
    

Drive (MinIO) Issues

Symptom: File uploads/downloads fail

Solutions:

  1. Check MinIO status

    curl http://localhost:9000/minio/health/live
    
  2. Credential issues

    # Verify credentials from Vault
    ./botserver show-secret drive
    
    # Test with mc client
    mc alias set local http://localhost:9000 ACCESS_KEY SECRET_KEY
    mc ls local/
    
  3. Disk full

    df -h botserver-stack/data/drive/
    
    # Prune old objects (never delete MinIO's internal .minio.sys metadata)
    mc rm --recursive --force --older-than 30d local/bucket/
    

Cache (Valkey) Issues

Symptom: Session errors, slow responses

Solutions:

  1. Check Valkey status

    ./botserver-stack/bin/cache/valkey-cli ping
    # Expected: PONG
    
  2. Memory issues

    ./botserver-stack/bin/cache/valkey-cli info memory
    
    # Flush cache if needed
    ./botserver-stack/bin/cache/valkey-cli FLUSHALL
    
  3. Connection refused

    # Check if running
    pgrep valkey-server
    
    # Restart
    ./botserver restart cache
    

Directory (Zitadel) Issues

Symptom: Login fails, authentication errors

Solutions:

  1. Check Zitadel logs

    tail -100 botserver-stack/logs/zitadel.log
    
  2. Database connection

    # Zitadel uses PostgreSQL
    psql $DATABASE_URL -c "SELECT 1;"
    
  3. Certificate issues

    # Regenerate certificates
    ./botserver regenerate-certs
    

Connection Issues

Cannot Connect to Database

Error: connection refused or authentication failed

Solutions:

  1. Verify DATABASE_URL

    echo $DATABASE_URL
    # Should be: postgres://user:pass@localhost:5432/dbname
    
  2. Check PostgreSQL is running

    pgrep postgres
    ./botserver status tables
    
  3. Test connection

    psql $DATABASE_URL -c "SELECT 1;"
    
  4. Check pg_hba.conf

    cat botserver-stack/conf/tables/pg_hba.conf
    # Ensure local connections are allowed
    

SSL/TLS Certificate Errors

Error: certificate verify failed or SSL handshake failed

Solutions:

  1. Regenerate certificates

    ./botserver regenerate-certs
    
  2. Check certificate validity

    openssl x509 -in botserver-stack/conf/system/certificates/api/server.crt -noout -dates
    
  3. Skip verification (development only)

    curl -k https://localhost:8081/health
    

Network Timeouts

Error: Requests timeout after waiting

Solutions:

  1. Check DNS resolution

    nslookup api.botserver.local
    
  2. Verify firewall rules

    sudo ufw status
    sudo iptables -L
    
  3. Check service is listening

    ss -tlnp | grep 8080
    

Performance Issues

Slow Response Times

Solutions:

  1. Check system resources

    top -b -n 1 | head -20
    iostat -x 1 3
    
  2. Database performance

    psql $DATABASE_URL -c "SELECT * FROM pg_stat_activity;"
    
    # Vacuum database
    psql $DATABASE_URL -c "VACUUM ANALYZE;"
    
  3. LLM performance

    # Reduce context size
    # config.csv: llm-server-ctx-size,2048
    
    # Use GPU layers
    # config.csv: llm-server-gpu-layers,35
    
  4. Enable caching

    # Verify cache is working
    ./botserver-stack/bin/cache/valkey-cli info stats
    

High Memory Usage

Solutions:

  1. Identify memory hogs

    ps aux --sort=-%mem | head -10
    
  2. Reduce LLM memory

    # Use quantized model (Q3_K_M instead of F16)
    # Reduce context: llm-server-ctx-size,1024
    # Reduce batch: llm-server-batch-size,256
    
  3. Limit PostgreSQL memory

    # Edit postgresql.conf
    shared_buffers = 256MB
    work_mem = 64MB
    

High Disk Usage

Solutions:

  1. Find large files

    du -sh botserver-stack/*
    du -sh botserver-stack/data/*
    
  2. Clean logs

    truncate -s 0 botserver-stack/logs/*.log
    
  3. Clean old installers

    # Keep only latest versions
    ls -la botserver-installers/
    rm botserver-installers/old-*
    
  4. Prune drive storage

    mc rm --recursive --older-than 30d local/bucket/
    

Update Issues

Component Update Failed

Symptom: Update command fails or service won’t start after update

Solutions:

  1. Clear cache and retry

    rm botserver-installers/component-name*
    ./botserver update component-name
    
  2. Checksum mismatch

    # Verify checksum
    sha256sum botserver-installers/file.zip
    
    # Compare with 3rdparty.toml
    grep sha256 3rdparty.toml | grep component
    
    # Update checksum if release changed
    
  3. Rollback to previous version

    # If old version cached
    ls botserver-installers/
    
    # Restore old binary
    cp botserver-installers/old-version.zip /tmp/
    unzip /tmp/old-version.zip -d botserver-stack/bin/component/
    

Database Migration Failed

Solutions:

  1. Check migration status

    ./botserver migrate --status
    
  2. Run migrations manually

    ./botserver migrate
    
  3. Rollback migration

    ./botserver migrate --rollback
    
  4. Reset from backup

    pg_restore -c -d $DATABASE_URL backup.dump
    

Common Error Messages

| Error | Cause | Solution |
| --- | --- | --- |
| connection refused | Service not running | Start the service |
| permission denied | File permissions | chmod +x on binary |
| address already in use | Port conflict | Kill conflicting process |
| out of memory | Insufficient RAM | Reduce model/context size |
| no such file or directory | Missing binary/config | Re-run bootstrap |
| certificate verify failed | SSL issues | Regenerate certificates |
| authentication failed | Wrong credentials | Check Vault secrets |
| disk quota exceeded | Disk full | Clean logs/old files |
| too many open files | ulimit too low | ulimit -n 65536 |
| connection timed out | Network/firewall | Check firewall rules |

Getting Help

Collect Diagnostics

# Generate diagnostic report
./botserver diagnose > diagnostics-$(date +%Y%m%d).txt

# Include in bug reports:
# - botserver version
# - OS and architecture
# - Error messages
# - Relevant logs

Debug Logging

# Enable verbose logging
RUST_LOG=debug ./botserver

# Trace level (very verbose)
RUST_LOG=trace ./botserver

Community Support


See Also

Testing Architecture

Overview

The General Bots testing framework is designed with a multi-layered, isolated approach to ensure comprehensive coverage from individual components to complete user workflows.

Architecture Diagram

┌─────────────────────────────────────────────────────────────┐
│                     Test Execution Layer                     │
│  (GitHub Actions, CI/CD, Local Development)                │
└────────────────────┬────────────────────────────────────────┘
                     │
        ┌────────────┼────────────┐
        │            │            │
        ▼            ▼            ▼
   ┌─────────┐  ┌─────────┐  ┌──────────┐
   │  Unit   │  │ Integr. │  │   E2E    │
   │ Tests   │  │ Tests   │  │  Tests   │
   └────┬────┘  └────┬────┘  └─────┬────┘
        │            │            │
        └────────────┼────────────┘
                     │
        ┌────────────▼────────────┐
        │   Test Harness Layer    │
        │ (Context, Utils, Mocks) │
        └────────────┬────────────┘
                     │
        ┌────────────┼────────────┐
        │            │            │
        ▼            ▼            ▼
   ┌─────────┐  ┌─────────┐  ┌──────────┐
   │botserver│  │  Browser│  │ Services │
   │(Testing)│  │ (WebDrv)│  │(Mock/Iso)│
   └─────────┘  └─────────┘  └──────────┘
        │            │            │
        └────────────┼────────────┘
                     │
        ┌────────────▼────────────┐
        │  Temporary Stack Layer  │
        │ (Isolated Environments) │
        └────────────┬────────────┘
                     │
        ┌────────────┴────────────┐
        │                         │
        ▼                         ▼
   ┌─────────────┐          ┌──────────────┐
   │ PostgreSQL  │          │ Redis, MinIO │
   │ (Isolated)  │          │  (Isolated)  │
   └─────────────┘          └──────────────┘

Test Layers

1. Unit Tests

Purpose: Test individual components in isolation

Scope:

  • Single functions or methods
  • Mocked external dependencies
  • No database or external services

Example:

#[test]
fn test_message_formatting() {
    let msg = format_message("Hello");
    assert_eq!(msg, "Hello!");
}

Location: bottest/tests/unit/

2. Integration Tests

Purpose: Test multiple components working together

Scope:

  • Multi-component interactions
  • Real database connections
  • Service integration
  • Error handling across components

Example:

#[tokio::test]
async fn test_message_storage_and_retrieval() {
    let db = setup_test_db().await;
    let msg = Message::new("Hello");
    db.save(&msg).await.unwrap();
    let retrieved = db.get(msg.id).await.unwrap();
    assert_eq!(retrieved.text, "Hello");
}

Location: bottest/tests/integration/

3. End-to-End Tests

Purpose: Test complete user workflows

Scope:

  • Complete user journeys
  • Browser interactions
  • Multi-phase workflows
  • Real-world scenarios

Phases:

  1. Platform Loading
  2. botserver Initialization
  3. User Authentication
  4. Chat Interaction
  5. Logout & Session Management

Example:

#[tokio::test]
async fn test_complete_platform_flow_login_chat_logout() -> Result<()> {
    let ctx = E2ETestContext::setup_with_browser().await?;
    let browser = ctx.browser.as_ref().unwrap();

    verify_platform_loading(&ctx).await?;
    verify_botserver_running(&ctx).await?;
    test_user_login(browser, &ctx).await?;
    test_chat_interaction(browser, &ctx).await?;
    test_user_logout(browser, &ctx).await?;

    ctx.close().await;
    Ok(())
}

Location: bottest/tests/e2e/

Test Harness

The test harness provides utilities for test setup and context management:

TestHarness
├── Setup utilities
│   ├── Create test database
│   ├── Start mock services
│   ├── Initialize configurations
│   └── Provision test data
├── Context management
│   ├── Resource tracking
│   ├── Cleanup coordination
│   └── Error handling
└── Helper functions
    ├── HTTP requests
    ├── Browser interactions
    └── Service mocking

E2ETestContext

Provides complete environment for E2E testing:

pub struct E2ETestContext {
    pub ctx: TestContext,
    pub server: botserverInstance,
    pub browser: Option<Browser>,
}

impl E2ETestContext {
    pub async fn setup() -> Result<Self>;
    pub async fn setup_with_browser() -> Result<Self>;
    pub fn base_url(&self) -> &str;
    pub fn has_browser(&self) -> bool;
    pub async fn close(self);
}

Temporary Stack Architecture

Isolated test environments for complete system integration:

/tmp/botserver-test-{timestamp}-{id}/
├── postgres/
│   ├── data/               ← PostgreSQL data files
│   ├── postgresql.log      ← Database logs
│   └── postgresql.conf     ← Configuration
├── redis/
│   ├── data/               ← Redis persistence
│   └── redis.log
├── minio/
│   ├── data/               ← S3-compatible storage
│   └── minio.log
├── botserver/
│   ├── config/
│   │   ├── config.toml     ← Application config
│   │   └── .env            ← Environment variables
│   ├── logs/
│   │   ├── botserver.log   ← Main application logs
│   │   ├── api.log         ← API logs
│   │   └── debug.log       ← Debug logs
│   ├── cache/              ← Local cache
│   └── state.json          ← Stack metadata
└── env.stack               ← Connection strings for tests

Isolation Strategy

Service Isolation

Each test gets dedicated service instances:

  • Database: Separate PostgreSQL cluster on port 5433
  • Cache: Separate Redis instance on port 6380
  • Storage: Separate MinIO instance on port 9001
  • API: Separate botserver on port 8000

Network Isolation

  • All services on localhost (127.0.0.1)
  • Non-standard ports to avoid conflicts
  • Docker containers for complete OS-level isolation

Data Isolation

  • Separate database schemas per test (sketched after this list)
  • Temporary file systems for storage
  • No shared configuration between tests
  • Automatic cleanup on completion
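
The per-schema approach can be sketched as follows; the tokio_postgres crate and the timestamp-based naming are illustrative assumptions, not the project's actual harness code:

use std::time::{SystemTime, UNIX_EPOCH};
use tokio_postgres::NoTls;

// Give each test its own uniquely named schema so tables never collide.
async fn create_test_schema(conn_str: &str) -> Result<String, tokio_postgres::Error> {
    let (client, connection) = tokio_postgres::connect(conn_str, NoTls).await?;
    tokio::spawn(connection); // drive the connection in the background
    let ts = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis();
    let schema = format!("test_{ts}"); // add a random suffix for parallel runs
    client.batch_execute(&format!("CREATE SCHEMA {schema}")).await?;
    Ok(schema)
}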

Test Execution Flow

1. Test Initialization
   ├─ Parse environment variables
   ├─ Check prerequisites (WebDriver, services)
   └─ Create test context

2. Stack Setup
   ├─ Create temporary directory
   ├─ Initialize databases
   ├─ Start services
   └─ Wait for readiness

3. Test Execution
   ├─ Setup phase
   ├─ Action phase
   ├─ Verification phase
   └─ Assertion phase

4. Cleanup
   ├─ Close browser connections
   ├─ Shutdown services gracefully
   ├─ Remove temporary directories
   └─ Report results

Browser Automation

Uses WebDriver (Selenium) protocol for browser testing:

Test Code
    ↓
Reqwest HTTP Client
    ↓
WebDriver Protocol (JSON over HTTP)
    ↓
chromedriver / Selenium Server
    ↓
Chrome/Chromium Browser
    ↓
Test Verification
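
For illustration, the first hop in that chain, creating a session, is a single HTTP POST (a sketch using reqwest and serde_json; chromedriver is assumed to listen on localhost:4444):

use serde_json::json;

// Create a W3C WebDriver session and return its id.
async fn new_session() -> Result<String, reqwest::Error> {
    let client = reqwest::Client::new();
    let resp: serde_json::Value = client
        .post("http://localhost:4444/session")
        .json(&json!({ "capabilities": { "alwaysMatch": { "browserName": "chrome" } } }))
        .send()
        .await?
        .json()
        .await?;
    // All later commands are sent to /session/{id}/...
    Ok(resp["value"]["sessionId"].as_str().unwrap_or_default().to_string())
}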

WebDriver Commands

  • Navigate to URL
  • Find elements by selector
  • Click buttons and links
  • Fill form inputs
  • Wait for elements
  • Execute JavaScript
  • Take screenshots
  • Get element text

Error Handling

Comprehensive error handling at all levels:

Test Execution
    │
    ├─ Setup Error
    │  └─ Fail fast, preserve environment
    │
    ├─ Execution Error
    │  ├─ Log detailed context
    │  ├─ Capture screenshots
    │  └─ Optionally preserve stack
    │
    └─ Cleanup Error
       └─ Log warning, continue cleanup

Performance Considerations

Test Execution Times

  • Unit Tests: ~0.1-1 second
  • Integration Tests: ~1-10 seconds
  • E2E Tests: ~30-60 seconds
  • Full Suite: ~2-3 minutes

Optimization Strategies

  1. Parallel Execution: Run independent tests simultaneously
  2. Caching: Reuse expensive resources
  3. Lazy Loading: Initialize only needed components
  4. Release Mode: Use --release for faster compilation
  5. Selective Testing: Run only relevant tests during development

CI/CD Integration

GitHub Actions Workflow

Trigger (push/PR)
    ↓
Setup Environment
    ├─ Install Rust
    ├─ Start WebDriver
    └─ Setup test infrastructure
    ↓
Run Tests
    ├─ Unit tests
    ├─ Integration tests
    └─ E2E tests
    ↓
Collect Artifacts
    ├─ Test results
    ├─ Coverage reports
    ├─ Screenshots/logs
    └─ Performance metrics
    ↓
Report Results
    └─ Pass/fail status

Best Practices

1. Test Organization

  • Keep tests focused and single-purpose
  • Use descriptive names
  • Group related tests
  • Organize by layer (unit/integration/e2e)

2. Test Design

  • Make tests independent
  • Use realistic data
  • Test both happy and error paths
  • Avoid test interdependencies

3. Test Maintenance

  • Keep tests up to date with code
  • Remove obsolete tests
  • Refactor test helpers
  • Monitor test execution time

4. Test Documentation

  • Document complex test logic
  • Explain test prerequisites
  • Document setup/teardown
  • Include troubleshooting tips

Debugging

Debug Helpers

  • RUST_LOG=debug - Verbose logging
  • HEADED=1 - Show browser UI
  • --nocapture - Print test output
  • --test-threads=1 - Run sequentially

Debug Techniques

  • Check server logs
  • Review screenshots
  • Inspect HTTP requests
  • Step through code
  • Use REPL for experimentation

Future Enhancements

  1. Load Testing - Concurrent user scenarios
  2. Visual Regression - Screenshot comparison
  3. Accessibility Testing - WCAG compliance
  4. Security Testing - Vulnerability scanning
  5. Performance Profiling - Memory and CPU analysis
  6. Multi-region - Test across deployments
  7. Snapshot Testing - Compare outputs over time

References

End-to-End Testing

End-to-end (E2E) testing validates complete user workflows from platform loading through authentication, interaction, and logout.

Overview

E2E tests simulate real user interactions:

  1. Platform Loading - UI and API infrastructure operational
  2. botserver Initialization - Backend service running and ready
  3. User Authentication - Login workflow functional
  4. Chat Interaction - Message sending and receiving
  5. Logout - Session management and access control

Complete Platform Flow Test

The main E2E test validates the entire user journey:

#[tokio::test]
async fn test_complete_platform_flow_login_chat_logout() -> Result<()> {
    // Setup
    let ctx = E2ETestContext::setup_with_browser().await?;
    let browser = ctx.browser.as_ref().unwrap();

    // Phase 1: Platform Loading
    verify_platform_loading(&ctx).await?;

    // Phase 2: botserver Running
    verify_botserver_running(&ctx).await?;

    // Phase 3: User Login
    test_user_login(browser, &ctx).await?;

    // Phase 4: Chat Interaction
    test_chat_interaction(browser, &ctx).await?;

    // Phase 5: Logout
    test_user_logout(browser, &ctx).await?;

    ctx.close().await;
    Ok(())
}

Test Phases

Phase 1: Platform Loading

Verifies UI and API infrastructure:

verify_platform_loading(&ctx).await?;

Checks:

  • Health endpoint responds with 2xx status
  • API endpoints are accessible
  • Database migrations completed
  • Services are initialized

Phase 2: botserver Initialization

Verifies the backend service is operational:

verify_botserver_running(&ctx).await?;

Checks:

  • Process is alive and responding
  • Configuration properly loaded
  • Dependencies connected (DB, cache, storage)
  • Health checks pass

Phase 3: User Authentication

Tests the login workflow:

test_user_login(browser, &ctx).await?;

Tests:

  • Navigate to login page
  • Form elements present and functional
  • Accept valid test credentials (test@example.com / TestPassword123!)
  • Create session and authentication token
  • Redirect to dashboard/chat interface

Phase 4: Chat Interaction

Tests messaging functionality:

test_chat_interaction(browser, &ctx).await?;

Tests:

  • Chat interface loads correctly
  • User can type and send messages
  • Bot responds with valid output
  • Message history persists
  • Multiple exchanges work correctly

Phase 5: Logout & Session Management

Tests secure session handling:

test_user_logout(browser, &ctx).await?;

Tests:

  • Logout button/action works
  • Session is invalidated
  • User redirected to login page
  • Protected routes block unauthenticated access
  • Cannot access chat after logout

Running E2E Tests

HTTP-Only Tests (No Browser Required)

These tests verify API and infrastructure without browser automation:

cd gb/bottest

# Platform loading verification
cargo test --test e2e test_platform_loading_http_only -- --nocapture

# botserver startup verification
cargo test --test e2e test_botserver_startup -- --nocapture

Execution time: ~2-5 seconds

Complete Flow Tests (Requires WebDriver)

Full browser-based tests with user interactions:

# Start WebDriver first
chromedriver --port=4444 &

# Run complete platform flow
cargo test --test e2e test_complete_platform_flow_login_chat_logout -- --nocapture

# Run simplified flow
cargo test --test e2e test_login_and_chat_flow -- --nocapture

Execution time: ~30-60 seconds

WebDriver Setup

Option 1: Local Installation

# Download chromedriver from https://chromedriver.chromium.org/
# Place in PATH, then start:
chromedriver --port=4444

Option 2: Docker

docker run -d -p 4444:4444 selenium/standalone-chrome

Option 3: Docker Compose

docker-compose up -d webdriver

Environment Variables

Control test behavior:

| Variable | Default | Purpose |
| --- | --- | --- |
| HEADED | unset | Show browser window instead of headless |
| WEBDRIVER_URL | http://localhost:4444 | WebDriver server endpoint |
| SKIP_E2E_TESTS | unset | Skip E2E tests if set |
| RUST_LOG | info | Logging level: debug, info, warn, error |
| KEEP_TEMP_STACK_ON_ERROR | unset | Preserve temp directory on failure |

Examples

# Show browser UI for debugging
HEADED=1 cargo test --test e2e -- --nocapture

# Use custom WebDriver
WEBDRIVER_URL=http://localhost:4445 cargo test --test e2e -- --nocapture

# Verbose logging
RUST_LOG=debug cargo test --test e2e -- --nocapture

# Run single-threaded with output
cargo test --test e2e -- --nocapture --test-threads=1

Test Helpers

Reusable helper functions for custom tests:

// Verify platform is operational
verify_platform_loading(&ctx) -> Result<()>

// Verify botserver is running
verify_botserver_running(&ctx) -> Result<()>

// Perform login with credentials
test_user_login(browser, &ctx) -> Result<()>

// Send message and wait for response
test_chat_interaction(browser, &ctx) -> Result<()>

// Logout and verify session invalidation
test_user_logout(browser, &ctx) -> Result<()>

Test Context

Setup a test context for E2E testing:

use bottest::prelude::*;
use bottest::web::{Browser, BrowserConfig};

// HTTP-only context
let ctx = E2ETestContext::setup().await?;

// With browser automation
let ctx = E2ETestContext::setup_with_browser().await?;
let browser = ctx.browser.as_ref().unwrap();

// Access base URL
let url = ctx.base_url();

// Access running server
let is_running = ctx.server.is_running();

// Cleanup
ctx.close().await;

Common Issues

WebDriver Not Available

Problem: Test fails with “WebDriver not available”

Solution:

# Start WebDriver
chromedriver --port=4444
# or
docker run -d -p 4444:4444 selenium/standalone-chrome

Port Already in Use

Problem: Services fail to start due to port conflicts

Solution:

# Kill existing services
pkill -f chromedriver
pkill -f botserver
pkill -f postgres
pkill -f redis-server

Test Hangs or Timeout

Problem: Test appears to hang or timeout

Solution:

# Run with timeout and verbose output
RUST_LOG=debug timeout 120s cargo test --test e2e test_name -- --nocapture --test-threads=1

Browser Connection Issues

Problem: Browser fails to connect to WebDriver

Solution:

# Use different WebDriver port
WEBDRIVER_URL=http://localhost:4445 cargo test --test e2e -- --nocapture

Debugging

View Test Output

# Show all output
cargo test --test e2e test_name -- --nocapture

# With timestamps
RUST_LOG=debug cargo test --test e2e test_name -- --nocapture

# Save to file
cargo test --test e2e test_name -- --nocapture 2>&1 | tee test.log

Watch Browser in Action

# Run with visible browser
HEADED=1 cargo test --test e2e test_name -- --nocapture --test-threads=1

Check Server Logs

# Monitor logs while tests run
tail -f /tmp/bottest-*/botserver.log

# In another terminal:
cargo test --test e2e test_name -- --nocapture

Performance

Typical execution times:

| Test | Time | Resources |
| --- | --- | --- |
| Platform loading (HTTP-only) | ~2s | Minimal |
| botserver startup (HTTP-only) | ~5s | Minimal |
| Login and chat flow | ~20s | Browser + Memory |
| Complete flow with all phases | ~45s | Browser + Memory |
| Full E2E test suite | ~2-3 min | High |

Use release mode for faster execution:

cargo test --test e2e --release -- --nocapture

CI/CD Integration

GitHub Actions Example

name: E2E Tests
on: [push, pull_request]

jobs:
  e2e:
    runs-on: ubuntu-latest
    services:
      chromedriver:
        image: selenium/standalone-chrome
        options: --shm-size=2gb
    steps:
      - uses: actions/checkout@v3
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - run: cd gb/bottest && cargo test --test e2e -- --nocapture

Temporary Stack Architecture (Future)

When botserver implements --temp-stack, E2E tests will run in isolated environments:

botserver --temp-stack
# Creates: /tmp/botserver-test-{timestamp}-{random}/
# With isolated: PostgreSQL, Redis, MinIO, Mock LLM
# Auto-cleanup after test completion

Benefits:

  • ✓ Isolation - Each test in separate environment
  • ✓ Reproducibility - Consistent setup every time
  • ✓ Automation - No manual configuration
  • ✓ Safety - Won’t interfere with development
  • ✓ Cleanup - Automatic resource management
  • ✓ Parallel - Multiple tests simultaneously
  • ✓ CI/CD Ready - Perfect for automated pipelines

Writing Custom E2E Tests

Create new test files in gb/bottest/tests/e2e/:

use std::time::Duration;

#[tokio::test]
async fn test_my_feature() -> Result<()> {
    // Setup context
    let ctx = E2ETestContext::setup_with_browser().await?;
    let browser = ctx.browser.as_ref().unwrap();

    // Navigate to feature
    browser.navigate(&format!("{}/my-feature", ctx.base_url())).await?;

    // Interact with UI
    browser.click("button.action").await?;
    browser.wait_for_element(".result", Duration::from_secs(10)).await?;

    // Verify results
    let text = browser.get_text(".result").await?;
    assert_eq!(text, "Expected result");

    // Cleanup
    ctx.close().await;
    Ok(())
}

Register in tests/e2e/mod.rs:

mod my_feature;

Best Practices

  1. Keep tests focused - Test one user workflow per test
  2. Use meaningful names - test_complete_platform_flow not test_1
  3. Explicit waits - Use wait_for_element instead of sleep
  4. Test realistic flows - Use actual test credentials
  5. Verify results explicitly - Check status codes, UI elements, and state
  6. Clean up properly - Always call ctx.close().await
  7. Handle errors gracefully - Use ? operator for error propagation
  8. Make tests independent - Don’t rely on test execution order

Test Success Criteria

✓ Platform fully loads without errors
✓ botserver starts and becomes ready
✓ User can login with credentials
✓ Chat messages are sent and responses received
✓ User can logout and session is invalidated
✓ Protected routes block unauthenticated access
✓ Tests run consistently multiple times
✓ Tests complete within acceptable time (~60 seconds)

See Also

Performance Testing

CI/CD Integration

Development Setup

This guide covers setting up a development environment for contributing to General Bots.

Prerequisites

Required Software

  • Rust: 1.70 or later

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    
  • PostgreSQL: Installed automatically during bootstrap

  • Git: For version control

    git --version  # Should be 2.0 or later
    

Optional Components

  • Drive: For S3-compatible storage (auto-installed by bootstrap)
  • Cache (Valkey): For caching (auto-installed by bootstrap)
  • LXC: For containerized development

Getting Started

1. Clone the Repository

git clone https://github.com/GeneralBots/botserver.git
cd botserver

2. Environment Setup

The .env file is created automatically during bootstrap with secure random credentials. No manual configuration needed.

# Bootstrap creates everything automatically
./botserver

The generated .env includes entries such as:

DRIVE_SECRET=<generated during bootstrap>
SERVER_HOST=127.0.0.1
SERVER_PORT=8080

3. Install Rust Dependencies

cargo fetch

4. Run Bootstrap

The bootstrap process installs and configures all required services:

cargo run

On first run, bootstrap will:

  • Install PostgreSQL (if needed)
  • Install drive (S3-compatible storage)
  • Install cache (Valkey)
  • Create database schema
  • Upload bot templates
  • Generate secure credentials

Development Workflow

Building the Project

# Debug build (faster compilation)
cargo build

# Release build (optimized)
cargo build --release

Running Tests

# Run all tests
cargo test

# Run specific test
cargo test test_name

# Run with output
cargo test -- --nocapture

Code Formatting

# Format all code
cargo fmt

# Check formatting without changes
cargo fmt -- --check

Linting

# Run clippy for lint checks
cargo clippy -- -D warnings

Project Structure

botserver/
├── src/
│   ├── main.rs              # Application entry point
│   ├── lib.rs               # Library root
│   ├── api_router.rs        # API routes
│   ├── core/                # Core functionality
│   │   ├── bootstrap/       # System initialization
│   │   ├── bot/             # Bot management
│   │   ├── config/          # Configuration
│   │   ├── session/         # Session management
│   │   └── shared/          # Shared utilities
│   ├── basic/               # BASIC interpreter
│   │   ├── compiler/        # Script compilation
│   │   └── keywords/        # Keyword implementations
│   ├── drive/               # Storage integration
│   └── llm/                 # LLM providers
├── templates/               # Bot templates
├── migrations/              # Database migrations
├── web/                     # Web interface
└── Cargo.toml               # Dependencies

Database Setup

Manual Database Creation

If bootstrap doesn’t create the database:

# Connect to PostgreSQL
psql -U postgres

# Create user and database
CREATE USER gbuser WITH PASSWORD 'SecurePassword123!';
CREATE DATABASE generalbots OWNER gbuser;
\q

Running Migrations

Migrations run automatically, but can be run manually:

# Install diesel CLI
cargo install diesel_cli --no-default-features --features postgres

# Run migrations
diesel migration run

Common Development Tasks

Adding a New Keyword

  1. Create new file in src/basic/keywords/
  2. Implement the keyword function (see the sketch below)
  3. Register in src/basic/keywords/mod.rs
  4. Add tests
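
For illustration, a minimal keyword might look like the following sketch. It assumes a registration pattern built on the Rhai engine that powers the BASIC interpreter; the actual signatures in src/basic/keywords/ may differ.

// src/basic/keywords/shout.rs (hypothetical example)
use rhai::Engine;

// Registers a SHOUT keyword that upper-cases its argument.
// Real keyword modules may take additional context parameters.
pub fn register(engine: &mut Engine) {
    engine.register_fn("SHOUT", |text: &str| -> String {
        text.to_uppercase()
    });
}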

Adding an API Endpoint

  1. Define handler in appropriate module (see the sketch below)
  2. Add route in src/api_router.rs
  3. Update OpenAPI documentation
  4. Write integration tests
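
As a sketch, a new endpoint might be wired up as follows. This assumes an Axum-style router purely for illustration; follow whatever pattern src/api_router.rs actually uses.

// Hypothetical handler and route registration (Axum-style illustration)
use axum::{routing::get, Json, Router};
use serde::Serialize;

#[derive(Serialize)]
struct HealthResponse {
    status: &'static str,
}

// Handler returns JSON; real handlers would take state and extractors.
async fn health_handler() -> Json<HealthResponse> {
    Json(HealthResponse { status: "ok" })
}

// Merge this router into the main one in src/api_router.rs.
pub fn routes() -> Router {
    Router::new().route("/api/health", get(health_handler))
}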

Modifying Database Schema

  1. Create migration:
    diesel migration generate migration_name
    
  2. Edit up.sql and down.sql (see the example below)
  3. Run migration:
    diesel migration run
    
  4. Update models in src/core/shared/models.rs
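
For example, a migration that adds a column might contain (illustrative only):

-- up.sql
ALTER TABLE bots ADD COLUMN description TEXT;

-- down.sql
ALTER TABLE bots DROP COLUMN description;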

Remote Development Setup

SSH Configuration for Stable Connections

When developing on remote Linux servers, configure SSH for stable monitoring connections:

Edit ~/.ssh/config:

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 5

This configuration:

  • ServerAliveInterval 60: Sends keepalive packets every 60 seconds
  • ServerAliveCountMax 5: Allows up to 5 missed keepalives before disconnecting
  • Prevents SSH timeouts during long compilations or debugging sessions
  • Maintains stable connections for monitoring logs and services

Remote Monitoring Tips

# Monitor botserver logs in real-time
ssh user@server 'tail -f botserver.log'

# Watch compilation progress
ssh user@server 'cd /path/to/botserver && cargo build --release'

# Keep terminal session alive
ssh user@server 'tmux new -s botserver'

Debugging

Debug Mode

Run with verbose output to troubleshoot issues:

RUST_LOG=trace cargo run

Check logs in the console output for debugging information.

Using VS Code

.vscode/launch.json:

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "lldb",
      "request": "launch",
      "name": "Debug botserver",
      "cargo": {
        "args": ["build"],
        "filter": {
          "name": "botserver",
          "kind": "bin"
        }
      },
      "args": [],
      "cwd": "${workspaceFolder}"
    }
  ]
}

Performance Profiling

Using Flamegraph

# Install flamegraph
cargo install flamegraph

# Profile the application
cargo flamegraph --bin botserver

Memory Profiling

# Install valgrind (Linux)
sudo apt-get install valgrind

# Run with memory profiling
valgrind --tool=memcheck cargo run

Testing with Different Features

Feature Flags

# Build with specific features
cargo build --features "llm,drive"

# Build without default features
cargo build --no-default-features

# Test with all features
cargo test --all-features

Troubleshooting

Common Issues

  1. Database Connection Failed

    • Check PostgreSQL is running
    • Verify DATABASE_URL is correct
    • Check user permissions
  2. Drive Connection Failed

    • Ensure drive is running on port 9000
    • Check DRIVE_ACCESSKEY and DRIVE_SECRET
  3. Port Already in Use

    • Change SERVER_PORT in .env
    • Find the conflicting process (lsof -i :8080), then kill it
  4. Compilation Errors

    • Update Rust: rustup update
    • Clean build: cargo clean
    • Check dependencies: cargo tree

LXC Development

Using LXC Containers

# Create development containers
lxc-create -n botserver-dev-db -t download -- -d alpine -r 3.18 -a amd64
lxc-create -n botserver-dev-drive -t download -- -d alpine -r 3.18 -a amd64
lxc-create -n botserver-dev-cache -t download -- -d alpine -r 3.18 -a amd64

# Configure PostgreSQL container
lxc-start -n botserver-dev-db
lxc-attach -n botserver-dev-db -- sh -c "
  apk add postgresql14 postgresql14-client
  rc-service postgresql setup
  rc-service postgresql start
  psql -U postgres -c \"CREATE USER gbuser WITH PASSWORD 'password';\"
  psql -U postgres -c \"CREATE DATABASE botserver OWNER gbuser;\"
"

# Configure MinIO (Drive) container
lxc-start -n botserver-dev-drive
lxc-attach -n botserver-dev-drive -- sh -c "
  wget https://dl.min.io/server/minio/release/linux-amd64/minio
  chmod +x minio
  MINIO_ROOT_USER=driveadmin MINIO_ROOT_PASSWORD=driveadmin ./minio server /data --console-address ':9001' &
"

# Configure Redis (Cache) container
lxc-start -n botserver-dev-cache
lxc-attach -n botserver-dev-cache -- sh -c "
  apk add redis
  rc-service redis start
"

# Get container IPs
DB_IP=$(lxc-info -n botserver-dev-db -iH)
DRIVE_IP=$(lxc-info -n botserver-dev-drive -iH)
CACHE_IP=$(lxc-info -n botserver-dev-cache -iH)

echo "Database: $DB_IP:5432"
echo "Drive: $DRIVE_IP:9000"
echo "Cache: $CACHE_IP:6379"

Start all services:

lxc-start -n botserver-dev-db
lxc-start -n botserver-dev-drive
lxc-start -n botserver-dev-cache

Contributing Guidelines

See Contributing Guidelines for:

  • Code style requirements
  • Commit message format
  • Pull request process
  • Code review expectations

Getting Help

  • Check existing issues on GitHub
  • Join the community discussions
  • Review the documentation
  • Ask questions in pull requests

Next Steps

Local Development

Pull Requests

This guide covers the pull request process for contributing to botserver, from creation to merge. Understanding this process helps ensure your contributions are reviewed efficiently and merged successfully.

Overview

Pull requests are the primary method for contributing code to botserver. They provide a structured way to propose changes, enabling code review, discussion, and automated testing before changes are merged into the main codebase. Every contribution, whether a bug fix, new feature, or documentation update, follows this process.

Before Creating a PR

Check Existing Work

Before starting work on a contribution, search existing pull requests to avoid duplicating effort. Check the issue tracker for related discussions that might affect your approach. For major changes, open an issue first to discuss the design with maintainers and get feedback before investing significant time in implementation.

Prepare Your Branch

Create a feature branch from the main branch for your work. Keep your branch updated by regularly fetching from origin and rebasing on the latest main. This practice reduces merge conflicts and ensures your changes work with the most recent codebase.

git checkout -b feature/your-feature
git fetch origin
git rebase origin/main

Make Your Changes

Follow the established code standards documented in the standards guide. Write tests for any new functionality you add. Update documentation to reflect your changes. Keep commits atomic and logical, with each commit representing a single coherent change.

Creating a Pull Request

PR Title

Use clear, descriptive titles that follow the conventional commit format. Good titles include prefixes like “feat:” for new features, “fix:” for bug fixes, “docs:” for documentation updates, and “refactor:” for code restructuring. Examples of good titles include “feat: Add email notification support” and “fix: Resolve session timeout issue”. Avoid vague titles like “Fix bug” or “Update code” that do not convey what the PR actually does.

PR Description

The description should explain what the PR does and why. Start with a brief description of the change. Indicate the type of change whether it is a bug fix, new feature, breaking change, documentation update, performance improvement, or refactoring. List specific changes made with technical details and any side effects. Document testing performed including unit tests, integration tests, and manual testing. Note any documentation updates made. Link related issues using keywords like “Fixes #123” to automatically close issues when the PR merges. Include screenshots for UI changes.

PR Best Practices

Keep It Small

Focus each PR on one feature or fix rather than bundling multiple changes together. Aim for fewer than 500 lines changed when possible. Split large changes into multiple smaller PRs that can be reviewed independently. Smaller PRs are easier and faster to review, leading to quicker merge times and higher quality feedback.

Commit Organization

Organize commits logically with each commit representing a complete, working change. Good commit organization might include separate commits for adding a feature, adding tests for that feature, and updating documentation. Avoid mixing unrelated changes in a single commit. Well-organized commits make it easier to understand the progression of changes and to bisect issues if problems arise later.

Self-Review First

Before requesting review from others, review your own changes thoroughly. Check for any debug code or temporary changes that should not be committed. Verify there are no accidental changes to unrelated files. Ensure formatting is consistent with the codebase style. Test edge cases that the CI might not catch. This self-review catches obvious issues before they consume reviewer time.

Code Review Process

Requesting Review

When your PR is ready for review, mark it as ready if it was previously a draft. Request specific reviewers if you know who has relevant expertise. Add appropriate labels to categorize the PR. Link related issues in the description. Add comments on particularly complex areas of code to help reviewers understand your approach.

Responding to Feedback

Engage constructively with review feedback. Acknowledge feedback and note when you have addressed it with a commit reference. Explain your decisions when you chose a particular approach for good reasons. Ask for clarification when feedback is unclear. If you disagree with feedback, express your perspective respectfully and be open to discussion.

Making Changes

Address review comments promptly to keep the review process moving. Commit changes that address feedback with clear commit messages. You can amend commits if the changes are small corrections. Use force push with lease to update your PR branch safely while preserving the force push safety check.
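
For example, after amending local commits:

# Safely update the PR branch; fails if the remote has commits you don't have
git push --force-with-lease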

Review Guidelines

For Reviewers

When reviewing PRs, examine code correctness to ensure the implementation is sound. Check test coverage to verify new code is properly tested. Verify documentation is updated to reflect changes. Consider performance implications of the changes. Evaluate security considerations especially for code handling user input or authentication. Ensure code style consistency with the rest of the codebase.

Provide constructive feedback with specific suggestions. Include code examples when they would clarify your point. Recognize good work when you see it. Remember that the goal is to improve the code while supporting the contributor.

Review Comments

Good review feedback is specific and actionable. Instead of saying “This is wrong,” explain what the issue is and suggest a solution. For example, you might suggest using a more idiomatic Rust pattern and show what the improved code would look like. This approach helps contributors learn and makes it clear how to address the feedback.

CI/CD Checks

Required Checks

All PRs must pass the automated CI checks before merging. These include cargo build for compilation verification, cargo test for unit tests, cargo fmt --check for code formatting, cargo clippy for linting, and documentation builds. The CI runs automatically when you push changes to your PR branch.

Fixing Failed Checks

When CI checks fail, fix the issues locally before pushing updates. Run cargo fmt to fix formatting issues. Run cargo clippy with the fix flag to automatically fix many linting issues. Run cargo test locally to debug test failures with the nocapture flag to see output. Fix all issues and push updates to trigger a new CI run.
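
The typical local fix cycle looks like this (the test name is a placeholder):

# Fix formatting issues
cargo fmt

# Automatically fix many lint warnings
cargo clippy --fix --allow-dirty

# Debug a failing test with its output visible
cargo test failing_test_name -- --nocapture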

Merge Process

Merge Requirements

Before a PR can be merged, all CI checks must pass, at least one maintainer must approve the changes, all review conversations must be resolved, the branch must be up-to-date with main, and there must be no merge conflicts.

Merge Methods

Squash and merge is the preferred method for most PRs. This combines all commits into a single commit on main, keeping the history clean and making it easy to revert changes if needed. Rebase and merge preserves the individual commit history and is appropriate for PRs with well-organized, meaningful commits. Merge commits are rarely used and reserved for special circumstances.

After Merge

Clean Up

After your PR is merged, delete your local feature branch. GitHub automatically deletes the remote branch if configured to do so. Update your local main branch by checking out main and pulling the latest changes. This keeps your local repository clean and up-to-date.
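
A typical cleanup sequence:

# Update local main and remove the merged branch
git checkout main
git pull origin main
git branch -d feature/your-feature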

Follow Up

Monitor the codebase after your changes merge to catch any issues that emerge. Respond to questions from other contributors about your changes. Update related documentation if you discover gaps. Close any related issues that were not automatically closed by the PR.

Common Issues

Merge Conflicts

When merge conflicts occur, update your branch with the latest main by fetching and rebasing. Git will pause at each conflict, allowing you to resolve it. Edit the conflicted files to resolve the conflicts, add the resolved files, and continue the rebase. If the conflicts become too complex, you can abort the rebase and try a different approach.
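
The corresponding commands:

# Bring in the latest main; the rebase pauses at each conflict
git fetch origin
git rebase origin/main

# After editing each conflicted file:
git add <resolved-files>
git rebase --continue

# If the conflicts become too complex:
git rebase --abort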

Large PR

If a PR becomes too large during development, consider closing it and splitting the work into smaller PRs. Create a tracking issue to coordinate the smaller PRs. Link all related PRs together so reviewers understand the bigger picture. Smaller, focused PRs are more likely to receive thorough review and merge quickly.

Stale PR

If a PR goes without activity for an extended period, ping the reviewers with a comment. Rebase on the latest main to ensure the changes still apply cleanly. Add a comment explaining the current status. If the PR is no longer relevant, close it with an explanation so others know not to wait for it.

Tips for Success

Communicate early about what you plan to implement to avoid wasted effort and get valuable design feedback. Test thoroughly rather than relying solely on CI since you understand your changes better than automated tests can. Be patient because reviewers have limited time and thorough review takes effort. Be responsive to feedback to keep the review process moving efficiently. Learn from reviews by treating feedback as an opportunity to improve your skills. Help others by reviewing other PRs when you have time, which builds goodwill and helps you learn the codebase.

Summary

Successful pull requests are well-prepared with a clear purpose, properly documented and tested, responsive to feedback, and focused on a single change. Following these guidelines helps maintain code quality and makes the review process smooth for everyone involved. The time invested in creating a good PR pays off in faster reviews, fewer revision cycles, and a better end result.

Community Guidelines

Welcome to the General Bots community! These guidelines ensure a welcoming environment.

Our Values

  • Inclusivity: Welcome all contributors
  • Respect: Professional interactions
  • Collaboration: Work together effectively
  • Quality: Maintain high standards
  • Learning: Share knowledge openly

Expected Behavior

  • Be respectful and considerate
  • Welcome newcomers
  • Give constructive feedback
  • Focus on community benefit
  • Show empathy

Communication

  • GitHub Issues for bugs/features
  • GitHub Discussions for questions
  • Pull Requests for contributions

Database Schema Overview

General Bots uses PostgreSQL as its primary database with Diesel ORM for type-safe database operations. The schema is designed to support multi-tenant bot hosting with comprehensive session management, user authentication, and content storage.

Core Architecture

The database schema follows several key design principles. All tables use UUID primary keys for globally unique identifiers that work across distributed systems. Created and updated timestamps provide audit trails for tracking data changes. Foreign key relationships maintain referential integrity between related entities. JSON fields offer flexible storage for dynamic configuration and metadata that doesn’t fit rigid schema definitions.

Database Schema Diagram

Entity Relationship Overview

Database Schema Overview

Core Tables Structure

Detailed Schema

Database Entity Details

Schema Categories

Organization & Bot Management

The organizations table provides multi-tenant organization support, isolating data between different customers or deployments. The bots table stores bot instances and their configurations. The bot_configuration table contains bot-specific settings and parameters. The bot_memories table provides persistent key-value storage for bots to maintain state across sessions.

User & Authentication

The users table stores user accounts with secure password storage using Argon2 hashing. The user_sessions table tracks active user sessions with authentication tokens. The user_login_tokens table manages authentication tokens for login flows. The user_preferences table contains user-specific settings and customizations.

Conversation & Messaging

The message_history table maintains complete conversation history between users and bots. The clicks table tracks user interaction events for analytics. The system_automations table stores scheduled tasks and automation rules that run without user intervention.

Knowledge Base

The kb_collections table defines knowledge base collection containers. The kb_documents table stores documents within those collections. The user_kb_associations table manages user access permissions to knowledge bases. The session_tool_associations table tracks which tools are available within specific sessions.

Tools & Integration

The basic_tools table stores BASIC script tool definitions compiled from .bas files. The user_email_accounts table manages email integration accounts for users. The email_drafts table stores draft emails being composed. The email_folders table organizes email folder structures.

Table Relationships

Session Flow

Session Flow Diagram

Knowledge Base Access

Primary Relationships

The bot hierarchy establishes that organizations contain multiple bots in a one-to-many relationship. Each bot has multiple configuration entries and memories associated with it.

User sessions connect users to bots through the session table. Users can have multiple sessions, and each session maintains its own message history. Bots also connect to sessions, enabling the many-to-many relationship between users and bots.

Knowledge management links bots to knowledge base collections, with each collection containing multiple documents. Sessions associate with knowledge bases through the user_kb_associations table.

Tool associations connect bots to their defined tools, and sessions link to available tools through the session_tool_associations junction table.

Data Types

The schema uses several PostgreSQL data types throughout. UUID fields serve as primary keys and foreign key references for globally unique identification. Text fields store variable-length string data without length constraints. Varchar fields hold fixed-length strings for codes and identifiers. Timestamptz fields store timestamps with timezone information for accurate time tracking across regions. Jsonb fields provide JSON storage with indexing capabilities for flexible schemas. Boolean fields represent binary flags and settings. Integer fields store counters and numeric values.

Indexing Strategy

Primary indexes exist on all id fields serving as primary keys. Foreign key relationships receive indexes for efficient joins. Timestamp fields are indexed to support time-based queries. Session tokens have indexes for fast authentication lookups.

Composite indexes optimize common query patterns. The combination of bot_id and user_id enables efficient session lookup. Collection_id with document_id accelerates knowledge retrieval. User_id paired with created_at supports history queries ordered by time.
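
Expressed as DDL, these composite indexes would look roughly like this (index names are illustrative; the actual migrations may differ):

CREATE INDEX idx_user_sessions_bot_user ON user_sessions (bot_id, user_id);
CREATE INDEX idx_kb_documents_collection_doc ON kb_documents (collection_id, id);
CREATE INDEX idx_message_history_user_time ON message_history (user_id, created_at);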

Migration Management

Database migrations are managed through Diesel’s migration system. Migrations reside in the migrations/ directory with each migration containing both up.sql and down.sql files for applying and reverting changes. Version tracking occurs in the __diesel_schema_migrations table. The bootstrap process automatically applies pending migrations on startup.

Performance Considerations

Connection Pooling

The default connection pool maintains 10 connections to balance resource usage with concurrency. Pool size is configurable via environment variables for different deployment scales. Automatic connection recycling prevents stale connections from causing issues.
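
A minimal sketch of such a pool, assuming Diesel's r2d2 integration (the actual setup in botserver may differ):

use diesel::pg::PgConnection;
use diesel::r2d2::{ConnectionManager, Pool};

fn build_pool(database_url: &str) -> Pool<ConnectionManager<PgConnection>> {
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    Pool::builder()
        .max_size(10) // default described above; raise via env for larger deployments
        .build(manager)
        .expect("failed to create connection pool")
}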

Query Optimization

Prepared statements cache query plans for repeated queries, improving performance. Batch operations handle bulk inserts efficiently rather than individual row insertions. Lazy loading defers loading of related entities until needed. Pagination limits result sets to manageable sizes for large tables.

Data Retention

Message history retention is configurable to balance storage costs with historical needs. Automatic cleanup removes expired sessions to free resources. An archival strategy moves old conversations to cold storage while maintaining accessibility.

Security Features

Data Protection

Password hashing uses the Argon2 algorithm for strong protection against brute-force attacks. AES-GCM encryption protects sensitive fields at rest. Secure random token generation creates unpredictable session identifiers. Diesel’s parameterized queries prevent SQL injection attacks.

Access Control

Row-level security is implemented through application logic that filters queries by user context. User isolation ensures sessions only access their own data. Bot isolation separates data by organization to prevent cross-tenant access. Audit logging records sensitive operations for compliance and security review.

Backup Strategy

Backup Types

Full database dumps capture complete point-in-time snapshots. Incremental WAL archiving provides continuous backup with minimal storage overhead. Point-in-time recovery support enables restoration to any moment within the retention window. Cross-region replication offers disaster recovery capabilities for critical deployments.

Restore Procedures

Automated restore testing validates backup integrity on a regular schedule. Version compatibility checks ensure backups restore correctly to the current schema. Data integrity validation confirms restored data matches expected checksums. Zero-downtime migration support enables schema changes without service interruption.

Monitoring

Key Metrics

Connection pool usage indicates whether the pool size needs adjustment. Query execution time reveals slow queries requiring optimization. Table sizes and growth rates inform capacity planning. Index effectiveness metrics show whether indexes are being utilized. Lock contention monitoring identifies concurrency bottlenecks.

Health Checks

Database connectivity verification ensures the connection pool can reach PostgreSQL. Migration status checks confirm all migrations have been applied. Replication lag monitoring applies to deployments with read replicas. Storage usage tracking prevents disk space exhaustion.

Best Practices

Always use migrations for schema changes rather than manual DDL to maintain consistency across environments. Never modify production data directly through SQL clients to avoid bypassing application logic. Test migrations in development first to catch issues before they affect production. Monitor performance metrics regularly to identify degradation early. Plan capacity based on growth projections to avoid emergency scaling. Document changes in migration files with comments explaining the purpose of each change. Use transactions for data consistency when multiple tables must be updated together. Implement retry logic for transient failures like connection timeouts or deadlocks.
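
As an illustration of the last point, a small retry wrapper for serialization failures might look like this sketch (the real policy and error set may differ):

use std::{thread, time::Duration};

// Retries an operation up to three times on transient serialization
// failures, backing off between attempts.
fn with_retry<T>(
    mut op: impl FnMut() -> Result<T, diesel::result::Error>,
) -> Result<T, diesel::result::Error> {
    let mut attempts: u64 = 0;
    loop {
        match op() {
            Err(diesel::result::Error::DatabaseError(
                diesel::result::DatabaseErrorKind::SerializationFailure,
                _,
            )) if attempts < 3 => {
                attempts += 1;
                thread::sleep(Duration::from_millis(50 * attempts));
            }
            other => return other,
        }
    }
}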

Future Considerations

Partitioning for large tables like message_history would improve query performance and enable efficient data archival. Read replicas could scale read-heavy workloads across multiple database instances. Time-series optimization for metrics data would support analytics features. Full-text search indexes would enable natural language queries against stored content. Graph relationships could support advanced queries for interconnected data like conversation flows.

Database Tables

This section documents all database tables in General Bots, their structures, and purposes.

Core Tables

organizations

Stores organization/tenant information for multi-tenant deployments.

Column       Type         Description
org_id       UUID         Primary key
name         TEXT         Organization name
slug         TEXT         URL-friendly identifier
created_at   TIMESTAMPTZ  Creation timestamp

bots

Bot instances and their basic configuration.

Column       Type         Description
id           UUID         Primary key
name         TEXT         Bot name
org_id       UUID         Foreign key to organizations
created_at   TIMESTAMPTZ  Creation timestamp
updated_at   TIMESTAMPTZ  Last update timestamp

bot_configuration

Stores bot-specific configuration parameters from config.csv.

Column       Type         Description
id           UUID         Primary key
bot_id       UUID         Foreign key to bots
key          TEXT         Configuration key
value        TEXT         Configuration value
created_at   TIMESTAMPTZ  Creation timestamp
updated_at   TIMESTAMPTZ  Last update timestamp

bot_memories

Persistent key-value storage for bots (used by GET BOT MEMORY/SET BOT MEMORY).

Column       Type         Description
id           UUID         Primary key
bot_id       UUID         Foreign key to bots
key          TEXT         Memory key
value        TEXT         Memory value
created_at   TIMESTAMPTZ  Creation timestamp
updated_at   TIMESTAMPTZ  Last update timestamp

User Management Tables

users

User accounts with authentication credentials.

Column          Type         Description
id              UUID         Primary key
username        TEXT         Unique username
email           TEXT         Email address
password_hash   TEXT         Argon2 hashed password
active          BOOLEAN      Account status
created_at      TIMESTAMPTZ  Registration timestamp
updated_at      TIMESTAMPTZ  Last update timestamp

user_sessions

Active user sessions for authentication and state management.

Column          Type         Description
id              UUID         Primary key
user_id         UUID         Foreign key to users
bot_id          UUID         Foreign key to bots
session_token   TEXT         Unique session identifier
expires_at      TIMESTAMPTZ  Session expiration
created_at      TIMESTAMPTZ  Session start
updated_at      TIMESTAMPTZ  Last activity

user_login_tokens

Authentication tokens for login flows.

Column       Type         Description
id           UUID         Primary key
user_id      UUID         Foreign key to users
token        TEXT         Login token
expires_at   TIMESTAMPTZ  Token expiration
used         BOOLEAN      Whether token was used
created_at   TIMESTAMPTZ  Token creation

user_preferences

User-specific settings and preferences.

Column        Type         Description
id            UUID         Primary key
user_id       UUID         Foreign key to users
preferences   JSONB        Preferences data
created_at    TIMESTAMPTZ  Creation timestamp
updated_at    TIMESTAMPTZ  Last update

Conversation Tables

message_history

Complete conversation history between users and bots.

Column       Type         Description
id           UUID         Primary key
session_id   UUID         Foreign key to user_sessions
user_id      UUID         Foreign key to users
bot_id       UUID         Foreign key to bots
message      TEXT         Message content
sender       TEXT         'user' or 'bot'
created_at   TIMESTAMPTZ  Message timestamp

clicks

Tracks user interactions with UI elements.

Column       Type         Description
id           UUID         Primary key
session_id   UUID         Foreign key to user_sessions
element_id   TEXT         UI element identifier
timestamp    TIMESTAMPTZ  Click timestamp

system_automations

Scheduled tasks and automation rules.

Column       Type         Description
id           UUID         Primary key
bot_id       UUID         Foreign key to bots
name         TEXT         Automation name
schedule     TEXT         Cron expression
script       TEXT         BASIC script to execute
active       BOOLEAN      Whether automation is active
created_at   TIMESTAMPTZ  Creation timestamp
updated_at   TIMESTAMPTZ  Last update

Knowledge Base Tables

kb_collections

Knowledge base collection definitions.

Column        Type         Description
id            TEXT         Primary key (collection name)
bot_id        UUID         Foreign key to bots
name          TEXT         Collection display name
description   TEXT         Collection description
metadata      JSONB        Additional metadata
created_at    TIMESTAMPTZ  Creation timestamp
updated_at    TIMESTAMPTZ  Last update

kb_documents

Documents stored in knowledge base collections.

Column          Type         Description
id              TEXT         Primary key (document ID)
collection_id   TEXT         Foreign key to kb_collections
bot_id          UUID         Foreign key to bots
name            TEXT         Document name
content         TEXT         Document content
metadata        JSONB        Document metadata
embedding_id    TEXT         Vector embedding reference
indexed         BOOLEAN      Whether document is indexed
created_at      TIMESTAMPTZ  Upload timestamp
updated_at      TIMESTAMPTZ  Last update

user_kb_associations

Links user sessions to available knowledge bases.

Column          Type         Description
id              TEXT         Primary key
session_id      UUID         Foreign key to user_sessions
collection_id   TEXT         Foreign key to kb_collections
created_at      TIMESTAMPTZ  Association timestamp

Tool Tables

basic_tools

BASIC script tool definitions.

Column        Type         Description
id            TEXT         Primary key (tool name)
bot_id        UUID         Foreign key to bots
name          TEXT         Tool display name
description   TEXT         Tool description
parameters    JSONB        Parameter definitions
script        TEXT         BASIC script implementation
metadata      JSONB        Additional metadata
created_at    TIMESTAMPTZ  Creation timestamp
updated_at    TIMESTAMPTZ  Last update

session_tool_associations

Links sessions to available tools.

Column       Type         Description
id           TEXT         Primary key
session_id   UUID         Foreign key to user_sessions
tool_id      TEXT         Foreign key to basic_tools
created_at   TIMESTAMPTZ  Association timestamp

Email Integration Tables

user_email_accounts

Email accounts configured for users.

Column               Type         Description
id                   UUID         Primary key
user_id              UUID         Foreign key to users
email_address        TEXT         Email address
imap_server          TEXT         IMAP server address
imap_port            INTEGER      IMAP port
smtp_server          TEXT         SMTP server address
smtp_port            INTEGER      SMTP port
encrypted_password   TEXT         Encrypted email password
active               BOOLEAN      Account status
created_at           TIMESTAMPTZ  Configuration timestamp
updated_at           TIMESTAMPTZ  Last update

email_drafts

Draft emails created by users or bots.

Column          Type         Description
id              UUID         Primary key
user_id         UUID         Foreign key to users
account_id      UUID         Foreign key to user_email_accounts
to_addresses    TEXT[]       Recipient addresses
cc_addresses    TEXT[]       CC addresses
bcc_addresses   TEXT[]       BCC addresses
subject         TEXT         Email subject
body            TEXT         Email body
attachments     JSONB        Attachment metadata
created_at      TIMESTAMPTZ  Draft creation
updated_at      TIMESTAMPTZ  Last edit

email_folders

Email folder organization.

Column          Type         Description
id              UUID         Primary key
account_id      UUID         Foreign key to user_email_accounts
name            TEXT         Folder name
path            TEXT         IMAP folder path
parent_id       UUID         Parent folder ID
message_count   INTEGER      Number of messages
unread_count    INTEGER      Unread messages
created_at      TIMESTAMPTZ  Folder creation
updated_at      TIMESTAMPTZ  Last sync

Indexes

Primary Indexes

  • All id columns have primary key indexes
  • All foreign key columns have indexes for joins

Performance Indexes

  • user_sessions.session_token - for session lookup
  • message_history.created_at - for time-based queries
  • kb_documents.collection_id - for collection queries
  • bot_memories(bot_id, key) - composite for memory lookup

Full-Text Search Indexes

  • kb_documents.content - for document search (when enabled)
  • message_history.message - for conversation search (when enabled)
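
When enabled, these would be standard PostgreSQL GIN indexes, roughly (illustrative):

CREATE INDEX idx_kb_documents_content_fts
    ON kb_documents USING GIN (to_tsvector('english', content));

CREATE INDEX idx_message_history_message_fts
    ON message_history USING GIN (to_tsvector('english', message));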

Database Relationships

This document describes the relationships between tables in the General Bots database schema.

Entity Relationship Overview

The database follows a hierarchical structure with organizations at the top, containing bots, which in turn manage users, sessions, and content.

Primary Relationships

Organization Hierarchy

organizations
    bots (1:N)
        bot_configuration (1:N)
        bot_memories (1:N)
        kb_collections (1:N)
            kb_documents (1:N)
        basic_tools (1:N)
        system_automations (1:N)

Each organization can have multiple bots, and each bot has its own configuration, memories, knowledge bases, tools, and automations. Cascade delete behavior means that deleting an organization removes all associated bots and their data.

User and Session Management

users
    user_sessions (1:N)
        message_history (1:N)
        clicks (1:N)
        user_kb_associations (1:N)
        session_tool_associations (1:N)
    user_login_tokens (1:N)
    user_preferences (1:1)
    user_email_accounts (1:N)
        email_drafts (1:N)
        email_folders (1:N)
            folder_messages (1:N)

Users can have multiple active sessions across different bots. Each session maintains its own message history and associations. Sessions link to both users and bots, forming a many-to-many relationship through the sessions table.

Bot-User Interaction

bots ←→ user_sessions ←→ users
              
    user_sessions:
        message_history
        user_kb_associations → kb_collections
        session_tool_associations → basic_tools
    
    bots:
        kb_collections
        basic_tools

Users interact with bots through sessions. Sessions dynamically associate with knowledge bases and tools as needed. Message history preserves the conversation context for continuity across interactions.

Foreign Key Constraints

Strong Relationships (CASCADE DELETE)

These relationships enforce referential integrity with cascade deletion.

The organizations to bots relationship means deleting an organization removes all its bots, with bots.org_id referencing organizations.org_id.

The bots to bot_configuration relationship means deleting a bot removes all its configuration, with bot_configuration.bot_id referencing bots.id.

The bots to bot_memories relationship means deleting a bot removes all its memories, with bot_memories.bot_id referencing bots.id.

The user_sessions to message_history relationship means ending a session removes its message history, with message_history.session_id referencing user_sessions.id.

Weak Relationships (SET NULL/RESTRICT)

These relationships maintain data integrity without cascade deletion.

The users to user_sessions relationship sets session.user_id to NULL when a user is deleted, preserving conversation history for audit purposes while making the session anonymous.

The kb_collections to kb_documents relationship restricts deletion if documents exist, requiring explicit document deletion first to prevent accidental data loss.

The user_email_accounts to email_drafts relationship preserves drafts when an email account is deleted, allowing draft recovery or reassignment to other accounts.

Many-to-Many Relationships

Sessions ↔ Knowledge Bases

user_sessions ←→ user_kb_associations ←→ kb_collections

The user_kb_associations junction table allows dynamic KB activation per session. Multiple knowledge bases can be active simultaneously, enabling conversations that draw from several information sources.

Sessions ↔ Tools

user_sessions ←→ session_tool_associations ←→ basic_tools

The session_tool_associations junction table enables tools to be loaded per session as needed. This supports dynamic tool discovery where available capabilities vary based on context.

Relationship Cardinality

One-to-one relationships exist between users and user_preferences, where each user has exactly one preferences record.

One-to-many relationships include organizations to bots, bots to bot_configuration, bots to kb_collections, kb_collections to kb_documents, users to user_sessions, user_sessions to message_history, and user_email_accounts to email_drafts.

Many-to-many relationships exist between user_sessions and kb_collections through user_kb_associations, between user_sessions and basic_tools through session_tool_associations, and between users and bots through user_sessions.

Referential Integrity Rules

Insert Order

When inserting data, follow this sequence: organizations first, then bots, then bot_configuration. For user data, insert users first, then user_sessions, then message_history. Knowledge base data requires kb_collections before kb_documents. Tools require basic_tools before session_tool_associations.

Delete Order (reverse of insert)

When deleting data, reverse the insert order: message_history first, then user_sessions, then users. For tools, delete session_tool_associations before basic_tools. For knowledge bases, delete kb_documents before kb_collections. For organizational data, delete bot_configuration, then bots, then organizations.

Orphan Prevention

Automatic Cleanup

Sessions expire based on the expires_at timestamp. Orphaned associations are cleaned by background jobs that run periodically. Temporary data has TTL settings that trigger automatic removal.

Manual Cleanup Required

Some data requires manual cleanup. Unused kb_documents should be periodically reviewed and removed. Old message_history should be cleared based on retention policy. Expired user_login_tokens should be purged.

Performance Implications

Hot Paths

These relationships are frequently traversed and should be optimized.

The user_sessions to message_history path benefits from an index on (session_id, created_at DESC) and is used for conversation display.

The bots to bot_memories path benefits from an index on (bot_id, key) and is used by GET BOT MEMORY and SET BOT MEMORY operations.

The kb_collections to kb_documents path benefits from an index on (collection_id, indexed) and is used for semantic search.

Join Optimization

Common join patterns benefit from composite indexes.

User session context queries join user_sessions with users on user_sessions.user_id = users.id and with bots on user_sessions.bot_id = bots.id.
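
For example, the session context lookup (column list illustrative):

SELECT s.id, s.session_token, u.username, b.name AS bot_name
FROM user_sessions s
JOIN users u ON s.user_id = u.id
JOIN bots b ON s.bot_id = b.id
WHERE s.session_token = $1;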

Knowledge base loading joins user_kb_associations with kb_collections on user_kb_associations.collection_id = kb_collections.id and kb_documents on kb_collections.id = kb_documents.collection_id.

Tool discovery joins session_tool_associations with basic_tools on session_tool_associations.tool_id = basic_tools.id filtered by session_id and bot_id.

Data Consistency Patterns

Transaction Boundaries

Certain operations must be atomic.

Session creation requires inserting the user_session record, initializing default associations, and creating the initial message all within a single transaction.

Tool registration requires inserting the basic_tool record, updating bot_configuration, and refreshing active sessions together.

Document upload requires inserting the kb_document record, triggering the indexing job, and updating collection metadata atomically.
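
The first of these, session creation, sketched with Diesel's transaction API (step bodies elided; the real logic lives in botserver's session handling):

use diesel::pg::PgConnection;
use diesel::prelude::*;

fn create_session_atomically(conn: &mut PgConnection) -> QueryResult<()> {
    conn.transaction(|conn| {
        // 1. insert the user_sessions record
        // 2. initialize default KB and tool associations
        // 3. create the initial message_history entry
        let _ = conn; // inserts elided in this sketch
        Ok(())
    })
}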

Eventual Consistency

Some operations can be eventually consistent.

Vector embeddings allow document upload to complete first, with asynchronous indexing creating embeddings afterward. Search becomes available after processing completes.

Email synchronization saves account configuration immediately, then background sync fetches emails asynchronously. Folders and counts update as sync progresses.

Best Practices

Always use foreign keys for data integrity to catch relationship violations at the database level. Index foreign key columns for join performance to avoid full table scans on relationship traversals. Use transactions for related updates to maintain consistency across multiple tables.

Implement soft deletes for audit trails where regulations require historical data retention. Monitor constraint violations in logs to catch application bugs early. Plan cascade paths carefully to avoid unintended data deletion.

Document relationship changes in migrations so the team understands schema evolution over time.

Glossary

Quick lookup for General Bots terms. If you’re lost, start here.


A

A2A Protocol - Agent-to-Agent Protocol. Enables bots to communicate and delegate tasks to each other in multi-agent systems. Messages include request, response, broadcast, and delegate types.

ADD BOT - BASIC keyword to add a bot to the current session with triggers, tools, or schedules.

Argon2 - Password hashing algorithm used for secure credential storage. Makes brute-force attacks computationally infeasible.

Auto-Bootstrap - The automatic first-run process that installs and configures all dependencies: PostgreSQL, cache, storage, and LLM servers.


B

BASIC - The scripting language for General Bots dialogs. Inspired by the 1964 language, simplified for conversational AI. Powers all .bas scripts with keywords like TALK, HEAR, and LLM.

BM25 - Best Match 25. Sparse retrieval algorithm for keyword-based search. Used in hybrid RAG alongside dense (semantic) search.

BOOK - BASIC keyword to schedule calendar appointments.

Bot Memory - Persistent storage scoped to a single bot, shared across all users. Access with SET BOT MEMORY and GET BOT MEMORY.

Bot Package - A folder ending in .gbai containing everything to run a bot: scripts, documents, and configuration.

BotSession - The active conversation between user and bot. Tracks state, history, and context. Persists to database, cached for speed.

Bootstrap - Initial setup process that installs all dependencies automatically on first launch.

BROADCAST TO BOTS - BASIC keyword to send a message to all bots in the current session.


C

Cache - In-memory storage component for sessions, temporary data, and semantic caching. Provides sub-millisecond access times.

Collection - A folder of documents in .gbkb/ that becomes searchable knowledge. Each subfolder is a separate collection.

Chunking - The process of splitting documents into smaller pieces for embedding and retrieval. Default chunk size is optimized for context windows.

config.csv - The configuration file for each bot. Simple key-value pairs in CSV format. Lives in the .gbot/ folder.

Context - Information available to the LLM during a conversation. Includes history, knowledge base results, and user-provided context via SET CONTEXT.

Context Compaction - Automatic summarization of older conversation history to fit within token limits while preserving important information.

CREATE DRAFT - BASIC keyword to compose and save an email draft to the user’s mailbox.

CREATE TASK - BASIC keyword to create a task with assignee and due date.


D

DELEGATE TO BOT - BASIC keyword to send a task to another bot and optionally wait for a response.

Dense Search - Semantic search using vector embeddings. Finds content by meaning rather than exact keywords.

Dialog - A .bas script defining conversation flow. Contains BASIC code with keywords like TALK and HEAR.

Drive - Built-in S3-compatible object storage. Stores documents, templates, and uploads. Auto-installed during bootstrap.


E

Embedding - Text converted to numerical vectors for similarity search. Similar meanings produce similar vectors.

Embedding Model - Neural network that generates embeddings. Default is BGE, replaceable with any GGUF-compatible model.

Episodic Memory - Summaries of past conversations stored for long-term context. Automatically generated when conversations end.

Event Handler - BASIC code triggered by events. Use ON keyword with triggers like "login", "email", or cron expressions.


F

FIND - BASIC keyword to search database tables with filter criteria. Returns matching records.

FOR EACH - BASIC keyword for iterating over collections and query results.


G

.gbai - “General Bot AI” package folder. Contains the entire bot. Example: support.gbai/ becomes the bot at /support.

.gbdialog - Subfolder containing BASIC scripts. Must include start.bas as the entry point. Tools go in tools/ subdirectory.

.gbdrive - File storage configuration subfolder. Maps to Drive buckets for document management.

.gbkb - “Knowledge Base” subfolder. Each subdirectory becomes a searchable collection with automatic indexing.

.gbot - Configuration subfolder containing config.csv with bot settings.

.gbtheme - Optional UI customization subfolder for CSS, images, and HTML templates.

General Bots - Open-source enterprise conversational AI platform. Combines LLMs with structured dialogs, knowledge bases, and multi-channel support.

GET - BASIC keyword to retrieve data from APIs, files, or session variables.

GET BOT MEMORY - BASIC keyword to retrieve persistent bot-level data.

GET USER MEMORY - BASIC keyword to retrieve cross-session user data accessible from any bot.

GraphQL - Query language for APIs. Supported via the GRAPHQL keyword for complex data retrieval.


H

HEAR - BASIC keyword to wait for and capture user input. name = HEAR stores the response in a variable.

Hot Reload - Automatic reloading of BASIC scripts when files change. No restart needed.

Hybrid Search - RAG approach combining dense (semantic) and sparse (keyword) retrieval using Reciprocal Rank Fusion.

HTMX - Frontend library used for dynamic UI updates without full page reloads.


I

INSERT - BASIC keyword to add records to database tables.

Intent - What the user wants to accomplish. Detected from natural language via LLM classification.


K

Keyword - A BASIC command like TALK, HEAR, or LLM. About 50+ available. Written in uppercase by convention.

Knowledge Base (KB) - Documents searchable by the bot. Organized in folders under .gbkb/. Activate with USE KB "foldername".


L

LiveKit - WebRTC platform used for video meetings in General Bots.

LLM - Large Language Model. The AI that powers natural conversation. Supports OpenAI, Anthropic, Groq, and local models via llama.cpp.

llama.cpp - C++ library for running LLM inference locally. Used for self-hosted model deployment.

Local-First - Architecture principle where everything runs locally by default. No cloud dependencies required.


M

MCP - Model Context Protocol. Standard format for defining tools that LLMs can call. Supported alongside OpenAI function format.

Memory - Data persistence system with four scopes: User Memory (cross-bot), Bot Memory (per-bot), Session Memory (temporary), and Episodic Memory (conversation summaries).

Model Routing - Dynamic selection of LLM models based on task requirements. Use USE MODEL "fast", "quality", "code", or "auto".

Multi-Agent - Architecture where multiple specialized bots collaborate on complex tasks.

Multi-Channel - Same bot works across WhatsApp, Telegram, Teams, Web, and other channels without modification.


N

No Forms - General Bots philosophy since 2017: people should converse, not fill forms. Conversations replace traditional UI forms.


O

ON - BASIC keyword to define event handlers for triggers, schedules, or webhooks.

OIDC - OpenID Connect. Authentication protocol handled by the Directory service (Zitadel).


P

Package Manager - Built-in system that installs bot packages. Drop a .gbai folder and it’s automatically loaded.

PARAM - Declares tool parameters. PARAM name, email means the tool needs these inputs. LLM collects them automatically.

PostgreSQL - The database for General Bots. Stores users, sessions, messages, and bot configuration. Auto-installed and auto-configured.

POST - BASIC keyword to make HTTP POST requests to external APIs.

Pragmatismo - Brazilian software company that created and maintains General Bots.


Q

Qdrant - Vector database for semantic search at scale. Optional component for large knowledge bases.


R

RAG - Retrieval-Augmented Generation. Pattern where relevant documents are retrieved and provided to the LLM as context.

Reranking - Optional LLM-based scoring of search results for improved relevance. Adds latency but improves quality.

Rhai - Rust scripting engine that powers the BASIC interpreter. Sandboxed and safe.

RRF - Reciprocal Rank Fusion. Algorithm for combining rankings from multiple search methods in hybrid RAG.

RUN PYTHON / JAVASCRIPT / BASH - BASIC keywords to execute code in sandboxed environments.


S

SAVE - BASIC keyword to write data to CSV files or database tables.

Script - A .bas file with BASIC code. start.bas is the entry point; other scripts are tools or utilities.

Semantic Cache - Caching system that matches similar (not just identical) queries to reuse LLM responses.

Semantic Search - Finding content by meaning rather than exact keywords. Powered by embeddings and vector similarity.

SEND MAIL - BASIC keyword to send emails with optional HTML and attachments.

Session - Active conversation state between user and bot. Expires after inactivity (default 30 minutes).

Session Memory - Temporary storage for the current conversation. Access with SET and GET.

SET - BASIC keyword to store values in session variables or update database records.

SET BOT MEMORY - BASIC keyword to store persistent bot-level data.

SET CONTEXT - BASIC keyword to add information to the LLM context. Influences all subsequent responses.

SET SCHEDULE - BASIC keyword for cron-based task scheduling. Accepts natural language like "every monday at 9am".

SET USER MEMORY - BASIC keyword to store cross-session user data accessible from any bot.

Sparse Search - Keyword-based search using algorithms like BM25. Excels at exact matches and rare terms.

SSE - Server-Sent Events. Used for real-time streaming of LLM responses.

Stalwart - Email server component providing IMAP/SMTP/JMAP support.

Suite - The complete General Bots workspace application with Chat, Drive, Tasks, Mail, Calendar, and other apps.

SWITCH - BASIC keyword for multi-way conditional branching.


T

TALK - BASIC keyword to send messages to the user. Supports text, markdown, and multimedia.

Template - Pre-built bot configuration in the templates/ folder. Copy and modify to create new bots.

Token - Unit of text for LLMs. Roughly 4 characters. Context windows are measured in tokens.

Tool - A .bas file the LLM can call automatically. Define with PARAM declarations and a DESCRIPTION. Place in the tools/ folder.

TRANSFER CONVERSATION - BASIC keyword to hand off the entire conversation to another bot.


U

UPDATE - BASIC keyword to modify existing database records.

USE KB - BASIC keyword to activate a knowledge base for semantic search. USE KB "policies" makes the policies collection searchable.

USE MODEL - BASIC keyword to switch LLM models. Options: "fast", "quality", "code", or "auto".

USE TOOL - BASIC keyword to enable a tool for LLM use. The AI determines when to call it.

User Memory - Persistent storage scoped to a user, accessible across all bots and sessions.


V

Vault - HashiCorp Vault. Secrets management service for storing credentials securely. Only VAULT_* environment variables are used.

Vector - Mathematical representation of meaning. Similar meanings produce similar vectors.

Vector Database - Database optimized for storing and searching embeddings. Qdrant is the default option.


W

WAIT - BASIC keyword to pause execution for a specified duration.

WEBHOOK - BASIC keyword to create HTTP endpoints that trigger bot actions.

WebSocket - Real-time connection for chat. Enables instant messaging without polling. Path: /ws.


Z

Zitadel - Identity and access management service. Handles authentication, users, and permissions.


Package Extensions

Extension   Purpose
.gbai       Complete bot package
.gbdialog   BASIC scripts
.gbkb       Knowledge base documents
.gbot       Bot configuration
.gbtheme    UI customization
.gbdrive    File storage mapping
.bas        BASIC script file

Common Confusions

“Do I need containers?” - No. botserver installs everything directly or in optional LXC containers.

“What database?” - PostgreSQL, automatically installed and configured.

“What about scaling?” - Single server handles 1000+ concurrent users. Scale by running multiple instances.

“Is BASIC really BASIC?” - Inspired by BASIC, not strict implementation. Simplified and focused on conversations.

“Can I use TypeScript/Python/etc?” - BASIC handles conversation logic. Use RUN PYTHON/JAVASCRIPT for code execution, or integrate via REST API.

“Is it production-ready?” - Yes. Used in production since 2016, current Rust version since 2023.


General Bots

Contact

Get in Touch

General Bots has been powering conversational AI solutions since 2018. We’re here to help you build intelligent automation and enhanced customer engagement.

Support

For technical support and documentation:

Business Inquiries

Interested in implementing General Bots for your organization?

Pragmatismo Inovações Ltda.
Avenida Rio Branco, 177, Sala 201 a 2201
Rio de Janeiro - Brasil
CNPJ: 40.293.841/0001-59

📞 Phone: +55 21 4040-2160 (24h)
📧 Email: info@pragmatismo.com.br
🌐 Website: pragmatismo.com.br

Schedule a Demo

Want to see General Bots in action? Our team can demonstrate:

  • Custom bot development
  • LLM integration
  • Knowledge base setup
  • Multi-channel deployment
  • Enterprise automation

Schedule a meeting to explore how General Bots can transform your business operations.

Data Protection

Data Protection Officer (DPO)
Rodrigo Rodriguez
📧 security@pragmatismo.com.br

Open Source Community

General Bots is open source and we welcome contributions:

  • Report issues on GitHub
  • Submit pull requests
  • Join discussions
  • Share your use cases

General Bots® - Enterprise conversational AI platform
Copyright © 2016-2025 Pragmatismo Inovações Ltda.


General Bots

Feature System

Version: 6.2.0

General Bots uses Cargo’s feature flags to create modular, size-optimized builds. This allows you to include only the functionality you need.


Feature Dependency Tree

Quick Start

Building with Specific Features

# Minimal build (chat only)
cargo build --features "chat" --no-default-features

# Chat + Cloud Storage
cargo build --features "chat,drive" --no-default-features

# Spreadsheet + Cloud Storage
cargo build --features "sheet,drive" --no-default-features

# Chat with Local LLM
cargo build --features "chat,llm" --no-default-features

# Full productivity suite
cargo build --features "full"

Feature Categories

🗣️ Communication Apps

Feature     Dependencies                          Description
chat        None                                  Basic chat functionality
people      None                                  Contact management
mail        lettre, mailparse, imap, native-tls   Email integration
meet        livekit                               Video conferencing
whatsapp    None                                  WhatsApp integration
telegram    None                                  Telegram integration
instagram   None                                  Instagram integration
msteams     None                                  Microsoft Teams integration
social      None                                  Social media features

📋 Productivity Apps

Feature     Dependencies       Description
calendar    None               Calendar functionality
tasks       cron, automation   Task management with scheduling
project     quick-xml          Project management (MS Project)
goals       None               Goals tracking
workspace   None               Single workspace
tickets     None               Ticket system
billing     None               Billing system

📄 Document Apps

Feature   Dependencies                                            Description
docs      docx-rs, ooxmlsdk                                       Word document processing
sheet     calamine, spreadsheet-ods                               Spreadsheet processing
slides    ooxmlsdk                                                Presentation processing
paper     docs, pdf-extract                                       PDF processing
drive     aws-config, aws-sdk-s3, aws-smithy-async, pdf-extract   Cloud storage (S3)

🎥 Media Apps

Feature   Dependencies   Description
video     None           Video features
player    None           Media player
canvas    None           Drawing/canvas

🧠 Learning & Research

Feature    Dependencies    Description
learn      None            Learning features
research   llm, vectordb   Research with AI
sources    None            Data sources

📊 Analytics

Feature      Dependencies   Description
analytics    None           Analytics features
dashboards   None           Dashboard UI
monitoring   sysinfo        System monitoring

🔧 Development Tools

Feature      Dependencies   Description
designer     None           UI designer
editor       None           Code/text editor
automation   rhai, cron     Scripting automation

⚙️ Core Technologies

Feature         Dependencies                           Description
llm             None                                   LLM integration flag
vectordb        qdrant-client                          Vector database
cache           redis                                  Redis caching
compliance      csv                                    Compliance reporting
console         crossterm, ratatui, monitoring         Terminal UI
jemalloc        tikv-jemallocator, tikv-jemalloc-ctl   Memory allocator
progress-bars   indicatif                              Progress indicators

Pre-Defined Bundles

minimal

minimal = ["chat"]

Smallest possible build. Just chat functionality.

lightweight

lightweight = ["chat", "drive", "tasks", "people"]

Small but useful for basic operations.

default

default = ["chat", "drive", "tasks", "automation", "cache", "directory"]

Balanced default configuration.

full

full = [
    "chat", "people", "mail",
    "tasks", "calendar",
    "drive", "docs",
    "llm", "cache", "compliance"
]

Everything useful for a complete deployment.


Common Scenarios

📱 Chat + Drive (Minimum Cloud)

cargo build --features "chat,drive" --no-default-features

Use case: Basic chat with file storage capabilities.

📊 Sheets + Drive

cargo build --features "sheet,drive" --no-default-features

Use case: Spreadsheet processing with cloud storage.

⚠️ Note: sheet does NOT require drive for local file processing. Add drive only if you need cloud storage.

🤖 Chat + Local LLM

cargo build --features "chat,llm" --no-default-features

Use case: Chat with local LLM integration (limited resources).

🏢 Office Suite

cargo build --features "docs,sheet,slides,drive" --no-default-features

Use case: Full document processing suite.

📧 Email-Focused

cargo build --features "chat,mail,cache" --no-default-features

Use case: Chat with email integration.


Feature Validation

Some features have implicit dependencies:

| If you enable… | You automatically get… |
|----------------|------------------------|
| tasks | automation |
| paper | docs |
| research | llm, vectordb |
| console | monitoring |
| communications | All communication features + cache |
| productivity | All productivity features + cache |
| documents | All document features |
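
These implications are declared in Cargo.toml and enforced at compile time: code behind a feature only exists in the binary when that feature (or a bundle containing it) is enabled. A minimal sketch of how feature-gated code typically looks (the module and function names here are illustrative, not actual botserver paths):

// Sketch: compile-time gating with Cargo feature flags.
// Because `tasks` implies `automation`, code inside this module
// may rely on the automation dependencies being present.
#[cfg(feature = "tasks")]
pub mod tasks {
    pub fn schedule(job: &str) {
        println!("scheduling {job}");
    }
}

// Fallback when the binary is built without the `tasks` feature.
#[cfg(not(feature = "tasks"))]
pub mod tasks {
    pub fn schedule(_job: &str) {
        eprintln!("built without the `tasks` feature");
    }
}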

Size Comparison

| Build Configuration | Approximate Size |
|---------------------|------------------|
| minimal | ~15 MB |
| lightweight | ~25 MB |
| default | ~35 MB |
| full | ~60 MB |

Sizes are approximate and vary based on platform and optimization level.

Attendance Suite - Complete Plan

Overview

The Attendance module is the central system for managing human-bot conversations, enabling smooth handoffs between the AI assistant and human attendants. It integrates natively with WhatsApp (including voice calls), Telegram, Teams, CRM, Marketing, Email, and the Basic engine.

┌─────────────────────────────────────────────────────────────────────────────────┐
│                              ATTENDANCE SUITE                                    │
├─────────────────────────────────────────────────────────────────────────────────┤
│                                                                                 │
│   ┌─────────────┐     ┌─────────────┐     ┌─────────────┐     ┌─────────────┐   │
│   │   WHATSAPP  │     │  TELEGRAM   │     │     SMS     │     │  INSTAGRAM  │   │
│   │   +Voice    │     │   +Voice    │     │             │     │             │   │
│   └──────┬──────┘     └──────┬──────┘     └──────┬──────┘     └──────┬──────┘   │
│          │                   │                   │                   │           │
│          └───────────────────┴─────────┬─────────┴───────────────────┘           │
│                                          │                                       │
│                                          ▼                                       │
│   ┌─────────────┐               ┌─────────────────┐                             │
│   │  MESSENGER  │               │  LIVEKIT + SIP  │                             │
│   └──────┬──────┘               │  Video/Audio    │                             │
│          │                       │  STT/TTS        │                             │
│          │                       └────────┬────────┘                             │
│          │                                │                                       │
│          │                                ▼                                       │
│   ┌──────┴──────┐               ┌─────────────────┐                             │
│   │    WEB      │               │  ATTENDANCE     │                             │
│   │   Chat      │──────────────►│    ENGINE       │◄────────────               │
│   └─────────────┘               └────────┬────────┘    │                          │
│                                          │             │                          │
│          ┌──────────────────────────────┼─────────────┴───────────┐              │
│          │                              │                         │              │
│          ▼                              ▼                         ▼              │
│   ┌─────────────┐          ┌─────────────────────┐     ┌────────────────────┐ │
│   │     CRM     │          │  DESTINATION CHANNELS│     │    EMAIL           │ │
│   │   MODULE    │          │  ┌────────┐ ┌───────┐│     │    MODULE          │ │
│   └─────────────┘          │  │ TEAMS │ │Google ││     └────────────────────┘ │
│                             │  │       │ │ Chat ││                            │
│   ┌─────────────┐          │  └────────┘ └───────┘│                            │
│   │  MARKETING  │          │  ┌───────┐ ┌───────┐│                            │
│   │   MODULE    │          │  │WhatsApp│ │ Web  ││                            │
│   └─────────────┘          │  │       │ │Console│                            │
│                            │  └───────┘ └───────┘│                            │
│                            └─────────────────────┘                              │
└─────────────────────────────────────────────────────────────────────────────────┘

A second, more detailed view of the same architecture adds two internal layers between the channels and the modules:

  • CHANNEL ROUTER - channel detection (whatsapp/telegram/sms/web/instagram/slack/teams), message normalization, attendant commands (/queue, /take, /resolve, /video, /call)
  • ATTENDANCE ENGINE - Queue Manager, Attendant Manager, LLM Assist, and Meeting (LiveKit), feeding the CRM, Marketing, and Email modules


1. Channel Integration (WhatsApp/Telegram/SMS/Web/Instagram/LiveKit/SIP)

1.0 Supported Channel Architecture

(Diagram: the input channels, WhatsApp with voice, Telegram, SMS via Twilio, Instagram Direct, Facebook Messenger, and Web Chat, feed a central Channel Router. LiveKit + SIP handles video/audio calls, screen sharing, and voice STT/TTS. The attendance destinations are Teams and Google Chat.)


1.0.1 Input Channels

| Channel | Type | Status | Video Call Support | Voice (STT/TTS) |
|---------|------|--------|--------------------|-----------------|
| WhatsApp | Messaging | ✅ Stable | ❌ No | ✅ Yes |
| Telegram | Messaging | ✅ Stable | ✅ Buttons | ✅ Yes |
| SMS | Messaging | ✅ Stable | ❌ No | ❌ No |
| Instagram | Messaging | ✅ Stable | ❌ No | ❌ No |
| Messenger | Messaging | ✅ Partial | ❌ No | ❌ No |
| Teams | Messaging | ✅ Partial | ✅ Embed | ✅ Yes |
| Web Chat | Messaging | ✅ Stable | ✅ LiveKit | ✅ Yes |
| LiveKit/SIP | Video/Audio | ✅ Stable | ✅ Full | ✅ Full |

1.0.2 Human Attendance Destinations

| Destination | Description | Status |
|-------------|-------------|--------|
| Teams | Attendant receives messages in Microsoft Teams | ✅ Implemented |
| Google Chat | Attendant receives messages in Google Chat | 🔜 Planned |
| WhatsApp | Attendant replies via WhatsApp | ✅ Implemented |
| Web Console | Attendant works from the web interface | ✅ Implemented |

1.1 Message Architecture

The Attendance module acts as middleware between the input channels and the Basic engine:

INCOMING MESSAGE
    │
    ▼
CHANNEL ADAPTER ──► detects the source channel (WhatsApp/Telegram/SMS/Web)
    │
    ▼
NEEDS_HUMAN? ──► checks the flag on the session
    • false → BASIC ──► processed by the Basic engine
    • true  → ATD   ──► forwarded to human attendance
    │
    ▼
RESPONSE ROUTER ──► returns the reply to the original channel


1.2 WhatsApp Flow

When a customer sends a message via WhatsApp:

1. WhatsAppAdapter receives the webhook
2. SessionLoader checks needs_human:

   IF session.needs_human == true:
       # Route to Attendance
       attendance_handler.process(session, message, "whatsapp")
   ELSE:
       # Route to the Basic Engine
       basic_engine.execute(session, message)

3. If the attendant replies:
   WhatsAppAdapter.send_message(attendant_response)

1.2.1 WhatsApp Voice (Voice Calls)

WhatsApp supports voice calls with STT (speech-to-text) and TTS (text-to-speech):

┌─────────────────────────────────────────────────────────────────────────────┐
│                    WHATSAPP VOICE CALL FLOW                                 │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   Customer ──[calls]──► WhatsApp ──[webhook]──► BotServer                   │
│                                                    │                        │
│                                                    ▼                        │
│                                          ┌──────────────────┐              │
│                                          │  Voice Handler   │              │
│                                          │  STT (Whisper)   │ ──► Text     │
│                                          └────────┬─────────┘              │
│                                                   │                        │
│                                                   ▼                        │
│                                          ┌──────────────────┐              │
│                                          │  Basic Engine    │              │
│                                          │  or Attendance   │              │
│                                          └────────┬─────────┘              │
│                                                   │                        │
│                                                   ▼                        │
│                                          ┌──────────────────┐              │
│                                          │  TTS (BotModels) │              │
│                                          │  Coqui/OpenAI    │ ──► Audio    │
│                                          └────────┬─────────┘              │
│                                                   │                        │
│                                                   ▼                        │
│                              WhatsApp ◄──[audio]── BotServer               │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

Configuration

name,value
whatsapp-voice-response,true
botmodels-url,http://localhost:5000
botmodels-api-key,

BotModels Component (STT/TTS)

The system uses botmodels for voice processing:

# botmodels/src/services/speech_service.py

class SpeechService:
    def stt(self, audio_url: str) -> str:
        # Whisper for transcription
        # Groq as a fast fallback
        pass

    def tts(self, text: str, voice: str = "alloy") -> str:
        # Coqui TTS (local)
        # OpenAI TTS
        # Google Translate TTS (fallback)
        pass

Voice Flow in Attendance

1. The customer calls on WhatsApp
2. WhatsApp sends the call webhook
3. The system answers and starts recording
4. The audio is processed via STT → text
5. The text is processed (see the sketch below):

   IF needs_human = true:
       → The attendant receives the transcript
       → The attendant replies (text or voice)
       → Reply → TTS → audio → WhatsApp

   IF needs_human = false:
       → The Basic Engine processes it
       → Reply → TTS → audio → WhatsApp
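
A minimal sketch of this round trip in Rust, assuming botmodels is reachable over plain HTTP; the /stt and /tts paths and JSON shapes are hypothetical, not a documented botmodels API:

use serde_json::{json, Value};

fn handle_voice_message(audio_url: &str) -> Result<String, reqwest::Error> {
    let base = "http://localhost:5000"; // botmodels-url from config.csv
    let client = reqwest::blocking::Client::new();

    // 1. Audio -> text (Whisper behind botmodels); /stt is hypothetical
    let stt: Value = client
        .post(format!("{base}/stt"))
        .json(&json!({ "audio_url": audio_url }))
        .send()?
        .json()?;

    // 2. The text would go through the Basic Engine or Attendance here
    let reply = format!("You said: {}", stt["text"].as_str().unwrap_or(""));

    // 3. Text -> audio (Coqui/OpenAI behind botmodels); /tts is hypothetical
    let tts: Value = client
        .post(format!("{base}/tts"))
        .json(&json!({ "text": reply, "voice": "alloy" }))
        .send()?
        .json()?;

    // Return the audio URL to send back over WhatsApp
    Ok(tts["audio_url"].as_str().unwrap_or_default().to_string())
}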

Voice Commands

| Command | Description |
|---------|-------------|
| /voice on | Enable voice replies |
| /voice off | Disable voice replies |
| /call | Request a callback |

Exemplos

' Ativar resposta de voz
SET SESSION "voice_response", true

' Desativar
SET SESSION "voice_response", false

' Verificar se é chamada de voz
IF session.call_type = "voice" THEN
    TALK "Entendi. Deixe-me verificar."
    ' Gera resposta em áudio automaticamente
END IF

1.3 Flow: Customer Says "Hi" on WhatsApp → Attendant

This is the most common scenario. When a customer opens a conversation with "Hi" on WhatsApp:

┌─────────────────────────────────────────────────────────────────────────────┐
│             FLOW: CUSTOMER SAYS "HI" ON WHATSAPP                            │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  1. CUSTOMER SENDS "Hi"                                                     │
│     │                                                                       │
│     ▼                                                                       │
│  2. WHATSAPP ADAPTER RECEIVES THE WEBHOOK                                   │
│     │                                                                       │
│     ▼                                                                       │
│  3. SESSION LOADER CHECKS needs_human                                       │
│     │                                                                       │
│     ├─────────────────────────────┐                                         │
│     │                             │                                         │
│     ▼                             ▼                                         │
│  needs_human = false          needs_human = true                            │
│     │                             │                                         │
│     ▼                             ▼                                         │
│  BASIC ENGINE                 ATTENDANCE QUEUE                              │
│  processes "Hi"               ├── Adds to the queue                         │
│  (the bot replies)            ├── Sets the priority                         │
│                               └── Notifies attendants (WebSocket)           │
│                                    │                                        │
│                                    ▼                                        │
│                               ATTENDANT SEES THE NOTIFICATION               │
│                                    │                                        │
│                                    ▼                                        │
│                               /take or clicks "Accept"                      │
│                                    │                                        │
│                                    ▼                                        │
│                               ACTIVE CHAT                                   │
│                               └── The attendant types a reply               │
│                                    │                                        │
│                                    ▼                                        │
│                               REPLY → WHATSAPP → CUSTOMER                   │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

1.3.1 Step by Step

Step 1: The customer sends "Hi" → WhatsApp API → webhook → BotServer

Step 2: The system checks needs_human:

fn check_needs_human(session: &UserSession) -> bool {
    session.context_data.get("needs_human")
        .and_then(|v| v.as_bool())
        .unwrap_or(false)
}
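
The inverse operation, setting the flag when TRANSFER TO HUMAN runs, is a plain insert into the same map. A sketch, assuming context_data is the serde_json-backed map shown in section 5.2:

use std::collections::HashMap;
use serde_json::{json, Value};

struct UserSession {
    context_data: HashMap<String, Value>,
}

// Sketch only: flip the session into human mode and record why.
fn transfer_to_human(session: &mut UserSession, reason: &str) {
    session.context_data.insert("needs_human".into(), json!(true));
    session.context_data.insert("transfer_reason".into(), json!(reason));
}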

Step 3: If needs_human = false → the Basic Engine processes the message → the bot replies

Step 4: If needs_human = true:

  1. The conversation is added to the attendance queue
  2. Online attendants are notified (WebSocket)
  3. An attendant sees the notification
  4. The attendant clicks "Accept" or types /take
  5. The attendant types a reply
  6. Reply → WhatsApp → customer

1.3.2 Attendant Receives via WhatsApp

Configuration in attendant.csv:

id,name,channel,phone
att-001,Maria Santos,whatsapp,+5511999990001

Notification:

📱 *New conversation*
From: +5511988887777 (João Silva)
Message: Hi

Type /take to accept

The attendant replies → WhatsApp → customer

1.3.3 Attendants via Interface (Users Table)

This mode no longer uses attendant.csv; it relies on the existing users table:

┌─────────────────────────────────────────────────────────────────────────────┐
│              ATTENDANTS VIA INTERFACE - users table                         │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │                    QUEUE CREATION (UI)                              │   │
│  │  ┌─────────────────────────────────────────────────────────────┐    │   │
│  │  │ Queue name: Suporte WhatsApp                                │    │   │
│  │  │ Description: WhatsApp conversations                         │    │   │
│  │  │                                                             │    │   │
│  │  │ Channels: ☑ WhatsApp ☐ Telegram ☐ Web ☐ Instagram           │    │   │
│  │  │                                                             │    │   │
│  │  │ Users (attendants):                                         │    │   │
│  │  │   ☑ Maria Santos (maria@empresa.com)                        │    │   │
│  │  │   ☑ João Silva (joao@empresa.com)                           │    │   │
│  │  │   ☐ Ana Costa (ana@empresa.com)                             │    │   │
│  │  │                                                             │    │   │
│  │  │ [Create Queue]                                              │    │   │
│  │  └─────────────────────────────────────────────────────────────┘    │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
│                                                                             │
│  ┌─────────────────────────────────────────────────────────────────────┐   │
│  │                    CONFIGURED QUEUES                                │   │
│  │  ──────────────────────────────────────────────────────────────    │   │
│  │  📋 Queue                 │ Channels      │ Attendants │ Status     │   │
│  │  ──────────────────────────────────────────────────────────────    │   │
│  │  Suporte WhatsApp         │ WhatsApp      │ 3 active   │ Active     │   │
│  │  Vendas                   │ Web, WhatsApp │ 2 active   │ Active     │   │
│  │  Técnica                  │ Telegram      │ 1 active   │ Active     │   │
│  └─────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────┘

1.3.4 Data Model - Queues

-- Attendance queues table
CREATE TABLE attendance_queues (
    id UUID PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    description TEXT,
    channels JSONB DEFAULT '["whatsapp"]',
    is_active BOOLEAN DEFAULT true,
    priority_order INTEGER DEFAULT 0,
    max_wait_seconds INTEGER DEFAULT 600,
    auto_assign BOOLEAN DEFAULT true,
    bot_id UUID REFERENCES bots(id),
    created_by UUID REFERENCES users(id),
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Queue members (users ↔ queue)
CREATE TABLE attendance_queue_members (
    id UUID PRIMARY KEY,
    queue_id UUID REFERENCES attendance_queues(id),
    user_id UUID REFERENCES users(id),
    is_active BOOLEAN DEFAULT true,
    max_conversations INTEGER DEFAULT 5,
    priority INTEGER DEFAULT 0,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

1.3.5 Queue API

// Create a queue
POST /api/attendance/queues
{
    "name": "Suporte WhatsApp",
    "channels": ["whatsapp"],
    "user_ids": ["uuid-1", "uuid-2"]
}

// Add a user to a queue
POST /api/attendance/queues/{id}/members
{"user_id": "uuid", "max_conversations": 5}
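
Calling the endpoint from Rust could look like the sketch below, using reqwest; the bearer-token authentication is an assumption, not a documented requirement of this API:

use serde_json::json;

fn create_queue(base: &str, token: &str) -> Result<(), reqwest::Error> {
    let body = json!({
        "name": "Suporte WhatsApp",
        "channels": ["whatsapp"],
        "user_ids": ["uuid-1", "uuid-2"]
    });

    let resp = reqwest::blocking::Client::new()
        .post(format!("{base}/api/attendance/queues"))
        .bearer_auth(token) // assumption: JWT auth as elsewhere in the platform
        .json(&body)
        .send()?;

    println!("create queue -> {}", resp.status());
    Ok(())
}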

1.3.6 Serving an Existing CRM Customer

// 1. Look up the customer in the CRM
let customer = crm_contacts::table
    .filter(crm_contacts::phone.eq(phone))
    .first::<CrmContact>(conn)?;

// 2. Pick the queue for the channel
let queue = attendance_queues::table
    .filter(attendance_queues::channels.contains("whatsapp"))
    .filter(attendance_queues::is_active.eq(true))
    .first::<AttendanceQueue>(conn)?;

// 3. Pick the next attendant (lowest priority value first)
let member = attendance_queue_members::table
    .filter(attendance_queue_members::queue_id.eq(queue.id))
    .filter(attendance_queue_members::is_active.eq(true))
    .order(attendance_queue_members::priority.asc())
    .first::<QueueMember>(conn)?;

// 4. Attach everything to the session
let session = UserSession {
    needs_human: true,
    assigned_to: Some(member.user_id),  // ← users.id
    queue_id: Some(queue.id),
    customer_id: Some(customer.id),     // ← CRM contact
    // ...remaining fields
};

1.3.7 Flow with a CRM Customer

Existing CRM customer
    │
    ▼
Sends a WhatsApp message
    │
    ▼
Channel identified → specific queue
    │
    ▼
Next attendant selected (users)
    │
    ▼
The attendant sees the CRM data:
  "João Silva - joao@email.com"
  "Customer since: 2022"
  "Total purchases: R$ 5,000"
    │
    ▼
Replies
    │
    ▼
Ticket.assigned_to = users.id
Ticket.customer_id = crm_contacts.id

1.3.8 Web Console

┌─────────────────────────────────────────────────────────────────┐
│                    ATTENDANCE QUEUE                             │
├─────────────────────────────────────────────────────────────────┤
│  🎫 #1 - Maria Santos (you)                                     │
│     WhatsApp • João Silva (+55 11 98888-7777)                   │
│     "Hi" • 30s                                                  │
│     [Resolve] [Transfer]                                        │
│                                                                 │
│  🎫 #2 - João Silva                                             │
│     WhatsApp • New customer                                     │
│     "I need help" • 2min                                        │
│     [Accept]                                                    │
└─────────────────────────────────────────────────────────────────┘

1.3.9 WebSocket Notification

{
  "type": "new_conversation",
  "session_id": "abc-123",
  "channel": "whatsapp",
  "customer": {"name": "João Silva", "phone": "+5511988887777"},
  "message": "Hi"
}
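
On the attendant side this payload can be deserialized directly; a sketch with serde, field names mirroring the JSON above:

use serde::Deserialize;

#[derive(Deserialize)]
struct Customer {
    name: String,
    phone: String,
}

#[derive(Deserialize)]
struct NewConversation {
    #[serde(rename = "type")]
    kind: String,       // "new_conversation"
    session_id: String,
    channel: String,    // "whatsapp", "telegram", ...
    customer: Customer,
    message: String,
}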

1.4 Telegram Flow

Same logic as WhatsApp, with channel-specific commands:

/start - Start a conversation
/agent - Request a human attendant
/queue - View the queue (attendant)
/resolve - Close the conversation (attendant)

1.5 SMS Flow

Incoming SMS → normalize → check needs_human →
  → if true: Attendance (with a 160-character limit)
  → if false: Basic Engine

1.6 Bypass Mode (Middleman)

The Attendance module can act as a pure middleman (no AI):

┌────────────┐     ┌────────────┐     ┌────────────┐
│  CUSTOMER  │────►│   BOT      │────►│   HUMAN    │
│ (WhatsApp) │     │ (Attendance│     │ ATTENDANT  │
│            │◄────│   bypass)  │◄────│            │
└────────────┘     └────────────┘     └────────────┘

Configuration:

name,value
attendance-bypass-mode,true
attendance-auto-transfer,true
attendance-transfer-keywords,human,atendente,pessoa,atendimento

1.7 Transfer to Teams

Attendance can forward the conversation to Microsoft Teams, where the attendant receives the message:

┌─────────────────────────────────────────────────────────────────────────────┐
│                  TRANSFER TO TEAMS FLOW                                     │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   Customer       Bot                    Attendance        Microsoft Teams   │
│   (WhatsApp)                                              (attendant)       │
│       │             │                        │                  │           │
│       │────────────►│                        │                  │           │
│       │  Message    │                        │                  │           │
│       │             │                        │                  │           │
│       │  (needs a   │                        │                  │           │
│       │   human)    │                        │                  │           │
│       │             │                        │                  │           │
│       │             ├───────────────────────►│                  │           │
│       │             │  TRANSFER TO HUMAN     │                  │           │
│       │             │  destination=teams     │                  │           │
│       │             │                        │                  │           │
│       │             │                        ├─────────────────►│           │
│       │             │                        │  Teams message   │           │
│       │             │                        │                  │           │
│       │◄────────────┤◄───────────────────────┤  Reply           │           │
│       │  Reply      │   (forwarded back)     │                  │           │
│       │             │                        │                  │           │
└─────────────────────────────────────────────────────────────────────────────┘

Teams Configuration

name,value
teams-enabled,true
teams-app-id,
teams-app-password,
teams-tenant-id,
teams-bot-id,
attendance-default-destination,teams

Transferring to Teams

' Transfer to Teams
TRANSFER TO HUMAN "support", "normal", "Customer needs help", "teams"

' Or specify the destination explicitly
result = TRANSFER TO HUMAN({
    department: "support",
    destination: "teams"
})

Teams Commands

The attendant can use commands inside Teams:

/resolve - Close the conversation
/transfer @name - Transfer to another attendant
/queue - View the queue
/context - View the customer context

1.8 Transfer to Google Chat

Planned for a future release:

┌─────────────────────────────────────────────────────────────────────────────┐
│              GOOGLE CHAT DESTINATION (PLANNED)                              │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   Customer ──► WhatsApp ──► Bot ──► Attendance ──► Google Chat ──► Attendant│
│                                                                             │
│   Future configuration:                                                     │
│   name,value                                                                │
│   google-chat-enabled,true                                                  │
│   google-chat-bot-token,                                                    │
│   google-chat-space-id,                                                     │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

1.9 Teams Voice Calls

Teams supports voice and video calls directly:

name,value
teams-voice-enabled,true
teams-meeting-enabled,true

' Create a Teams meeting for the conversation
result = CREATE MEETING({
    "type": "teams",
    "title": "Support - " + customer.name,
    "participants": [customer.email]
})

TALK "I'll start a Teams meeting with you."
TALK result.join_url

1.10 Instagram Direct

1.10.1 Configuration

name,value
instagram-enabled,true
instagram-access-token,
instagram-app-secret,
instagram-webhook-verify-token,

1.10.2 Flow

Instagram user → Instagram API → webhook → Channel Router → Attendance
                                                                ↓
                                          needs_human=true → attendance queue
                                                                ↓
                                          Attendant replies → Instagram API → user

1.10.3 Instagram Limitations

| Feature | Support | Notes |
|---------|---------|-------|
| Text | ✅ | Supported |
| Images | ✅ | Downloaded and re-sent |
| Videos | ✅ | Downloaded and re-sent |
| Audio | ⚠️ | Limited |
| Video calls | ❌ | Not available in the API |
| Screen sharing | ❌ | Not available |

1.10.4 Video Call Workaround

When an Instagram customer needs a video call:

' Instagram has no native video chat
' Offer alternatives:

TALK "To serve you better, would you like to do a video call?"
TALK "I can create a meeting room right now. Click the link:"
TALK meeting_link

' The attendant creates the meeting with a command:
' /video or /call

1.11 LiveKit + SIP (Video Calls)

1.11.1 LiveKit Architecture

The system already integrates with LiveKit for video calls:

┌─────────────────────────────────────────────────────────────────────────────┐
│                          LIVEKIT INTEGRATION                                │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────┐    ┌──────────────┐    ┌─────────────────────────────┐    │
│  │  Attendance │───►│ Meeting      │───►│ LiveKit Room                │    │
│  │   Queue     │    │   Service    │    │ ┌─────────────────────────┐ │    │
│  └─────────────┘    └──────────────┘    │ │ • Video (WebRTC)        │ │    │
│                                         │ │ • Audio                 │ │    │
│  ┌─────────────┐    ┌──────────────┐    │ │ • Screen sharing        │ │    │
│  │  Attendant  │───►│ Token        │───►│ │ • Transcription (AI)    │ │    │
│  │  Browser    │    │   Generator  │    │ │ • Recording             │ │    │
│  └─────────────┘    └──────────────┘    │ │ • Whiteboard            │ │    │
│                                         │ └─────────────────────────┘ │    │
│  ┌─────────────┐    ┌──────────────┐    │ ┌─────────────────────────┐ │    │
│  │  Customer   │───►│ Join URL     │───►│ │ SIP Gateway (future)    │ │    │
│  │  Browser    │    │              │    │ │ • PSTN inbound          │ │    │
│  └─────────────┘    └──────────────┘    │ │ • PSTN outbound         │ │    │
│                                         │ │ • SIP trunk             │ │    │
│                                         │ └─────────────────────────┘ │    │
│                                         └─────────────────────────────┘    │
└─────────────────────────────────────────────────────────────────────────────┘

1.11.2 LiveKit Configuration

name,value

# LiveKit Core
livekit-url,wss://livekit.yourserver.com
livekit-api-key,
livekit-api-secret,
livekit-room-prefix,attendance-

# SIP Configuration (future)
sip-enabled,false
sip-trunk-name,
sip-phone-number,
sip-inbound-route,
sip-outbound-route,

# Recording
livekit-recording-enabled,true
livekit-storage-bucket,recordings

# Transcription
livekit-transcription-enabled,true
transcription-language,pt-BR
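
Room access in LiveKit is granted through short-lived JWTs signed with the API secret. A sketch of token generation, assuming LiveKit's documented access-token format (HS256, iss = API key, a video grant naming the room); in practice a LiveKit server SDK would build this for you:

use jsonwebtoken::{encode, Algorithm, EncodingKey, Header};
use serde::Serialize;
use std::time::{SystemTime, UNIX_EPOCH};

#[derive(Serialize)]
struct VideoGrant {
    room: String,
    #[serde(rename = "roomJoin")]
    room_join: bool,
}

#[derive(Serialize)]
struct Claims {
    iss: String,   // LiveKit API key
    sub: String,   // participant identity
    exp: u64,      // expiry (unix seconds)
    video: VideoGrant,
}

fn attendance_token(api_key: &str, api_secret: &str, room: &str, identity: &str)
    -> Result<String, jsonwebtoken::errors::Error>
{
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    let claims = Claims {
        iss: api_key.to_string(),
        sub: identity.to_string(),
        exp: now + 3600, // valid for one hour
        video: VideoGrant { room: room.to_string(), room_join: true },
    };
    encode(&Header::new(Algorithm::HS256), &claims,
           &EncodingKey::from_secret(api_secret.as_bytes()))
}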

1.11.3 Starting a Video Call in Attendance

Attendant commands:

/video - Start a video call
/video link - Generate a link for the customer
/video invite @customer - Invite to the active room
/video end - End the video call

BASIC command:

' Create a meeting room for the conversation
result = CREATE MEETING({
    "title": "Attendance - " + customer.name,
    "type": "support",
    "expires_in": 3600,
    "max_participants": 2,
    "recording": false,
    "transcription": true
})

IF result.success THEN
    SET SESSION "meeting_room", result.room_id
    SET SESSION "meeting_url", result.join_url

    TALK "I'll start a video call so I can help you better."
    TALK result.join_url

    ' Notify the attendant
    NOTIFY attendant, "The customer joined the room: " + result.join_url
END IF

1.11.4 Screen Sharing

During a video call:

' The attendant can share their screen
' The customer can share their screen

' Detect sharing
IF meeting.participant.shared_screen THEN
    TALK "The customer is sharing their screen"
END IF

' Request sharing
meeting.request_screen_share(participant_id)

1.11.5 Video Call Flow in Attendance

1. The customer gets in touch (any channel)
2. An attendant accepts the conversation
3. The attendant decides to start a video call:
   /video

4. The system creates a LiveKit room
5. The system generates an access link
6. The link is sent to the customer (same channel or email)

7. The customer clicks the link
8. The browser opens → camera/microphone permissions
9. The customer joins the video call room

10. Both (attendant + customer) can:
    • See video
    • Hear audio
    • Share screens
    • Follow the live transcript
    • Use the whiteboard

11. /resolve → close the conversation
12. The room is closed or archived

1.11.6 Meeting API

// Endpoints available in botserver/src/meet/mod.rs

POST /api/meet/create           // Create a room
GET  /api/meet/rooms            // List rooms
GET  /api/meet/rooms/{id}       // Get a room
POST /api/meet/rooms/{id}/join  // Join a room
POST /api/meet/token            // Generate a token
POST /api/meet/transcription    // Start transcription
POST /api/meet/invite           // Send an invite

// WebSocket
WS  /api/meet/ws                // Meeting WebSocket

// Conversations
POST /api/meet/conversations/create
POST /api/meet/conversations/{id}/join
POST /api/meet/conversations/{id}/calls/start

1.12 SIP Gateway (Future)

1.12.1 SIP Architecture

┌─────────────────────────────────────────────────────────────────────────────┐
│                           SIP GATEWAY (PLANNED)                             │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│   PSTN Network                                                             │
│        │                                                                    │
│        ▼                                                                    │
│  ┌─────────────┐      ┌──────────────┐      ┌─────────────────────────┐   │
│  │   SIP       │─────►│   LiveKit    │─────►│   Attendance           │   │
│  │   Trunk     │      │   Gateway    │      │   Queue                │   │
│  └─────────────┘      └──────────────┘      └─────────────────────────┘   │
│                              │                                              │
│                     ┌────────┴────────┐                                    │
│                     │                 │                                    │
│                     ▼                 ▼                                    │
│              ┌────────────┐    ┌────────────┐                            │
│              │   Inbound  │    │  Outbound  │                            │
│              │   Calls    │    │   Calls    │                            │
│              └────────────┘    └────────────┘                            │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

1.12.2 SIP Use Cases

| Scenario | Description |
|----------|-------------|
| Inbound | Customer calls a fixed number → routed to the attendance queue |
| Outbound | Attendant calls the customer → tied to the attendance ID |
| Callback | Customer schedules a return call → the system calls at the scheduled time |
| IVR | Options menu before entering the queue |

1.12.3 SIP Commands

/call - Start a call
/call back - Call back
/callback 5511999999999 - Schedule a return call
/hangup - Hang up
/hold - Put on hold
/transfer - Transfer the call

2. CRM Module Integration

2.1 Real-Time Customer Data

Attendance shares data with the CRM to give the attendant context:

// attendance/queue.rs - customer data available to the attendant
struct QueueItem {
    session_id: Uuid,
    user_id: Uuid,
    bot_id: Uuid,
    channel: String,
    // CRM fields
    user_name: String,
    user_email: Option<String>,
    user_phone: Option<String>,
    // Additional context
    last_message: String,
    priority: i32,
    assigned_to: Option<Uuid>,
}

2.2 Automatic CRM Lookup

' When transferring to a human, CRM data is fetched
TRANSFER TO HUMAN "support"

' The system automatically looks up:
customer = FIND "customers", "phone='" + session.phone + "'"
IF customer FOUND THEN
    SET SESSION "customer_name", customer.name
    SET SESSION "customer_tier", customer.tier
    SET SESSION "customer_lifetime_value", customer.ltv
END IF

2.3 Available CRM Fields

| Field | Description | Example |
|-------|-------------|---------|
| customer_id | Unique customer ID | cust-001 |
| name | Full name | João Silva |
| email | Email address | joao@email.com |
| phone | Phone number | +5511999999999 |
| tier | Customer tier | premium, gold, standard |
| ltv | Lifetime value | 15000.00 |
| last_purchase | Last purchase | 2024-01-15 |
| tags | Customer tags | vip,dev,nps-9 |

2.4 CRM Automation via Attendance

' Rule: premium customers are transferred with high priority
customer = FIND "customers", "phone='" + session.phone + "'"

IF customer.tier = "premium" THEN
    TRANSFER TO HUMAN "vip-support", "high", "Premium customer"
ELSE
    TRANSFER TO HUMAN "support"
END IF

2.5 Attendance Logging

name,value
attendance-crm-logging,true
attendance-log-fields,session_id,customer_id,attendant_id,start_time,end_time,sentiment

3. Marketing Module Integration

3.1 Proactive Outreach Campaigns

Attendance can start conversations via Marketing:

# Marketing module.trigger_attendance()
# Sends a proactive message and sets needs_human=true

MESSAGE: "Hello João! We have a special offer for you."
         "[An attendant is available to chat]"

SET SESSION "needs_human", true
SET SESSION "campaign_id", "summer-sale-2024"
SET SESSION "lead_source", "marketing-campaign"

3.2 Campaign Data in Attendance

| Field | Description |
|-------|-------------|
| campaign_id | Campaign ID |
| campaign_name | Campaign name |
| utm_source | UTM source |
| utm_medium | UTM medium |
| ad_id | Ad ID |
| segment | Lead segment |

3.3 Lead Qualification

' After the conversation, mark the lead as qualified
IF attendant.resolved THEN
    customer = FIND "customers", "phone='" + session.phone + "'"

    IF customer NOT FOUND THEN
        ' Create a new lead
        CREATE "leads", {
            "name": session.user_name,
            "phone": session.phone,
            "source": session.lead_source,
            "status": "contacted",
            "attendant_id": attendant.id,
            "notes": conversation.summary
        }
    ELSE
        ' Update the existing record
        UPDATE "customers", customer.id, {
            "status": "qualified",
            "last_contact": NOW(),
            "attendant_id": attendant.id
        }
    END IF
END IF

4. Email Module Integration

4.1 Email Notifications

Attendance can send notification emails:

name,value
attendance-email-notify,true
attendance-email-template,attendant-assignment
attendance-email-recipient,attendant
attendance-email-bcc,supervisor@empresa.com

4.2 Notification Types

| Type | When | Recipient |
|------|------|-----------|
| new_assignment | New conversation assigned | Attendant |
| queue_alert | Queue > 10 conversations | Supervisor |
| customer_waiting | Customer waiting > 5 min | Attendant |
| sla_breach | SLA violated | Manager |
| resolved | Conversation closed | Customer (optional) |

4.3 Email as a Reply Channel

# If the customer is not on WhatsApp, they can be answered by email

if channel == "email":
    # Render the reply template
    response = EMAIL.render(
        template="attendant-response",
        data={
            "customer_name": session.user_name,
            "message": attendant.message,
            "attendant_name": attendant.name,
            "company": config.company_name
        }
    )

    EMAIL.send(
        to=customer.email,
        subject=f"Reply: {original_subject}",
        body=response
    )

5. Bot and Basic Engine Integration

5.1 The TRANSFER TO HUMAN Keyword

' Simple transfer
TRANSFER TO HUMAN

' Transfer to a specific destination
TRANSFER TO HUMAN "João Silva"
TRANSFER TO HUMAN "technical support"
TRANSFER TO HUMAN "sales", "high"

' Transfer with context
TRANSFER TO HUMAN "support", "normal", "Customer with a payment issue"

5.2 Session States

struct UserSession {
    id: Uuid,
    bot_id: Uuid,
    user_id: Uuid,
    // Main attendance flag
    needs_human: bool,

    // Attendance data kept in context_data, e.g.:
    //   "attendant_id": "att-001",
    //   "attendant_name": "Maria",
    //   "queue_position": 3,
    //   "transfer_reason": "Technical problem",
    //   "transfer_time": "2024-01-15T10:30:00Z"
    context_data: HashMap<String, Value>,
}

5.3 Session Access from BASIC

' Check whether a human is needed
IF session.needs_human THEN
    TALK "You are being served by a human attendant."
END IF

' Get the queue position
position = session.queue_position

' Get the current attendant
attendant = session.attendant_id

' Return to the bot (attendants only)
SET SESSION "needs_human", false

5.4 Attendance REST API

# Available endpoints

GET  /api/attendance/queue                 # List the queue
GET  /api/attendance/attendants            # List attendants
POST /api/attendance/assign                # Assign a conversation
POST /api/attendance/transfer              # Transfer between attendants
POST /api/attendance/resolve/<session_id>  # Resolve a conversation
GET  /api/attendance/insights              # Metrics

# LLM Assist endpoints
POST /api/attendance/llm/tips              # AI tips
POST /api/attendance/llm/polish            # Polish a message
POST /api/attendance/llm/smart-replies     # Suggested replies
GET  /api/attendance/llm/summary/<id>      # Conversation summary
POST /api/attendance/llm/sentiment         # Sentiment analysis

6. Queue and Attendant Architecture

6.1 Data Structures

// Queue item - an entry in the attendance queue
struct QueueItem {
    session_id: Uuid,
    user_id: Uuid,
    bot_id: Uuid,
    channel: String,          // whatsapp, telegram, sms, web
    user_name: String,
    user_email: Option<String>,
    last_message: String,
    waiting_time_seconds: i64,
    priority: i32,            // 0=low, 1=normal, 2=high, 3=urgent
    status: QueueStatus,      // waiting, assigned, active, resolved
    assigned_to: Option<Uuid>,
    assigned_to_name: Option<String>,
}

// Attendant - a human agent
struct Attendant {
    id: String,              // att-001
    name: String,
    channel: String,         // all, whatsapp, telegram, web
    preferences: String,     // sales, support, technical
    department: Option<String>,
    status: AttendantStatus, // online, busy, away, offline
    active_conversations: i32,
}
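
Whether an attendant can take a new conversation follows from these fields plus the per-queue max_conversations limit (default 5 in attendance_queue_members). A sketch, with the types reduced to what the check needs:

enum AttendantStatus { Online, Busy, Away, Offline }

struct Attendant {
    status: AttendantStatus,
    active_conversations: i32,
}

// Sketch: an attendant receives new conversations only while online
// and below the queue's max_conversations limit.
fn can_accept(att: &Attendant, max_conversations: i32) -> bool {
    matches!(att.status, AttendantStatus::Online)
        && att.active_conversations < max_conversations
}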

6.2 Attendant Status

| Status | Description | Receives New Conversations? |
|--------|-------------|-----------------------------|
| online | Available | ✅ Yes |
| busy | In a conversation | ❌ No |
| away | Temporarily unavailable | ❌ No |
| offline | Offline | ❌ No |

6.3 Conversation Priorities

| Priority | Value | Use |
|----------|-------|-----|
| low | 0 | General inquiries |
| normal | 1 | Default |
| high | 2 | VIP customers, time-sensitive issues |
| urgent | 3 | Escalations, complaints |
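
Combined with waiting time, these values give a natural queue ordering: urgent items first, and among equal priorities the longest-waiting customer first. A sketch, with QueueItem reduced to the two fields involved:

struct QueueItem {
    priority: i32,             // 0=low .. 3=urgent
    waiting_time_seconds: i64,
}

// Sketch: sort so that higher priority comes first, then longer waits.
fn sort_queue(queue: &mut [QueueItem]) {
    queue.sort_by(|a, b| {
        b.priority.cmp(&a.priority)
            .then(b.waiting_time_seconds.cmp(&a.waiting_time_seconds))
    });
}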

6.4 Smart Routing

def route_to_attendant(session, attendants):
    # 1. Filter by channel
    eligible = [a for a in attendants
                if a.channel in ["all", session.channel]]

    # 2. Filter by status
    eligible = [a for a in eligible if a.status == "online"]

    # 3. Sort by workload
    eligible.sort(key=lambda a: a.active_conversations)

    # 4. Apply preferences
    if session.topic:
        preferred = [a for a in eligible
                     if a.preferences == session.topic]
        if preferred:
            return preferred[0]

    # 5. Return the least-loaded attendant
    return eligible[0] if eligible else None

7. LLM Assist Module

7.1 Capabilities

| Capability | Description | WhatsApp Command |
|------------|-------------|------------------|
| tips | Tips for the attendant | /tips |
| polish | Polish a message before sending | /polish <msg> |
| smart-replies | Suggested replies | /replies |
| summary | Conversation summary | /summary |
| sentiment | Sentiment analysis | Automatic |

7.2 Usage Example

Customer: I need to cancel my order

Attendant: /tips
Bot: 💡 Tips:
    • The customer wants to cancel an order
    • Ask for the order number
    • Check the cancellation policy

Attendant: /polish Would like to help you with cancelation
Bot: ✨ Polished:
    "Hello! I'll be happy to help you with the cancellation."

Attendant: Hello! I'll be happy to help you with the cancellation.
[Sent to the customer]

8. Full Configuration

8.1 config.csv

name,value

# === BASIC ATTENDANCE ===
crm-enabled,true
attendance-enabled,true

# === QUEUE ===
attendance-queue-size,50
attendance-max-wait-seconds,600
attendance-priority-default,normal

# === ATTENDANTS ===
attendance-auto-assign,true
attendance-slack-webhook,

# === CHANNELS ===
attendance-whatsapp-commands,true
attendance-telegram-commands,true

# === BYPASS MODE ===
attendance-bypass-mode,false
attendance-auto-transfer,false
attendance-transfer-keywords,human,atendente,pessoa,falar com

# === LLM ASSIST ===
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true

# === CRM INTEGRATION ===
attendance-crm-logging,true
attendance-customer-fields,name,email,phone,tier,ltv

# === EMAIL NOTIFICATIONS ===
attendance-email-notify,false
attendance-email-template,attendant-response

8.2 attendant.csv

id,name,channel,preferences,department,aliases,phone,email
att-001,Maria Santos,all,sales,commercial,maria;mari,5511999990001,maria@empresa.com
att-002,João Silva,whatsapp;web,support,support,joao;js,5511999990002,joao@empresa.com
att-003,Ana Costa,telegram,technical,engineering,ana;anc,5511999990003,ana@empresa.com
att-004,Pedro Oliveira,all,collections,finance,pedro;po,5511999990004,pedro@empresa.com

9. Conversation Flows

9.1 Flow 1: Customer Requests a Human

Customer: I want to talk to a person
    │
    ▼
Bot detects the keyword "pessoa" (person)
    │
    ▼
TRANSFER TO HUMAN
    │
    ├──► needs_human = true
    ├──► Added to the queue
    ├──► Attendants notified (WebSocket)
    └──► Sends "Please wait, transferring you to an attendant..."
    │
    ▼
An attendant receives the notification
    │
    ▼
The attendant accepts with /take
    │
    ▼
Chat between customer and attendant
    │
    ▼
The attendant runs /resolve
    │
    ├──► needs_human = false
    └──► Back to the bot

9.2 Flow 2: Bot Transfers Automatically

Customer: I can't access my account
    │
    ▼
The bot tries to solve it (3 attempts)
    │
    ├──► Failure → sentiment analysis
    │
    ▼
IF sentiment.score < -0.5 OR intent = "escalate" THEN
    │
    ▼
    TRANSFER TO HUMAN "support", "high", "Attempts=3, Sentiment=negative"
9.3 Flow 3: Bypass Mode (Middleman)

Customer: (WhatsApp message)
    │
    ▼
Attendance detects:
    needs_human = true (via the bypass config)
    attendance-bypass-mode = true
    │
    ▼
WITHOUT going through the Basic Engine
    │
    ├──► Straight to the queue
    └──► Attendants notified
    │
    ▼
An attendant replies
    │
    ▼
The response is sent directly to WhatsApp

9.4 Flow 4: Video Call (LiveKit)

Customer: I need help with a technical problem
    │
    ▼
The bot tries to solve it (3 attempts)
    │
    ▼
IF complexity > threshold THEN
    │
    ▼
    TRANSFER TO HUMAN "technical support"
    │
    ▼
An attendant accepts
    │
    ▼
Attendant: /video
    │
    ├──► Creates a LiveKit room
    ├──► Generates an access link
    └──► Sends the link to the customer
    │
    ▼
The customer opens the link
    │
    ├──► Camera/microphone permissions requested
    ├──► Joins the room
    └──► The video chat starts
    │
    ├──► Screen sharing
    ├──► Whiteboard
    └──► Real-time transcription
    │
    ▼
/resolve → the room is closed
    │
    ├──► Recording available (if enabled)
    ├──► Transcript saved
    └──► Back to the bot

9.5 Flow 5: Direct Video Call (customer-initiated)

Customer: (from WhatsApp)
I want to do a video call
    │
    ▼
The bot detects intent = "video_call"
    │
    ▼
TALK "I'll create a video call room for you."
    │
    ▼
CREATE MEETING({type: "support"})
    │
    ▼
TALK "Click the link to join: " + meeting_url
    │
    ▼
The attendant is already waiting in the room
    │
    ▼
The customer joins → the video call starts

10. Metrics and Analytics

10.1 Attendance KPIs

| KPI | Description | Target |
|-----|-------------|--------|
| avg_wait_time | Average wait time | < 60s |
| first_response_time | Time to first response | < 30s |
| resolution_rate | Resolution rate | > 85% |
| customer_satisfaction | Post-attendance NPS | > 7 |
| attendant_utilization | Attendant utilization | > 70% |
| transfers_rate | Transfer rate | < 20% |

10.1.1 Video Call KPIs

| KPI | Description | Target |
|-----|-------------|--------|
| video_call_requests | Video call requests | - |
| video_calls_completed | Video calls completed | > 80% |
| avg_video_duration | Average video call duration | < 15min |
| screen_share_usage | Screen sharing usage | > 40% |
| transcription_accuracy | Transcription accuracy | > 90% |

10.2 Dashboard

┌────────────────────────────────────────────┐
│         ATTENDANCE DASHBOARD               │
├────────────────────────────────────────────┤
│                                            │
│  QUEUE: 5 │ ATTENDANTS: 8/10 │ ONLINE: 6   │
│                                            │
│  ┌─────────────┐ ┌─────────────┐           │
│  │  AVG TIME   │ │ RESOLUTION  │           │
│  │    45s      │ │    92%      │           │
│  └─────────────┘ └─────────────┘           │
│                                            │
│  BY CHANNEL:                               │
│  WhatsApp  ████████████ 65%                │
│  Web       ██████ 25%                      │
│  Telegram  ██ 10%                          │
│                                            │
└────────────────────────────────────────────┘

11. Use Cases

11.1 E-commerce - Support

  1. The customer asks about an order
  2. The bot tries to answer using the order data
  3. If it fails after 3 attempts → TRANSFER TO HUMAN "support"
  4. The attendant receives the full context (order, customer)
  5. The attendant solves it → /resolve
  6. The system creates/updates the ticket in the CRM

11.2 Sales - Qualification

  1. A lead arrives via WhatsApp (campaign)
  2. The bot does the initial qualification
  3. If the lead is "hot" → TRANSFER TO HUMAN "sales", "high"
  4. The sales attendant receives the lead data
  5. The attendant closes the sale → /resolve
  6. The system creates the opportunity in the CRM

11.3 Collections - Negotiation

  1. An overdue customer receives a proactive message
  2. If the customer replies → needs_human = true
  3. A collections attendant takes over
  4. Negotiates the debt → records it in the CRM
  5. /resolve → the customer returns to the collections flow

11.4 Technical Support - Escalation

  1. The customer reports a technical problem
  2. The bot tries a basic solution
  3. If complexity > threshold → TRANSFER TO HUMAN "technical"
  4. A technical attendant with system access takes over
  5. Solves it or escalates to the IT team

12. Troubleshooting

12.1 Common Problems

| Problem | Cause | Solution |
|---------|-------|----------|
| Message doesn't reach an attendant | crm-enabled=false | Enable it in config.csv |
| Attendant gets no notification | Status != online | Check attendant.csv |
| Transfer finds no one | No attendant online | Configure business hours or a fallback |
| Customer stuck in human mode | /resolve was not run | Run it manually |
| WhatsApp doesn't deliver the reply | Invalid phone number | Check country code + number |

12.2 Problemas de Videochamada

ProblemaCausaSolução
Link de videochamada não funcionaSala expiradaGerar novo link
Cliente sem câmera/micPermissão negadaOrientar cliente
Videochamada travaRede instávelReduzir qualidade
Transcrição não funcionaAPI key inválidaVerificar config
Gravação não iniciaStorage cheioLimpar espaço

12.3 Debug

# View the attendance queue
GET /api/attendance/queue

# View attendants
GET /api/attendance/attendants

# View a specific session
GET /api/session/<session_id>

# Attendance logs
grep "attendance" botserver.log

13. Future Evolution

13.1 Planned Features

  • Multi-tenant - multiple companies
  • Skills-based routing - routing by skill
  • SLA alerts
  • Collections chatbot
  • Video calls - ✅ Implemented (LiveKit)
  • Screen sharing - ✅ Implemented
  • WhatsApp Voice - ✅ Implemented (STT/TTS)
  • Teams Voice - ✅ Implemented
  • Co-browse - collaborative browsing
  • Knowledge base
  • Canned responses - predefined replies
  • SIP Gateway - planned
  • PSTN calls - planned

13.2 Current and Future Integrations

Input Channels (Implemented)

| Channel | Status | Voice (STT/TTS) |
|---------|--------|-----------------|
| WhatsApp | ✅ Stable | ✅ Implemented |
| Telegram | ✅ Stable | ✅ Implemented |
| Instagram | ✅ Partial | ❌ No |
| Messenger | ✅ Partial | ❌ No |
| Teams | ✅ Partial | ✅ Implemented |
| Web Chat | ✅ Stable | ✅ Implemented |
| SMS | ✅ Stable | ❌ No |
| LiveKit/SIP | ✅ Stable | ✅ Full |

Human Attendance Destinations

| Destination | Status | Description |
|-------------|--------|-------------|
| Teams | ✅ Implemented | Attendant receives messages in MS Teams |
| Google Chat | 🔜 Planned | Attendant receives messages in Google Chat |
| WhatsApp | ✅ Implemented | Attendant replies via WhatsApp |
| Web Console | ✅ Implemented | Web interface |

14. Kanban View for the Attendance Queue

14.1 Overview

Kanban is a visual view for managing the attendance queue that lets you drag cards between columns.

┌─────────────────────────────────────────────────────────────────────────────┐
│                        KANBAN - ATTENDANCE QUEUE                            │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐         │
│  │    NEW      │  │   ACTIVE    │  │   PENDING   │  │  RESOLVED   │         │
│  ├─────────────┤  ├─────────────┤  ├─────────────┤  ├─────────────┤         │
│  │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │  │ ┌─────────┐ │         │
│  │ │ Card #1 │ │  │ │ Card #3 │ │  │ │ Card #5 │ │  │ │ Card #7 │ │         │
│  │ │ João    │ │  │ │ Maria   │ │  │ │ Ana     │ │  │ │ Solved  │ │         │
│  │ │ WhatsApp│ │  │ │ WhatsApp│ │  │ │ Telegram│ │  │ │ 15min   │ │         │
│  │ └────┬────┘ │  │ └────┬────┘ │  │ └────┬────┘ │  │ └─────────┘ │         │
│  │      ▼      │  │      └──────┼──►      ▼      │  │             │         │
│  │ ┌─────────┐ │  │             │  │             │  │             │         │
│  │ │ Card #2 │ │  │             │  │             │  │             │         │
│  │ │ Carlos  │ │  │             │  │             │  │             │         │
│  │ │ Instagram│ │ │             │  │             │  │             │         │
│  │ └─────────┘ │  │             │  │             │  │             │         │
│  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘         │
│                                                                             │
│ drag & drop → move cards between columns                                    │
└─────────────────────────────────────────────────────────────────────────────┘

15.2 Kanban Columns

Column   | Status   | Description
new      | New      | Customers waiting for their first reply
active   | Active   | Already accepted by an attendant
pending  | Waiting  | Customer has not replied yet
resolved | Resolved | Session concluded

15.3 Card Structure

┌────────────────────────────────────────┐
│ #ID - João Silva                       │
│ ────────────────────────────────────── │
│ 📱 WhatsApp • +55 11 98888-7777        │
│ 💬 "I need help with my order"         │
│ ────────────────────────────────────── │
│ ⏱️ 5min │ Priority: High │ Att: Maria   │
│ Tags: [vip] [order]                    │
└────────────────────────────────────────┘
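
For orientation, a card could map to a small struct like the sketch below; the field names mirror the card layout above and are illustrative, not the actual queue item type.

// Illustrative shape of a kanban card; field names mirror the card layout
// above and are not the actual botserver queue item type.
pub struct KanbanCard {
    pub session_id: String,        // "#ID"
    pub customer_name: String,     // "João Silva"
    pub channel: String,           // "WhatsApp"
    pub contact_phone: String,     // "+55 11 98888-7777"
    pub last_message: String,      // preview line shown on the card
    pub waiting_minutes: u32,      // "5min"
    pub priority: String,          // "High"
    pub attendant: Option<String>, // "Maria"
    pub tags: Vec<String>,         // ["vip", "order"]
}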

15.4 Implementation

API Endpoints

// GET - list queue items grouped by status
GET /api/attendance/kanban?bot_id={id}

// PUT - move a card between columns
PUT /api/attendance/kanban/move
{
    "session_id": "uuid",
    "from_status": "new",
    "to_status": "active"
}
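
Server-side, the move endpoint would typically validate the requested transition before persisting it. A minimal sketch, assuming a KanbanStatus enum and an allow-list of transitions (both hypothetical names; the real handler lives in the attendance module):

// Hypothetical validation for PUT /api/attendance/kanban/move.
// The enum and the transition allow-list are illustrative assumptions.
#[derive(Debug, Clone, Copy, PartialEq)]
enum KanbanStatus {
    New,
    Active,
    Pending,
    Resolved,
}

impl KanbanStatus {
    fn parse(s: &str) -> Option<Self> {
        match s {
            "new" => Some(Self::New),
            "active" => Some(Self::Active),
            "pending" => Some(Self::Pending),
            "resolved" => Some(Self::Resolved),
            _ => None,
        }
    }
}

// Example policy: cards advance or bounce between Active and Pending;
// resolved cards stay resolved. Adjust to the real workflow rules.
fn is_valid_move(from: KanbanStatus, to: KanbanStatus) -> bool {
    use KanbanStatus::*;
    matches!(
        (from, to),
        (New, Active)
            | (Active, Pending)
            | (Active, Resolved)
            | (Pending, Active)
            | (Pending, Resolved)
    )
}

fn main() {
    let from = KanbanStatus::parse("new").unwrap();
    let to = KanbanStatus::parse("active").unwrap();
    assert!(is_valid_move(from, to)); // new → active is allowed
}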

Frontend (attendant.js)

// Render the Kanban board: bucket queue items into the four columns.
// Note: items arriving from the queue use status "waiting", which maps
// to the "new" column.
function renderKanban(queueItems) {
    const columns = {
        new: queueItems.filter(i => i.status === 'waiting'),
        active: queueItems.filter(i => i.status === 'active'),
        pending: queueItems.filter(i => i.status === 'pending'),
        resolved: queueItems.filter(i => i.status === 'resolved')
    };

    // Plain objects have no forEach; iterate over their entries instead.
    Object.entries(columns).forEach(([status, items]) => {
        renderColumn(status, items);
    });
}

// Drag & drop: make each card draggable and persist the move on drop.
function setupDragDrop() {
    document.querySelectorAll('.kanban-card').forEach(card => {
        card.draggable = true;
        card.addEventListener('dragstart', handleDragStart);
        card.addEventListener('dragend', handleDragEnd);
    });
}

16. Tickets (Issues) Integrated with Attendance

16.1 Concept

Each attendance session can generate a Ticket/Issue that is tracked and linked to the CRM.

┌─────────────────────────────────────────────────────────────────────────────┐
│                    TICKET INTEGRATION FLOW                                  │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  Customer (WhatsApp)                                                        │
│       │                                                                     │
│       ▼                                                                     │
│  Attendance Queue ─────► Create Ticket                                      │
│       │                  │                                                  │
│       │                  ▼                                                  │
│       │            ┌─────────────┐                                          │
│       │            │   Ticket    │                                          │
│       │            │  #TIC-001   │                                          │
│       │            │ Status: Open│                                          │
│       │            │ Priority: H │                                          │
│       │            └──────┬──────┘                                          │
│       │                   │                                                 │
│       ▼                   ▼                                                 │
│  Attendant            assigned_to (users table)                             │
│       │                   │                                                 │
│       │                   ▼                                                 │
│       │            CRM / Compliance Issues                                  │
│       │                                                                     │
│       ▼                                                                     │
│  /resolve → Ticket status = resolved                                        │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

16.2 Data Model

Table: attendance_tickets (new) or reuse compliance_issues

-- Option 1: new table
CREATE TABLE attendance_tickets (
    id UUID PRIMARY KEY,
    session_id UUID NOT NULL,
    ticket_number SERIAL,
    subject TEXT NOT NULL,
    description TEXT,
    status VARCHAR(50) DEFAULT 'open',  -- open, in_progress, pending, resolved, closed
    priority VARCHAR(20) DEFAULT 'normal',  -- low, normal, high, urgent
    category VARCHAR(50),  -- sales, support, billing, technical
    
    -- Relationship with users
    assigned_to UUID REFERENCES users(id),
    
    -- Current attendant
    attendant_id VARCHAR(50),
    
    -- Timestamps
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW(),
    resolved_at TIMESTAMPTZ,
    
    -- Integration
    channel VARCHAR(20),  -- whatsapp, telegram, web
    customer_id UUID,  -- crm_contacts
    contact_phone VARCHAR(20),
    contact_email VARCHAR(100),
    
    -- Tags and custom fields
    tags JSONB,
    custom_fields JSONB
);

-- Option 2: reuse the existing compliance_issues table (recommended)
-- It already has: id, bot_id, title, description, status, severity, assigned_to, created_at, updated_at
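
As a sketch, the same row could map to a Rust struct along these lines (the struct name matches the workflow example in 16.3; the exact types and JSON representation are assumptions):

// Sketch of a Rust mapping for the attendance_tickets row above.
// Field names follow the SQL; the exact types are assumptions.
use chrono::{DateTime, Utc};
use serde_json::Value;
use uuid::Uuid;

pub struct AttendanceTicket {
    pub id: Uuid,
    pub session_id: Uuid,
    pub ticket_number: i32,
    pub subject: String,
    pub description: Option<String>,
    pub status: String,            // open, in_progress, pending, resolved, closed
    pub priority: String,          // low, normal, high, urgent
    pub category: Option<String>,  // sales, support, billing, technical
    pub assigned_to: Option<Uuid>, // FK → users.id
    pub attendant_id: Option<String>,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
    pub resolved_at: Option<DateTime<Utc>>,
    pub channel: Option<String>,   // whatsapp, telegram, web
    pub customer_id: Option<Uuid>, // FK → crm_contacts
    pub contact_phone: Option<String>,
    pub contact_email: Option<String>,
    pub tags: Option<Value>,
    pub custom_fields: Option<Value>,
}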

16.3 Relationship with Users

The users table already exists:

// Schema: users table
use chrono::{DateTime, Utc};
use uuid::Uuid;

pub struct User {
    id: Uuid,           // PK - use in assigned_to
    username: String,
    email: String,
    password_hash: String,
    is_active: bool,
    is_admin: bool,
    created_at: DateTime<Utc>,
    updated_at: DateTime<Utc>,
}

Workflow:

// 1. Attendant accepts the session
POST /api/attendance/assign
{
    "session_id": "uuid",
    "attendant_id": "att-001"
}

// 2. The system looks up the user behind the attendant (Diesel query)
let user = users::table
    .filter(users::email.like("%attendant%"))
    .first::<User>(conn)
    .expect("attendant user not found");

// 3. Create/associate the ticket
let ticket = AttendanceTicket {
    assigned_to: Some(user.id),  // ← UUID from the users table
    attendant_id: Some("att-001".to_string()),
    ..
};

16.4 CRM Integration

A ticket can create or update CRM records:

' When the ticket is created
ticket = CREATE "attendance_tickets", {
    "subject": "Order issue",
    "priority": "high",
    "channel": "whatsapp",
    "customer_id": customer.id
}

' When it is resolved
UPDATE "attendance_tickets", ticket.id, {
    "status": "resolved",
    "resolved_at": NOW()
}

' Sync with the CRM
CREATE "crm_deals", {
    "name": "Ticket #" + ticket.number,
    "stage": "closed_won",
    "contact_id": ticket.customer_id
}
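
The same resolved-ticket → deal sync, sketched in Rust (illustrative types only, not the real crm.rs structs; CREATE/UPDATE above are the platform's BASIC keywords):

// Hedged sketch of the ticket → CRM deal sync shown above in BASIC.
struct Ticket {
    number: i32,
    customer_id: Option<u32>,
    status: String,
}

struct Deal {
    name: String,
    stage: String,
    contact_id: Option<u32>,
}

// Only resolved tickets are pushed into the pipeline as closed-won deals.
fn deal_from_resolved_ticket(t: &Ticket) -> Option<Deal> {
    (t.status == "resolved").then(|| Deal {
        name: format!("Ticket #{}", t.number),
        stage: "closed_won".to_string(),
        contact_id: t.customer_id,
    })
}

fn main() {
    let t = Ticket { number: 1, customer_id: Some(42), status: "resolved".into() };
    let deal = deal_from_resolved_ticket(&t).expect("resolved ticket yields a deal");
    println!("{} → stage {}", deal.name, deal.stage);
}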

16.5 Ticket API

// Endpoints
GET    /api/attendance/tickets              // List tickets
GET    /api/attendance/tickets/{id}         // Ticket details
POST   /api/attendance/tickets              // Create ticket
PUT    /api/attendance/tickets/{id}         // Update ticket
DELETE /api/attendance/tickets/{id}         // Delete ticket

// Link to an attendance session
POST   /api/attendance/tickets/{id}/assign     // Assign to a user
POST   /api/attendance/tickets/{id}/resolve    // Resolve
POST   /api/attendance/tickets/{id}/transfer   // Transfer
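
A usage sketch against the resolve endpoint could look like the following (reqwest with the blocking and json features; the base URL, token, and request body are placeholders, since the doc does not specify them):

// Hypothetical client call against the resolve endpoint listed above.
// Base URL, bearer token, and request body are placeholders.
use reqwest::blocking::Client;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new();
    let ticket_id = "123e4567-e89b-12d3-a456-426614174000"; // placeholder UUID
    let resp = client
        .post(format!(
            "http://localhost:8080/api/attendance/tickets/{ticket_id}/resolve"
        ))
        .bearer_auth("YOUR_TOKEN") // placeholder credential
        .json(&serde_json::json!({ "resolution": "solved via chat" }))
        .send()?;
    println!("status: {}", resp.status());
    Ok(())
}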

17. CRM Integration (Sales Pipeline)

17.1 Existing CRM Model

The system already has CRM tables:

// Existing structs in contacts/crm.rs
use uuid::Uuid;

pub struct CrmContact {
    id: Uuid,
    org_id: Uuid,
    bot_id: Uuid,
    first_name: Option<String>,
    last_name: Option<String>,
    email: Option<String>,
    phone: Option<String>,
    // ... other fields
    owner_id: Option<Uuid>,  // ← can reference users.id
}

pub struct CrmDeal {
    id: Uuid,
    name: String,
    value: f64,
    stage: String,  // ← pipeline stage
    contact_id: Option<Uuid>,
    owner_id: Option<Uuid>,
}

pub struct CrmPipelineStage {
    id: Uuid,
    name: String,
    order_index: i32,
    probability: f64,
}

17.2 Attendance ↔ CRM Integration

┌─────────────────────────────────────────────────────────────────────────────┐
│                    ATTENDANCE + CRM INTEGRATION                             │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐                   │
│  │  Attendance  │    │   Tickets    │    │     CRM      │                   │
│  │    Queue     │    │              │    │   Pipeline   │                   │
│  └──────┬───────┘    └──────┬───────┘    └──────┬───────┘                   │
│         │                   │                   │                           │
│         ▼                   ▼                   ▼                           │
│  ┌─────────────────────────────────────────────────────────────────┐        │
│  │                     shared users table                          │        │
│  │                   (assigned_to → users.id)                      │        │
│  └─────────────────────────────────────────────────────────────────┘        │
│                                                                             │
│  Flow:                                                                      │
│  1. Attendance creates a Ticket                                             │
│  2. Ticket.assigned_to = users.id                                           │
│  3. A CRM Deal can reference the ticket's Contact                           │
│  4. Pipeline stages drive the status                                        │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

17.3 Sales Pipeline in Attendance

' Configure pipeline stages
' (already exists: crm_pipeline_stages table)

' Create a Deal from the attendance session
IF intent = "buy" OR intent = "interest" THEN
    ' Find or create the contact
    contact = FIND "crm_contacts", "phone='" + session.phone + "'"
    
    IF contact NOT FOUND THEN
        contact = CREATE "crm_contacts", {
            "first_name": session.user_name,
            "phone": session.phone,
            "source": "whatsapp"
        }
    END IF
    
    ' Create the deal in the pipeline
    deal = CREATE "crm_deals", {
        "name": "Opportunity - " + contact.first_name,
        "contact_id": contact.id,
        "stage": "qualification",
        "owner_id": ticket.assigned_to
    }
    
    TALK "Perfect! I'll put a proposal together for you."
END IF

17.4 Unified Dashboard

┌─────────────────────────────────────────────────────────────────────────────┐
│                    ATTENDANCE + CRM DASHBOARD                               │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────────────┐  ┌─────────────────────────────────────────────┐  │
│  │   ATTENDANCE        │  │              CRM PIPELINE                   │  │
│  │   ─────────────     │  │  ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐    │  │
│  │   Today: 45         │  │  │New  │ │Qual │ │Prop │ │Neg  │ │Won  │    │  │
│  │   Resolved: 38      │  │  │ $5K │ │$12K │ │$20K │ │ $8K │ │$15K │    │  │
│  │   Open: 7           │  │  └─────┘ └─────┘ └─────┘ └─────┘ └─────┘    │  │
│  │   Avg time: 8min    │  │                                             │  │
│  └─────────────────────┘  └─────────────────────────────────────────────┘  │
│                                                                             │
│  ┌──────────────────────────────────────────────────────────────────────┐  │
│  │  RECENT TICKETS                                                      │  │
│  │  ──────────────────────────────────────────────────────────────────  │  │
│  │  #TIC-001 | João Silva | Support   | High   | Maria | Open          │  │
│  │  #TIC-002 | Ana Costa  | Sales     | Medium | João  | Pending       │  │
│  │  #TIC-003 | Carlos     | Technical | Low    | Maria | Resolved      │  │
│  └──────────────────────────────────────────────────────────────────────┘  │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

18. Summary: What Was Missing

Feature             | Status     | Description
Kanban View         | 🔜 Planned | Visual queue view with drag-and-drop
Tickets (Issues)    | 🔜 Planned | Use compliance_issues or a new table
Queues via UI       | 🔜 Planned | CRUD for queues + members (users)
assigned_to → users | ✅ Exists  | users.id as the FK
CRM Pipeline        | ✅ Exists  | crm_deals + crm_pipeline_stages

18.1 New Model (No attendant.csv)

attendance_queues
  ├── name: "Suporte WhatsApp"
  ├── channels: ["whatsapp"]
  └── members: [user_id, ...]  ← users table

attendance_queue_members
  ├── queue_id: attendance_queues.id
  ├── user_id: users.id  ← attendant
  └── max_conversations: 5
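
In code, the new model could be as simple as the following sketch (names mirror the proposed tables; none of this exists in the codebase yet):

// Sketch of the proposed queue model; names mirror the tables above.
use uuid::Uuid;

pub struct AttendanceQueue {
    pub id: Uuid,
    pub name: String,          // e.g. "Suporte WhatsApp"
    pub channels: Vec<String>, // e.g. ["whatsapp"]
}

pub struct AttendanceQueueMember {
    pub queue_id: Uuid, // FK → attendance_queues.id
    pub user_id: Uuid,  // FK → users.id (the attendant)
    pub max_conversations: i32,
}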

18.2 Complete Flow

WhatsApp customer sends "Hi"
    │
    ▼
Identify the customer in the CRM (by phone)
    │
    ▼
Look up the queue by channel → "Suporte WhatsApp"
    │
    ▼
Select the next attendant (round-robin; sketched below)
    │
    ▼
Session.assigned_to = users.id
Session.customer_id = crm_contacts.id
    │
    ▼
Kanban: card lands in "New"
    │
    ▼
Attendant accepts → card moves to "Active"
    │
    ▼
Attendant replies
    │
    ▼
/resolve → card moves to "Resolved"
    │
    ▼
Ticket created with:
  - assigned_to = users.id
  - customer_id = crm_contacts.id
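
The round-robin step above can be sketched as a rotating cursor over the queue members, skipping anyone already at their max_conversations cap (a minimal sketch; all names illustrative):

// Minimal round-robin picker for "select the next attendant".
// members: (user_id, currently open conversations) pairs.
fn next_attendant(
    members: &[(u32, usize)],
    max_conversations: usize,
    cursor: &mut usize,
) -> Option<u32> {
    for _ in 0..members.len() {
        let (user_id, open) = members[*cursor];
        *cursor = (*cursor + 1) % members.len(); // rotate the cursor
        if open < max_conversations {
            return Some(user_id);
        }
    }
    None // everyone is at capacity
}

fn main() {
    let members = [(1, 5), (2, 2), (3, 0)];
    let mut cursor = 0;
    // user 1 is at the cap of 5, so the picker skips to user 2
    assert_eq!(next_attendant(&members, 5, &mut cursor), Some(2));
    assert_eq!(next_attendant(&members, 5, &mut cursor), Some(3));
}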

18.3 Next Steps

  1. Create the attendance_queues and attendance_queue_members tables
  2. Build a UI to manage queues and members
  3. Build the Kanban API
  4. Adapt Tickets to use users.id
  5. Unified Attendance + CRM dashboard

19. Comparison with Enterprise-Grade Platforms (Zendesk, Freshdesk, Intercom)

19.1 Feature Matrix

(✅ = available today, 🔜 = planned)

Feature                 | Ours | Priority
CHANNELS
WhatsApp                | ✅   | High
Telegram                | ✅   | High
Instagram               | ✅   | High
Web Chat                | ✅   | High
Email                   | ✅   | High
SMS                     | ✅   | Medium
Teams                   | ✅   | High
Voice/Phone             | 🔜   | High
Facebook Messenger      | ✅   | Medium
TICKETING
Automatic creation      | ✅   | High
Status workflow         | ✅   | High
Priorities              | ✅   | High
Categories/Tags         | ✅   | High
assigned_to → users     | ✅   | High
Ticket linked to CRM    | ✅   | High
ATTENDANCE
Queues                  | ✅   | High
Round-robin             | ✅   | High
Skills-based routing    | 🔜   | High
Kanban View             | 🔜   | High
Real-time chat          | ✅   | High
AI/AUTOMATION
Sentiment analysis      | ✅   | High
Smart replies           | ✅   | High
Auto-responder          | 🔜   | High
AI summaries            | ✅   | High
Attendant tips          | ✅   | High
CRM
CRM integration         | ✅   | High
360° customer view      | ✅   | High
Sales pipeline          | ✅   | High
Create Deal from ticket | 🔜   | High
SLA
SLA rules               | 🔜   | High
SLA alerts              | 🔜   | High
DASHBOARD
Basic metrics           | ✅   | High
Custom reports          | 🔜   | Medium
KNOWLEDGE
Knowledge base          | 🔜   | Medium
Auto FAQ                | 🔜   | Medium
VIDEO
Video calls             | ✅   | High
Screen share            | ✅   | High
INTEGRATIONS
Webhooks                | 🔜   | High
REST API                | ✅   | High

19.2 Gap Analysis - What's Missing

Feature              | Complexity | Description
Skills-based routing | High       | Route based on attendant skills
Kanban View          | Medium     | Drag-and-drop between columns
SLA Management       | High       | Rules, alerts, metrics
Auto-responder       | Medium     | Automatic AI-generated replies
Knowledge Base       | High       | Articles, FAQs, search
Custom reports       | Medium     | Custom queries and charts
Webhooks             | Medium     | External notifications
Voice/Phone (PSTN)   | High       | Telephony integration

19.3 Detailed Comparison

Ours vs Zendesk

┌─────────────────────────────────────────────────────────────────────────────┐
│                        ZENDESK FEATURES                                     │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ✅ We already have:              ❌ Missing:                               │
│  ───────────────────              ──────────                                │
│  • Multi-channel                  • Full SLA management                     │
│  • Ticket creation                • Knowledge base                          │
│  • User assignment                • AI auto-responder                       │
│  • Real-time chat                 • Custom reporting                        │
│  • LLM assist (tips/replies)      • Webhooks                                │
│  • Video calls                    • Marketplace apps                        │
│  • CRM integration                • Customer portals                        │
│  • Kanban (planned)                                                         │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

Ours vs Freshdesk

┌─────────────────────────────────────────────────────────────────────────────┐
│                      FRESHDESK FEATURES                                     │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ✅ We have (planned):            ❌ Missing:                               │
│  ─────────────────────            ──────────                                │
│  • Omnichannel                    • Freddy AI (auto-responder)              │
│  • Ticket lifecycle               • Knowledge base                          │
│  • Queue management               • Custom objects                          │
│  • Round-robin                    • Approval workflows                      │
│  • Skills-based (planned)         • Self-service portal                     │
│  • CRM integration                • SLAs                                    │
│  • Video meetings                 • Advanced analytics                      │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘

19.4 Implementation Roadmap

PHASE 1 (Immediate - 2 weeks)
├── ✅ Queues via UI (users)
├── ✅ assigned_to → users.id
├── 🔜 Kanban View
└── 🔜 Integrated Tickets

PHASE 2 (1 month)
├── 🔜 Skills-based routing
├── 🔜 SLA Management
└── 🔜 AI auto-responder

PHASE 3 (2 months)
├── 🔜 Knowledge Base
├── 🔜 Custom Reporting
└── 🔜 Webhooks

PHASE 4 (3 months)
├── 🔜 Voice/PSTN
├── 🔜 Self-service Portal
└── 🔜 Advanced Integrations

19.5 Conclusion

The current plan covers roughly 70% of enterprise-grade features:

Category      | Coverage
Channels      | 90%
Ticketing     | 85%
Attendance    | 80%
AI/Automation | 75%
CRM           | 85%
SLA           | 30%
Dashboard     | 60%
Knowledge     | 20%
Video         | 90%
Integrations  | 50%

Next priorities:

  1. ✅ Queues via UI + users (in development)
  2. 🔜 Kanban View
  3. 🔜 Skills-based routing
  4. 🔜 SLA Management
  5. 🔜 Knowledge Base

14. Reference File

See also: