EXECUTIVE VISION: GENERAL BOTS PLATFORM
OPEN SOURCE ENTERPRISE AI PLATFORM
General Bots 6.1 delivers enterprise-grade AI capabilities with full data sovereignty. Own your infrastructure, control your data, deploy anywhere.
INVESTOR HIGHLIGHTS
| Differentiator | What Sets Us Apart |
|---|---|
| LLM Orchestrator | Single PROMPT.md per project defines bot personality across all channels and features |
| Hybrid AI+Human | Seamless bot-to-human handoff with AI-assisted attendants |
| True Omnichannel | WhatsApp, Teams, Instagram, Telegram, Web - ONE codebase, ONE personality |
| No Vendor Lock-in | Self-hosted, export anytime, migrate freely |
| Industry-Standard CRM | Built-in collections, scheduling, follow-ups, sales automation |
FEATURE OVERVIEW
| CAPABILITY | WHAT IT DOES | BUSINESS IMPACT | TIME TO VALUE |
|---|---|---|---|
| AI-POWERED CONVERSATIONS | Multi-channel bot orchestration with any LLM (OpenAI, Anthropic, Groq, local models) | Significant reduction in customer service costs | < 1 hour |
| LLM-ASSISTED ATTENDANTS | Real-time tips, smart replies, message polish, sentiment analysis for human agents | 50% faster response times | Immediate |
| TRANSFER TO HUMAN | Seamless bot-to-human handoff with full context | Hybrid support workflows | 5 minutes |
| WHATSAPP ATTENDANT | Human agents respond via WhatsApp with /commands | Mobile workforce support | Immediate |
| KNOWLEDGE BASES | Vector-indexed document collections with semantic search (Qdrant/FAISS) | Faster information retrieval | 15 minutes |
| EMAIL AUTOMATION | Full IMAP/SMTP integration with intelligent routing | Automated inbox management | 5 minutes |
| LLM-ASSISTED BASIC | Plain English programming with LLM code generation | No programming skills needed | Immediate |
| DRIVE INTEGRATION | S3-compatible storage with automatic document processing | Scalable storage | 2 minutes |
| ENTERPRISE SECURITY | Argon2 hashing, JWT tokens, TLS everywhere | Bank-grade security out of the box | Built-in |
| INSTANT THEMING | CSS-based UI customization | Brand consistency | < 30 seconds |
| COMPLIANCE READY | Built-in attendance, audit logs, GDPR/LGPD/HIPAA support | Regulatory compliance | Pre-configured |
| NVIDIA GPU SUPPORT | CUDA acceleration for LLM operations | Faster AI responses | When available |
| OMNICHANNEL | WhatsApp, Teams, Instagram, Telegram, Slack, Web - ONE codebase | Unified customer engagement | Single deploy |
| CALENDAR MANAGEMENT | Full scheduling, meeting coordination, availability tracking | Efficient scheduling | 3 minutes |
| TASK AUTOMATION | Cron-based scheduling, workflow orchestration | 24/7 automation | 5 minutes |
| WHITEBOARD COLLABORATION | Real-time collaborative drawing and diagramming | Visual team collaboration | Instant |
| VIDEO CONFERENCING | LiveKit WebRTC integration for meetings | High-quality meetings | 10 minutes |
| ANALYTICS DASHBOARD | Real-time metrics, usage patterns, performance monitoring | Data-driven decisions | Built-in |
| AUTOMATED REPORTS | Scheduled reports, custom metrics, export to PDF/Excel | Executive visibility | 2 minutes |
| BACKUP & RESTORE | Automated backups, point-in-time recovery, export as ZIP | Data protection | Automatic |
| MONITORING & ALERTS | System health, performance metrics, custom alerts | High availability | Pre-configured |
| DOCUMENT PROCESSING | OCR, PDF extraction, Excel parsing, image analysis | Document automation | Automatic |
| MIGRATION TOOLS | Import from Office 365, Google Workspace, Slack | Seamless transition | < 1 day |
| API GATEWAY | REST, GraphQL, Webhooks, WebSocket support | Integration ready | Ready |
| USER DIRECTORY | LDAP/AD replacement, SSO, group management | Central authentication | 15 minutes |
| VOICE PROCESSING | Speech-to-text, text-to-speech, voice commands | Voice interfaces | 5 minutes |
| CRM AUTOMATIONS | Follow-ups, collections, scheduling, sales pipelines | Revenue operations | 10 minutes |
LLM-ASSISTED ATTENDANT FEATURES
When conversations transfer from bot to human, the LLM continues working as a copilot:
| Feature | Description | Investor Appeal |
|---|---|---|
| Real-time Tips | AI analyzes each customer message and suggests actionable tips | “AI copilot for support” |
| Message Polish | One-click grammar and tone improvement | Reduces training costs |
| Smart Replies | 3 contextual reply suggestions | 50% faster responses |
| Auto Summary | Instant conversation summary on handoff | Faster transfers |
| Sentiment Analysis | Real-time emotional state + escalation warnings | Prevent churn |
WhatsApp Commands for Attendants:
/queue - View waiting customers
/take - Take next conversation
/tips - Get AI tips
/polish - Improve message
/replies - Get suggestions
/summary - Conversation summary
/resolve - Mark complete
DEPLOYMENT OPTIONS
Option 1: Pragmatismo Managed Hosting
- Fully managed infrastructure
- Access via: YourCompany.pragmatismo.com.br
- Professional support included
- Complete data ownership
Option 2: Self-Hosted
- Deploy on your own infrastructure
- Full control over hardware and configuration
- Access via your own domain
- No external dependencies
Option 3: Hybrid Deployment
- Run locally with cloud backup
- Export everything as ZIP anytime
- Move between hosting options freely
- No vendor lock-in
COMPETITIVE ADVANTAGE
| vs Competitors | General Bots Advantage |
|---|---|
| Zendesk/Intercom | Self-hosted, AI-first, no per-seat pricing |
| Dialogflow | Native WhatsApp, human handoff, CRM built-in |
| Botpress | LLM orchestration, multi-channel, enterprise features |
| n8n/Zapier | Conversational AI, not just workflows |
| Custom Development | 90% faster deployment, proven architecture |
TECHNICAL ARCHITECTURE
| COMPONENT | TECHNOLOGY | PERFORMANCE |
|---|---|---|
| Core Runtime | Rust + Tokio | Millions of concurrent connections |
| Database | PostgreSQL + Diesel | Sub-millisecond queries |
| Vector Search | Qdrant/FAISS | 100M+ documents indexed |
| Caching | Redis + Semantic Cache | 95% cache hit ratio |
| Message Queue | Built-in async channels | Near-zero routing overhead |
| File Processing | Parallel PDF/DOC/Excel extraction + OCR | 1000 docs/minute |
| Security Layer | TLS 1.3 + Argon2 + JWT | Enterprise-grade security |
| Video Infrastructure | LiveKit WebRTC | 4K video, 50ms latency |
| Time-Series Metrics | InfluxDB 3 | 2.5M+ points/sec ingestion |
| Backup System | Incremental snapshots | RPO < 1 hour |
| API Gateway | Axum + Tower middleware | 100K requests/second |
| Task Scheduler | Cron + async workers | Millisecond precision |
FEATURE TIERS
Core Edition (Default)
- UI Server
- Console Interface
- Chat functionality
- Automation engine
- Task management
- Drive integration
- LLM support
- Redis caching
- Directory services
Standard Edition
- All Core features plus:
- Email integration (IMAP/SMTP)
- Calendar management
- Video meetings (LiveKit)
- Enhanced automation
Enterprise Edition
- All Standard features plus:
- Compliance monitoring (LGPD/GDPR/HIPAA)
- Attendance tracking with LLM assist
- Vector database (Qdrant)
- NVIDIA GPU acceleration
- Advanced monitoring
- gRPC support
- Multi-channel messaging (WhatsApp, Teams, Instagram)
- Human handoff with AI copilot
- CRM automations (collections, scheduling, sales)
Full Edition
- All features enabled
- Complete platform capabilities
QUICK START: HYBRID AI+HUMAN SUPPORT
# config.csv - Enable all features
name,value
crm-enabled,true
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true
# attendant.csv - Configure your team
id,name,channel,preferences,department,aliases
att-001,John Smith,all,sales,commercial,john;johnny
att-002,Maria Santos,whatsapp,support,customer-service,maria
Result: Full hybrid AI+Human support system in minutes.
COMPLIANCE & PRIVACY
General Bots includes built-in compliance templates:
Privacy Rights Center (privacy.gbai)
- Data Access Requests - LGPD Art. 18 / GDPR Art. 15
- Data Rectification - LGPD Art. 18 III / GDPR Art. 16
- Data Erasure - LGPD Art. 18 VI / GDPR Art. 17 (Right to be Forgotten)
- Data Portability - LGPD Art. 18 V / GDPR Art. 20
- Consent Management - LGPD Art. 8 / GDPR Art. 7
- Processing Objection - LGPD Art. 18 IV / GDPR Art. 21
Supported Frameworks
- LGPD (Lei Geral de Proteção de Dados - Brazil)
- GDPR (General Data Protection Regulation - EU)
- HIPAA (Health Insurance Portability and Accountability Act)
- CCPA (California Consumer Privacy Act)
- SOC 2 (Service Organization Control)
- ISO 27001 (Information Security Management)
QUICK START
# Install BotServer
cargo install botserver
# Initialize your deployment
botserver --init my-company
# Start the server
botserver --start
PLATFORM COMPARISON
| Aspect | Traditional SaaS | General Bots |
|---|---|---|
| Licensing | Per-user monthly fees | Open source (AGPL) |
| Data Location | Vendor cloud | Your choice |
| Customization | Limited | Unlimited |
| AI Models | Fixed provider | Any provider |
| Source Code | Closed | Open |
| Vendor Lock-in | High | None |
| Data Portability | Often difficult | Full export anytime |
INTEGRATION CAPABILITIES
LLM Providers
- OpenAI (GPT-5, o3)
- Anthropic (Claude Sonnet 4.5, Opus 4.5)
- Meta (Llama)
- DeepSeek
- Local models via Ollama
- Any OpenAI-compatible API
Communication Channels
- WhatsApp Business
- Microsoft Teams
- Telegram
- Slack
- Web chat
- SMS
Storage Backends
- AWS S3
- MinIO
- Any S3-compatible storage
- Local filesystem
Directory Services
- Built-in user management
- LDAP integration
- Active Directory
- OAuth/OIDC SSO
ABOUT PRAGMATISMO
Pragmatismo develops General Bots as an open-source platform for enterprise AI and automation. Our focus is on delivering practical, production-ready solutions that organizations can deploy and customize to meet their specific needs.
Repository: github.com/GeneralBots/BotServer
License: AGPL-3.0
WHERE TO START
Ready to see it in action? Skip to the hands-on guide:
⚡ Quick Start: Run Your First Bot in 5 Minutes →
Or continue reading for the full journey:
| Path | Time | Best For |
|---|---|---|
| Quick Start | 5 min | Developers who want to dive in immediately |
| Introduction | 10 min | Understanding the “No Forms” philosophy |
| Chapter 01 | 15 min | Complete installation and first conversation |
NEXT STEPS
Introduction to General Bots
⚡ Want to skip ahead? Quick Start → gets you running in 5 minutes.
Build conversational AI bots in minutes, not months. General Bots lets you create intelligent chatbots by writing simple BASIC scripts and dropping in your documents. No complex frameworks, no cloud dependencies, no AI expertise required.
| Your Goal | Go To |
|---|---|
| Run a bot NOW | Quick Start |
| Understand the vision | Keep reading below |
| Write your first script | Chapter 06: BASIC Dialogs |
| Add documents to bot | Chapter 02: Packages |
The No Forms Movement
Since 2017, Pragmatismo has championed the No Forms philosophy. The idea is simple but revolutionary:
People should converse, not fill forms.
Traditional software forces users into rigid forms with dropdowns, checkboxes, and validation errors. But humans don’t communicate that way. We talk. We explain. We ask questions.
General Bots was born from this vision: replace forms with conversations.
Before: The Form Experience
┌─────────────────────────────────────────┐
│ Customer Support Form │
├─────────────────────────────────────────┤
│ Name: [_______________] │
│ Email: [_______________] │
│ Department: [Select ▼] │
│ Priority: ○ Low ○ Medium ○ High │
│ Subject: [_______________] │
│ Description: │
│ [ ] │
│ [ ] │
│ │
│ Attachments: [Choose File] │
│ │
│ [Submit] │
│ │
│ ⚠️ Error: Email format invalid │
│ ⚠️ Error: Description required │
└─────────────────────────────────────────┘
Problems:
- Intimidating for users
- Requires learning the interface
- Validation errors frustrate
- No guidance or context
- One-size-fits-all approach
After: The Conversation Experience
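For example, the same support request becomes a short exchange:
User: Hi, I can't log into my account
Bot: Sorry to hear that! What's the email on your account?
User: sarah.chen@example.com
Bot: Thanks, Sarah. I've sent a reset link and opened a ticket for our support team.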
Benefits:
- Natural and intuitive
- Guides users step by step
- Adapts to each situation
- No errors, just clarifications
- Feels like talking to a human
Projections, Not Screens
The No Forms philosophy extends beyond chat. In General Bots:
- Visualizations replace dashboards - Data is projected contextually, not displayed in static grids
- Conversations replace menus - Ask for what you need, don’t hunt through options
- AI handles complexity - The system adapts, users don’t configure
- Voice-first design - Everything works without a screen
This is why General Bots focuses on:
- Conversational interfaces - Chat, voice, messaging
- Contextual projections - Show relevant info when needed
- Minimal UI - The less interface, the better
- AI interpretation - Understand intent, not just input
Quick Example
Want a student enrollment bot? Here’s all you need:
- Drop your documents in a .gbkb folder:
edu.gbkb/
enrollment-policy.pdf
course-catalog.pdf
- Write a simple tool (optional):
' enrollment.bas
PARAM name, email, course
SAVE "enrollments.csv", name, email, course
TALK "Welcome to " + course + "!"
- Chat naturally:
User: I want to enroll in computer science
Bot: I'll help you enroll! What's your name?
User: Sarah Chen
Bot: Welcome to Computer Science, Sarah!
No form. No UI. Just conversation.
What Makes General Bots Different
Just Run It
./botserver
That’s it. No Kubernetes, no cloud accounts. The bootstrap process installs everything locally in 2-5 minutes. PostgreSQL, vector database, object storage, cache - all configured automatically with secure credentials stored in Vault.
Real BASIC, Real Simple
We brought BASIC back for conversational AI. See our complete keyword reference:
' save-note.bas - A simple tool
PARAM topic, content
SAVE "notes.csv", topic, content, NOW()
TALK "Note saved!"
Four lines. That’s a working tool the AI can call automatically.
Documents = Knowledge
Drop PDFs, Word docs, or text files into .gbkb/ folders. They’re instantly searchable. No preprocessing, no configuration, no pipelines. The bot answers questions from your documents automatically.
Tools = Functions
Create .bas files that the AI discovers and calls automatically. Need to save data? Send emails? Call APIs? Just write a tool. The AI figures out when and how to use it.
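As a sketch, a hypothetical weather tool could look like this (the CALL keyword appears in examples later in this book; the endpoint and names are illustrative):
' get-weather.bas - a hypothetical API-calling tool
PARAM city AS string LIKE "Rio de Janeiro" DESCRIPTION "City to look up"
DESCRIPTION "Gets the current weather for a city"
result = CALL "/api/weather", city
TALK "Weather in " + city + ": " + result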
Architecture at a Glance
General Bots is a single binary that includes everything: one process, one port, one command to run. Deploy anywhere - laptop, server, or LXC container.
Real-World Use Cases
Customer Support Bot
documents: FAQs, policies, procedures
tools: ticket creation, status lookup
result: 24/7 support that actually helps
Employee Assistant
documents: HR policies, IT guides, company info
tools: leave requests, equipment orders
result: Instant answers, automated workflows
Sales Catalog Bot
documents: product specs, pricing sheets
tools: quote generation, order placement
result: Interactive product discovery
Meeting Assistant
documents: agenda, previous minutes
tools: action item tracking, scheduling
result: AI-powered meeting facilitator
The Package System
Bots are organized as packages - just folders with a naming convention:
my-bot.gbai/ # Package root
├── my-bot.gbdialog/ # BASIC scripts
│ └── start.bas # Entry point
├── my-bot.gbkb/ # Knowledge base
│ ├── policies/ # Document collection
│ └── procedures/ # Another collection
└── my-bot.gbot/ # Configuration
└── config.csv # Bot settings
Copy the folder to deploy. That’s it. No XML, no JSON schemas, no build process.
Getting Started in 3 Steps
1. Install (2 minutes)
wget https://github.com/GeneralBots/botserver/releases/latest/download/botserver
chmod +x botserver
./botserver
2. Open Browser
http://localhost:8080
3. Start Chatting
The default bot is ready. Ask it anything. Modify templates/default.gbai/ to customize.
Core Philosophy
- No Forms - Conversations replace forms everywhere
- Simplicity First - If it needs documentation, it’s too complex
- Everything Included - No external dependencies to manage
- Production Ready - Secure, scalable, enterprise-grade from day one
- AI Does the Work - Don’t write logic the LLM can handle
- Projections Over Screens - Show data contextually, not in dashboards
Technical Highlights
- Language: Written in Rust for performance and safety
- Database: PostgreSQL with Diesel ORM
- Cache: Redis-compatible cache for sessions
- Storage: S3-compatible object store (MinIO)
- Vectors: Qdrant for semantic search
- Security: Vault for secrets, Argon2 passwords, AES encryption
- Identity: Zitadel for authentication and MFA
- LLM: OpenAI API, Anthropic, Groq, or local models
- Scripting: Rhai-powered BASIC interpreter
A Brief History
2017 - Pragmatismo launches General Bots with the No Forms manifesto. The vision: conversational interfaces should replace traditional forms in enterprise software.
2018-2020 - Node.js implementation gains traction. Hundreds of bots deployed across banking, healthcare, education, and government sectors in Brazil and beyond.
2021-2023 - Major enterprises adopt General Bots for customer service automation. The platform handles millions of conversations.
2024 - Complete rewrite in Rust for performance, security, and reliability. Version 6.0 introduces the new architecture with integrated services.
Today - General Bots powers conversational AI for organizations worldwide, staying true to the original vision: people should converse, not fill forms.
What’s Next?
- Chapter 01 - Install and run your first bot
- Chapter 02 - Understanding packages
- Chapter 06 - Writing BASIC dialogs
- Templates - Explore example bots
Community
General Bots is open source (AGPL-3.0) developed by Pragmatismo.com.br and contributors worldwide.
- GitHub: https://github.com/GeneralBots/botserver
- Version: 6.1.0
- Status: Production Ready
Ready to build your bot? Turn to Chapter 01 and let’s go!
Built with ❤️ from Brazil since 2017
Chapter 01: Run and Talk
⚡ In a hurry? Jump straight to Quick Start — you’ll be chatting with your bot in 5 minutes.
Get General Bots running and have your first conversation.
What You’ll Achieve
By the end of this chapter, you will:
- Have General Bots running on your machine
- Understand what happens during bootstrap
- Complete your first conversation with a bot
- Know how sessions and channels work
Choose Your Path
| If You Want To… | Go To |
|---|---|
| Get running immediately | Quick Start |
| Understand the architecture first | Overview |
| See all installation options | Installation |
| Learn about conversations | First Conversation |
| Understand session management | Sessions and Channels |
The 30-Second Version
./botserver
Open http://localhost:8080. Start chatting. That’s it.
Everything installs automatically on first run—PostgreSQL, storage, cache, and your first bot.
How It Works
Topics in This Chapter
Overview
What General Bots does and how it fits together.
Quick Start
The fastest path from zero to running bot.
Installation
Detailed setup options including LXC containers and production deployment.
First Conversation
Understanding how the bot responds and learns.
Sessions and Channels
How conversations are managed across WhatsApp, Web, Telegram, and more.
Coming From Executive Vision?
If you just read the Executive Vision, here’s what to know:
- Everything in that feature table? It’s all included in the single botserver binary
- No configuration needed — Bootstrap detects your system and sets everything up
- Start simple — Run it, chat with it, then customize
The philosophy is: get running first, understand later.
Prerequisites
- Operating System: Linux, macOS, or Windows (WSL2 recommended)
- Disk Space: ~2GB for botserver-stack
- RAM: 4GB minimum, 8GB recommended
- Ports: 8080 (web), 5432 (database), 9000 (storage)
No Docker required. No cloud accounts. No API keys to start.
Next Step
Overview
BotServer is an open-source conversational AI platform built in Rust that enables developers to create, deploy, and manage intelligent bots with minimal configuration. This chapter provides a comprehensive introduction to the platform’s architecture, capabilities, and design philosophy.
Core Philosophy
BotServer was designed around five guiding principles that shape every aspect of the platform. Zero Configuration means the system works out of the box with sensible defaults, eliminating lengthy setup processes. The Package-Based approach ensures bots are self-contained in .gbai folders that can be copied and deployed anywhere. BASIC Scripting provides simple, accessible programming for conversation flows that non-programmers can understand and modify. Multi-Channel support means you deploy once and run everywhere across Web, WhatsApp, Teams, and other platforms. Knowledge-First design provides built-in document management and semantic search as core capabilities rather than afterthoughts.
Architecture Overview
BotServer uses a modular architecture organized into three distinct layers that work together to provide a complete conversational AI platform.
Storage Layer
The storage layer handles all data persistence needs. The SQL database stores structured data including users, sessions, and configurations using PostgreSQL with the Diesel ORM. Object storage provides S3-compatible file storage for documents and uploads, typically using MinIO for self-hosted deployments. The high-performance cache layer handles sessions and frequently accessed data using a Redis-compatible store. An optional vector database enables semantic search capabilities for knowledge bases using Qdrant or similar vector stores.
Application Layer
The application layer contains the core bot functionality. The Bot Engine processes conversations and manages state across interactions. The BASIC Interpreter executes conversation scripts written in the General Bots dialect of BASIC. The Package Manager handles bot deployment, lifecycle management, and hot-reloading of changes. Channel Adapters connect to various messaging platforms, translating between platform-specific formats and the internal message representation.
Service Layer
The service layer provides the infrastructure that supports bot operations. The UI Server handles HTTP API requests and WebSocket connections for real-time chat interfaces. The Scheduler executes cron-based tasks for automation and maintenance. LLM Integration connects to language models whether hosted locally or in the cloud. Authentication integrates with directory services for user management and access control.
Key Features
Conversation Management
BotServer provides comprehensive conversation management capabilities. Sessions persist across interactions, maintaining context and state throughout multi-turn dialogs. The context management system tracks conversation history and user information across interactions. Parallel conversation handling allows a single bot instance to manage thousands of simultaneous conversations efficiently.
Knowledge Base System
The knowledge base system turns your documents into searchable, AI-accessible information. Document ingestion supports PDF, TXT, MD, and DOCX formats with automatic text extraction. The indexing pipeline processes documents into searchable chunks stored in the vector database. Semantic search finds relevant information based on meaning rather than just keyword matching. Context injection automatically provides relevant document excerpts to the LLM when generating responses.
BASIC Scripting Language
The BASIC scripting language makes bot development accessible to everyone. The simple syntax allows non-programmers to read and modify conversation flows. Built-in keywords handle common tasks like sending messages, saving data, and calling APIs. The tool integration system lets you create callable functions that the AI can invoke automatically. Event-driven programming support enables reactive bots that respond to schedules, webhooks, and system events.
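As a hedged sketch of the event-driven side, assuming the cron-style SET SCHEDULE keyword (verify against the keyword reference):
' weekly-digest.bas - illustrative only; keyword and schedule syntax assumed
SET SCHEDULE "0 8 * * MON"
DESCRIPTION "Sends a Monday-morning summary"
TALK "Good morning! Here is last week's activity summary."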
Multi-Channel Support
Deploy your bot once and reach users across multiple channels. The web chat interface provides an embeddable widget for websites. WhatsApp Business API integration enables customer service on the world’s most popular messaging platform. Microsoft Teams support brings your bot into enterprise collaboration spaces. Email integration allows conversational interactions through traditional email. SMS support via providers enables text message interactions for users without data connectivity.
Enterprise Features
BotServer includes capabilities required for enterprise deployments. Multi-tenancy support allows a single installation to serve multiple organizations with complete isolation. Role-based access control restricts actions based on user roles and permissions. Comprehensive audit logging tracks all actions for compliance and debugging. Horizontal scaling distributes load across multiple instances. High availability configurations ensure continuous operation even during failures.
System Requirements
Minimum Requirements
For development and testing purposes, BotServer runs comfortably on modest hardware. You need at least 4GB of RAM to run all components. A single CPU core is sufficient for light workloads. Reserve at least 10GB of disk space for the application, databases, and documents. The platform runs on Linux, macOS, or Windows operating systems.
Recommended for Production
Production deployments benefit from more substantial resources. Plan for 16GB of RAM to handle concurrent users and large knowledge bases. Two or more CPU cores improve response times under load. Use 100GB of SSD storage for better database and file access performance. Linux servers running Ubuntu or Debian provide the most tested and reliable environment. For local LLM hosting, an NVIDIA RTX 3060 or better GPU with at least 12GB of VRAM enables on-premises inference without cloud API dependencies.
Configuration
Bot configuration uses config.csv files with key-value parameters. Server settings like server_host and server_port control where the UI server listens. LLM configuration through llm-url and llm-model specifies which language model to use. Email settings including email-from and email-server enable outbound email functionality. UI customization parameters like theme-color1, theme-color2, theme-title, and theme-logo brand the interface. Conversation settings such as episodic-memory-history and episodic-memory-threshold tune how context is managed. Refer to the config.csv files in bot packages for the complete list of available parameters.
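A minimal config.csv sketch using the parameters above (values are illustrative, not defaults):
name,value
server_host,0.0.0.0
server_port,8080
llm-url,http://localhost:8081
llm-model,llama-3-8b
theme-title,My Company Bot
theme-color1,#0d2b55
episodic-memory-history,50
episodic-memory-threshold,4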
Bot Package Structure
Each bot is a self-contained .gbai folder that includes everything needed for deployment. The structure organizes different aspects of the bot into subfolders with specific naming conventions.
mybot.gbai/
mybot.gbot/
config.csv
mybot.gbdialog/
start.bas
tools/
mybot.gbkb/
documents/
mybot.gbtheme/
styles/
The .gbot subfolder contains configuration files including the main config.csv. The .gbdialog subfolder holds BASIC scripts with start.bas serving as the entry point and additional scripts providing tools. The .gbkb subfolder stores knowledge base documents organized into topical folders. The optional .gbtheme subfolder contains CSS and assets for UI customization.
Deployment Models
Standalone Server
The standalone deployment model runs a single BotServer instance serving multiple bots. This approach provides the simplest setup with shared resources across bots. Standalone deployment works best for small to medium deployments where isolation between bots is not critical.
LXC Containers
Linux containers provide lightweight virtualization for bot isolation. Each bot or group of bots runs in its own container with dedicated resources. LXC deployment offers easy management through standard container tooling while maintaining lower overhead than full virtual machines.
Embedded Mode
Embedded deployment integrates BotServer into existing applications as a library. This mode provides programmatic control over bot behavior and direct integration with application logic. Custom integrations can use the embedded mode to add conversational capabilities to any Rust application.
Getting Started
Installation begins by downloading and running the BotServer binary. The bootstrap process automatically downloads all required components to the botserver-stack/ directory, including database binaries, the object storage server, cache server, LLM runtime, and other dependencies.
Bot deployment uses object storage buckets. Each bot receives its own bucket for file storage. Bots are deployed to the drive rather than the work folder, which is reserved for internal operations as documented in the gbapp chapter.
After startup, access the UI interface at http://localhost:8080 to interact with your bots and monitor their operation.
Use Cases
Customer Support
Customer support bots automate FAQ responses and ticket handling. Load your support documentation, policies, and procedures into knowledge bases. Create tools for ticket creation and status lookup. The result is 24/7 support that handles common questions automatically while escalating complex issues to human agents.
Internal Tools
Employee assistant bots streamline internal operations. Knowledge bases contain HR policies, IT guides, and company information. Tools enable leave requests, equipment orders, and other common workflows. Employees get instant answers and automated processing for routine requests.
Educational Applications
Educational bots provide interactive learning experiences. Course materials and reference documents become searchable knowledge bases. Tools handle quiz administration, progress tracking, and enrollment. Students receive personalized guidance and immediate feedback.
Healthcare Applications
Healthcare bots assist with patient engagement while maintaining compliance. Appointment scheduling, medication reminders, and symptom checking tools automate routine interactions. Knowledge bases contain patient education materials. All interactions maintain audit trails for regulatory compliance.
Security Features
BotServer implements comprehensive security at every layer. Authentication integrates with directory services for centralized user management. SSL/TLS encryption protects all network communications. Session tokens use cryptographically secure generation and validation. Input sanitization prevents injection attacks across all user inputs. SQL injection prevention uses parameterized queries throughout. XSS protection sanitizes output displayed to users. Rate limiting prevents abuse and denial of service attacks. Audit logging records all significant actions for compliance and forensics.
Monitoring and Operations
Health Checks
Health monitoring endpoints report component status for operational awareness. Database connectivity checks verify the storage layer is operational. Storage availability checks ensure object storage is accessible. Cache performance metrics track response times and hit rates.
Metrics
Operational metrics provide visibility into bot performance. Conversation counts show usage patterns over time. Response time measurements identify performance issues. Error rates highlight problems requiring attention. Resource usage tracking helps capacity planning.
Logging
Structured logging facilitates troubleshooting and analysis. Configurable log levels from ERROR through DEBUG control verbosity. Automatic rotation and archival prevent disk exhaustion. Search and filtering tools help locate specific events in large log files.
Extensibility
Channel Adapters
New messaging channels integrate through the adapter system. WebSocket protocols enable real-time bidirectional communication. REST API integration supports request-response style platforms. Custom protocols can be implemented for specialized messaging systems.
Storage Backends
Storage is abstracted to support multiple backend options. S3-compatible storage works with AWS, MinIO, and other providers. Database adapters could support different SQL databases. Cache providers can be swapped while maintaining the same interface.
Community and Support
Documentation
Comprehensive documentation covers all aspects of the platform. The User Guide walks through common tasks and best practices. The API Reference documents all endpoints and parameters. The BASIC Language Reference details every keyword and syntax rule. The Deployment Guide covers production installation and configuration.
Resources
Example bots in the templates/ directory demonstrate common patterns. Test suites verify functionality and provide usage examples. Migration tools help transition from other platforms to General Bots.
Contributing
General Bots is open source under the AGPL (GNU Affero General Public License). The GitHub repository hosts all development activity. Issue tracking manages bug reports and feature requests. Pull requests from the community are welcome and encouraged.
Summary
BotServer provides a complete platform for building conversational AI applications. The combination of simple BASIC scripting, automatic setup, and enterprise features bridges the gap between simple chatbots and complex AI systems. The focus on packages, minimal configuration, and multi-channel support makes BotServer suitable for both rapid prototyping and production deployments serving millions of users.
Quick Start
Installation in 3 Steps
1. Run BotServer
./botserver
That’s it! No configuration needed.
2. Wait for Bootstrap (2-5 minutes)
You’ll see:
BotServer starting...
Bootstrap: Detecting system...
Installing PostgreSQL...
Database created
Schema initialized
Installing Drive...
Object storage ready
Buckets created
Installing Cache...
Cache server running
Creating bots from templates...
default.gbai deployed
announcements.gbai deployed
BotServer ready at http://localhost:8080
3. Open Browser
http://localhost:8080
Start chatting with your bot!
What Just Happened?
Bootstrap Flow
The automatic bootstrap process:
- Detected your OS (Linux/macOS/Windows)
- Downloaded PostgreSQL database to botserver-stack/
- Downloaded drive (S3-compatible storage) to botserver-stack/
- Downloaded cache component to botserver-stack/
- Generated secure credentials
- Created database schema
- Deployed default bots to object storage
- Started UI server on port 8080
Zero manual configuration required!
Using Existing Services
If you already have PostgreSQL or drive storage running, configure them in config.csv of your bot:
name,value
database-url,postgres://myuser:mypass@myhost:5432/mydb
drive-server,http://my-drive:9000
drive-accesskey,my-access-key
drive-secret,my-secret-key
Step 2: Write a Simple Tool
How Tools Work
Tools are just .bas files. Create enrollment.bas:
' Student enrollment tool
PARAM name, email, course
DESCRIPTION "Processes student enrollment"
SAVE "enrollments.csv", name, email, course, NOW()
TALK "Welcome to " + course + ", " + name + "!"
The LLM automatically discovers this tool and knows when to call it!
Step 3: Add Knowledge Base
Drop documents in a .gbkb/ folder:
mybot.gbai/
mybot.gbkb/
docs/
manual.pdf
faq.docx
guide.txt
The bot automatically:
- Indexes documents with vector embeddings
- Answers questions from the content
- Updates when files change
Container Deployment (LXC)
For production isolation, BotServer supports LXC (Linux Containers):
# Create container
lxc-create -n botserver -t download -- -d ubuntu -r jammy -a amd64
# Start and attach
lxc-start -n botserver
lxc-attach -n botserver
# Install BotServer inside container
./botserver
Benefits:
- Process isolation
- Resource control
- Easy management
- Lightweight virtualization
Optional Components
After installation, add more features:
./botserver install email # Email server
./botserver install directory # Identity provider
./botserver install llm # Local LLM server (offline mode)
./botserver install meeting # Video conferencing
Example Bot Structure
mybot.gbai/
mybot.gbdialog/ # Dialog scripts
start.bas # Entry point (required)
get-weather.bas # Tool (auto-discovered)
send-email.bas # Tool (auto-discovered)
mybot.gbkb/ # Knowledge base
docs/ # Document collection
faq/ # FAQ collection
mybot.gbot/ # Configuration
config.csv # Bot parameters
mybot.gbtheme/ # UI theme (optional)
custom.css
Deploy new bots by uploading them to object storage (each bot gets its own bucket), not by copying to the local filesystem. The work/ folder is for internal use only.
Local Development with Auto-Sync
Edit bot files locally and sync automatically to drive storage:
Free S3 Sync Tools:
- Cyberduck - GUI file browser (Windows/Mac/Linux)
- rclone - Command-line sync (All platforms)
- WinSCP - File manager with S3 (Windows)
- S3 Browser - Freeware S3 client (Windows)
Quick Setup with rclone:
# Configure for drive storage
rclone config # Follow prompts for S3-compatible storage
# Auto-sync local edits to the bucket (rclone has no watch mode; run sync on a loop)
while true; do rclone sync ./mybot.gbai drive:mybot; sleep 5; done
Now when you:
- Edit .csv → Bot config reloads automatically
- Edit .bas → Scripts compile automatically
- Add docs to .gbkb/ → Knowledge base updates
How It Really Works
You DON’T write complex dialog flows. Instead:
1. Add Documents
mybot.gbkb/
policies/enrollment-policy.pdf
catalog/courses.pdf
2. Create Tools (Optional)
' enrollment.bas - just define what it does
PARAM name AS string
PARAM course AS string
SAVE "enrollments.csv", name, course
3. Start Chatting!
User: I want to enroll in computer science
Bot: I'll help you enroll! What's your name?
User: John Smith
Bot: [Automatically calls enrollment.bas with collected params]
Welcome to Computer Science, John Smith!
The LLM handles ALL conversation logic automatically!
Configuration (Optional)
Configure per-bot settings in config.csv:
name,value
server_port,8080
llm-url,http://localhost:8081
episodic-memory-threshold,4
theme-color1,#0d2b55
Troubleshooting
Port 8080 in use?
Edit templates/default.gbai/default.gbot/config.csv:
name,value
server_port,3000
Clean install?
# Remove everything and start fresh
rm -rf botserver-stack/
rm .env
./botserver # Will regenerate everything
Check component status
./botserver status tables # PostgreSQL
./botserver status drive # Drive storage
./botserver status cache # Cache component
Documentation
- Full Installation Guide - Detailed bootstrap explanation
- Tool Definition - Creating tools
- BASIC Keywords - Language reference
- Package System - Creating bots
- Architecture - How it works
The Magic Formula
Documents + Tools + LLM = Intelligent Bot
What You DON’T Need:
- IF/THEN logic
- Intent detection
- Dialog flow charts
- State machines
- Complex routing
What You DO:
- Drop documents in .gbkb/
- Create simple .bas tools (optional)
- Start chatting!
The LLM understands context, calls tools, searches documents, and maintains conversation naturally.
Philosophy
- Just Run It - No manual configuration
- Simple Scripts - BASIC-like language anyone can learn
- Automatic Discovery - Tools and KBs auto-detected
- Secure by Default - Credentials auto-generated
- Production Ready - Built for real-world use
Real Example: Education Bot
1. Add course materials:
edu.gbkb/
  courses/computer-science.pdf
  policies/enrollment.pdf
2. Create enrollment tool:
' enrollment.bas
PARAM name AS string
PARAM course AS string
SAVE "enrollments.csv", name, course
3. Just chat:
User: What courses do you offer?
Bot: [Searches PDFs] We offer Computer Science, Data Science...
User: I want to enroll
Bot: [Calls enrollment.bas] Let me help you enroll...
No programming logic needed - the LLM handles everything!
Installation
BotServer installs itself automatically through the bootstrap process. Just run the binary.
System Requirements
| Resource | Minimum | Production |
|---|---|---|
| OS | Linux, macOS, Windows | Linux (Ubuntu/Debian) |
| RAM | 4GB | 16GB+ |
| Disk | 10GB | 100GB SSD |
| CPU | 1 core | 2+ cores |
| GPU | None | RTX 3060+ (12GB VRAM) for local LLM |
Quick Start
./botserver
The bootstrap process automatically:
- Detects your system (OS/architecture)
- Creates the botserver-stack/ directory structure
- Downloads PostgreSQL, Drive, Cache, and the LLM server
- Initializes database and storage
- Deploys default bot
- Starts all services
First run takes 2-5 minutes.
Using Existing Services
If you have existing infrastructure, configure it in your bot’s config.csv:
name,value
database-url,postgres://myuser:mypass@myhost:5432/mydb
drive-server,http://my-drive:9000
drive-accesskey,my-access-key
drive-secret,my-secret-key
Default Ports
| Service | Port | Config Key |
|---|---|---|
| UI Server | 8080 | server_port |
| PostgreSQL | 5432 | DATABASE_URL |
| Drive API | 9000 | DRIVE_SERVER |
| Drive Console | 9001 | - |
| LLM Server | 8081 | llm-server-port |
| Embedding | 8082 | embedding-url |
| Cache | 6379 | Internal |
Verify Installation
# Check services
./botserver status
# Test database
psql $DATABASE_URL -c "SELECT version();"
# Test LLM
curl http://localhost:8081/v1/models
# Open UI
open http://localhost:8080
Bot Deployment
Bots deploy to object storage (not local filesystem):
mybot.gbai → creates 'mybot' bucket in drive
The work/ folder is for internal use only.
S3 Sync for Development
Use S3-compatible tools for local editing:
- Cyberduck (GUI)
- rclone (CLI)
- WinSCP (Windows)
# rclone sync example (rclone has no watch mode; run sync on a loop)
while true; do rclone sync ./mybot.gbai drive:mybot; sleep 5; done
Edits sync automatically - changes reload without restart.
Memory Optimization
For limited RAM systems:
name,value
llm-server-ctx-size,2048
llm-server-parallel,2
Use quantized models (Q3_K_M, Q4_K_M) for smaller memory footprint.
GPU Setup
For GPU acceleration:
name,value
llm-server-gpu-layers,35
Requires CUDA installed and 12GB+ VRAM.
Deployment Options
| Method | Use Case | Guide |
|---|---|---|
| Local | Development, single instance | This page |
| Docker | Production, microservices | Docker Deployment |
| LXC | Isolated components, Linux | Container Deployment |
Troubleshooting
| Issue | Solution |
|---|---|
| Database connection | Check DATABASE_URL, verify PostgreSQL running |
| Port conflict | Change port in config or stop conflicting service |
| Memory issues | Reduce llm-server-ctx-size, use quantized model |
| GPU not detected | Verify CUDA, set llm-server-gpu-layers,0 for CPU |
Next Steps
- Quick Start Guide - Create your first bot
- First Conversation - Test your bot
- Configuration Reference - All settings
First Conversation
After BotServer starts, you can immediately start chatting with your bot. No programming required!
Just Start Talking
Open your browser to http://localhost:8080 and start chatting:
You: Hi!
Bot: Hello! How can I help you today?
You: I want to enroll in a course
Bot: I'll help you with enrollment. Let me collect your information...
[Bot automatically calls enrollment.bas tool]
You: What documents do you have?
Bot: [Searches .gbkb/ folders and answers from your documents]
That’s it! The LLM handles everything automatically.
How It Works
Drop Documents in .gbkb/
mybot.gbai/
mybot.gbkb/
policies/
enrollment-policy.pdf
course-catalog.pdf
faqs/
student-faq.docx
payment-guide.txt
The bot automatically indexes all documents, creates vector embeddings, searches when users ask questions, and provides accurate answers from your content.
Create Tools as .bas Files
Create enrollment.bas:
PARAM name AS string LIKE "John Smith" DESCRIPTION "Student full name"
PARAM email AS string LIKE "john@example.com" DESCRIPTION "Contact email"
PARAM course AS string LIKE "Computer Science" DESCRIPTION "Course to enroll in"
DESCRIPTION "Enrollment tool - collects student information and processes enrollment"
SAVE "enrollments.csv", name, email, course, NOW()
TALK "Enrollment complete! Welcome to " + course + ", " + name
The LLM automatically discovers this tool, knows when to call it based on user intent, collects required parameters through natural conversation, and executes the tool when all information is gathered.
The LLM Does Everything Else
You don’t need to write IF/THEN logic, intent detection, dialog flows, state machines, or complex routing. The LLM automatically understands user intent, calls appropriate tools, searches knowledge bases, maintains conversation context, and handles follow-up questions naturally.
Real Example Conversation
With enrollment.bas tool and course documents in .gbkb/:
User: Hello, I'd like to know about your computer science program
Bot: I'd be happy to help you learn about our Computer Science program!
Based on our course catalog, the program offers a comprehensive
4-year curriculum covering software development, algorithms,
databases, and AI. Would you like to know about specific courses
or the enrollment process?
User: Yes, I want to enroll
Bot: Great! I'll help you enroll in the Computer Science program.
Let me collect your information. What's your full name?
User: John Smith
Bot: Thank you, John. What email address should we use for contact?
User: john.smith@gmail.com
Bot: Perfect! I'm enrolling you in Computer Science now...
[Executes enrollment.bas tool]
Enrollment complete! Welcome to Computer Science, John Smith!
You'll receive a confirmation at john.smith@gmail.com.
User: What are the payment options?
Bot: [Searches payment-guide.txt from .gbkb/]
According to our payment guide, we offer three options:
1. Full payment with 10% discount
2. Monthly installments over 12 months
3. Semester-based payments
Which option would you prefer?
The Magic Formula
Documents in .gbkb/ combined with tools in .bas files and the LLM create an intelligent bot. No complex programming needed!
Creating Your First Bot
Start by creating the directory structure for your bot with folders for dialog scripts, knowledge base documents, and bot configuration. Add your documents to the .gbkb/ directory including PDFs, Word documents, text files, and Markdown files. Optionally create tools as .bas files to handle specific actions like processing forms or calling APIs. Then restart BotServer and start chatting. The LLM will answer questions from your documents, call your tools when appropriate, and handle the entire conversation naturally.
mkdir -p mybot.gbai/mybot.gbdialog
mkdir -p mybot.gbai/mybot.gbkb/docs
mkdir -p mybot.gbai/mybot.gbot
Example tool in mybot.gbdialog/my-tool.bas:
PARAM user_name AS string
PARAM request AS string
DESCRIPTION "Handles user requests"
result = CALL "/api/process", user_name, request
TALK "Done! " + result
No Programming Required
Traditional chatbots require complex logic with IF/THEN statements, intent detection, and multi-step dialog management. With BotServer, you simply create the tool with parameters and a description, and the LLM handles all the conversation logic automatically.
Traditional approach (don’t do this):
' Complex multi-step dialog
IF intent = "enrollment" THEN
TALK "Let me help you enroll. What's your name?"
HEAR name
TALK "What's your email?"
HEAR email
' ... lots more code ...
ENDIF
BotServer approach (just create the tool):
' In enrollment.bas - becomes a tool automatically
PARAM name AS string
PARAM email AS string
DESCRIPTION "Collects enrollment information"
SAVE "enrollments.csv", name, email
TALK "Successfully enrolled " + name
What Can You Build?
A customer support bot uses product manuals in .gbkb/ and a create-ticket.bas tool, allowing the LLM to answer questions and create support tickets automatically.
An HR assistant combines the employee handbook in .gbkb/ with a leave-request.bas tool so the LLM can explain policies and process leave requests.
An education platform stores course materials in .gbkb/ and provides enrollment.bas and submit-assignment.bas tools, enabling the LLM to teach content and manage student tasks.
A sales assistant uses product catalogs in .gbkb/ with a create-quote.bas tool, allowing the LLM to answer product questions and generate quotes.
Advanced Features
Dynamic Tool Loading
The LLM can load tools based on context. In start.bas, you simply specify which knowledge bases to use, and tools in .gbdialog/ are auto-discovered. The LLM handles the conversation naturally without explicit HEAR statements.
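A minimal start.bas sketch under these conventions (the knowledge base name is illustrative):
' start.bas - scope this session to a knowledge base;
' tools in .gbdialog/ are discovered automatically
USE KB "policies"
TALK "Hi! Ask me about enrollment or our policies."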
Multi-Language Support
The LLM handles multiple languages automatically. Users can write in Portuguese, Chinese, or any other language, and the bot responds appropriately in the same language.
Context Awareness
The LLM maintains conversation context throughout the interaction. If a user starts to enroll but then asks about prerequisites, the bot handles the tangent and can return to the enrollment process afterward.
Tips for Success
Organize documents clearly by creating folders for policies, products, FAQs, and tutorials within your .gbkb/ directory. This helps the LLM find relevant information quickly.
Name tools descriptively with names like enrollment.bas, create-ticket.bas, and schedule-meeting.bas. The LLM understands what each tool does from its name and description.
Always add descriptions to tools using the DESCRIPTION keyword. A good description like “This tool processes student enrollment for courses” helps the LLM know when to use the tool.
Let the LLM work without trying to control every aspect of the conversation. Allow it to rephrase responses naturally, handle unexpected questions, and maintain conversation flow on its own.
Next Steps
The Quick Start guide walks you through building your first bot. The Packages chapter explains the package structure in detail. The Tool Definition documentation covers creating sophisticated tools. The Knowledge Base chapter describes document management and indexing.
Remember: Just add documents and tools, and the LLM does the rest!
Sessions and Channels
Every conversation has memory. Sessions are the beating heart of BotServer because they remember who you are, what you have said, and where you left off. Even if you close your browser and come back tomorrow, your conversation continues right where it paused.
What Is a Session?
A session is a persistent conversation container that tracks everything about an ongoing interaction. This includes who is talking through user identity, what has been said through message history, the current state including variables and context, any active tools and knowledge bases, and the bot configuration in use. Think of it like a phone call that can pause and resume anytime without losing the thread of conversation.
How Sessions Start
UI Interface
When a user opens http://localhost:8080, the browser receives a session token in the form of a UUID. This token is stored in localStorage for persistence across page loads. The session itself is created in PostgreSQL for durability and cached for fast access during active conversations.
API Access
Programmatic access to sessions uses the REST API. A POST request to /api/session returns a session ID and secret token. Subsequent requests include the token in the Authorization header as a Bearer token to maintain the session context.
# Get new session
curl -X POST http://localhost:8080/api/session
# Returns: {"session_id": "uuid-here", "token": "secret-token"}
# Use session
curl -H "Authorization: Bearer secret-token" \
http://localhost:8080/api/chat
Anonymous vs Authenticated
Sessions come in two flavors depending on user identity. Anonymous sessions are auto-created with temporary identities for users who have not logged in. Authenticated sessions link to a user account and maintain permanent history that persists indefinitely.
Session Lifecycle
Sessions move through several states during their existence. Active sessions have no timeout while the user is actively chatting. Idle sessions timeout after 30 minutes by default, though this is configurable. Expired sessions are removed after 7 days for anonymous users, while authenticated sessions never expire automatically.
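These timeouts are adjusted in config.csv; the key name below is a hypothetical placeholder, so check the configuration reference for the exact name:
# hypothetical key name - verify against your config.csv reference
name,value
session-idle-timeout-minutes,60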
What Gets Stored
PostgreSQL (Permanent Storage)
The database stores the authoritative session record. The sessions table tracks the unique ID, optional user reference, which bot is being used, creation timestamp, and last activity time. The messages table stores each message with its session reference, role (user, assistant, or system), content, and timestamp. The session_state table holds variables as JSONB data and tracks the current knowledge base and tool context.
Cache (Fast Access)
The cache layer provides rapid access to active session data. Recent messages, current variables, active knowledge bases and tools, and last activity timestamps are all cached under keys prefixed with the session UUID. This caching ensures responsive conversations without constant database queries.
Session Variables
Variables set in BASIC scripts persist across messages automatically. When you store a variable in one message, you can retrieve it in a later message whether that is minutes or days later.
' First message
name = HEAR
SET user_name = name
' Later message (minutes or days later)
GET user_name
TALK "Welcome back, " + user_name
Storage happens automatically through several layers. Writes go to cache immediately for fast access. Every message triggers persistence to PostgreSQL for durability. If the cache misses, data restores automatically from the database.
Context Management
Each session maintains its own isolated context. When one session loads a knowledge base, other sessions remain unaffected. This isolation ensures users see only the information relevant to their conversation.
' Session A
USE KB "policies"
' Only this session sees policies
' Session B (different user)
USE KB "products"
' This session only sees products
Session contexts include active knowledge bases, loaded tools, LLM configuration overrides, and custom prompts. All of these are scoped to the individual session and do not leak between users.
Multi-Bot Sessions
Different bots create entirely separate sessions. A user visiting /default gets one session connected to the default bot, while visiting /support creates a different session for the support bot. Each bot session is completely independent with its own conversation history, knowledge bases, configuration, and no data sharing between them.
Session Security
Token Generation
Session tokens use cryptographically secure random generation with 256-bit entropy. Tokens are encoded in URL-safe base64 format and are unique per session. This makes tokens effectively impossible to guess or predict.
Token Validation
Every request undergoes validation to ensure security. The system verifies that the token exists, has not expired, matches the claimed session, and that the session is still active. Any failure in this chain rejects the request.
Security Features
Multiple security measures protect sessions. Unguessable tokens prevent session hijacking. New tokens for each session prevent session fixation attacks. Automatic cleanup removes old sessions to prevent accumulation. Rate limiting per session prevents abuse.
Debugging Sessions
View Current Session
Within a BASIC script, you can access session information directly.
session_id = GET "session.id"
TALK "Session: " + session_id
Database Inspection
Direct database queries help debug session issues. You can find all active sessions by querying for recent activity, or view message history for a specific session ordered by timestamp.
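A sketch of both queries, assuming the sessions and messages tables described above (column names are assumptions and may differ in your schema):
# recently active sessions
psql $DATABASE_URL -c "SELECT id, bot_id, last_activity FROM sessions ORDER BY last_activity DESC LIMIT 10;"
# one session's message history
psql $DATABASE_URL -c "SELECT role, content, created_at FROM messages WHERE session_id = '<uuid>' ORDER BY created_at;"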
Cache Inspection
The cache contents can be examined using the valkey-cli tool. List all session keys or retrieve specific session data like variables or context directly from the cache.
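For example, assuming keys are prefixed with the session UUID as described above (the exact key layout is an assumption):
# list keys for one session
valkey-cli --scan --pattern '<session-uuid>*'
# inspect a specific cached entry
valkey-cli GET '<session-uuid>:variables'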
Session Limits
Default limits control resource usage, though all are configurable. Message history keeps the last 50 messages in context. Variable storage allows up to 1MB per session. File uploads accept up to 10MB per file. Each server handles up to 1000 concurrent sessions. Rate limiting restricts each session to 60 messages per minute.
Advanced Features
Session Persistence
Sessions persist across server restarts through the cache and database layers. When users reconnect after a restart, their session state restores automatically. This happens transparently without any action required from users or bot developers.
Session Context Isolation
Each session maintains its own context for knowledge base and tool usage. When you load a knowledge base or enable a tool, the change affects only the current session. Other users in other sessions remain unaffected by your context changes.
' Each session has isolated context
USE KB "docs"
' Only affects current session
How It Works Automatically
Sessions require zero configuration from bot developers. Creation happens automatically on the first request from any client. Storage to database and cache happens automatically as conversations progress. Cleanup runs automatically after sessions expire. Security through token generation happens automatically without any setup. Multi-channel support through automatic adapter selection means the same session infrastructure works across all platforms.
You never need to manage sessions directly. Just use the conversation keywords like TALK, HEAR, SET, and GET. Everything else happens behind the scenes.
Common Patterns
Welcome Back
Personalize greetings by remembering when users last visited. Store the last visit timestamp and check for it on subsequent sessions to customize the welcome message.
last_visit = GET BOT MEMORY "last_visit_" + session_id
IF last_visit THEN
TALK "Welcome back! Last seen: " + last_visit
ELSE
TALK "Welcome! First time here?"
END IF
SET BOT MEMORY "last_visit_" + session_id, NOW()
Progressive Disclosure
Reveal more features as users become more engaged by tracking message count and adjusting guidance accordingly.
msg_count = GET "session.message_count"
IF msg_count < 3 THEN
TALK "I can help with basic questions"
ELSE IF msg_count < 10 THEN
TALK "Try our advanced features!"
ELSE
TALK "You're a power user! Check tools menu"
END IF
Multi-User Support
Each user automatically receives their own isolated session. The system handles user separation without any explicit code required. Simply write your dialog logic and trust that each user’s data remains private to their session.
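As a minimal illustration, the following dialog runs independently for every connected user; the name captured by HEAR stays private to each session:
' Each user answers in their own session - values never leak across sessions
TALK "What's your name?"
HEAR name
TALK "Hello, " + name + "! Only your session sees this value."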
Troubleshooting
If sessions are not persisting, check that PostgreSQL is running and accessible. Verify that the cache server is reachable. Look for disk space issues that might prevent database writes.
If sessions expire too soon, adjust the timeout setting in config.csv. Check that server clocks are synchronized. Monitor for memory pressure that might cause early cache eviction.
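For example, to extend sessions to one hour, raise the session-timeout value in your bot's config.csv (a sketch; the earlier example's value of 1800 suggests the unit is seconds):
name,value
session-timeout,3600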
If you cannot resume a session, the token might have become invalid through expiration or corruption. The session could have passed its expiration window. Database connection issues can also prevent session restoration.
Write Once, Run Everywhere
The same BASIC script runs across all channels including the UI interface, mobile apps, WhatsApp, Microsoft Teams, email conversations, and voice assistants. Your investment in dialog development pays off everywhere because each channel adapter handles the platform specifics while you focus on conversation logic.
' This same script works everywhere
TALK "Hello! How can I help?"
answer = HEAR
TALK "I understand you need help with: " + answer
Summary
Sessions and channels work together seamlessly in BotServer. Sessions handle state management automatically across any channel, persist data reliably through cache and database layers, and scale efficiently to thousands of concurrent conversations. You focus on writing the conversation flow while the system handles memory management and multi-channel delivery transparently.
Chapter 02: About Packages
How bots are organized in General Bots.
What You’ll Learn
- Package structure (.gbai folders)
- Dialog scripts in BASIC
- Knowledge bases
- Configuration basics
- How packages load
Package Structure
A bot is just a folder ending in .gbai:
my-bot.gbai/
├── my-bot.gbdialog/ # BASIC scripts
├── my-bot.gbkb/ # Documents
├── my-bot.gbot/ # Configuration
├── my-bot.gbtheme/ # Optional styling
└── my-bot.gbdrive/ # Optional storage
Drop the folder into templates/ and it loads automatically.
Key Concepts
Dialogs (.gbdialog)
- BASIC scripts that control conversation
- start.bas is optional (but needed to activate tools/KB with USE TOOL/USE KB)
- Simple commands like TALK and HEAR
Knowledge Base (.gbkb)
- Put PDFs and documents in folders
- Automatically becomes searchable
- Bot can answer questions from documents
Configuration (.gbot)
- Single config.csv file
- Simple name,value pairs
- Missing values use defaults
Themes (.gbtheme)
- Optional CSS styling
- Most bots don’t need this
Storage (.gbdrive)
- Links to S3-compatible storage
- For large files and uploads
How It Works
- Discovery: Finds .gbai folders
- Loading: Reads all components
- Indexing: Processes documents
- Activation: Bot is ready
No build process. No compilation. Just folders and files.
The web UI uses HTMX with server-side rendering - minimal JavaScript, no build process, just HTML templates powered by Rust.
Topics Covered
- .gbai Architecture - Package details
- .gbdialog Dialogs - BASIC scripting
- .gbkb Knowledge Base - Document management
- .gbot Configuration - Settings
- .gbtheme UI Theming - Styling
- .gbdrive File Storage - Storage integration
- Bot Templates - Example bots
.gbai Architecture
A bot is just a folder. The .gbai extension marks a directory as a BotServer package containing everything needed to run a conversational AI bot - scripts, documents, configuration, and themes.
The Dead Simple Structure
my-bot.gbai/ # This folder = your entire bot
my-bot.gbdialog/ # BASIC conversation scripts
my-bot.gbkb/ # Documents for Q&A
my-bot.gbot/ # Configuration
my-bot.gbtheme/ # Optional UI customization
That’s it. No manifests, no build files, no dependencies. Copy the folder to deploy.
How Bootstrap Finds Bots
At startup, BotServer scans templates/ for any folder ending in .gbai:
templates/
default.gbai/ → Creates bot at /default
support.gbai/ → Creates bot at /support
sales.gbai/ → Creates bot at /sales
Each .gbai becomes a URL endpoint automatically. Zero configuration.
What Goes Where
.gbdialog/ - Your Bot’s Brain
BASIC scripts that control conversation flow:
my-bot.gbdialog/
start.bas # Optional - needed to activate tools/KB
auth.bas # Login flow
tools/ # Callable functions
book-meeting.bas
check-status.bas
handlers/ # Event responses
on-email.bas
Example start.bas (optional, but required for tools/KB):
USE KB "policies"
USE TOOL "book-meeting"
USE TOOL "check-status"
TALK "Hi! I'm your assistant with tools and knowledge ready."
Note: If you don’t need tools or knowledge bases, start.bas is optional. The LLM will handle basic conversations without it.
.gbkb/ - Your Bot’s Knowledge
Documents organized by topic:
my-bot.gbkb/
policies/ # HR documents
vacation.pdf
handbook.docx
products/ # Product info
catalog.pdf
pricing.xlsx
support/ # Help docs
faq.md
Each folder becomes a searchable collection. Drop files in, bot learns automatically.
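A dialog script then activates any of these collections by folder name:
' Collection names match the .gbkb subfolders above
USE KB "policies"
USE KB "products"
TALK "Ask me about our policies or products."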
.gbot/ - Your Bot’s Settings
Single config.csv file with key-value pairs:
llm-model,your-preferred-model
temperature,0.7
max-tokens,2000
welcome-message,Hello! How can I help?
session-timeout,1800
No complex JSON or YAML. Just simple CSV that opens in Excel.
.gbtheme/ - Your Bot’s Look (Optional)
Custom web interface styling:
my-bot.gbtheme/
styles.css # Custom CSS
logo.png # Brand assets
templates/ # HTML overrides
chat.html
If missing, uses default theme. Most bots don’t need this.
Real Example: Support Bot
Here’s a complete customer support bot:
support.gbai/
support.gbdialog/
start.bas # Optional, but needed for tools/KB
tools/
create-ticket.bas
check-status.bas
support.gbkb/
faqs/
common-questions.pdf
guides/
troubleshooting.docx
support.gbot/
config.csv
start.bas (activates tools and knowledge bases):
USE KB "faqs"
USE KB "guides"
USE TOOL "create-ticket"
USE TOOL "check-status"
TALK "Support bot ready. How can I help?"
create-ticket.bas:
PARAM issue, priority
DESCRIPTION "Creates support ticket"
ticket_id = GENERATE_ID()
SAVE "tickets.csv", ticket_id, issue, priority, NOW()
TALK "Ticket #" + ticket_id + " created"
config.csv:
llm-model,your-preferred-model
bot-name,TechSupport
greeting,Welcome to support!
Deployment = Copy Folder
Local Development
cp -r my-bot.gbai/ templates/
./botserver restart
# Visit http://localhost:8080/my-bot
Production Server
scp -r my-bot.gbai/ server:~/botserver/templates/
ssh server "cd botserver && ./botserver restart"
Deployment
LXC Container
lxc file push my-bot.gbai/ container/app/templates/
No build step. No compilation. Just copy files.
Multi-Bot Hosting
One BotServer runs multiple bots:
templates/
support.gbai/ # support.example.com
sales.gbai/ # sales.example.com
internal.gbai/ # internal.example.com
public.gbai/ # www.example.com
Each bot:
- Gets own URL endpoint
- Has isolated sessions
- Runs independently
- Shares infrastructure
Naming Conventions
Required
- Folder must end with .gbai
- Subfolders must match: botname.gbdialog, botname.gbkb, etc.
- start.bas is optional, but required if you want to use tools or knowledge bases (must USE TOOL and USE KB to activate them)
Recommended
- Use lowercase with hyphens: customer-service.gbai
- Group related bots: support-tier1.gbai, support-tier2.gbai
- Version in folder name if needed: chatbot-v2.gbai
Bootstrap Process
When BotServer starts, it discovers .gbai folders in templates/, loads each package's components, indexes knowledge base documents, and activates the bot at its URL endpoint. The whole process takes about 5-10 seconds per bot.
UI Architecture
The web interface uses HTMX with server-side rendering - minimal client-side code:
- Askama templates for HTML generation
- HTMX for dynamic updates without JavaScript
- No webpack, no npm build
- Edit and refresh to see changes
- Zero compilation time
Package Size Limits
Default limits (configurable in config.csv):
| Setting | Default | config.csv key |
|---|---|---|
| Total package | 100MB | package-max-size |
| Single document | 10MB | user-file-limit |
| Number of files | 1000 | user-file-count |
| Script size | 1MB | script-max-size |
| Collection count | 50 | kb-max-collections |
Example override in your bot’s config.csv:
name,value
package-max-size,209715200
user-file-limit,52428800
Troubleshooting
Bot not appearing?
- Check folder ends with .gbai
- Verify subfolders match bot name
- If using tools/KB, ensure start.bas exists with USE TOOL/USE KB commands
Documents not searchable?
- Ensure files are in a .gbkb/ subfolder
- Check file format is supported
- Wait 30 seconds for indexing
Scripts not running?
- Validate BASIC syntax
- Check file has .bas extension
- Review logs for errors
Best Practices
Do’s
- Keep packages under 50MB
- Organize knowledge by topic
- Use clear folder names
- Test locally first
Don’ts
- Don’t nest .gbai folders
- Don’t mix test/prod in same folder
- Don’t hardcode absolute paths
- Don’t store secrets in scripts
Summary
The .gbai architecture keeps bot development simple. No complex frameworks, no build systems, no deployment pipelines. Just organize your files in folders, and BotServer handles the rest. Focus on content and conversation, not configuration.
Next: Learn about .gbdialog Dialogs for writing conversation scripts.
.gbdialog Dialogs
The .gbdialog package contains BASIC scripts that define conversation flows, tool integrations, and bot behavior.
What is .gbdialog?
.gbdialog files are written in a specialized BASIC dialect that controls:
- Tool execution and integrations
- LLM prompting and context
- Knowledge base activation
- Session and memory management
- External API calls
Modern Approach: Let the LLM Work
Minimal BASIC Philosophy
Instead of complex logic, use the LLM’s natural understanding:
' Example from announcements.gbai/update-summary.bas
' Generate summaries from documents
text = GET "announcements.gbkb/news/news.pdf"
resume = LLM "In a few words, summarize this: " + text
SET BOT MEMORY "resume", resume
' Example from law.gbai/case.bas
' Load context and let LLM answer questions
text = GET "case-" + cod + ".pdf"
text = "Based on this document, answer the person's questions:\n\n" + text
SET CONTEXT text
TALK "Case loaded. You can ask me anything about the case."
Key Components
1. LLM Integration
' LLM is for background processing only - generates content once for all users
' Example: Generate a summary that all users will see
text = GET "document.pdf"
summary = LLM "Summarize this document: " + text
SET BOT MEMORY "daily_summary", summary
' For interactive conversations, use SET CONTEXT and TALK
SET CONTEXT "user_type" AS "premium customer"
TALK "How can I help you today?"
2. Tool Execution
' Define tools with parameters
PARAM name AS string LIKE "John Smith" DESCRIPTION "Customer name"
PARAM email AS string LIKE "john@example.com" DESCRIPTION "Email"
' LLM automatically knows when to call this
SAVE "customers.csv", name, email
TALK "Registration complete!"
3. Knowledge Base Usage
See Knowledge Base documentation for details.
' Activate knowledge base collections
USE KB "products"
USE KB "policies"
' The system AI searches these automatically during conversations
' No LLM command needed - just TALK to the user
TALK "What product information can I help you with?"
Script Structure
Entry Point: start.bas (Optional)
The start.bas file in the .gbdialog folder is optional, but required if you want to activate tools or knowledge bases:
' Optional start script - needed only to activate tools/KB
USE KB "company_docs"
USE TOOL "book-meeting"
USE TOOL "check-status"
TALK "Welcome! How can I assist you today?"
When you need start.bas:
- To activate knowledge bases with USE KB
- To activate tools with USE TOOL
- To set initial context or configuration
When you don’t need start.bas:
- For simple conversational bots
- When the LLM can handle everything without tools/KB
- For basic Q&A without document search
Tool Definitions
Create separate .bas files for each tool. See KB and Tools for more information:
' enrollment.bas - The LLM knows when to use this
PARAM student_name AS string
PARAM course AS string
DESCRIPTION "Enrolls a student in a course"
SAVE "enrollments.csv", student_name, course, NOW()
TALK "Enrolled successfully!"
Best Practices
1. Minimal Logic
' Good - Let system AI handle the conversation naturally
TALK "How can I help you?"
' System AI understands context and responds appropriately
' Avoid - Don't micromanage the flow
' IF user_says_this THEN do_that...
2. Clear Tool Descriptions
DESCRIPTION "This tool books appointments for customers"
' The LLM uses this description to know when to call the tool
3. Context Over Conditions
' Provide context, not rules
SET CONTEXT "business_hours" AS "9AM-5PM weekdays"
TALK "When would you like to schedule?"
' System AI naturally understands to mention hours when relevant
4. Trust the System AI
' The system AI handles conversations naturally
TALK "Hello! I'm here to help."
' System handles greetings, questions, complaints naturally
Common Patterns
Document Summarization - Background Processing (from announcements.gbai)
' Schedule automatic updates - runs in background
SET SCHEDULE "59 * * * *"
' Fetch and summarize documents ONCE for all users
text = GET "announcements.gbkb/news/news.pdf"
resume = LLM "In a few words, summarize this: " + text
SET BOT MEMORY "resume", resume ' Stored for all users
Interactive Case Analysis - User Conversations (from law.gbai)
' Ask for case number - interactive with user
TALK "What is the case number?"
HEAR cod
' Load case document
text = GET "case-" + cod + ".pdf"
IF text THEN
' Set context for system AI to use in conversation
text = "Based on this document, answer the person's questions:\n\n" + text
SET CONTEXT text
TALK "Case loaded. Ask me anything about it."
ELSE
TALK "Case not found, please try again."
END IF
Tool Definition Pattern
' Tool parameters (auto-discovered by LLM)
PARAM name AS string
PARAM email AS string
DESCRIPTION "Enrollment tool"
' Tool logic (called when LLM decides)
SAVE "enrollments.csv", name, email
TALK "Successfully enrolled " + name
Multi-Collection Search
USE KB "products"
USE KB "reviews"
USE KB "specifications"
' System AI searches these collections automatically during conversation
TALK "What would you like to know about our products?"
Advanced Features
Memory Management
See Storage documentation for persistent data options.
SET BOT MEMORY "company_policy", policy_text
' Available across all sessions
retrieved = GET BOT MEMORY "company_policy"
External APIs
See External APIs chapter for integration patterns.
result = GET "https://api.example.com/data"
' For background processing only
summary = LLM "Summarize this data: " + result
SET BOT MEMORY "api_summary", summary
Suggestions
See UI Interface for UI integration.
ADD SUGGESTION "Schedule Meeting" AS "schedule"
ADD SUGGESTION "View Products" AS "products"
' UI shows these as quick actions
Error Handling
The system handles errors gracefully:
- Syntax errors caught at compile time
- Runtime errors logged but don’t crash
- LLM provides fallback responses
- Timeouts prevent infinite operations
Script Execution
Scripts run in a sandboxed environment with:
- Access to session state
- LLM generation capabilities
- Knowledge base search
- Tool execution rights
- External API access (configured)
Migration from Traditional Bots
Old Way (Complex Logic)
' DON'T DO THIS - 1990s style
' IF INSTR(user_input, "order") > 0 THEN
' IF INSTR(user_input, "status") > 0 THEN
' TALK "Checking order status..."
' ELSE IF INSTR(user_input, "new") > 0 THEN
' TALK "Creating new order..."
' END IF
' END IF
New Way (System AI Intelligence)
' DO THIS - Let system AI handle conversation naturally
TALK "How can I help you with your order?"
' System AI understands context and intent automatically
The key is to trust the system AI and write less code for more intelligent behavior.
Important Distinction
- LLM Command: For background/batch processing, generates content ONCE, stored in BOT MEMORY for all users
- Interactive Conversations: Use HEAR/TALK/SET CONTEXT, and the system AI handles the natural conversation flow (see the sketch below)
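A compact sketch contrasting the two modes (the file path, prompt text, and memory key are illustrative):
' Background: generate once, store for all users
text = GET "report.pdf"
summary = LLM "Summarize this document: " + text
SET BOT MEMORY "daily_summary", summary
' Interactive: per-user conversation handled by the system AI
SET CONTEXT "Answer questions using the daily report."
TALK "Hi! Ask me about today's report."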
See Also
- Chapter 1: Quick Start - Getting started with your first bot
- Chapter 2: Bot Architecture - Understanding all components
- Chapter 3: Knowledge Base - Working with KB collections
- Chapter 5: Keywords Reference - Complete BASIC command reference
- Chapter 9: Conversation Flow - Advanced dialog patterns
.gbkb Knowledge Base
The .gbkb package contains your bot’s domain knowledge - documents that the AI uses to answer questions accurately about your specific organization, products, or services.
What It Does
When you place documents in a .gbkb folder, the system automatically:
- Extracts text from your files (PDF, DOCX, TXT, MD, HTML, CSV)
- Creates searchable indexes using vector embeddings
- Enables semantic search so users can ask questions naturally
This means your bot answers based on YOUR documents, not just general AI knowledge.
Folder Structure
mybot.gbai/
└── mybot.gbkb/
├── policies/ ← Collection: "policies"
│ ├── vacation.pdf
│ └── handbook.docx
├── products/ ← Collection: "products"
│ ├── catalog.pdf
│ └── specs.xlsx
└── support/ ← Collection: "support"
└── faq.md
Each subfolder becomes a collection you can activate independently.
Using in BASIC Scripts
' Activate collections for this conversation
USE KB "policies"
USE KB "products"
' Now the AI automatically searches these when answering
TALK "How can I help you today?"
' Later, clear when done
CLEAR KB "policies"
Supported File Types
| Format | Extensions |
|---|---|
| PDF | .pdf |
| Word | .docx, .doc |
| Text | .txt, .md |
| Web | .html |
| Data | .csv, .json |
Key Points
- Automatic indexing - Just drop files in folders
- Semantic search - Users don’t need exact keywords
- Multiple collections - Organize by topic, activate as needed
- No code required - The AI handles search automatically
Learn More
- Chapter 03: Knowledge Base System - Technical deep-dive on indexing, vectors, and search
- USE KB Keyword - Complete keyword reference
- CLEAR KB Keyword - Managing active collections
.gbot Bot Configuration
The .gbot package serves as the configuration center for your bot, containing the settings that define how the bot behaves, which AI models it uses, and how it interacts with users. This chapter explains the configuration system and guides you through the available options.
Understanding Bot Configuration
Every bot in General Bots requires configuration to operate effectively. The .gbot folder within your bot package holds these settings, primarily through a config.csv file that uses simple key-value pairs. This approach makes configuration accessible to anyone comfortable with spreadsheet applications while remaining powerful enough for complex deployments.
The configuration system influences several aspects of bot behavior. Bot identity settings control how the bot presents itself to users. LLM configuration determines which language model powers the bot’s intelligence and how it generates responses. Context management settings affect how the bot maintains conversation history and retrieves relevant information. Integration parameters connect the bot to external services and APIs.
The config.csv File
Configuration lives in a straightforward CSV format with two columns: key and value. This design choice prioritizes accessibility—you can edit configuration in any text editor or spreadsheet application without learning complex syntax. Each row represents a single setting, making it easy to scan and modify.
The file supports various data types implicitly. Text values are stored as-is, numbers are parsed when needed, and boolean values typically use “true” and “false” strings. The system handles type conversion automatically when reading configuration, so you rarely need to worry about explicit typing.
Bot Identity Configuration
Identity settings establish how your bot presents itself during conversations. The bot_name parameter provides the display name users see when interacting with the bot. A descriptive bot_description helps users understand the bot’s purpose and capabilities. Version tracking through the version parameter supports deployment management and debugging.
These identity settings matter because they shape user expectations. A bot named “Legal Document Assistant” with an appropriate description sets different expectations than a generic “Helper Bot.” Clear identity configuration improves user experience by establishing context before conversations begin.
Language Model Settings
LLM configuration represents perhaps the most important settings in your bot. The llm_provider parameter specifies which AI service powers your bot, supporting options like OpenAI, Azure OpenAI, or local model servers. The llm_model parameter identifies the specific model to use, such as GPT-5, Claude Sonnet 4.5, or a local GGUF model.
Response characteristics are controlled through several parameters. The temperature setting affects response creativity, with lower values producing more focused and deterministic outputs while higher values allow more varied and creative responses. The max_tokens parameter limits response length, preventing runaway generation and managing costs for cloud-based providers.
The system_prompt parameter provides instructions that shape the bot’s personality and behavior throughout conversations. This prompt is prepended to every interaction, giving the model consistent guidance about how to respond, what tone to use, and what boundaries to respect.
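A sketch of these settings in config.csv, using the parameter names from the prose above; note that other examples in this book use hyphenated keys (for example llm-model), so match your deployment's convention. Values are illustrative:
name,value
llm_provider,openai
llm_model,your-preferred-model
temperature,0.7
max_tokens,2000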
Context Management
Context settings control how the bot maintains awareness of conversation history and relevant information. The context_window parameter determines how many previous messages remain visible to the model during each interaction. Larger windows provide better continuity but consume more tokens.
The context_provider setting influences how context is assembled and presented to the model. Different providers may apply various strategies for selecting and formatting context, optimizing for different use cases.
Memory functionality, controlled by the memory_enabled setting, allows bots to retain information across sessions. When enabled, bots can remember user preferences, previous interactions, and other persistent data that improves personalization.
Configuration Loading and Precedence
The system assembles configuration from multiple sources, applying them in a specific order that allows flexible overrides. Default values provide baseline behavior when no explicit configuration exists. Settings in your .gbot/config.csv file override these defaults for your specific bot.
Environment variables can override config.csv settings, useful for deployment scenarios where configuration varies between environments. Database configuration provides another override layer, supporting runtime configuration changes that persist across restarts. Finally, runtime API calls can temporarily adjust settings without permanent changes.
This precedence system enables sophisticated deployment patterns. Development environments might use local configuration files while production deployments pull settings from environment variables or databases. The same bot package can behave differently across environments without modification.
Dynamic Configuration with Bot Memory
Beyond static configuration, bots can store and retrieve dynamic settings using bot memory. The SET BOT MEMORY keyword stores values that persist across all sessions, effectively creating runtime configuration that can be modified through bot scripts.
This capability supports scenarios where configuration needs to adapt based on usage patterns, administrative decisions, or external inputs. A bot might store preferred response styles, accumulated statistics, or cached data that influences its behavior.
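A minimal sketch of runtime configuration through bot memory (the key name is illustrative):
' Store a setting that persists across all sessions
SET BOT MEMORY "response_style", "formal and brief"
' Any later script can read it back and apply it
style = GET BOT MEMORY "response_style"
SET CONTEXT "Respond in this style: " + style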
Best Practices
Effective configuration follows several principles. Keep identity settings clear and accurate—users trust bots more when their purpose is evident. Choose LLM settings that balance capability with cost and latency requirements. Set appropriate context windows that provide continuity without excessive token consumption.
Document non-obvious configuration choices, either in comments within config.csv or in accompanying documentation. This practice helps future maintainers understand why settings were chosen and whether they should be adjusted.
Test configuration changes in development environments before deploying to production. Some settings interact in non-obvious ways, and testing catches issues before they affect users.
Summary
The .gbot configuration system provides comprehensive control over bot behavior through accessible CSV files augmented by environment variables, database settings, and runtime adjustments. Understanding these configuration options and their precedence helps you build bots that behave predictably across different deployment scenarios while remaining adaptable to changing requirements.
.gbtheme UI Theming
The .gbtheme package provides visual customization for your bot’s web interface through straightforward CSS-based theming. This approach prioritizes simplicity—you create CSS files that override default styles, without needing complex build tools, template engines, or JavaScript frameworks.
The Philosophy of Simple Theming
Many theming systems require elaborate toolchains, preprocessors, and build processes that create barriers for non-developers who want to customize their bot’s appearance. General Bots takes a different approach by using plain CSS files that any web designer can create and modify.
This simplicity doesn’t sacrifice capability. CSS custom properties (variables) provide the flexibility to change colors, typography, spacing, and other visual characteristics throughout the interface by modifying a few central values. The bot’s default interface handles all the complex layout and functionality concerns, leaving themes to focus purely on appearance.
Theme Structure
A theme consists of one or more CSS files placed in the .gbtheme folder within your bot package. The simplest theme might be a single default.css file containing variable overrides. More complex setups might include multiple theme files for different contexts—a dark theme for evening use, a high-contrast theme for accessibility, or seasonal themes for special occasions.
The system automatically loads the default theme on startup, and scripts can switch between available themes at runtime based on user preferences, time of day, or any other logic you implement.
CSS Variables and Customization
The bot interface defines a set of CSS custom properties that control fundamental visual characteristics. By overriding these properties in your theme file, you can transform the interface’s appearance with minimal code.
The primary-color variable establishes your main brand color, used for headers, buttons, and other prominent elements. The secondary-color provides accent coloring for highlights and interactive elements. Background and text-color control the basic page appearance and readability.
Typography settings including font-family let you match your organization’s brand standards. Structural properties like border-radius affect the overall feel—sharp corners suggest professionalism while rounded corners feel friendlier. Spacing controls help maintain consistent visual rhythm throughout the interface.
These variables cascade through the interface components, meaning a single change propagates everywhere that property is used. This approach makes comprehensive theming achievable with just a handful of variable overrides.
Creating Effective Themes
Building a theme starts with understanding your visual goals. Corporate deployments often need to match existing brand guidelines, requiring specific colors, fonts, and visual treatments. Consumer-facing bots might prioritize approachability and visual appeal. Internal tools might emphasize clarity and efficiency over aesthetics.
A minimal theme might override only the primary and secondary colors to match brand standards while accepting defaults for everything else. This approach gets results quickly with minimal effort. As needs grow, you can progressively add more customization.
When creating dark themes, remember to adjust not just the background color but also text colors, borders, shadows, and any other elements that assume a light background. Contrast matters for readability—test your themes with actual content to ensure text remains legible.
Accessibility considerations should inform theme design. Ensure sufficient contrast ratios between text and backgrounds, avoid relying solely on color to convey information, and test with various visual impairments in mind.
Dynamic Theme Switching
Bots can change themes at runtime through the CHANGE THEME keyword in BASIC scripts. This capability enables several useful patterns.
User preference systems let visitors choose their preferred theme, with the selection stored in user memory for future visits. Time-based switching can apply dark themes during evening hours automatically. Contextual theming might use different visual treatments for different conversation modes or topics.
Theme switching happens instantly without page reloads, providing smooth transitions that maintain conversation flow.
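In a BASIC script, switching is a single keyword; the theme name here is illustrative and the exact argument form is an assumption, so check the keyword reference for your version:
' Apply a dark theme for the current session
CHANGE THEME "dark"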
Configuration Integration
Theme settings can also be specified in the bot’s config.csv file, providing default values that themes can override. The theme parameter specifies which theme file to load by default. The theme-color1 and theme-color2 parameters provide primary and secondary colors that the interface uses when no theme file specifies otherwise.
These configuration values serve as fallbacks—CSS files in the .gbtheme folder take precedence when they define the same properties. This layering allows simple color customization through configuration while supporting full CSS theming for more sophisticated needs.
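A sketch of these fallback settings in config.csv, using the theme-color keys that appear in the template example later in this chapter; the theme value format is an assumption:
name,value
theme,default.css
theme-color1,#1565C0
theme-color2,#E3F2FD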
No Build Process Required
Unlike many modern web development workflows, .gbtheme requires no build tools, preprocessors, or compilation steps. You write CSS files, place them in the appropriate folder, and they take effect. Changes appear immediately through hot reloading, making iterative design work fast and responsive.
This simplicity means designers without development environment setup can contribute themes. Anyone who can write CSS can customize the interface, lowering barriers to visual customization.
Migrating from Complex Systems
Organizations moving from platforms with complex theming systems can extract their essential visual parameters and recreate them as CSS variable overrides. The process typically involves identifying brand colors and typography from the existing theme, mapping those values to General Bots CSS variables, testing the result against the interface, and iteratively refining until the appearance matches expectations.
Much of the complexity in traditional theming systems exists to handle layout and functionality concerns that General Bots manages through its default interface. By focusing themes purely on visual styling, the migration process becomes much simpler.
Best Practices
Effective theming follows several principles. Keep theme files focused and minimal—override only what you need to change rather than redefining everything. Start with a single default.css file and add complexity only as requirements demand.
Test themes across different devices and screen sizes to ensure they work well everywhere. Pay attention to interactive states like hover, focus, and active to ensure the interface remains usable and visually coherent.
Document theme choices, especially when values differ from brand guidelines for technical reasons. Future maintainers will appreciate understanding why specific decisions were made.
Maintain consistency within themes—if you override one color, consider whether related elements need adjustment to maintain visual harmony.
Summary
The .gbtheme system demonstrates that powerful customization doesn’t require complex tooling. Through CSS variables and standard stylesheets, you can transform the bot interface’s appearance while the platform handles the underlying complexity. This approach respects the skills of designers and developers alike, enabling visual customization without artificial barriers.
.gbdrive File Storage
The .gbdrive system provides centralized file storage for all bot packages, leveraging S3-compatible object storage to deliver reliable, scalable storage infrastructure. This chapter explains how file storage works, how files are organized, and how to interact with stored content.
Understanding File Storage in General Bots
Every bot requires storage for its various components—scripts, documents, configuration files, user uploads, and generated content. Rather than managing files across disparate locations, General Bots consolidates storage through the .gbdrive system, which provides a consistent interface regardless of the underlying storage backend.
The storage system builds on S3-compatible object storage, meaning it works with self-hosted solutions like MinIO as well as cloud providers like AWS S3, Backblaze B2, or DigitalOcean Spaces. This flexibility allows deployments to choose storage solutions that match their requirements for cost, performance, and data residency.
Beyond simple file storage, the system provides versioning capabilities, access control, automatic synchronization, and integration with other bot components like knowledge bases and themes.
Storage Organization
Files are organized using a bucket-per-bot structure that keeps each bot’s content isolated and manageable. Within a bot’s storage bucket, the familiar package structure appears: .gbdialog for scripts, .gbkb for knowledge base collections, .gbot for configuration, and .gbtheme for interface customization.
Additionally, each bot has space for user-uploaded files, generated content, and other runtime data. This organization mirrors the logical structure you work with during development, making it intuitive to understand where files reside and how they relate to bot functionality.
The system maintains this structure automatically when bots are deployed or updated, ensuring that the storage state reflects the current bot configuration without manual intervention.
.gbusers - Per-User Storage
The .gbusers folder within .gbdrive provides isolated storage space for each user interacting with the bot. This enables personalized document storage, user-specific settings, and application data that persists across sessions.
User Folder Structure
User folders are identified by the user’s email address or phone number:
mybot.gbai/
mybot.gbdrive/
users/
john@example.com/ # User identified by email
papers/
current/ # Active/working documents
untitled-1.md
meeting-notes.md
named/ # Saved/named documents
quarterly-report/
document.md
attachments/
project-proposal/
document.md
uploads/ # User file uploads
exports/ # Generated exports (PDF, DOCX, etc.)
settings/ # User preferences
preferences.json
+5511999887766/ # User identified by phone number
papers/
current/
named/
uploads/
User Identifier Format
Users are identified by their primary contact method:
- Email: john@example.com, maria@company.com.br
- Phone: +5511999887766, +1234567890 (E.164 format)
The identifier is sanitized for filesystem compatibility while remaining human-readable.
Paper Document Storage
The Paper application stores user documents in the papers/ directory:
- papers/current/: Working documents that are actively being edited. These may be auto-saved drafts or recently accessed files.
- papers/named/: Documents that have been explicitly saved with a name. Each named document gets its own folder to support attachments and metadata.
Example document structure:
papers/
current/
untitled-1.md # Auto-saved draft
untitled-2.md # Another working document
named/
meeting-notes-2024/
document.md # The main document content
metadata.json # Title, created_at, updated_at, etc.
attachments/ # Embedded images or files
image-001.png
research-paper/
document.md
metadata.json
Accessing User Storage from BASIC
BASIC scripts can access user storage using the USER DRIVE keyword:
' Read a user's document
content = READ USER DRIVE "papers/current/notes.md"
' Write to user's storage
SAVE USER DRIVE "papers/named/report/document.md", report_content
' List user's papers
papers = LIST USER DRIVE "papers/named/"
' Delete a user document
DELETE USER DRIVE "papers/current/draft.md"
User Storage API
The REST API provides endpoints for user storage operations:
GET /api/drive/user/list?path=papers/current/
POST /api/drive/user/read
{ "path": "papers/named/report/document.md" }
POST /api/drive/user/write
{ "path": "papers/current/notes.md", "content": "..." }
POST /api/drive/user/delete
{ "path": "papers/current/draft.md" }
All user storage API calls require authentication and automatically scope operations to the authenticated user’s folder.
Storage Quotas
Each user has configurable storage limits:
| Setting | Default | Description |
|---|---|---|
| user-storage-quota | 100MB | Maximum total storage per user |
| user-file-limit | 5MB | Maximum single file size |
| user-file-count | 500 | Maximum number of files |
Configure in config.csv:
user-storage-quota,104857600
user-file-limit,5242880
user-file-count,500
Working with Files
File operations in General Bots happen through several interfaces depending on your needs. The BASIC scripting language provides keywords for reading file content directly into scripts, enabling bots to process documents, load data, or access configuration dynamically.
Files can also be managed through the administrative API for bulk operations, migrations, or integration with external systems. The web interface provides user-facing upload and download capabilities where appropriate.
When files change in storage, the system detects modifications and triggers appropriate responses. Script changes cause recompilation, document changes trigger knowledge base reindexing, and configuration changes reload bot settings. This hot-reloading capability accelerates development and enables runtime updates without service interruption.
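For instance, a script can read a stored document and act on it directly (the path and memory key are illustrative):
' Load a document from storage and summarize it in the background
text = GET "reports/monthly.pdf"
summary = LLM "Summarize this document: " + text
SET BOT MEMORY "monthly_summary", summary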
Integration with Bot Components
The storage system integrates deeply with other bot components, serving as the foundation for several capabilities.
Knowledge bases draw their source documents from storage, with the indexing system monitoring for changes and updating embeddings accordingly. When you add a document to a .gbkb folder, it automatically becomes part of the bot’s searchable knowledge.
Theme assets including CSS files and images are served from storage, with appropriate caching to ensure good performance. Changes to theme files take effect quickly without requiring restarts.
Tool scripts in .gbdialog folders are loaded from storage, parsed, and made available for execution. The compilation system tracks dependencies and rebuilds as needed when source files change.
Paper Application Integration
The Paper document editor automatically saves to the user’s .gbusers folder:
- Auto-save: Every 30 seconds, working documents are saved to papers/current/
- Explicit save: When users click “Save”, documents move to papers/named/{document-name}/
- Export: Generated exports (PDF, DOCX) are saved to exports/ and offered for download
- AI-generated content: AI responses can be inserted into documents and saved automatically
Access Control
Different files require different access levels, and the storage system enforces appropriate controls:
- Public files: Accessible without authentication, suitable for shared resources
- Authenticated access: Requires valid user credentials, protects user-specific content
- User-scoped access: Users can only access their own .gbusers folder content
- Bot-internal files: Accessible only to the bot system itself
- Administrative files: Require elevated privileges to access or modify
User storage in .gbusers is strictly isolated—users cannot access other users’ folders through any API or BASIC keyword.
Storage Backend Options
The storage system supports multiple backends to accommodate different deployment scenarios. The default configuration uses self-hosted S3-compatible object storage, providing full control over where data resides. Any S3-compatible service works as an alternative, including major cloud providers.
For development and testing, local filesystem storage offers simplicity and easy inspection of files. Production deployments might use hybrid configurations with multiple backends providing redundancy or geographic distribution.
Backend selection happens through configuration, and the rest of the system interacts with storage through a consistent interface regardless of which backend is active. This abstraction allows deployments to change storage strategies without modifying bot code.
Directory Structure Reference
Complete .gbdrive structure with all components:
mybot.gbai/
mybot.gbdrive/
dialogs/ # Compiled dialog scripts cache
kb/ # Knowledge base index data
cache/ # Temporary cache files
exports/ # Bot-level exports
uploads/ # Bot-level uploads
users/ # Per-user storage (.gbusers)
user@email.com/
papers/
current/ # Working documents
named/ # Saved documents
uploads/ # User uploads
exports/ # User exports
settings/ # User preferences
+1234567890/
papers/
uploads/
exports/
settings/
Summary
The .gbdrive storage system provides the foundation for all file-based operations in General Bots. Through S3-compatible object storage, organized bucket structures, automatic synchronization, and deep integration with other components, it delivers reliable file management that supports both development workflows and production operation.
The .gbusers folder structure enables personalized storage for each user, supporting applications like Paper that require persistent document storage. By organizing user data under their email or phone identifier, the system maintains clear separation while enabling powerful per-user features.
Understanding how storage works helps you organize bot content effectively and leverage the automatic capabilities the system provides.
Bot Templates
BotServer includes pre-built bot templates for various use cases. Each template is a complete .gbai package ready to deploy.
Complete Template List (Flat Reference)
| # | Template | Category | Folder | Key Features |
|---|---|---|---|---|
| 1 | Default | Core | default.gbai | Minimal starter bot |
| 2 | Template | Core | template.gbai | Reference implementation |
| 3 | Announcements | Communications | announcements.gbai | Company news, multiple KB |
| 4 | AI Search | Search | ai-search.gbai | QR codes, document search |
| 5 | API Client | Integration | api-client.gbai | REST API patterns |
| 6 | Backup | Administration | backup.gbai | Server backup scripts |
| 7 | BI | Analytics | bi.gbai | Dashboards, role separation |
| 8 | Broadcast | Communications | broadcast.gbai | Mass messaging |
| 9 | Crawler | Search | crawler.gbai | Web indexing |
| 10 | CRM | Sales | crm.gbai | Customer management |
| 11 | Education | Education | edu.gbai | Course management |
| 12 | ERP | Operations | erp.gbai | Process automation |
| 13 | Law | Legal | law.gbai | Document templates |
| 14 | LLM Server | AI | llm-server.gbai | Model hosting |
| 15 | LLM Tools | AI | llm-tools.gbai | Prompt engineering |
| 16 | Marketing | Marketing | marketing.gbai | Campaign tools |
| 17 | Public APIs | Integration | public-apis.gbai | Weather, news APIs |
| 18 | Reminder | Productivity | reminder.gbai | Task reminders |
| 19 | Store | E-commerce | store.gbai | Product catalog |
| 20 | Talk to Data | Analytics | talk-to-data.gbai | Natural language SQL |
| 21 | WhatsApp | Messaging | whatsapp.gbai | WhatsApp Business |
| 22 | Office | Productivity | office.gbai | Document processing |
| 23 | Employee Management | HR | hr/employees.gbai | Employee CRUD |
| 24 | IT Helpdesk | IT | it/helpdesk.gbai | Ticket management |
| 25 | Sales Pipeline | CRM | crm/sales-pipeline.gbai | Deal tracking |
| 26 | Contact Directory | CRM | crm/contacts.gbai | Contact management |
Templates by Category
Core Templates
| Template | Folder | Purpose |
|---|---|---|
| Default | default.gbai | Minimal starter bot for learning |
| Template | template.gbai | Complete example structure |
HR & People
| Template | Folder | Key Files |
|---|---|---|
| Employee Management | hr/employees.gbai | start.bas, add-employee.bas, search-employee.bas |
| Leave Management | hr/leave.gbai | start.bas, request-leave.bas, approve-leave.bas |
| Recruitment | hr/recruitment.gbai | start.bas, post-job.bas, add-applicant.bas |
IT & Support
| Template | Folder | Key Files |
|---|---|---|
| IT Helpdesk | it/helpdesk.gbai | start.bas, create-ticket.bas, update-ticket.bas |
| Asset Tracking | it/assets.gbai | start.bas, add-asset.bas, checkout-asset.bas |
CRM & Sales
| Template | Folder | Key Files |
|---|---|---|
| CRM | crm.gbai | lead-management.bas, opportunity-management.bas |
| Sales Pipeline | crm/sales-pipeline.gbai | start.bas, create-deal.bas, update-stage.bas |
| Contact Directory | crm/contacts.gbai | start.bas, add-contact.bas, search-contact.bas |
Finance & Accounting
| Template | Folder | Key Files |
|---|---|---|
| Invoicing | finance/invoicing.gbai | start.bas, create-invoice.bas, send-reminder.bas |
| Expense Tracker | finance/expenses.gbai | start.bas, submit-expense.bas, approve-expense.bas |
Operations
| Template | Folder | Key Files |
|---|---|---|
| ERP | erp.gbai | Process automation, integrations |
| Warehouse | operations/warehouse.gbai | start.bas, receive-stock.bas, ship-order.bas |
Template Structure
All templates follow this standard directory layout:
template-name.gbai/
template-name.gbdialog/ # BASIC dialog scripts
start.bas # Entry point (required)
*.bas # Tool scripts (auto-discovered)
*-jobs.bas # Scheduled jobs
template-name.gbkb/ # Knowledge base collections
collection1/ # Documents for USE KB "collection1"
template-name.gbdrive/ # File storage (not KB)
uploads/ # User uploaded files
exports/ # Generated files
template-name.gbot/ # Configuration
config.csv # Bot parameters
template-name.gbtheme/ # UI theme (optional)
default.css # Theme CSS
Quick Start Guide
1. Choose a Template
Select based on your needs:
- Simple chat: Use default.gbai
- Business app: Choose crm.gbai, bi.gbai, or erp.gbai
- AI features: Pick ai-search.gbai or llm-tools.gbai
- Communication: Select broadcast.gbai or whatsapp.gbai
2. Deploy the Template
# Templates are auto-deployed during bootstrap
# Access at: http://localhost:8080/template-name
3. Customize Configuration
Edit template-name.gbot/config.csv:
name,value
bot-name,My Custom Bot
welcome-message,Hello! How can I help?
llm-model,model.gguf
temperature,0.7
4. Add Knowledge Base
Place documents in .gbkb folders:
- Each folder becomes a collection
- Use USE KB "folder-name" in scripts
- Documents are automatically indexed
5. Create Tools
Add .bas files to .gbdialog:
- Each file becomes a tool
- Auto-discovered by the system
- Called automatically by LLM when needed
Required Files for Each Template
start.bas (Required)
' Template Name - Start Script
' Setup Tools
ADD TOOL "tool-name-1"
ADD TOOL "tool-name-2"
' Setup Knowledge Base
USE KB "template-name.gbkb"
' Set Context
SET CONTEXT "context name" AS "You are a [role]. You help with [tasks]."
' Setup Suggestions
CLEAR SUGGESTIONS
ADD SUGGESTION "action1" AS "Display text 1"
ADD SUGGESTION "action2" AS "Display text 2"
' Welcome Message
BEGIN TALK
**Template Title**
Welcome message here.
**What I can help with:**
• Feature 1
• Feature 2
END TALK
BEGIN SYSTEM PROMPT
Detailed instructions for the AI...
END SYSTEM PROMPT
Tool File Template
PARAM paramname AS STRING LIKE "example" DESCRIPTION "What this parameter is"
PARAM optionalparam AS STRING LIKE "default" DESCRIPTION "Optional parameter"
DESCRIPTION "What this tool does. Called when user wants to [action]."
' Business logic
let result = "processed"
' Save data (field names = variable names)
SAVE "table.csv", paramname, optionalparam, result
' Store in memory
SET BOT MEMORY "last_item", result
' Response
TALK "✅ Action completed successfully!"
config.csv Template
name,value
episodic-memory-history,2
episodic-memory-threshold,4
theme-color1,#1565C0
theme-color2,#E3F2FD
theme-logo,https://pragmatismo.com.br/icons/general-bots.svg
theme-title,Template Name - General Bots
Syntax Rules for Templates
DO ✅
' Variable names (no underscores in names)
let ticketnumber = "TKT001"
let useremail = "user@example.com"
' SAVE with field names = variable names
SAVE "table.csv", ticketnumber, useremail, status
' Keywords with spaces
SET BOT MEMORY "last_ticket", ticketnumber
SET CONTEXT "name" AS "description"
ADD SUGGESTION "key" AS "Display text"
CLEAR SUGGESTIONS
USE KB "myknowledge"
USE TOOL "mytool"
' GET BOT MEMORY as function
let lastticket = GET BOT MEMORY("last_ticket")
DON’T ❌
' NO: Complex object operations
SET object.field = value ' WRONG
SAVE "table", object.id, object ' WRONG
Creating Custom Templates
To create your own template:
- Copy template.gbai as a starting point
- Define a clear purpose - one template, one job
- Structure folders properly: .gbdialog for scripts, .gbkb for knowledge collections, .gbdrive for general files, .gbot for configuration
- Include examples - sample data and dialogs
- Test thoroughly - verify all features
Best Practices
Template Selection
- Start small: Begin with default.gbai
- Match use case: Choose aligned templates
- Combine features: Mix templates as needed
- Keep originals: Copy before modifying
Customization Strategy
Minimal BASIC Approach
Instead of complex dialog flows, use simple LLM calls:
' Let system AI handle conversations naturally
TALK "How can I assist you?"
' System AI understands and responds appropriately
Tool Creation
Only create .bas files for specific actions:
- API calls
- Database operations
- File processing
- Calculations
Knowledge Base Organization
- One folder per topic/collection
- Name folders clearly
- Keep documents updated
- Index automatically
Performance Tips
- Remove unused template files
- Index only necessary documents
- Configure appropriate cache settings
- Monitor resource usage
Support Resources
- README files in each template folder
- Example configurations included
- Sample knowledge bases provided
- Community forums for discussions
Template Samples & Conversations
Ready-to-use bots for common business needs
See real conversations and learn how each template works
How to Use Templates
- Go to Sources app
- Click + New Bot
- Select From Template
- Choose the template you want
- Click Create
Your bot is ready to use immediately!
Customer Support (CRM Template)
Best for: Handling customer inquiries, tracking issues, managing contacts
Sample Conversation
User: I need help with my order
Bot: I'd be happy to help with your order!
What's your order number?
User: ORD-12345
Bot: Found it! Order #ORD-12345
📦 Status: Shipped
📅 Shipped: May 14, 2025
🚚 Carrier: FedEx
📍 Tracking: 7891234567890
Expected delivery: May 17, 2025
Is there something specific about this order?
User: I want to return it
Bot: I can help you start a return for Order #ORD-12345.
📋 Return Policy:
• Returns accepted within 30 days
• Item must be unused
• Original packaging required
Would you like me to create a return request?
User: Yes
Bot: ✅ Return Request Created!
Return #: RET-5678
Next steps:
1. Print the return label (sent to your email)
2. Pack the item securely
3. Drop off at any FedEx location
Refund will process within 5-7 business days
after we receive the item.
Anything else I can help with?
What It Can Do
- Look up orders by number or customer email
- Check shipping status
- Create return requests
- Log customer interactions
- Track support tickets
FAQ Bot (AI Search Template)
Best for: Answering common questions from your documents
Sample Conversation
User: What's your refund policy?
Bot: Based on our policies:
**Refund Policy**
• Full refund within 30 days of purchase
• Items must be unused and in original packaging
• Refunds processed to original payment method
• Processing time: 5-7 business days
For items over 30 days old, store credit is available.
📄 Source: Company Policies Handbook, Page 12
User: How do I contact support?
Bot: You can reach our support team through:
📧 Email: support@company.com
📞 Phone: 1-800-555-0123 (Mon-Fri, 9am-5pm)
💬 Chat: You're using it right now!
Average response time:
• Chat: Immediate
• Email: Within 24 hours
• Phone: Under 5 minutes wait
📄 Source: Contact Information Page
User: Do you ship internationally?
Bot: Yes! We ship to over 50 countries.
**International Shipping:**
• Standard: 10-15 business days ($15)
• Express: 5-7 business days ($35)
**Note:** Import duties and taxes may apply
depending on your country.
Countries we ship to include: Canada, UK,
Australia, Germany, France, Japan, and more.
📄 Source: Shipping FAQ Document
Setup Tips
- Upload your FAQ documents to the knowledge base
- Include policy documents, product guides, help articles
- The bot automatically learns from your files
Appointment Booking (Calendar Template)
Best for: Scheduling meetings, consultations, appointments
Sample Conversation
User: I want to book an appointment
Bot: I'll help you schedule an appointment!
What type of appointment do you need?
1. 📋 Consultation (30 min)
2. 🔧 Service Visit (1 hour)
3. 📞 Phone Call (15 min)
User: Consultation
Bot: Great! A 30-minute consultation.
Here are available times this week:
📅 Tuesday, May 20
• 10:00 AM
• 2:00 PM
• 4:00 PM
📅 Wednesday, May 21
• 9:00 AM
• 11:00 AM
• 3:00 PM
Which works best for you?
User: Tuesday at 2pm
Bot: Perfect! Let me confirm your details.
What's your name?
User: John Smith
Bot: And your email address?
User: john@email.com
Bot: ✅ Appointment Confirmed!
📋 Consultation
👤 John Smith
📅 Tuesday, May 20, 2025
🕐 2:00 PM (30 minutes)
A confirmation email has been sent to
john@email.com with calendar invite.
Need to reschedule? Just ask!
What It Can Do
- Show available time slots
- Book appointments
- Send confirmation emails
- Send reminders before appointments
- Allow rescheduling and cancellation
HR Assistant (Employee Template)
Best for: Employee self-service, HR questions, leave requests
Sample Conversation
User: How many vacation days do I have?
Bot: Let me check your balance, John.
📊 Your Time Off Balance:
Vacation Days
├── Available: 12 days
├── Used: 3 days
└── Pending: 0 days
Sick Days
├── Available: 8 days
└── Used: 2 days
Your vacation resets on January 1, 2026.
Would you like to request time off?
User: Yes, I need next Friday off
Bot: I'll create a time off request for you.
📅 Requesting: Friday, May 23, 2025
📋 Type: Vacation (1 day)
Reason for time off? (optional)
User: Personal appointment
Bot: ✅ Time Off Request Submitted!
📋 Request #PTO-2025-0142
📅 Date: May 23, 2025 (1 day)
📝 Reason: Personal appointment
👤 Approver: Sarah Johnson (your manager)
You'll receive an email when it's approved.
Current status: ⏳ Pending Approval
User: What holidays do we have this year?
Bot: 📅 Company Holidays 2025:
✓ New Year's Day - Jan 1 (past)
✓ Memorial Day - May 26
✓ Independence Day - July 4
✓ Labor Day - Sep 1
✓ Thanksgiving - Nov 27-28
✓ Christmas - Dec 25-26
Total: 8 paid holidays
Note: If a holiday falls on a weekend,
it's observed on the nearest weekday.
What It Can Do
- Check PTO balances
- Submit time off requests
- Look up company holidays
- Answer HR policy questions
- Find employee directory info
Sales Lead Capture
Best for: Qualifying leads, collecting contact info, scheduling demos
Sample Conversation
User: I'm interested in your product
Bot: Great! I'd love to learn more about what you're
looking for.
What's your name?
User: Maria Garcia
Bot: Nice to meet you, Maria!
What company are you with?
User: Acme Corp
Bot: Acme Corp - got it!
What's your role there?
User: Marketing Director
Bot: Perfect! And what's the best email to reach you?
User: maria@acmecorp.com
Bot: Thanks Maria! One last question:
What's your biggest challenge right now?
1. 📈 Growing leads
2. ⏱️ Saving time on manual tasks
3. 📊 Better reporting
4. 🔗 Integrating tools
5. Other
User: Saving time
Bot: That's exactly what we help with!
✅ I've saved your information:
👤 Maria Garcia
🏢 Acme Corp (Marketing Director)
📧 maria@acmecorp.com
🎯 Interest: Time savings
Would you like to:
1. 📅 Schedule a demo (15 min)
2. 📄 Get a product overview PDF
3. 💬 Ask questions now
User: Schedule a demo
Bot: I can schedule you with our team!
Available demo times:
📅 Tomorrow (May 16)
• 10:00 AM
• 3:00 PM
📅 Monday (May 19)
• 11:00 AM
• 2:00 PM
Which works for you?
User: Tomorrow at 3pm
Bot: ✅ Demo Scheduled!
📅 Friday, May 16 at 3:00 PM
👤 With: Alex from our Sales team
📧 Calendar invite sent to maria@acmecorp.com
Alex will show you how companies like Acme
save 10+ hours per week with our platform.
See you tomorrow, Maria!
What It Can Do
- Qualify leads with questions
- Capture contact information
- Score leads automatically
- Schedule demo calls
- Send follow-up emails
IT Helpdesk
Best for: IT support tickets, troubleshooting, password resets
Sample Conversation
User: I can't access my email
Bot: I'll help you troubleshoot your email access.
What error are you seeing?
1. 🔐 Password incorrect
2. 📧 Can't connect to server
3. 🔄 Email not syncing
4. ❓ Something else
User: Password incorrect
Bot: Let's fix your password issue.
Have you tried resetting your password in the
last 24 hours?
User: No
Bot: I can send a password reset link to your
backup email address.
For security, please confirm your employee ID:
User: EMP-4521
Bot: ✅ Verified!
I've sent a password reset link to your backup
email (j***@gmail.com).
📧 Check your email and click the reset link
⏱️ Link expires in 15 minutes
After resetting:
1. Use the new password for email
2. Update password on your phone too
3. It may take 5 minutes to sync
Still having issues after resetting?
User: It worked, thanks!
Bot: Glad it's working! 🎉
Quick security tips:
• Use a strong, unique password
• Enable two-factor authentication
• Never share your password
Anything else I can help with?
User: No that's all
Bot: ✅ Ticket #IT-2025-0891 closed as resolved.
Have a great day! Remember, I'm here 24/7
if you need IT help.
What It Can Do
- Create support tickets
- Reset passwords
- Troubleshoot common issues
- Check system status
- Escalate to human agents
Compliance & Privacy (LGPD/GDPR)
Best for: Handling data privacy requests
Sample Conversation
User: I want to see what data you have about me
Bot: I can help you with a Data Access Request.
Under privacy regulations (LGPD, GDPR, CCPA),
you have the right to receive a copy of your
personal data.
To process your request, I need to verify your
identity. What's your email address?
User: john.doe@email.com
Bot: I've sent a verification code to john.doe@email.com
Please enter the 6-digit code:
User: 847291
Bot: ✅ Identity verified!
What would you like to do?
1. 📥 Get a copy of my data
2. ✏️ Correct my information
3. 🗑️ Delete my data
4. ❓ Ask about data usage
User: Get a copy of my data
Bot: ✅ Data Access Request Submitted
📋 Request #DSR-2025-0142
📧 john.doe@email.com
📅 Submitted: May 15, 2025
⏱️ Due: May 30, 2025 (15 days)
We will compile all your personal data and
send it to your email in a secure format.
You'll receive:
• Profile information
• Communication history
• Preferences and settings
• Activity logs
Is there anything specific you're looking for?
User: No, that's all
Bot: Your request is being processed.
You'll receive an email within 15 days with
a secure link to download your data.
Questions? Reply to the confirmation email
or come back here anytime.
Broadcast & Announcements
Best for: Sending company-wide messages, news updates
Sample Conversation (Admin)
Admin: Send an announcement
Bot: I'll help you create an announcement.
What's the title?
Admin: Office Closure Tomorrow
Bot: Got it! Now the message content:
Admin: Due to the holiday, our office will be closed
tomorrow May 16. We'll reopen Monday at 9am.
Bot: Who should receive this?
1. 👥 All employees
2. 🏢 Specific department
3. 📍 Specific location
4. 📋 Custom list
Admin: All employees
Bot: How should I send it?
☑️ Email
☑️ Chat notification
☐ SMS
(Select all that apply)
Admin: Email and chat
Bot: 📢 Announcement Preview:
━━━━━━━━━━━━━━━━━━━━━━━━━━━
OFFICE CLOSURE TOMORROW
━━━━━━━━━━━━━━━━━━━━━━━━━━━
Due to the holiday, our office will be
closed tomorrow May 16. We'll reopen
Monday at 9am.
━━━━━━━━━━━━━━━━━━━━━━━━━━━
Recipients: 247 employees
Channels: Email, Chat
Send now or schedule?
Admin: Send now
Bot: ✅ Announcement Sent!
📧 247 emails delivered
💬 247 chat notifications sent
Delivery report saved. View in Analytics.
Talk to Data (Analytics)
Best for: Asking questions about your business data in plain English
Sample Conversation
User: How many sales did we make last month?
Bot: 📊 Sales Summary - April 2025
Total Sales: 1,247
Revenue: $156,340
Average Order: $125.35
Compared to March:
• Sales: ▲ 12% (+134)
• Revenue: ▲ 8% (+$11,520)
User: Which product sold the most?
Bot: 🏆 Top 5 Products - April 2025
1. Widget Pro - 342 units ($42,750)
2. Basic Package - 289 units ($28,900)
3. Premium Bundle - 201 units ($40,200)
4. Starter Kit - 187 units ($18,700)
5. Add-on Pack - 156 units ($15,600)
Widget Pro has been #1 for 3 months in a row!
User: Show me sales by region
Bot: 🗺️ Sales by Region - April 2025
North America ████████████░░ $68,500 (44%)
Europe ███████░░░░░░░ $42,300 (27%)
Asia Pacific █████░░░░░░░░░ $31,200 (20%)
Latin America ██░░░░░░░░░░░░ $14,340 (9%)
Fastest growing: Asia Pacific (▲ 23%)
User: Who are our top customers?
Bot: 👥 Top 5 Customers - April 2025
1. Acme Corp $12,450 (8 orders)
2. TechStart Inc $9,800 (12 orders)
3. Global Services $8,200 (5 orders)
4. Smith & Co $7,650 (6 orders)
5. DataFlow Ltd $6,900 (4 orders)
Want me to show purchase history for any of these?
What It Can Do
- Answer business questions in plain English
- Show sales, revenue, customer data
- Compare time periods
- Create simple reports
- Export data to spreadsheets
Quick Template Selection Guide
| I need to… | Use this template |
|---|---|
| Answer customer questions | AI Search (FAQ) |
| Handle support tickets | CRM / IT Helpdesk |
| Book appointments | Calendar |
| Capture sales leads | CRM (Sales) |
| Help employees with HR | HR Assistant |
| Send announcements | Broadcast |
| Query business data | Talk to Data |
| Handle privacy requests | Compliance |
Need Something Custom?
If these templates don’t fit your needs:
- Start with the closest template - Modify it
- Use the Default template - Build from scratch
- Combine templates - Use features from multiple templates
- Ask for help - Our community can guide you
See Also
- Templates Reference - Full template list
- How To: Create Your First Bot
- Sources App - Managing templates
- Write Your First Dialog
Business Intelligence Template (bi.gbai)
A General Bots template for automated business intelligence reporting and data visualization.
Overview
The BI template provides scheduled analytics reporting with automatic chart generation and delivery. It’s designed for organizations that need automated consumption reports, category analysis, and customer-specific insights.
Features
- Scheduled Reporting - Automated report generation on configurable schedules
- Time-Series Charts - Monthly consumption trends visualization
- Category Analysis - Product category breakdown with donut charts
- Per-Customer Reports - Individual customer consumption analysis
- Multi-Channel Delivery - Send reports via chat, email, or messaging platforms
Package Structure
bi.gbai/
├── bi.gbdialog/
│ ├── bi-admin.bas # Administrative scheduled reports
│ └── bi-user.bas # Per-customer report generation
Scripts
| File | Description |
|---|---|
| bi-admin.bas | Scheduled job for generating platform-wide analytics reports |
| bi-user.bas | Loops through customers to generate individual consumption reports |
Configuration
Configure the template in your bot’s config.csv:
| Parameter | Description | Example |
|---|---|---|
| Schedule | Cron expression for report timing | 1 * * * * * |
| Data Source | Table/view for billing data | Orders |
Usage
Administrative Reports
The bi-admin.bas script runs on a schedule and generates:
- Monthly Consumption Chart - Time-series showing spending trends
- Product Category Breakdown - Donut chart of spending by category
SET SCHEDULE "1 * * * * *"
billing = FIND "Orders"
' Monthly consumption
data = SELECT SUM(UnitPrice * Quantity) as Value,
MONTH(OrderDate)+'/'+YEAR(OrderDate)
FROM billing
GROUP BY MONTH(OrderDate), YEAR(OrderDate)
img = CHART "timeseries", data
SEND FILE img, "Monthly Consumption"
Per-Customer Reports
The bi-user.bas script iterates through customers to generate personalized reports:
customers = FIND "Customers"
billing = FIND "Orders"
FOR EACH c IN customers
data = SELECT SUM(UnitPrice * Quantity) AS Value,
MONTH(OrderDate)+'/'+YEAR(OrderDate)
FROM billing
JOIN Customers ON billing.CustomerID = Customers.CustomerID
WHERE Customers.CustomerID = c.CustomerID
GROUP BY MONTH(OrderDate), YEAR(OrderDate)
img = CHART "timeseries", data
SEND FILE img, "Monthly Consumption"
END FOR
Chart Types
The template supports various chart types:
| Type | Use Case |
|---|---|
| timeseries | Trends over time (monthly, weekly, daily) |
| donut | Category distribution |
| bar | Comparative analysis |
| pie | Percentage breakdowns |
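For example, the donut chart used by bi-admin.bas for the category breakdown can be produced like this (a minimal sketch assuming the Orders, Products, and Categories schema described below):
billing = FIND "Orders"
' Spending per category, joined through Products to Categories
data = SELECT SUM(UnitPrice * Quantity) AS Value, CategoryName
FROM billing
JOIN Products ON billing.ProductID = Products.ProductID
JOIN Categories ON Products.CategoryID = Categories.CategoryID
GROUP BY CategoryName
img = CHART "donut", data
SEND FILE img, "Product Category Breakdown"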
Data Requirements
Orders Table Schema
The template expects a billing/orders data source with:
- OrderDate - Date of the transaction
- UnitPrice - Price per unit
- Quantity - Number of units
- ProductID - Foreign key to products
- CustomerID - Foreign key to customers
Products Table Schema
- ProductID - Primary key
- CategoryID - Foreign key to categories
- ProductName - Product name
Categories Table Schema
- CategoryID - Primary key
- CategoryName - Category display name
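If you manage these tables from BASIC, the Orders source could be declared with the TABLE syntax shown later in the API Client template (a sketch; adjust types to your data):
TABLE Orders
CustomerID Customers
ProductID Products
OrderDate date
UnitPrice double
Quantity double
END TABLE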
Example Output
Monthly Consumption Report
📊 Monthly Consumption Report
-----------------------------
Generated: 2024-01-15 08:00
[Time Series Chart Image]
Total Revenue: $125,430
Top Month: December ($18,500)
Growth Rate: +12% MoM
Category Breakdown
📊 Product Category Distribution
--------------------------------
[Donut Chart Image]
Electronics: 35%
Clothing: 28%
Home & Garden: 22%
Other: 15%
Customization
Adding New Reports
Create additional .bas files in the bi.gbai folder:
' sales-by-region.bas
SET SCHEDULE "0 0 9 * * 1" ' Every Monday at 9 AM
data = SELECT Region, SUM(Amount) as Total
FROM Sales
GROUP BY Region
img = CHART "bar", data
SEND FILE img, "Weekly Regional Sales"
Customizing Delivery
Send reports to specific users or channels:
' Send to specific user
SEND FILE img TO "manager@company.com", "Weekly Report"
' Send to WhatsApp
SEND FILE img TO "+1234567890", "Your monthly report"
' Send to team channel
TALK TO "sales-team", img
Scheduling Options
| Schedule | Cron Expression | Description |
|---|---|---|
| Every minute | 1 * * * * * | Testing/real-time |
| Hourly | 0 0 * * * * | Frequent updates |
| Daily 8 AM | 0 0 8 * * * | Morning reports |
| Weekly Monday | 0 0 9 * * 1 | Weekly summaries |
| Monthly 1st | 0 0 8 1 * * | Monthly reports |
Integration Examples
With CRM
' Combine with CRM data
opportunities = FIND "opportunities.csv"
revenue = SELECT stage, SUM(amount) FROM opportunities GROUP BY stage
img = CHART "funnel", revenue
SEND FILE img, "Sales Pipeline"
With ERP
' Inventory analysis
inventory = FIND "inventory.csv"
low_stock = SELECT product, quantity FROM inventory WHERE quantity < reorder_level
img = CHART "bar", low_stock
SEND FILE img, "Low Stock Alert"
Best Practices
- Schedule appropriately - Don’t run heavy reports too frequently
- Filter data - Use date ranges to limit data volume
- Cache results - Store computed metrics for faster access
- Log activities - Track report generation for auditing
- Handle errors - Wrap queries in error handling (see the sketch below)
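The last practice can look like this in a report script (a minimal sketch using the ON ERROR GOTO pattern from the LLM Tools template):
' safe-report.bas - sketch; wraps the report query in error handling
ON ERROR GOTO ReportFailed
billing = FIND "Orders"
data = SELECT SUM(UnitPrice * Quantity) AS Value,
MONTH(OrderDate)+'/'+YEAR(OrderDate)
FROM billing
GROUP BY MONTH(OrderDate), YEAR(OrderDate)
img = CHART "timeseries", data
SEND FILE img, "Monthly Consumption"
RETURN
ReportFailed:
TALK "Report generation failed. Check the data source and try again."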
Troubleshooting
| Issue | Solution |
|---|---|
| Empty charts | Verify data source has records |
| Schedule not running | Check cron syntax |
| Slow reports | Add date filters, optimize queries |
| Missing data | Verify JOIN conditions |
Related Templates
- Platform Analytics - Platform metrics and monitoring
- Talk to Data - Natural language data queries
- CRM - CRM with built-in reporting
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
Web Crawler Template (crawler.gbai)
A General Bots template for automated web crawling and content extraction for knowledge base population.
Overview
The Crawler template enables your bot to automatically fetch, parse, and index web content. It’s designed for building knowledge bases from websites, monitoring web pages for changes, and extracting structured data from online sources.
Features
- Automated Web Scraping - Fetch and parse web pages automatically
- Document Mode - Answer questions based on crawled content
- Configurable Depth - Control how many pages to crawl
- Content Indexing - Automatically add content to knowledge base
- LLM Integration - Use AI to understand and summarize crawled content
Package Structure
crawler.gbai/
├── crawler.gbkb/ # Knowledge base for crawled content
│ └── docs/ # Indexed documents
└── crawler.gbot/
└── config.csv # Crawler configuration
Configuration
Configure the crawler in crawler.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| Website | Target URL to crawl | https://pragmatismo.com.br/ |
| website Max Documents | Maximum pages to crawl | 2 |
| Answer Mode | How to respond to queries | document |
| Theme Color | UI theme color | purple |
| LLM Provider | AI provider for processing | openai |
Example config.csv
name,value
Website,https://pragmatismo.com.br/
website Max Documents,2
Answer Mode,document
Theme Color,purple
LLM Provider,openai
How It Works
- Initialization - Bot reads the target website from configuration
- Crawling - Fetches pages starting from the root URL
- Extraction - Parses HTML and extracts meaningful text content
- Indexing - Stores content in the knowledge base for RAG
- Q&A - Users can ask questions about the crawled content
Usage
Basic Setup
- Copy the template to your bot’s packages directory:
cp -r templates/crawler.gbai /path/to/your/bot/packages/
- Edit crawler.gbot/config.csv with your target website:
name,value
Website,https://your-website.com/
website Max Documents,10
Answer Mode,document
- Deploy and the bot will automatically crawl the configured site.
Querying Crawled Content
Once crawled, users can ask questions naturally:
- “What services does the company offer?”
- “Tell me about the pricing”
- “Summarize the about page”
- “What are the main features?”
Answer Modes
| Mode | Behavior |
|---|---|
| document | Answers strictly based on crawled content |
| hybrid | Combines crawled content with general knowledge |
| summary | Provides concise summaries of relevant pages |
Sample Conversation
User: What does this company do?
Bot: Based on the website content:
Pragmatismo provides enterprise AI solutions
including:
• General Bots platform
• Custom AI development
• Conversational AI consulting
• Integration services
📄 Source: pragmatismo.com.br/about
User: Do they have any case studies?
Bot: Yes! According to the website, they have worked
with several enterprise clients:
• Banking sector automation
• Healthcare chatbots
• E-commerce customer service
📄 Source: pragmatismo.com.br/cases
User: How can I contact them?
Bot: Contact information from the website:
📧 Email: contact@pragmatismo.com.br
📍 Location: São Paulo, Brazil
🌐 Website: pragmatismo.com.br
📄 Source: pragmatismo.com.br/contact
Advanced Configuration
Limiting Crawl Scope
Control which pages are crawled:
name,value
Website,https://example.com/docs/
website Max Documents,50
Website Include Pattern,/docs/*
Website Exclude Pattern,/docs/archive/*
Scheduling Recrawls
Set up periodic recrawling to keep content fresh:
name,value
Website Refresh Schedule,0 0 * * 0
This example recrawls every Sunday at midnight.
Authentication
For sites requiring authentication:
name,value
Website Auth Type,basic
Website Username,user
Website Password,secret
Customization
Creating Custom Crawl Logic
Create a BASIC dialog for custom crawling:
' custom-crawl.bas
urls = ["https://site1.com", "https://site2.com", "https://site3.com"]
FOR EACH url IN urls
content = GET url
IF content THEN
SAVE "crawled_pages.csv", url, content, NOW()
SET CONTEXT content
END IF
NEXT
TALK "Crawled " + UBOUND(urls) + " pages successfully."
Processing Crawled Content
Use LLM to process and structure crawled data:
' process-crawled.bas
pages = FIND "crawled_pages.csv"
FOR EACH page IN pages
summary = LLM "Summarize this content in 3 bullet points: " + page.content
WITH processed
url = page.url
summary = summary
processed_at = NOW()
END WITH
SAVE "processed_content.csv", processed
NEXT
Extracting Structured Data
Extract specific information from pages:
' extract-products.bas
SET CONTEXT "You are a data extraction assistant. Extract product information as JSON."
page_content = GET "https://store.example.com/products"
products = LLM "Extract all products with name, price, and description as JSON array: " + page_content
SAVE "products.json", products
Integration Examples
With Knowledge Base
' Add crawled content to KB
content = GET "https://docs.example.com/api"
IF content THEN
USE KB "api-docs.gbkb"
ADD TO KB content, "API Documentation"
END IF
With Notifications
' Monitor for changes
previous = GET BOT MEMORY "last_content"
current = GET "https://news.example.com"
IF current <> previous THEN
SEND MAIL "admin@company.com", "Website Changed", "The monitored page has been updated.", []
SET BOT MEMORY "last_content", current
END IF
Best Practices
- Respect robots.txt - Only crawl pages allowed by the site’s robots.txt
- Rate limiting - Don’t overwhelm target servers with requests (see the sketch after this list)
- Set reasonable limits - Start with low Max Documents values
- Monitor content quality - Review crawled content for accuracy
- Keep content fresh - Schedule periodic recrawls for dynamic sites
- Handle errors gracefully - Implement retry logic for failed requests
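The rate-limiting and retry points can be combined in a paced crawl loop (a minimal sketch; the pause lengths and URL list are illustrative):
' paced-crawl.bas - sketch combining pacing with a single retry
urls = ["https://site1.com", "https://site2.com"]
FOR EACH url IN urls
content = GET url
IF NOT content THEN
WAIT 5 ' brief pause before one retry
content = GET url
END IF
IF content THEN
SAVE "crawled_pages.csv", url, content, NOW()
END IF
WAIT 2 ' pause between requests so the target server is not overwhelmed
NEXT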
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| No content indexed | Invalid URL | Verify the Website URL is accessible |
| Partial content | Max Documents too low | Increase the limit in config |
| Stale answers | Content not refreshed | Set up scheduled recrawls |
| Authentication errors | Missing credentials | Add auth settings to config |
| Timeout errors | Slow target site | Increase timeout settings |
Limitations
- JavaScript-rendered content may not be fully captured
- Some sites block automated crawlers
- Large sites may take significant time to fully crawl
- Dynamic content may require special handling
Use Cases
- Documentation Bots - Index product docs for support
- Competitive Intelligence - Monitor competitor websites
- News Aggregation - Collect news from multiple sources
- Research Assistants - Build knowledge bases from academic sources
- FAQ Generators - Extract FAQs from help sites
Related Templates
- AI Search - AI-powered document search
- Talk to Data - Natural language data queries
- Law - Legal document processing with similar RAG approach
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbkb Reference - Knowledge base guide
Legal Document Processing Template (law.gbai)
A General Bots template for legal case management, document analysis, and AI-powered legal Q&A.
Overview
The Law template enables legal professionals to load case documents, query case information using natural language, and get AI-assisted analysis of legal materials. It’s designed for law firms, legal departments, and compliance teams who need efficient document-based legal research.
Features
- Case Document Loading - Load and index PDF case files by case number
- Document Q&A - Ask questions about loaded legal documents
- Context-Aware Responses - AI responses grounded in actual case content
- Case Summarization - Generate summaries of case documents
- Multi-Document Support - Work with multiple cases simultaneously
- Audit Trail - Track all document access and queries
Package Structure
law.gbai/
├── law.gbdialog/
│ └── case.bas # Case loading and query handler
├── law.gbdrive/ # Case document storage
│ └── (case-*.pdf) # Case PDF files
├── law.gbkb/ # Knowledge base for legal reference
└── law.gbot/
└── config.csv # Bot configuration
Scripts
| File | Description |
|---|---|
| case.bas | Loads case documents by number and enables document-based Q&A |
How It Works
Case Loading Flow
- User provides a case number (e.g., “12345”)
- Bot searches for case-12345.pdf in the drive
- Document content is extracted and set as context
- Answer mode switches to “document” for grounded responses
- User can ask questions about the loaded case
Code Example
PARAM cod AS STRING LIKE "12345" DESCRIPTION "Case number to load and query"
DESCRIPTION "Load a legal case document by case number for Q&A and analysis"
text = GET "case-" + cod + ".pdf"
IF text THEN
SET CONTEXT "Based on this document, answer the person's questions:\n\n" + text
SET ANSWER MODE "document"
TALK "Case ${cod} loaded. Ask me anything about the case or request a summary."
ELSE
TALK "Case not found. Please check the case number."
END IF
Configuration
Configure in law.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| Answer Mode | Response grounding mode | document |
| Theme Color | UI accent color | navy |
| LLM Provider | AI provider | openai |
| Document Retention | Days to keep loaded docs | 30 |
| Require Authentication | Require user login | true |
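Example config.csv
A matching configuration, using the values from the table above:
name,value
Answer Mode,document
Theme Color,navy
LLM Provider,openai
Document Retention,30
Require Authentication,true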
Sample Conversation
User: Load case 12345
Bot: Case 12345 loaded. Ask me anything about the case
or request a summary.
User: What are the main claims in this case?
Bot: Based on the document, the main claims are:
1. Breach of contract regarding the delivery terms
in Section 3.2
2. Failure to meet quality standards as specified
in Exhibit A
3. Request for damages in the amount of $250,000
📄 Source: Case 12345, Pages 3-5
User: Summarize the defendant's response
Bot: The defendant argues that:
• Delivery delays were caused by force majeure events
(documented in Exhibit C)
• Quality standards were met according to third-party
inspection report
• Damages claimed are excessive and not substantiated
by evidence
📄 Source: Case 12345, Pages 12-18
User: What are the key dates mentioned?
Bot: Key dates from the case file:
📅 Contract signed: January 15, 2024
📅 Delivery due date: March 30, 2024
📅 Actual delivery: April 22, 2024
📅 Complaint filed: May 10, 2024
📅 Response deadline: June 15, 2024
📄 Source: Case 12345, Pages 1-2
Document Naming Convention
Case documents should follow this naming pattern:
case-{case_number}.pdf
Examples:
- case-12345.pdf
- case-2024-CV-0001.pdf
- case-APPEAL-789.pdf
Customization
Adding Case Search
' search-cases.bas
PARAM query AS STRING DESCRIPTION "Search term for finding cases"
DESCRIPTION "Search for cases by party name, date, or keywords"
cases = FIND "case_index.csv", "content LIKE '%" + query + "%'"
IF UBOUND(cases) > 0 THEN
TALK "Found " + UBOUND(cases) + " matching cases:"
FOR EACH c IN cases
TALK "- Case " + c.case_number + ": " + c.title
NEXT
ELSE
TALK "No cases found matching: " + query
END IF
Case Summarization
' summarize-case.bas
PARAM cod AS STRING DESCRIPTION "Case number to summarize"
DESCRIPTION "Generate an executive summary of a legal case"
text = GET "case-" + cod + ".pdf"
IF text THEN
summary = LLM "As a legal professional, provide an executive summary of this case including:
1. Parties involved
2. Key facts
3. Legal issues
4. Current status
5. Next steps
Document: " + text
TALK "## Case " + cod + " Summary\n\n" + summary
' Save summary for future reference
SAVE "case_summaries.csv", cod, summary, NOW()
ELSE
TALK "Case not found."
END IF
Supporting Multiple Document Types
' load-document.bas
PARAM doc_type AS STRING LIKE "contract" DESCRIPTION "Type: case, contract, brief, motion"
PARAM doc_id AS STRING DESCRIPTION "Document identifier"
DESCRIPTION "Load various legal document types"
filename = doc_type + "-" + doc_id + ".pdf"
text = GET filename
IF text THEN
SET CONTEXT "This is a legal " + doc_type + ". Answer questions based on its content:\n\n" + text
SET ANSWER MODE "document"
TALK "Loaded " + doc_type + " " + doc_id + ". Ready for questions."
ELSE
TALK "Document not found: " + filename
END IF
Compliance Logging
' Add audit logging to case.bas
IF text THEN
' Log access for compliance
WITH audit_entry
timestamp = NOW()
user = GET SESSION "user_email"
case_number = cod
action = "document_access"
ip_address = GET SESSION "client_ip"
END WITH
SAVE "legal_audit_log.csv", audit_entry
SET CONTEXT "Based on this document..." + text
END IF
Integration Examples
With Calendar
' Schedule case deadlines
deadline = LLM "Extract the next deadline date from this case: " + text
IF deadline THEN
CREATE CALENDAR EVENT "Case " + cod + " Deadline", deadline
TALK "Deadline added to calendar: " + deadline
END IF
With Email
' Email case summary to team
summary = LLM "Summarize the key points of this case in 3 paragraphs: " + text
SEND MAIL "legal-team@firm.com", "Case " + cod + " Summary", summary, []
TALK "Summary sent to legal team."
With Document Generation
' Generate response document
response = LLM "Draft a formal response letter addressing the claims in this case: " + text
CREATE DRAFT response, "Response to Case " + cod
TALK "Draft response created. Review in your documents."
Security Considerations
- Access Control - Implement role-based access for sensitive cases
- Audit Logging - Log all document access for compliance
- Data Encryption - Enable encryption for stored documents
- Session Timeout - Configure appropriate session timeouts
- Authentication - Require strong authentication for legal systems
- Data Retention - Follow legal data retention requirements
Best Practices
- Organize documents - Use consistent naming conventions
- Index cases - Maintain a searchable case index
- Regular backups - Back up case documents frequently
- Version control - Track document versions
- Clear context - Clear previous case context before loading new cases (see the sketch after this list)
- Verify AI responses - Always verify AI-generated legal content
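For the clear-context practice, a small reset dialog might look like this (a sketch; it assumes that setting an empty context clears the previous grounding):
' clear-case.bas - sketch; reset grounding before loading a new case
DESCRIPTION "Clear the previously loaded case context"
SET CONTEXT ""
TALK "Previous case context cleared. Send a new case number to continue."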
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Case not found | Wrong filename | Check naming convention |
| Empty responses | Document not parsed | Verify PDF is text-based |
| Slow loading | Large document | Consider document chunking |
| Context errors | Multiple cases loaded | Clear context between cases |
| Access denied | Missing permissions | Check user authentication |
Limitations
- PDF documents must be text-based (not scanned images)
- Very large documents may require chunking
- Complex legal analysis should be verified by professionals
- AI responses are assistive, not legal advice
Use Cases
- Case Research - Quickly find relevant information in case files
- Document Review - AI-assisted document analysis
- Client Communication - Generate case status summaries
- Deadline Tracking - Extract and track important dates
- Knowledge Management - Build searchable legal knowledge bases
Disclaimer
This template provides AI-assisted document analysis tools. It does not constitute legal advice. All AI-generated content should be reviewed by qualified legal professionals. Users are responsible for ensuring compliance with applicable legal and ethical standards.
Related Templates
- HIPAA Medical - Healthcare compliance
- Talk to Data - Natural language document queries
- AI Search - AI-powered document search
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbkb Reference - Knowledge base guide
LLM Server Template (llm-server.gbai)
A General Bots template for deploying LLM-powered web services that process orders and requests via API endpoints.
Overview
The LLM Server template transforms General Bots into a headless API service that processes structured requests using LLM intelligence. It’s designed for integrating AI-powered order processing, chatbot backends, and automated response systems into existing applications.
Features
- REST API Endpoints - HTTP endpoints for bot interaction
- Order Processing - Structured JSON responses for orders
- Product Catalog Integration - Dynamic product menu from CSV
- System Prompt Configuration - Customizable AI behavior
- Session Management - Track conversations across requests
- Operator Support - Multi-operator/tenant architecture
Package Structure
llm-server.gbai/
├── llm-server.gbdata/ # Data files
│ └── products.csv # Product catalog
├── llm-server.gbdialog/
│ └── start.bas # Main dialog with system prompt
├── llm-server.gbkb/ # Knowledge base
└── llm-server.gbot/
└── config.csv # Bot configuration
API Endpoints
Start a Session
POST https://{host}/{botId}/dialogs/start
Content-Type: application/x-www-form-urlencoded
operator=123
userSystemId=999
Response:
{
"pid": "1237189231897",
"conversationId": "abc123",
"status": "started"
}
Send a Message
POST https://{host}/api/dk/messageBot
Content-Type: application/x-www-form-urlencoded
pid=1237189231897
text=I want a banana
Response:
{
"orderedItems": [
{
"item": {
"id": 102,
"price": 0.30,
"name": "Banana",
"quantity": 1,
"notes": ""
}
}
],
"userId": "123",
"accountIdentifier": "TableA",
"deliveryTypeId": 2
}
Configuration
System Prompt
The start.bas defines the AI behavior:
PARAM operator AS NUMBER LIKE 12312312 DESCRIPTION "Operator code."
DESCRIPTION "Web service entry point for General Bots order processing."
products = FIND "products.csv"
BEGIN SYSTEM PROMPT
You are a chatbot assisting a store attendant in processing orders. Follow these rules:
1. **Order Format**: Each order must include the product name, the table number, and the customer's name.
2. **Product Details**: The available products are:
${TOYAML(products)}
3. **JSON Response**: For each order, return a valid RFC 8259 JSON object containing:
- product name
- table number
4. **Guidelines**:
- Do **not** engage in conversation.
- Return the response in plain text JSON format only.
END SYSTEM PROMPT
Product Catalog
Create products.csv in the llm-server.gbdata folder:
id,name,price,category,description
101,Apple,0.50,Fruit,Fresh red apple
102,Banana,0.30,Fruit,Ripe yellow banana
103,Orange,0.40,Fruit,Juicy orange
201,Milk,1.20,Dairy,1 liter whole milk
202,Cheese,2.50,Dairy,200g cheddar
Bot Configuration
Configure in llm-server.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| LLM Provider | AI model provider | openai |
| LLM Model | Specific model | gpt-5 |
| Max Tokens | Response length limit | 500 |
| Temperature | Response creativity | 0.3 |
| API Mode | Enable API mode | true |
Usage Examples
cURL Examples
Start Session:
curl -X POST https://api.example.com/llmservergbot/dialogs/start \
-d "operator=123" \
-d "userSystemId=999"
Send Order:
curl -X POST https://api.example.com/api/dk/messageBot \
-d "pid=1237189231897" \
-d "text=I need 2 apples and 1 milk"
JavaScript Integration
async function startBotSession(operator, userId) {
const response = await fetch('https://api.example.com/llmservergbot/dialogs/start', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({ operator, userSystemId: userId })
});
return response.json();
}
async function sendMessage(pid, text) {
const response = await fetch('https://api.example.com/api/dk/messageBot', {
method: 'POST',
headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
body: new URLSearchParams({ pid, text })
});
return response.json();
}
// Usage
const session = await startBotSession('123', '999');
const order = await sendMessage(session.pid, 'I want a banana');
console.log(order.orderedItems);
Python Integration
import requests
class LLMServerClient:
def __init__(self, base_url, operator):
self.base_url = base_url
self.operator = operator
self.pid = None
def start_session(self, user_id):
response = requests.post(
f"{self.base_url}/llmservergbot/dialogs/start",
data={"operator": self.operator, "userSystemId": user_id}
)
self.pid = response.json()["pid"]
return self.pid
def send_message(self, text):
response = requests.post(
f"{self.base_url}/api/dk/messageBot",
data={"pid": self.pid, "text": text}
)
return response.json()
# Usage
client = LLMServerClient("https://api.example.com", "123")
client.start_session("999")
order = client.send_message("I need 2 bananas")
print(order)
Response Format
Order Response Structure
{
"orderedItems": [
{
"item": {
"id": 102,
"price": 0.30,
"name": "Banana",
"sideItems": [],
"quantity": 2,
"notes": "ripe ones please"
}
}
],
"userId": "123",
"accountIdentifier": "Table5",
"deliveryTypeId": 2
}
Field Descriptions
| Field | Type | Description |
|---|---|---|
| orderedItems | Array | List of ordered items |
| item.id | Number | Product ID from catalog |
| item.price | Number | Unit price |
| item.name | String | Product name |
| item.sideItems | Array | Additional items |
| item.quantity | Number | Order quantity |
| item.notes | String | Special instructions |
| userId | String | Operator identifier |
| accountIdentifier | String | Table/customer identifier |
| deliveryTypeId | Number | Delivery method |
Customization
Custom Response Format
Modify the system prompt for different output structures:
BEGIN SYSTEM PROMPT
Return responses as JSON with this structure:
{
"intent": "order|question|complaint",
"entities": [...extracted entities...],
"response": "...",
"confidence": 0.0-1.0
}
END SYSTEM PROMPT
Adding Validation
' Validate order before returning
order = LLM_RESPONSE
IF NOT order.orderedItems THEN
RETURN {"error": "No items in order", "suggestion": "Please specify products"}
END IF
FOR EACH item IN order.orderedItems
product = FIND "products.csv", "id = " + item.item.id
IF NOT product THEN
RETURN {"error": "Invalid product ID: " + item.item.id}
END IF
NEXT
RETURN order
Multi-Language Support
PARAM language AS STRING LIKE "en" DESCRIPTION "Response language"
BEGIN SYSTEM PROMPT
Respond in ${language} language.
Available products: ${TOYAML(products)}
Return JSON format only.
END SYSTEM PROMPT
Error Handling
Common Error Responses
{
"error": "session_expired",
"message": "Please start a new session",
"code": 401
}
{
"error": "invalid_request",
"message": "Missing required parameter: text",
"code": 400
}
{
"error": "product_not_found",
"message": "Product 'pizza' is not in our catalog",
"code": 404
}
Best Practices
- Keep prompts focused - Single-purpose system prompts work better
- Validate responses - Always validate LLM output before returning
- Handle edge cases - Plan for invalid products, empty orders
- Monitor usage - Track API calls and response times
- Rate limiting - Implement rate limits for production
- Secure endpoints - Use authentication for production APIs
- Log requests - Maintain audit logs for debugging
Deployment
Environment Variables
LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_MODEL=gpt-5
API_RATE_LIMIT=100
SESSION_TIMEOUT=3600
Docker Deployment
FROM generalbots/server:latest
COPY llm-server.gbai /app/packages/
ENV API_MODE=true
EXPOSE 4242
CMD ["npm", "start"]
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Empty responses | System prompt too restrictive | Adjust prompt guidelines |
| Invalid JSON | LLM hallucination | Add JSON validation examples |
| Session expired | Timeout reached | Implement session refresh |
| Wrong products | Catalog not loaded | Verify products.csv path |
| Slow responses | Large catalog | Optimize product filtering |
Use Cases
- Restaurant Ordering - Process food orders via API
- Retail POS Integration - AI-powered point of sale
- Chatbot Backend - Headless chatbot for web/mobile apps
- Voice Assistant Backend - Process voice-to-text commands
- Order Automation - Automate order entry from various sources
Related Templates
- LLM Tools - LLM with tool/function calling
- Store - Full e-commerce with order processing
- API Client - API integration examples
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
LLM Tools Template (llm-tools.gbai)
A General Bots template demonstrating how to create and register custom tools (functions) that LLMs can call during conversations.
Overview
The LLM Tools template shows how to extend your bot’s capabilities by creating tools that the AI can invoke automatically. Tools enable the LLM to perform actions like database lookups, API calls, calculations, and more—all triggered naturally through conversation.
Features
- Custom Tool Registration - Define tools the LLM can call
- Parameter Validation - Type-safe tool parameters with descriptions
- Knowledge Base Integration - Combine tools with RAG
- Natural Interaction - Users don’t need to know tool names
- Extensible Architecture - Easy to add new tools
Package Structure
llm-tools.gbai/
├── llm-tools.gbdata/ # Data files for tools
│ └── products.csv # Product catalog
├── llm-tools.gbdialog/
│ ├── start.bas # Main dialog with tool registration
│ └── get-price.bas # Example tool implementation
├── llm-tools.gbkb/ # Knowledge base
│ └── products.gbkb/ # Product documentation
└── llm-tools.gbot/
└── config.csv # Bot configuration
Scripts
| File | Description |
|---|---|
| start.bas | Registers tools, configures context, and sets system prompt |
| get-price.bas | Example tool that looks up product prices |
How Tools Work
1. Tool Registration
In start.bas, tools are registered with ADD TOOL:
ADD TOOL "get-price"
USE KB "products.gbkb"
CLEAR SUGGESTIONS
ADD SUGGESTION "price" AS "Check product price"
ADD SUGGESTION "products" AS "View products"
ADD SUGGESTION "help" AS "How to use"
BEGIN TALK
**Product Assistant**
I can help you check product prices and information.
Just ask me about any product and I'll look it up for you.
END TALK
BEGIN SYSTEM PROMPT
You are a product assistant with access to internal tools.
When get-price returns -1, the product does not exist.
When asked about a price, use the get-price tool and return the result.
Do not expose tool names to users - just act on their requests naturally.
END SYSTEM PROMPT
2. Tool Implementation
Each tool is a separate .bas file with PARAM and DESCRIPTION:
PARAM product AS STRING LIKE "fax" DESCRIPTION "Name of the product to get price for"
DESCRIPTION "Get the price of a product by name from the product catalog"
productRecord = FIND "products.csv", "name = ${product}"
IF productRecord THEN
RETURN productRecord.price
ELSE
RETURN -1
END IF
3. LLM Invocation
When a user asks “How much is the fax machine?”, the LLM:
- Recognizes this requires price information
- Calls
get-pricewithproduct="fax" - Receives the price (or -1 if not found)
- Formulates a natural response
Sample Conversation
User: How much does the wireless mouse cost?
Bot: [Calls get-price with product="wireless mouse"]
The wireless mouse is $29.99.
User: What about the keyboard?
Bot: [Calls get-price with product="keyboard"]
The standard keyboard is $49.99. We also have a
mechanical keyboard for $89.99.
User: Is the laptop in stock and what's the price?
Bot: [Calls get-price with product="laptop"]
[Calls check-inventory with sku="LAPTOP-001"]
The laptop is $999. Good news - we have 12 units
available in our main warehouse!
User: I want 3 monitors with my 20% employee discount
Bot: [Calls get-price with product="monitor"]
[Calls calculate-discount with price=299, percent=20]
Each monitor is $299, but with your 20% employee
discount, you'll pay $239.20 each.
For 3 monitors: $717.60 (saving $179.40)!
Creating Custom Tools
Tool Anatomy
Every tool needs:
- PARAM declarations - Input parameters with types and examples
- DESCRIPTION - What the tool does (for LLM understanding)
- Implementation - The actual logic
- RETURN - The output value
Parameter Types
| Type | Description | Example |
|---|---|---|
| STRING | Text input | PARAM name AS STRING LIKE "John" |
| NUMBER | Numeric input | PARAM quantity AS NUMBER LIKE 5 |
| INTEGER | Whole numbers | PARAM count AS INTEGER LIKE 10 |
| BOOLEAN | True/false | PARAM active AS BOOLEAN |
| DATE | Date values | PARAM start AS DATE LIKE "2024-01-15" |
| EMAIL | Email addresses | PARAM email AS EMAIL |
| PHONE | Phone numbers | PARAM phone AS PHONE |
| OBJECT | JSON objects | PARAM data AS OBJECT |
Example Tools
Database Lookup Tool
' lookup-customer.bas
PARAM customer_id AS STRING LIKE "CUST-001" DESCRIPTION "Customer ID to look up"
DESCRIPTION "Retrieve customer information by ID"
customer = FIND "customers.csv", "id = ${customer_id}"
IF customer THEN
WITH result
name = customer.name
email = customer.email
status = customer.status
since = customer.created_at
END WITH
RETURN result
ELSE
RETURN NULL
END IF
Calculation Tool
' calculate-discount.bas
PARAM original_price AS NUMBER LIKE 100 DESCRIPTION "Original product price"
PARAM discount_percent AS NUMBER LIKE 15 DESCRIPTION "Discount percentage"
DESCRIPTION "Calculate the final price after applying a discount"
discount_amount = original_price * (discount_percent / 100)
final_price = original_price - discount_amount
WITH result
original = original_price
discount = discount_amount
final = final_price
savings = discount_percent + "% off"
END WITH
RETURN result
API Integration Tool
' check-inventory.bas
PARAM sku AS STRING LIKE "SKU-12345" DESCRIPTION "Product SKU to check"
PARAM warehouse AS STRING LIKE "main" DESCRIPTION "Warehouse location" OPTIONAL
DESCRIPTION "Check real-time inventory levels for a product"
IF NOT warehouse THEN
warehouse = "main"
END IF
SET HEADER "Authorization" AS "Bearer " + GET ENV "INVENTORY_API_KEY"
response = GET "https://api.inventory.com/stock/" + sku + "?warehouse=" + warehouse
IF response.error THEN
RETURN {"available": false, "error": response.error}
END IF
WITH result
sku = sku
available = response.quantity > 0
quantity = response.quantity
warehouse = warehouse
last_updated = response.timestamp
END WITH
RETURN result
Email Sending Tool
' send-notification.bas
PARAM recipient AS EMAIL LIKE "user@example.com" DESCRIPTION "Email recipient"
PARAM subject AS STRING LIKE "Order Confirmation" DESCRIPTION "Email subject"
PARAM message AS STRING DESCRIPTION "Email body content"
DESCRIPTION "Send an email notification to a customer"
SEND MAIL recipient, subject, message, []
WITH result
sent = true
recipient = recipient
timestamp = NOW()
END WITH
RETURN result
Tool Registration Patterns
Single Tool
ADD TOOL "get-price"
Multiple Tools
ADD TOOL "get-price"
ADD TOOL "check-inventory"
ADD TOOL "lookup-customer"
ADD TOOL "calculate-discount"
ADD TOOL "send-notification"
Conditional Tools
user_role = GET SESSION "user_role"
ADD TOOL "get-price"
ADD TOOL "check-inventory"
IF user_role = "admin" THEN
ADD TOOL "update-price"
ADD TOOL "delete-product"
END IF
System Prompt Best Practices
Guide the LLM on when and how to use tools:
BEGIN SYSTEM PROMPT
You are a helpful product assistant with access to the following capabilities:
**Available Tools:**
- get-price: Look up product prices by name
- check-inventory: Check stock availability
- calculate-discount: Calculate prices with discounts
**Guidelines:**
1. When users ask about prices, use the get-price tool
2. When asked about availability, use check-inventory
3. If a tool returns an error, explain politely that the item wasn't found
4. Never mention tool names to users - just provide the information naturally
5. Combine multiple tool results when needed to answer complex questions
**Error Handling:**
- If get-price returns -1, the product doesn't exist
- If check-inventory shows quantity 0, inform the user it's out of stock
END SYSTEM PROMPT
Configuration
Configure in llm-tools.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| LLM Provider | AI provider | openai |
| LLM Model | Model for tool calls | gpt-4o |
| Tool Timeout | Max tool execution time | 30 |
| Max Tool Calls | Limit per conversation | 10 |
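Example config.csv
A matching configuration, using the values from the table above:
name,value
LLM Provider,openai
LLM Model,gpt-4o
Tool Timeout,30
Max Tool Calls,10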
Error Handling
In Tool Implementation
' get-price.bas with error handling
PARAM product AS STRING LIKE "laptop" DESCRIPTION "Product name"
DESCRIPTION "Get product price with error handling"
ON ERROR GOTO HandleError
productRecord = FIND "products.csv", "name LIKE '%" + product + "%'"
IF productRecord THEN
RETURN productRecord.price
ELSE
RETURN {"error": "not_found", "message": "Product not in catalog"}
END IF
HandleError:
RETURN {"error": "system_error", "message": "Unable to look up price"}
In System Prompt
BEGIN SYSTEM PROMPT
When tools return errors:
- "not_found": Apologize and suggest similar products
- "out_of_stock": Offer to notify when back in stock
- "system_error": Ask user to try again later
END SYSTEM PROMPT
Testing Tools
Manual Testing
' test-tools.bas
result = CALL "get-price", {"product": "laptop"}
TALK "Price result: " + JSON(result)
result = CALL "check-inventory", {"sku": "LAPTOP-001"}
TALK "Inventory result: " + JSON(result)
Conversation Testing
Test various phrasings to ensure tool invocation:
- “What’s the price of X?”
- “How much does X cost?”
- “Price for X please”
- “X price?”
- “Can you tell me what X costs?”
Best Practices
- Clear descriptions - Help the LLM understand when to use each tool
- Good examples - LIKE clauses guide parameter format
- Handle errors - Always return meaningful error responses
- Validate input - Check parameters before processing
- Log tool calls - Track usage for debugging (see the sketch after this list)
- Keep tools focused - One tool, one purpose
- Test thoroughly - Various phrasings should trigger correct tools
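For the logging practice, each tool can append a row to a usage log before returning (a minimal sketch following the audit-log pattern from the Law template; file and field names are illustrative):
' Inside a tool such as get-price.bas
WITH log_entry
timestamp = NOW()
tool = "get-price"
input = product
END WITH
SAVE "tool_usage_log.csv", log_entry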
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Tool not called | Description unclear | Improve DESCRIPTION text |
| Wrong parameters | Examples missing | Add LIKE examples |
| Tool errors | Missing validation | Add error handling |
| Slow responses | Complex tool logic | Optimize or cache |
| Tool exposed to user | System prompt issue | Add “don’t mention tools” |
Use Cases
- Product Lookup - Price, availability, specifications
- Customer Service - Order status, account info
- Calculations - Quotes, discounts, shipping
- Integrations - CRM, ERP, external APIs
- Data Access - Database queries, report generation
Related Templates
- LLM Server - Headless API with LLM processing
- CRM - CRM with many tool examples
- Store - E-commerce with product tools
- Talk to Data - Data query tools
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
API Client Template (api-client.gbai)
A General Bots template demonstrating REST API integration patterns for connecting to external services and data sources.
Overview
The API Client template provides examples and patterns for integrating General Bots with external REST APIs. It includes examples for weather services, Microsoft Partner Center integration, and general HTTP request patterns that can be adapted for any API.
Features
- REST API Integration - GET, POST, PUT, DELETE request patterns
- Authentication - OAuth, Bearer tokens, API keys
- Header Management - Custom headers for API requirements
- Pagination Support - Handle paginated API responses
- Data Synchronization - Sync external data to local tables
- Scheduled Jobs - Automated API polling and sync
Package Structure
api-client.gbai/
├── api-client.gbdialog/
│ ├── climate.bas # Weather API example
│ └── msft-partner-center.bas # Microsoft Partner Center integration
└── api-client.gbot/
└── config.csv # Bot configuration
Scripts
| File | Description |
|---|---|
| climate.bas | Weather API tool for getting current conditions |
| msft-partner-center.bas | Full Microsoft Partner Center billing sync |
Basic API Patterns
Simple GET Request
' Get data from an API
response = GET "https://api.example.com/data"
IF response THEN
TALK "Data received: " + response.value
ELSE
TALK "Failed to fetch data"
END IF
POST Request with Body
' Send data to an API
SET HEADER "Content-Type" AS "application/json"
payload = {"name": "John", "email": "john@example.com"}
response = POST "https://api.example.com/users", payload
IF response.id THEN
TALK "User created with ID: " + response.id
END IF
PUT Request for Updates
' Update existing resource
SET HEADER "Content-Type" AS "application/json"
updates = {"status": "active", "role": "admin"}
response = PUT "https://api.example.com/users/123", updates
TALK "User updated: " + response.status
DELETE Request
' Delete a resource
response = DELETE "https://api.example.com/users/123"
IF response.deleted THEN
TALK "User deleted successfully"
END IF
Authentication Patterns
API Key Authentication
SET HEADER "X-API-Key" AS "your-api-key-here"
response = GET "https://api.example.com/protected-resource"
Bearer Token Authentication
SET HEADER "Authorization" AS "Bearer " + access_token
response = GET "https://api.example.com/user/profile"
OAuth 2.0 Token Exchange
' Get OAuth token
SET HEADER "Content-Type" AS "application/x-www-form-urlencoded"
tokenResponse = POST "https://auth.example.com/oauth/token",
"grant_type=client_credentials&client_id=" + clientId +
"&client_secret=" + clientSecret
access_token = tokenResponse.access_token
' Use token for API calls
SET HEADER "Authorization" AS "Bearer " + access_token
data = GET "https://api.example.com/resources"
Basic Authentication
credentials = BASE64(username + ":" + password)
SET HEADER "Authorization" AS "Basic " + credentials
response = GET "https://api.example.com/secure-endpoint"
Weather API Example
The climate.bas tool demonstrates a simple API integration:
PARAM location AS STRING LIKE "San Francisco, CA" DESCRIPTION "The city and state, e.g. San Francisco, CA"
PARAM unit AS STRING LIKE "celsius" DESCRIPTION "Temperature unit: celsius or fahrenheit"
DESCRIPTION "Get the current weather in a given location"
' Call your weather provider (the URL below is a placeholder)
response = GET "https://api.weather.com/current?location=" + location + "&unit=" + unit
RETURN response
Sample Conversation
User: What's the weather in New York?
Bot: [Calls climate tool with location="New York"]
It's currently 72°F and sunny in New York, NY.
Today's forecast:
🌡️ High: 78°F / Low: 65°F
💧 Humidity: 45%
💨 Wind: 8 mph NW
User: What about São Paulo in celsius?
Bot: [Calls climate tool with location="São Paulo", unit="celsius"]
It's currently 24°C and partly cloudy in São Paulo, Brazil.
Today's forecast:
🌡️ High: 28°C / Low: 19°C
💧 Humidity: 62%
💨 Wind: 12 km/h SE
Microsoft Partner Center Integration
The msft-partner-center.bas demonstrates a complex enterprise API integration:
Features
- OAuth token authentication with Azure AD
- Multi-resource synchronization (Customers, Subscriptions, Billing)
- Scheduled execution
- Pagination handling
- Database table management
Configuration
' Required parameters
tenantId = GET ENV "AZURE_TENANT_ID"
clientId = GET ENV "AZURE_CLIENT_ID"
clientSecret = GET ENV "AZURE_CLIENT_SECRET"
host = "https://api.partnercenter.microsoft.com"
Scheduled Sync
SET SCHEDULE "1 * * * * *" ' Run periodically
' Set required headers
SET HEADER "MS-Contract-Version" AS "v1"
SET HEADER "MS-CorrelationId" AS UUID()
SET HEADER "MS-RequestId" AS UUID()
SET HEADER "MS-PartnerCenter-Application" AS "General Bots"
SET HEADER "X-Locale" AS "en-US"
Sync Customers and Subscriptions
SET PAGE MODE "none"
customers = GET host + "/v1/customers?size=20000"
MERGE "Customers" WITH customers.items BY "Id"
FOR EACH customer IN customers.items
subs = GET host + "/v1/customers/" + customer.id + "/subscriptions"
MERGE "Subscriptions" WITH subs.items BY "Id"
END FOR
Billing Data Sync
SET PAGE MODE "auto"
billingItems = GET host + "/v1/invoices/unbilled/lineitems" +
"?provider=onetime&invoicelineitemtype=usagelineitems&currencycode=USD"
FOR EACH item IN billingItems
SAVE "Billing", item.alternateId, item.customerId, item.productName,
item.quantity, item.unitPrice, item.subtotal, item.chargeStartDate
END FOR
Table Definitions
TABLE Billing
CustomerId Customers
ResourceGroup string(200)
CustomerName string(400)
ProductName string(400)
Quantity double
UnitPrice double
Subtotal double
ChargeStartDate date
ChargeEndDate date
END TABLE
TABLE Customers
TenantId guid
CompanyName string(100)
Id guid
END TABLE
TABLE Subscriptions
CustomerId Customers
Id guid
OfferName string(50)
END TABLE
Custom API Integration
Creating Your Own API Client
' my-api-client.bas
PARAM resource AS STRING LIKE "users" DESCRIPTION "API resource to fetch"
PARAM filters AS STRING LIKE "status=active" DESCRIPTION "Query filters" OPTIONAL
DESCRIPTION "Fetch data from custom API"
' Configuration
api_base = GET ENV "MY_API_BASE_URL"
api_key = GET ENV "MY_API_KEY"
' Set authentication
SET HEADER "Authorization" AS "Bearer " + api_key
SET HEADER "Content-Type" AS "application/json"
' Build URL
url = api_base + "/" + resource
IF filters THEN
url = url + "?" + filters
END IF
' Make request
response = GET url
IF response.error THEN
RETURN {"success": false, "error": response.error}
END IF
RETURN {"success": true, "data": response.data, "count": UBOUND(response.data)}
Handling Pagination
' paginated-fetch.bas
PARAM endpoint AS STRING DESCRIPTION "API endpoint"
DESCRIPTION "Fetch all pages from a paginated API"
all_results = []
page = 1
has_more = true
DO WHILE has_more
response = GET endpoint + "?page=" + page + "&per_page=100"
IF response.data THEN
all_results = MERGE all_results, response.data
IF UBOUND(response.data) < 100 THEN
has_more = false
ELSE
page = page + 1
END IF
ELSE
has_more = false
END IF
LOOP
RETURN all_results
Error Handling with Retry
' api-with-retry.bas
PARAM url AS STRING DESCRIPTION "API URL to call"
PARAM max_retries AS INTEGER LIKE 3 DESCRIPTION "Maximum retry attempts"
DESCRIPTION "API call with automatic retry on failure"
retries = 0
success = false
DO WHILE retries < max_retries AND NOT success
response = GET url
IF response.error THEN
retries = retries + 1
WAIT retries * 2 ' Back off longer after each retry
ELSE
success = true
END IF
LOOP
IF success THEN
RETURN response
ELSE
RETURN {"error": "Max retries exceeded", "last_error": response.error}
END IF
Configuration
Configure in api-client.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| API Base URL | Default API endpoint | https://api.example.com |
| API Timeout | Request timeout in seconds | 30 |
| Retry Count | Number of retry attempts | 3 |
| Log Requests | Enable request logging | true |
Environment Variables
Store sensitive values as environment variables:
MY_API_KEY=your-api-key
MY_API_SECRET=your-secret
AZURE_TENANT_ID=your-tenant-id
AZURE_CLIENT_ID=your-client-id
AZURE_CLIENT_SECRET=your-client-secret
Access in BASIC:
api_key = GET ENV "MY_API_KEY"
Common HTTP Headers
| Header | Purpose | Example |
|---|---|---|
| Content-Type | Request body format | application/json |
| Accept | Expected response format | application/json |
| Authorization | Authentication | Bearer token |
| X-API-Key | API key auth | your-key |
| User-Agent | Client identification | GeneralBots/1.0 |
Best Practices
- Secure credentials - Never hardcode API keys; use environment variables
- Handle errors - Always check for error responses
- Rate limiting - Respect API rate limits with delays
- Pagination - Handle paginated responses properly
- Logging - Log API calls for debugging
- Timeouts - Set appropriate timeout values
- Retries - Implement retry logic for transient failures
- Caching - Cache responses when appropriate (see the sketch below)
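For the caching practice, bot memory can hold a recent response so repeated calls skip the network. A minimal sketch (using the GET BOT MEMORY / SET BOT MEMORY verbs from the Crawler template; the key name is illustrative):
' cached-fetch.bas - sketch; note this simple cache never expires
cached = GET BOT MEMORY "api_cache"
IF cached THEN
RETURN cached
END IF
response = GET "https://api.example.com/data"
SET BOT MEMORY "api_cache", response
RETURN response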
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid credentials | Check API key/token |
| 403 Forbidden | Missing permissions | Verify API access rights |
| 404 Not Found | Wrong endpoint | Verify URL path |
| 429 Too Many Requests | Rate limited | Add delays between requests |
| 500 Server Error | API issue | Retry with backoff |
| Timeout | Slow API | Increase timeout setting |
Use Cases
- Data Synchronization - Sync data from external systems
- Service Integration - Connect to SaaS platforms
- Automation - Automate cross-system workflows
- Monitoring - Poll external systems for changes
- Reporting - Aggregate data from multiple APIs
Related Templates
- Public APIs - Pre-built integrations for public APIs
- Bling ERP - ERP API integration example
- LLM Server - Building your own API endpoints
- CRM - CRM with external API sync
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
Platform Analytics Template (analytics.gbai)
A General Bots template for platform metrics, performance monitoring, and custom report generation.
Overview
The Analytics template provides comprehensive platform analytics capabilities, allowing administrators and stakeholders to monitor usage, track performance, analyze trends, and generate custom reports through conversational AI.
Features
- Platform Overview - Key metrics summary with trend analysis
- Message Analytics - Conversation statistics by channel and bot
- User Analytics - Active users, sessions, and engagement
- Performance Metrics - Response times and throughput monitoring
- LLM Usage Tracking - Token consumption and cost analysis
- Storage Analytics - Disk usage and file statistics
- Error Analysis - Error patterns and troubleshooting insights
- Custom Report Generator - Build and schedule personalized reports
- AI-Powered Insights - Automatic trend analysis and recommendations
Package Structure
analytics.gbai/
├── analytics.gbdialog/
│ ├── start.bas # Main menu and navigation
│ ├── platform-overview.bas # Key metrics dashboard
│ └── custom-report.bas # Custom report builder
└── analytics.gbot/
└── config.csv # Bot configuration
Scripts
| File | Description |
|---|---|
| start.bas | Main entry point with analytics menu options |
| platform-overview.bas | Platform-wide metrics with trends and export |
| custom-report.bas | Interactive custom report generator with scheduling |
Available Analytics
1. Platform Overview
High-level metrics summary including:
- Total messages and trend percentage
- Average active sessions
- Response time performance
- Error rates
- LLM token consumption
2. Message Analytics
- Message volume by time period
- Breakdown by channel (WhatsApp, Web, API)
- Bot-specific conversation stats
- Peak usage times
3. User Analytics
- Active users (daily, weekly, monthly)
- Session duration analysis
- User retention rates
- Geographic distribution
4. Performance Metrics
- Average response time
- Response time distribution
- Throughput (messages per second)
- Latency percentiles (p50, p95, p99)
5. LLM Usage
- Token consumption by model
- Cost analysis
- Usage patterns
- Optimization recommendations
6. Storage Analytics
- Disk usage trends
- File count and types
- Storage by bot/package
- Cleanup recommendations
7. Error Analysis
- Error frequency and patterns
- Error categorization
- Root cause analysis
- Resolution suggestions
8. Custom Reports
- Select metrics to include
- Choose time ranges
- Set grouping intervals
- Export in multiple formats
- Schedule automatic delivery
Sample Conversation
User: Show me analytics
Bot: Welcome to the Analytics Center. I can help you
understand your platform metrics and generate reports.
What would you like to analyze?
1. Platform Overview - Key metrics summary
2. Message Analytics - Conversation statistics
3. User Analytics - Active users and sessions
4. Performance Metrics - Response times
5. LLM Usage - Token consumption and costs
6. Storage Analytics - Disk usage
7. Error Analysis - Error patterns
8. Generate Custom Report
User: 1
Bot: Generating platform overview...
Select time range (1h, 6h, 24h, 7d, 30d):
User: 24h
Bot: 📊 Platform Overview for Last 24 Hours
════════════════════════════════════════
📨 Messages
Total: 15,432
Trend: ▲ +12.5% vs previous period
👥 Sessions
Average Active: 234
Peak: 412 (at 2:30 PM)
Trend: ▲ +5.2%
⚡ Performance
Avg Response Time: 145 ms
P95: 320 ms
P99: 580 ms
❌ Errors
Total: 23
Error Rate: 0.15%
Most Common: Timeout (12)
🤖 LLM Usage
Total Tokens: 1,245,678
Estimated Cost: $18.45
Avg per Message: 81 tokens
Would you like to:
(D)etail, (E)xport report, (A)lerts, (B)ack
User: E
Bot: Export format (PDF, CSV, XLSX):
User: PDF
Bot: ✅ Report exported: platform_overview_20240515.pdf
The file is available in your Drive.
Download link: [secure link]
Custom Report Generation
User: I need a custom report
Bot: Custom Report Generator
What would you like to name this report?
User: Weekly Performance Summary
Bot: Select the time range:
1. Last Hour
2. Last 24 Hours
3. Last 7 Days
4. Last 30 Days
5. Custom Range
User: 3
Bot: Select metrics to include (comma-separated):
1. Message Volume
2. Active Sessions
3. Response Time
4. LLM Token Usage
5. Error Rate
6. Storage Usage
7. API Calls
8. User Activity
9. Bot Performance
10. All Metrics
User: 1,3,4,5
Bot: Select grouping interval:
1. Hourly
2. Daily
3. Weekly
User: 2
Bot: Generating your custom report...
✅ Report generated successfully!
📋 Weekly Performance Summary
📅 Period: May 8-15, 2025
📊 Metrics: Messages, Response Time, LLM Usage, Errors
📈 Grouping: Daily
Download: [secure link]
Would you like to schedule this report?
User: Yes, weekly
Bot: ✅ Report scheduled!
Frequency: Weekly on Monday at 8:00 AM
Delivery: Email to your address
You can manage scheduled reports anytime.
Metrics Query System
The template uses a metrics query system:
' Query metrics for a time range
messages = QUERY METRICS "messages" FOR "24h"
totalMessages = SUM(messages, "count")
' Query with grouping
hourlyMessages = QUERY METRICS "messages" FOR "24h" BY "1h"
' Query with offset for comparison
prevMessages = QUERY METRICS "messages" FOR "24h" OFFSET 1
trend = ((totalMessages - SUM(prevMessages, "count")) / SUM(prevMessages, "count")) * 100
Export Formats
Reports can be exported in multiple formats:
| Format | Description |
|---|---|
| PDF | Formatted report with charts |
| XLSX | Excel spreadsheet |
| CSV | Raw data export |
| JSON | Structured data format |
Scheduled Reports
Configure automatic report delivery:
| Schedule | Cron Expression | Description |
|---|---|---|
| Daily | 0 8 * * * | Every day at 8 AM |
| Weekly | 0 8 * * 1 | Monday at 8 AM |
| Monthly | 0 8 1 * * | 1st of month at 8 AM |
SET SCHEDULE "0 8 * * 1", "generate-scheduled-report.bas"
Configuration
Configure in analytics.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| Default Time Range | Default period for queries | 7d |
| Data Retention Days | How long to keep metrics | 90 |
| Admin Email | Email for scheduled reports | admin@company.com |
| Enable AI Insights | Auto-generate insights | true |
| Export Path | Report storage location | /reports/ |
Customization
Adding Custom Metrics
' Track custom events
INSERT INTO "custom_metrics" VALUES {
"name": "feature_usage",
"value": 1,
"tags": {"feature": "chat", "plan": "pro"},
"timestamp": NOW()
}
' Query custom metrics
usage = QUERY METRICS "feature_usage" FOR "30d" WHERE tags.feature = "chat"
Custom Dashboard Widgets
' Add to start.bas
TALK "Custom Metrics:"
TALK "9. Feature Usage"
TALK "10. Revenue Analytics"
TALK "11. Customer Health Score"
' Handle custom options
CASE 9
CALL "feature-usage.bas"
CASE 10
CALL "revenue-analytics.bas"
AI-Powered Insights
' Generate AI insights from metrics
SET CONTEXT "You are an analytics expert. Generate executive insights."
insights = LLM "Analyze this data and provide 3-5 key insights: " + JSON(report_data)
Integration Examples
With Alerting
' Set up alerts based on metrics
errorRate = SUM(errors, "count") / SUM(messages, "count") * 100
IF errorRate > 5 THEN
SEND MAIL admin_email, "High Error Rate Alert",
"Error rate is " + errorRate + "%, above 5% threshold.", []
END IF
With External BI Tools
' Export data for external tools
data = QUERY METRICS "messages" FOR "30d" BY "1d"
WRITE "analytics_export.csv", CSV(data)
' Or send to webhook
POST "https://bi-tool.example.com/webhook", data
Best Practices
- Set appropriate time ranges - Don’t query more data than needed
- Use caching - Cache expensive queries
- Schedule off-peak - Run heavy reports during low traffic
- Monitor the monitor - Track analytics query performance
- Archive old data - Move historical data to cold storage
- Validate insights - Review AI-generated insights for accuracy
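As a sketch of the caching advice, the expensive 30-day query below is stored in bot memory and reused for an hour. GET BOT MEMORY is assumed here as the read counterpart of SET BOT MEMORY (shown in the Reminder template), and the "hour" unit for DATEADD is also an assumption.
' Reuse a cached 30-day query for up to one hour (sketch)
cached = GET BOT MEMORY "monthly_messages"
expires = GET BOT MEMORY "monthly_messages_expires"
IF NOT cached OR expires < NOW() THEN
    cached = QUERY METRICS "messages" FOR "30d" BY "1d"
    SET BOT MEMORY "monthly_messages", cached
    ' DATEADD with an "hour" unit is assumed; the templates only show "day"
    SET BOT MEMORY "monthly_messages_expires", DATEADD(NOW(), 1, "hour")
END IF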
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Slow queries | Large time range | Reduce range or add filters |
| Missing data | Metrics not collected | Verify instrumentation |
| Export fails | Large report size | Export in chunks |
| Stale data | Cache not refreshed | Clear cache |
| Incorrect trends | Insufficient history | Wait for more data |
Use Cases
- Operations Teams - Monitor platform health and performance
- Product Managers - Track feature usage and engagement
- Executives - High-level KPI dashboards
- Support Teams - Identify error patterns
- Finance - LLM cost tracking and optimization
Data Privacy
- Analytics data is aggregated and anonymized
- User-level data requires appropriate permissions
- Respect data retention policies
- Comply with GDPR/CCPA as applicable
Related Templates
- BI Template - Business Intelligence reporting
- Talk to Data - Natural language data queries
- CRM - CRM analytics and pipeline reports
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
Office Automation Template (office.gbai)
A General Bots template for role-based office productivity with department-specific knowledge bases and context-aware assistance.
Overview
The Office template provides a multi-role office assistant that adapts its behavior, knowledge, and suggestions based on the user’s role. Whether you’re a manager, developer, customer support agent, HR professional, or finance team member, the bot tailors its responses and available resources accordingly.
Features
- Role-Based Access - Different capabilities per user role
- Dynamic Knowledge Bases - Automatically loads relevant KB per role
- Context-Aware Responses - AI behavior adapts to role requirements
- Custom Suggestions - Role-specific quick actions
- Tool Integration - Calendar, tasks, documents, meetings, notes
- Persistent Role Memory - Remembers user role across sessions
Package Structure
office.gbai/
├── office.gbdialog/
│ ├── start.bas # Role selection and configuration
│ ├── api-integration.bas # External API connections
│ ├── data-sync.bas # Data synchronization
│ └── document-processor.bas # Document handling
├── office.gbkb/ # Knowledge bases by role
│ ├── management/
│ ├── documentation/
│ ├── products/
│ ├── hr-policies/
│ └── budgets/
└── office.gbot/
└── config.csv # Bot configuration
Scripts
| File | Description |
|---|---|
| start.bas | Main entry with role selection and context setup |
| api-integration.bas | Connect to external office systems |
| data-sync.bas | Sync data across office applications |
| document-processor.bas | Process and manage documents |
Supported Roles
Manager
- Context: Executive assistant for reports, team management, strategic decisions
- Knowledge Bases: management, reports, team-policies
- Suggestions: Team performance, Generate report, Schedule meeting
Developer
- Context: Technical assistant for documentation, APIs, coding best practices
- Knowledge Bases: documentation, apis, coding-standards
- Suggestions: Search documentation, API reference, Code review checklist
Customer Support
- Context: Customer service assistant focused on resolving issues
- Knowledge Bases: products, support, faq
- Suggestions: Track my order, Product information, Contact support
HR
- Context: HR assistant for employee matters, policies, benefits
- Knowledge Bases: hr-policies, benefits, onboarding
- Suggestions: Employee handbook, Benefits overview, New hire checklist
Finance
- Context: Finance assistant for budgets, expenses, financial reports
- Knowledge Bases: budgets, expenses, financial-reports
- Suggestions: Expense policy, Budget status, Approval workflow
Available Tools
| Tool | Description |
|---|---|
| calendar | Schedule and manage appointments |
| tasks | Create and track to-do items |
| documents | Access and manage documents |
| meetings | Schedule and join meetings |
| notes | Create and search notes |
Sample Conversation
Role Selection
Bot: Welcome to the Office Assistant!
Please select your role:
User: manager
Bot: Welcome, Manager! I can help with reports, team
management, and policies.
Quick Actions:
• Team performance
• Generate report
• Schedule meeting
Manager Interaction
User: I need to review my team's performance
Bot: I'll help you review team performance. Here's a summary:
📊 Team Performance Dashboard
Active Projects: 5
Completed This Month: 3
Top Performers:
• Sarah Chen: 12 tasks completed
• Mike Johnson: 10 tasks completed
Pending Approvals: 2
Would you like a detailed report or to schedule 1:1s?
Developer Interaction
User: How do I implement pagination in our API?
Bot: Based on our API documentation, here's the pagination pattern:
GET /api/v1/users?page=1&per_page=20
Response includes:
• data: Array of results
• meta.total: Total count
• meta.page: Current page
• links.next: Next page URL
Our coding standards recommend cursor-based pagination
for large datasets.
📄 Source: docs/api/pagination.md
Customer Support Interaction
User: Customer asking about order #12345
Bot: I found order #12345. Here are the details:
📦 Order Status: Shipped
📅 Shipped: May 14, 2025
🚚 Carrier: FedEx
📍 Tracking: 7891234567890
Expected delivery: May 17, 2025
Would you like to:
• Send tracking info to customer
• Create a support ticket
• Escalate to supervisor
Configuration
Configure in office.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| Default Role | Role when none selected | customer |
| Remember Role | Persist role across sessions | true |
| Role Timeout | Hours before re-asking role | 24 |
| Theme Color | UI accent color | blue |
How It Works
Role Detection and Setup
ADD TOOL "calendar"
ADD TOOL "tasks"
ADD TOOL "documents"
ADD TOOL "meetings"
ADD TOOL "notes"
role = GET role
IF NOT role THEN
TALK "Welcome to the Office Assistant!"
TALK "Please select your role:"
HEAR role AS NAME
role = LOWER(role)
SET role, role
END IF
SWITCH role
CASE "manager"
SET CONTEXT "You are an executive assistant helping managers..."
USE KB "management"
USE KB "reports"
USE KB "team-policies"
TALK "Welcome, Manager! I can help with reports and team management."
CASE "developer"
SET CONTEXT "You are a technical assistant helping developers..."
USE KB "documentation"
USE KB "apis"
USE KB "coding-standards"
TALK "Welcome, Developer! I can help with documentation and APIs."
' ... more roles
END SWITCH
Dynamic Suggestions
CLEAR SUGGESTIONS
SWITCH role
CASE "manager"
ADD SUGGESTION "performance" AS "Team performance"
ADD SUGGESTION "report" AS "Generate report"
ADD SUGGESTION "meeting" AS "Schedule meeting"
CASE "developer"
ADD SUGGESTION "docs" AS "Search documentation"
ADD SUGGESTION "api" AS "API reference"
ADD SUGGESTION "review" AS "Code review checklist"
CASE "customer"
ADD SUGGESTION "order" AS "Track my order"
ADD SUGGESTION "product" AS "Product information"
ADD SUGGESTION "support" AS "Contact support"
END SWITCH
Customization
Adding New Roles
Extend the start.bas file:
CASE "sales"
SET CONTEXT "You are a sales assistant helping with leads and deals."
USE KB "sales-playbook"
USE KB "pricing"
USE KB "competitors"
TALK "Welcome, Sales! I can help with leads, pricing, and proposals."
CLEAR SUGGESTIONS
ADD SUGGESTION "leads" AS "View my leads"
ADD SUGGESTION "quote" AS "Generate quote"
ADD SUGGESTION "pipeline" AS "Pipeline status"
Custom Knowledge Bases
Create role-specific knowledge bases in office.gbkb/:
office.gbkb/
├── sales-playbook/
│ ├── objection-handling.md
│ ├── pricing-guide.md
│ └── competitor-comparison.md
Role-Specific Tools
Register different tools per role:
CASE "manager"
ADD TOOL "calendar"
ADD TOOL "tasks"
ADD TOOL "team-report"
ADD TOOL "approve-request"
CASE "developer"
ADD TOOL "search-docs"
ADD TOOL "api-tester"
ADD TOOL "code-review"
Integration Examples
With Calendar
' Schedule meeting for manager
IF role = "manager" THEN
TALK "I'll schedule the team meeting."
CREATE CALENDAR EVENT "Team Standup", tomorrow + " 9:00 AM", 30
END IF
With Document System
' Generate document based on role
SWITCH role
CASE "hr"
template = "offer-letter-template.docx"
CASE "sales"
template = "proposal-template.docx"
CASE "finance"
template = "budget-template.xlsx"
END SWITCH
document = GENERATE FROM TEMPLATE template WITH data
With Task Management
' Create role-appropriate tasks
IF role = "manager" THEN
CREATE TASK "Review Q4 budget", "high", manager_email
CREATE TASK "Approve team PTO requests", "medium", manager_email
END IF
Best Practices
- Clear role definitions - Define clear boundaries for each role
- Relevant suggestions - Keep quick actions useful for each role
- Appropriate KBs - Only load necessary knowledge bases
- Security awareness - Restrict sensitive data by role
- Regular updates - Keep knowledge bases current
- Feedback loops - Monitor which features each role uses
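For the feedback-loop advice, a small sketch that logs feature usage per role to a CSV; the role_usage.csv file and its fields are illustrative, not part of the template.
' Log which feature each role uses (role_usage.csv is an assumed file)
WITH usage
    user_role = role
    feature = "generate-report"
    used_at = NOW()
END WITH
SAVE "role_usage.csv", usage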
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Wrong KB loaded | Role not set correctly | Check role detection logic |
| Missing suggestions | Role not in switch | Add role to all switch blocks |
| Context confusion | Multiple roles used | Clear context between role changes |
| Slow responses | Too many KBs loaded | Load only essential KBs per role |
Use Cases
- Corporate Offices - Multi-department support
- Startups - Flexible role-based assistance
- Remote Teams - Unified office assistant
- Enterprise - Department-specific knowledge management
Related Templates
- Contacts - Contact management
- Reminder - Task and reminder management
- CRM - Full CRM for sales roles
- Analytics - Platform analytics for managers
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
Reminder Template (reminder.gbai)
A General Bots template for managing personal and team reminders with multi-channel notifications.
Overview
The Reminder template provides a complete reminder management system with natural language scheduling, multiple notification channels, and snooze capabilities. Users can create, view, manage, and receive reminders through conversational AI.
Features
- Natural Language Scheduling - Create reminders using everyday language
- Multi-Channel Notifications - Email, SMS, or chat notifications
- Reminder Management - List, snooze, and delete reminders
- Scheduled Execution - Background job checks and sends due reminders
- Smart Date Parsing - Understands “tomorrow”, “next week”, “in 2 hours”
- Persistent Storage - Reminders saved to CSV for reliability
Package Structure
reminder.gbai/
├── reminder.gbdata/ # Data storage
│ └── reminders.csv # Reminder records
├── reminder.gbdialog/
│ ├── start.bas # Main entry and tool registration
│ ├── add-reminder.bas # Create new reminders
│ └── reminder.bas # Scheduled job to send reminders
└── reminder.gbot/
└── config.csv # Bot configuration
Scripts
| File | Description |
|---|---|
| start.bas | Registers tools, sets up suggestions, configures context |
| add-reminder.bas | Creates new reminder records with notifications |
| reminder.bas | Scheduled job that checks and sends due reminders |
Available Tools
| Tool | Description |
|---|---|
| add-reminder | Create a new reminder |
| list-reminders | View all pending reminders |
| delete-reminder | Remove a reminder |
| snooze-reminder | Postpone a reminder |
Data Schema
Reminders Table
| Field | Type | Description |
|---|---|---|
| id | String | Unique identifier (REM-YYYYMMDD-XXXX) |
| remindAt | DateTime | When to send the reminder |
| message | String | Reminder content/subject |
| notifyBy | String | Notification channel (email, sms, chat) |
| email | Email | User’s email address |
| phone | Phone | User’s phone number |
| created | DateTime | Creation timestamp |
| status | String | pending, sent, snoozed, deleted |
Sample Conversation
Creating Reminders
User: Remind me to call John tomorrow at 9am
Bot: ✅ Reminder set: Call John
When: Tomorrow at 9:00 AM
Notification: chat
User: Set a reminder for next Monday at 2pm to submit the report
Bot: ✅ Reminder set: Submit the report
When: Monday, May 19 at 2:00 PM
Notification: chat
User: In 30 minutes remind me to take a break
Bot: ✅ Reminder set: Take a break
When: in 30 minutes (2:45 PM)
Notification: chat
Viewing Reminders
User: Show my reminders
Bot: 📋 Your Reminders:
1. Call John
📅 Tomorrow at 9:00 AM
🔔 chat
2. Submit the report
📅 Monday, May 19 at 2:00 PM
🔔 chat
3. Take a break
📅 Today at 2:45 PM
🔔 chat
Managing Reminders
User: Snooze my call reminder for 1 hour
Bot: ✅ Reminder snoozed. New time: Tomorrow at 10:00 AM
User: Delete the report reminder
Bot: ✅ Reminder deleted: Submit the report
How It Works
Tool Registration
ADD TOOL "add-reminder"
ADD TOOL "list-reminders"
ADD TOOL "delete-reminder"
ADD TOOL "snooze-reminder"
USE KB "reminder.gbkb"
CLEAR SUGGESTIONS
ADD SUGGESTION "add" AS "Add a reminder"
ADD SUGGESTION "list" AS "View my reminders"
ADD SUGGESTION "today" AS "Today's reminders"
ADD SUGGESTION "delete" AS "Delete a reminder"
SET CONTEXT "reminders" AS "You are a reminder assistant helping users manage their tasks and reminders."
BEGIN SYSTEM PROMPT
You are a reminder AI assistant.
When creating reminders:
- Parse natural language dates (tomorrow, next week, in 2 hours)
- Confirm the reminder details before saving
- Suggest appropriate times if not specified
When listing reminders:
- Show upcoming reminders first
- Highlight overdue items
- Group by date when appropriate
Be concise and helpful.
END SYSTEM PROMPT
Creating Reminders
' add-reminder.bas
PARAM when AS STRING LIKE "tomorrow at 9am" DESCRIPTION "When to send the reminder"
PARAM subject AS STRING LIKE "Call John" DESCRIPTION "What to be reminded about"
PARAM notify AS STRING LIKE "email" DESCRIPTION "Notification method" OPTIONAL
DESCRIPTION "Create a reminder for a specific date and time"
IF NOT notify THEN
notify = "chat"
END IF
reminderid = "REM-" + FORMAT(NOW(), "YYYYMMDD") + "-" + FORMAT(RANDOM(1000, 9999))
useremail = GET "session.user_email"
userphone = GET "session.user_phone"
WITH reminder
id = reminderid
remindAt = when
message = subject
notifyBy = notify
email = useremail
phone = userphone
created = NOW()
status = "pending"
END WITH
SAVE "reminders.csv", reminder
SET BOT MEMORY "last_reminder", reminderid
TALK "Reminder set: " + subject
TALK "When: " + when
TALK "Notification: " + notify
RETURN reminderid
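The list, snooze, and delete tools follow the same pattern as add-reminder.bas. A possible sketch of snooze-reminder.bas, which the tools table lists but the template text does not show; field names follow the data schema above, and the "minute" unit for DATEADD is an assumption.
' snooze-reminder.bas (illustrative sketch)
PARAM reminder_id AS STRING LIKE "REM-20240515-1234" DESCRIPTION "Reminder to snooze"
PARAM minutes AS INTEGER LIKE 15 DESCRIPTION "Minutes to postpone" OPTIONAL
DESCRIPTION "Postpone a pending reminder"
IF NOT minutes THEN
    minutes = 15 ' Matches the Snooze Duration default in config.csv
END IF
reminder = FIND "reminders.csv", "id = '" + reminder_id + "'"
IF reminder THEN
    reminder.remindAt = DATEADD(reminder.remindAt, minutes, "minute")
    reminder.status = "snoozed"
    UPDATE "reminders.csv", reminder
    TALK "✅ Reminder snoozed. New time: " + reminder.remindAt
ELSE
    TALK "Reminder not found."
END IF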
Scheduled Reminder Delivery
' reminder.bas - runs on schedule
SET SCHEDULE "* * * * *", "reminder.bas"
' Find pending reminders that are due now
due = FIND "reminders.csv", "status = 'pending' AND remindAt <= '" + NOW() + "'"
IF due THEN
    TALK TO due.email, "Reminder: " + due.message
    due.status = "sent"
    UPDATE "reminders.csv", due
END IF
Notification Channels
| Channel | Delivery Method |
|---|---|
| chat | Message in bot conversation |
| email | Email to user’s address |
| sms | SMS to user’s phone |
Configuration
Configure in reminder.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| Default Notification | Default channel | chat |
| Snooze Duration | Default snooze time | 15 (minutes) |
| Check Interval | How often to check | 1 (minute) |
| Timezone | User timezone | America/New_York |
| Max Reminders | Limit per user | 100 |
Customization
Custom Notification Channels
Add new notification types:
' In add-reminder.bas
SWITCH notify
CASE "chat"
' Default chat notification
CASE "email"
SEND MAIL email, "Reminder: " + subject, message, []
CASE "sms"
SEND SMS phone, "Reminder: " + subject
CASE "slack"
POST "https://hooks.slack.com/...", {"text": "Reminder: " + subject}
CASE "teams"
POST "https://outlook.office.com/webhook/...", {"text": subject}
END SWITCH
Recurring Reminders
Add support for recurring reminders:
' add-recurring-reminder.bas
PARAM subject AS STRING DESCRIPTION "What to remind about"
PARAM schedule AS STRING LIKE "daily" DESCRIPTION "Frequency: daily, weekly, monthly"
PARAM time AS STRING LIKE "9:00 AM" DESCRIPTION "Time of day"
DESCRIPTION "Create a recurring reminder"
SET SCHEDULE cron_expression, "send-recurring.bas"
WITH reminder
id = "REC-" + FORMAT(GUID())
message = subject
frequency = schedule
remindTime = time
status = "active"
END WITH
SAVE "recurring_reminders.csv", reminder
Priority Levels
Add priority support:
PARAM priority AS STRING LIKE "high" DESCRIPTION "Priority: low, medium, high" OPTIONAL
IF priority = "high" THEN
' Send via multiple channels
SEND MAIL email, "🔴 URGENT: " + subject, message, []
SEND SMS phone, "URGENT: " + subject
END IF
Integration Examples
With Calendar
' Sync reminder to calendar
IF reminder.notifyBy = "calendar" THEN
CREATE CALENDAR EVENT reminder.message, reminder.remindAt, 15
END IF
With Tasks
' Convert reminder to task when due
IF reminder.status = "sent" THEN
CREATE TASK reminder.message, "medium", user_email
END IF
With CRM
' Add follow-up reminder from CRM
PARAM contact_id AS STRING DESCRIPTION "Contact to follow up with"
PARAM days AS INTEGER LIKE 7 DESCRIPTION "Days until follow-up"
contact = FIND "contacts.csv", "id = " + contact_id
WITH reminder
id = FORMAT(GUID())
message = "Follow up with " + contact.name
remindAt = DATEADD(NOW(), days, "day")
notifyBy = "chat"
relatedTo = contact_id
END WITH
SAVE "reminders.csv", reminder
Date Parsing Examples
The LLM understands various date formats:
| Input | Parsed As |
|---|---|
| “tomorrow” | Next day, 9:00 AM |
| “tomorrow at 3pm” | Next day, 3:00 PM |
| “in 2 hours” | Current time + 2 hours |
| “next Monday” | Coming Monday, 9:00 AM |
| “end of day” | Today, 5:00 PM |
| “next week” | 7 days from now |
| “January 15” | Jan 15, current year |
| “1/15 at noon” | Jan 15, 12:00 PM |
Best Practices
- Be specific - Include enough detail in reminder messages
- Set appropriate times - Don’t set reminders for odd hours
- Use the right channel - Critical reminders via multiple channels
- Clean up - Delete completed reminders regularly
- Review regularly - Check reminder list to stay organized
- Test notifications - Verify each channel works before relying on it
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Reminder not sent | Scheduler not running | Verify cron job is active |
| Wrong time | Timezone mismatch | Configure correct timezone |
| No notification | Missing contact info | Ensure email/phone is set |
| Duplicate reminders | Created multiple times | Check for existing before adding |
| Past date accepted | No validation | Add date validation logic |
Use Cases
- Personal Productivity - Don’t forget important tasks
- Team Coordination - Remind team members of deadlines
- Customer Follow-ups - Schedule sales and support follow-ups
- Meeting Prep - Get reminded before meetings
- Health & Wellness - Regular break and wellness reminders
Related Templates
- Office - Office productivity with task management
- CRM - CRM with follow-up reminders
- Contacts - Contact management with activity tracking
- Marketing - Marketing with scheduled broadcasts
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
Sales CRM Template (crm.gbai)
A comprehensive General Bots template for sales customer relationship management with lead tracking, opportunity management, and sales pipeline automation.
Overview
The CRM template provides a full-featured sales CRM system with conversational AI capabilities. It enables sales teams to manage leads, track opportunities through the pipeline, generate quotes, send proposals, and forecast revenue—all through natural conversation or automated workflows.
Features
- Lead Management - Capture, qualify, convert, and nurture leads
- Opportunity Pipeline - Track deals through customizable stages
- Account Management - Manage customer accounts and contacts
- Activity Tracking - Log calls, emails, meetings, and tasks
- Quote Generation - Create and send professional quotes
- Proposal Automation - Generate and deliver sales proposals
- Sales Forecasting - Pipeline analysis and revenue projections
- Email Integration - Receive and process emails automatically
- Sentiment Analysis - AI-powered customer sentiment tracking
- Data Enrichment - Automatic lead data enhancement
Package Structure
crm.gbai/
├── crm.gbdialog/
│ ├── lead-management.bas # Lead lifecycle management
│ ├── opportunity-management.bas # Opportunity pipeline
│ ├── account-management.bas # Account/company management
│ ├── activity-tracking.bas # Activity logging
│ ├── case-management.bas # Support case handling
│ ├── analyze-customer-sentiment.bas # AI sentiment analysis
│ ├── data-enrichment.bas # Lead data enhancement
│ ├── send-proposal.bas # Proposal generation
│ ├── create-lead-from-draft.bas # Email to lead conversion
│ ├── crm-jobs.bas # Scheduled background jobs
│ └── tables.bas # Database schema definitions
└── crm.gbot/
└── config.csv # Bot configuration
Scripts
| File | Description |
|---|---|
| lead-management.bas | Complete lead lifecycle: capture, qualify, convert, follow-up, nurture |
| opportunity-management.bas | Pipeline stages, quotes, products, forecasting |
| account-management.bas | Account and contact management |
| activity-tracking.bas | Log and track all sales activities |
| case-management.bas | Customer support case handling |
| analyze-customer-sentiment.bas | AI-powered sentiment analysis |
| data-enrichment.bas | Enrich leads with external data |
| send-proposal.bas | Generate and send proposals |
| on-receive-email.bas | Process incoming emails |
| crm-jobs.bas | Scheduled automation tasks |
| tables.bas | CRM database schema |
Data Schema
Leads Table
| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| name | String | Lead name |
| email | Email | Email address |
| phone | Phone | Phone number |
| company | String | Company name |
| source | String | Lead source |
| status | String | new, qualified, hot, warm, cold, converted |
| score | Integer | Lead qualification score (0-100) |
| assigned_to | String | Sales rep ID |
| created_at | DateTime | Creation timestamp |
Opportunities Table
| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| name | String | Opportunity name |
| account_id | GUID | Related account |
| contact_id | GUID | Primary contact |
| amount | Decimal | Deal value |
| stage | String | Pipeline stage |
| probability | Integer | Win probability (0-100) |
| close_date | Date | Expected close date |
| owner_id | String | Sales rep ID |
| lead_source | String | Original lead source |
Accounts Table
| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| name | String | Company name |
| type | String | prospect, customer, partner |
| industry | String | Industry vertical |
| owner_id | String | Account owner |
| created_from_lead | GUID | Original lead ID |
Contacts Table
| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| account_id | GUID | Parent account |
| name | String | Full name |
| email | Email | Email address |
| phone | Phone | Phone number |
| title | String | Job title |
| primary_contact | Boolean | Primary contact flag |
Activities Table
| Field | Type | Description |
|---|---|---|
| id | GUID | Unique identifier |
| type | String | call, email, meeting, task |
| subject | String | Activity subject |
| lead_id | GUID | Related lead |
| opportunity_id | GUID | Related opportunity |
| created_at | DateTime | Activity timestamp |
Pipeline Stages
| Stage | Probability | Description |
|---|---|---|
| qualification | 10% | Initial qualification |
| needs_analysis | 20% | Understanding requirements |
| value_proposition | 50% | Presenting solution |
| decision_makers | 60% | Engaging decision makers |
| proposal | 75% | Proposal sent |
| negotiation | 90% | Terms negotiation |
| closed_won | 100% | Deal closed - won |
| closed_lost | 0% | Deal closed - lost |
Sample Conversations
Lead Capture
User: I just got a lead from the website
Bot: Great! I'll help you capture it. What's their name?
User: John Smith from TechCorp
Bot: What's their email?
User: john@techcorp.com
Bot: ✅ Lead captured: John Smith (TechCorp)
ID: LEAD-20240115-4821
Would you like to qualify this lead now?
Lead Qualification (BANT)
User: Yes, qualify the lead
Bot: I'll ask a few questions to qualify them.
What is TechCorp's annual revenue range?
1. Under $1M
2. $1M - $10M
3. $10M - $50M
4. Over $50M
User: 3
Bot: How many employees?
User: About 200
Bot: What's their timeline for making a decision?
User: This quarter
Bot: Do they have budget allocated?
User: Yes
Bot: 🔥 Hot Lead! Score: 85/100
This lead is a great fit. I've:
• Notified sales@company.com
• Created a follow-up task
Would you like to convert to an opportunity?
Opportunity Management
User: Create a new opportunity for Acme Corp
Bot: What should we call this opportunity?
User: Enterprise License Deal
Bot: What is the estimated value of this deal?
User: $50,000
Bot: When do you expect to close this deal?
User: End of Q2
Bot: ✅ Opportunity created: Enterprise License Deal
Account: Acme Corp
Value: $50,000
Stage: Qualification (10%)
Close Date: June 30, 2024
Pipeline Update
User: Update the Acme deal to proposal stage
Bot: Current stage: value_proposition
Select new stage:
1. Qualification (10%)
2. Needs Analysis (20%)
3. Value Proposition (50%)
4. Decision Makers (60%)
5. Proposal (75%)
6. Negotiation (90%)
7. Closed Won (100%)
8. Closed Lost (0%)
User: 5
Bot: ✅ Stage updated to proposal (75%)
Would you like to generate a quote for this opportunity?
Sales Forecast
User: Show me the sales forecast
Bot: 📊 SALES FORECAST
════════════════════════════════════════
Total Pipeline: $425,000
Weighted Pipeline: $187,500
Quarterly Forecast:
├── Q1: $45,000
├── Q2: $82,500
├── Q3: $35,000
└── Q4: $25,000
Top Opportunities:
1. Enterprise License - Acme Corp ($50,000) - 75%
2. Platform Upgrade - TechStart ($35,000) - 60%
3. Consulting Deal - DataFlow ($28,000) - 50%
Lead Qualification Scoring
The qualification process scores leads based on BANT criteria:
- Budget - Revenue range (up to 30 points)
- Authority - Employee count/company size (up to 25 points)
- Need - Timeline urgency (up to 30 points)
- Timeline - Budget allocation (up to 25 points)
Scoring Thresholds
| Score | Status | Action |
|---|---|---|
| ≥70 | Hot Lead | Immediate follow-up |
| 50-69 | Warm Lead | 24-hour response |
| 30-49 | Cold Lead | Nurture campaign |
| <30 | Unqualified | Archive |
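A sketch of how the score might be assembled from the qualification answers; the variable names and exact point splits are illustrative, following the point caps and thresholds above.
' Illustrative BANT scoring (point caps per the list above)
score = 0
IF revenue_range >= 3 THEN
    score = score + 30 ' Budget: $10M+ revenue
END IF
IF employees >= 100 THEN
    score = score + 25 ' Authority: company size
END IF
IF timeline = "this quarter" THEN
    score = score + 30 ' Need: urgent timeline
END IF
IF has_budget = "yes" THEN
    score = score + 25 ' Timeline: budget allocated
END IF
lead.score = score
lead.status = "unqualified"
IF score >= 30 THEN
    lead.status = "cold"
END IF
IF score >= 50 THEN
    lead.status = "warm"
END IF
IF score >= 70 THEN
    lead.status = "hot"
END IF
UPDATE "leads.csv", lead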
Lead Nurturing
Automated nurture campaigns based on lead age:
| Days | Content |
|---|---|
| 3 | 5 Tips to Improve Your Business |
| 7 | Case Study: How We Helped Similar Companies |
| 14 | Free Consultation Offer |
| 30 | Special Limited Time Offer |
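The nurture pass might look like the sketch below, matching lead age against the schedule above. The day-3 check and mail body are illustrative; a real job would also record which step each lead last received.
' Day-3 nurture step (sketch; runs inside the daily nurture job)
body = "Here are 5 tips to improve your business..."
leads = FIND "leads.csv", "status = 'cold'"
FOR EACH lead IN leads
    IF DATEADD(lead.created_at, 3, "day") <= NOW() THEN
        SEND MAIL lead.email, "5 Tips to Improve Your Business", body, []
    END IF
NEXT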
Configuration
Configure in crm.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| Company Name | Your company name | Acme Sales |
| Currency | Default currency | USD |
| Tax Rate | Default tax percentage | 10 |
| Quote Validity Days | Quote expiration | 30 |
| Pipeline Stages | Custom stage definitions | (JSON) |
| Lead Sources | Available lead sources | web,referral,event |
| Admin Email | Notifications email | sales@company.com |
Scheduled Jobs
| Job | Schedule | Description |
|---|---|---|
| Lead nurturing | Daily | Send nurture emails to cold/warm leads |
| Follow-up reminders | Hourly | Alert reps of overdue follow-ups |
| Pipeline cleanup | Weekly | Archive stale opportunities |
| Forecast update | Daily | Recalculate sales forecast |
| Activity sync | Real-time | Sync emails and calendar events |
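In crm-jobs.bas these jobs can be registered with SET SCHEDULE, as elsewhere in the platform; the script names below are illustrative, not files shipped with the template.
' Registering the scheduled jobs (script names are illustrative)
SET SCHEDULE "0 9 * * *", "lead-nurture.bas" ' Daily lead nurturing
SET SCHEDULE "0 * * * *", "follow-up-check.bas" ' Hourly follow-up reminders
SET SCHEDULE "0 2 * * 0", "pipeline-cleanup.bas" ' Weekly pipeline cleanup
SET SCHEDULE "0 6 * * *", "forecast-update.bas" ' Daily forecast recalculation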
Email Integration
Receiving Emails
' on-receive-email.bas
email_from = GET "email.from"
email_subject = GET "email.subject"
email_body = GET "email.body"
' Check if from existing contact
contact = FIND "contacts.csv", "email = '" + email_from + "'"
IF contact THEN
' Log activity against contact
WITH activity
type = "email"
subject = email_subject
contact_id = contact.id
END WITH
SAVE "activities.csv", activity
ELSE
' Create new lead from email
CALL "create-lead-from-draft.bas"
END IF
Sending Proposals
' send-proposal.bas
proposal = GENERATE FROM TEMPLATE "proposal_template.docx" WITH {
"company": account.name,
"contact": contact.name,
"products": opportunity_products,
"total": quote.total,
"valid_until": quote.valid_until
}
SEND MAIL contact.email, "Proposal: " + opportunity.name,
"Please find attached our proposal.", [proposal]
AI Features
Customer Sentiment Analysis
' analyze-customer-sentiment.bas
SET CONTEXT "Analyze customer communication for sentiment and buying signals."
communications = FIND "activities.csv", "contact_id = '" + contact_id + "'"
analysis = LLM "Analyze these customer communications and provide:
1. Overall sentiment (positive, neutral, negative)
2. Buying signals detected
3. Concerns or objections
4. Recommended next action
Communications: " + JSON(communications)
TALK analysis
Data Enrichment
' data-enrichment.bas
' Enrich lead with external data
company_info = GET "https://api.enrichment.com/company/" + lead.company
IF company_info THEN
lead.industry = company_info.industry
lead.employee_count = company_info.employees
lead.revenue_range = company_info.revenue
lead.linkedin_url = company_info.linkedin
UPDATE "leads.csv", lead
END IF
Best Practices
- Qualify early - Use BANT scoring to prioritize leads
- Track everything - Log all customer interactions
- Follow up promptly - Hot leads within hours, warm within 24h
- Use automation - Let nurture campaigns work cold leads
- Clean pipeline - Archive stale opportunities regularly
- Forecast accurately - Keep close dates and probabilities updated
- Segment leads - Use tags and sources for better targeting
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Lead not found | Search criteria too strict | Use partial match |
| Stage not updating | Missing opportunity ID | Set opportunity in session |
| Quote not generating | Missing products | Add products to opportunity first |
| Email not sending | Missing contact email | Verify contact record |
| Forecast incorrect | Stale data | Update opportunity amounts |
Use Cases
- Inside Sales - Lead qualification and opportunity management
- Field Sales - Account management and activity tracking
- Sales Management - Pipeline visibility and forecasting
- Business Development - Lead generation and nurturing
- Customer Success - Account health and expansion opportunities
Integration Points
- Email - Inbound/outbound email tracking
- Calendar - Meeting scheduling
- ERP - Order and billing sync
- Marketing Automation - Lead handoff
- Support Ticketing - Case management
Related Templates
- Contacts - Contact directory management
- Marketing - Marketing automation and campaigns
- Analytics - Sales analytics and reporting
- Reminder - Follow-up reminders
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
CRM Contacts Template
The CRM Contacts template provides a complete contact management solution with natural language interface. Users can add, search, update, and manage contacts through conversational interactions.
Overview
| Property | Value |
|---|---|
| Template ID | crm/contacts.gbai |
| Category | CRM |
| Complexity | Intermediate |
Features
- Add new contacts with validation
- Search contacts by name, email, or phone
- Update contact information
- Tag and categorize contacts
- Export contacts to CSV
Installation
Copy the template folder to your work directory or use the Sources app to create from template.
Configuration
Add these settings to your config.csv:
| Key | Value | Description |
|---|---|---|
| contacts-table | contacts | Database table name |
| require-email | true | Require email for new contacts |
| allow-duplicates | false | Allow duplicate entries |
| default-tags | lead | Default tags for new contacts |
Sample Conversations
The template ships with sample conversations for adding a new contact, searching contacts, updating a contact, and exporting contacts; see Template Samples for the full transcripts.
What It Can Do
| Capability | Description |
|---|---|
| Add contacts | Collect name, email, phone, company through conversation |
| Search | Find by any field - name, email, company, phone |
| Update | Modify any contact field naturally |
| Delete | Remove contacts with confirmation |
| Tags | Categorize contacts (lead, customer, prospect) |
| Export | Generate CSV files for external use |
| Bulk import | Upload CSV to add multiple contacts |
Customization Ideas
Add Custom Fields
Configure additional fields like LinkedIn profile, job title, or lead source in your bot’s config.csv.
Add Validation
The bot validates email formats and phone numbers automatically. Configure stricter rules as needed.
Connect to External CRM
Use the POST and GET keywords to sync contacts with Salesforce, HubSpot, or other CRM systems.
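For example, a minimal outbound sync sketch; the webhook URL and payload fields are illustrative, not a real Salesforce or HubSpot API.
' Push a newly added contact to an external CRM endpoint (illustrative URL)
contact = FIND "contacts.csv", "email = '" + email + "'"
POST "https://crm.example.com/api/contacts", {
    "name": contact.name,
    "email": contact.email,
    "phone": contact.phone
}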
Related Templates
- Sales Pipeline - Track deals and opportunities
- Customer Support - Support ticket management
- Template Samples - More conversation examples
Attendance CRM Template (attendance-crm.gbai)
A hybrid AI + Human support template that combines intelligent bot routing with human attendant management and full CRM automation. This template demonstrates the power of General Bots as an LLM orchestrator for customer service operations.
Overview
The Attendance CRM template provides:
- Intelligent Routing - Bot analyzes sentiment and auto-transfers frustrated customers
- LLM-Assisted Attendants - AI tips, message polish, smart replies for human agents
- Queue Management - Automated queue monitoring and load balancing
- CRM Automations - Follow-ups, collections, lead nurturing, pipeline management
- Multi-Channel Support - Works on WhatsApp, Web, and other channels
Key Features
| Feature | Description |
|---|---|
| Sentiment-Based Transfer | Auto-transfers when customer frustration is detected |
| AI Copilot for Attendants | Real-time tips, smart replies, message polishing |
| Queue Health Monitoring | Auto-reassign stale conversations, alert supervisors |
| Automated Follow-ups | 1-day, 3-day, 7-day follow-up sequences |
| Collections Workflow | Payment reminders from due date to legal escalation |
| Lead Scoring & Nurturing | Score leads and re-engage cold prospects |
| Pipeline Management | Weekly reviews, stale opportunity alerts |
Package Structure
attendance-crm.gbai/
├── attendance-crm.gbdialog/
│ ├── start.bas # Main entry - intelligent routing
│ ├── queue-monitor.bas # Queue health monitoring (scheduled)
│ ├── attendant-helper.bas # LLM assist tools for attendants
│ └── crm-automations.bas # Follow-ups, collections, nurturing
├── attendance-crm.gbot/
│ └── config.csv # Bot configuration
└── attendant.csv # Attendant team configuration
Configuration
config.csv
name,value
# Bot Identity
bot-name,Attendance CRM Bot
bot-description,Hybrid AI + Human support with CRM integration
# CRM / Human Handoff - Required
crm-enabled,true
# LLM Assist Features for Attendants
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true
# Bot Personality (used for LLM assist context)
bot-system-prompt,You are a professional customer service assistant. Be helpful and empathetic.
# Auto-transfer triggers
auto-transfer-on-frustration,true
auto-transfer-threshold,3
# Queue Settings
queue-timeout-minutes,30
queue-notify-interval,5
# Lead Scoring
lead-score-threshold-hot,70
lead-score-threshold-warm,50
# Follow-up Automation
follow-up-1-day,true
follow-up-3-day,true
follow-up-7-day,true
# Collections Automation
collections-enabled,true
collections-grace-days,3
# Working Hours
business-hours-start,09:00
business-hours-end,18:00
business-days,1-5
# Notifications
notify-on-vip,true
notify-on-escalation,true
notify-email,support@company.com
attendant.csv
Attendants can be identified by any channel: WhatsApp phone, email, Microsoft Teams, or Google account.
id,name,channel,preferences,department,aliases,phone,email,teams,google
att-001,João Silva,all,sales,commercial,joao;js;silva,+5511999990001,joao.silva@company.com,joao.silva@company.onmicrosoft.com,joao.silva@company.com
att-002,Maria Santos,whatsapp,support,customer-service,maria;ms,+5511999990002,maria.santos@company.com,maria.santos@company.onmicrosoft.com,maria.santos@gmail.com
att-003,Pedro Costa,web,technical,engineering,pedro;pc;tech,+5511999990003,pedro.costa@company.com,pedro.costa@company.onmicrosoft.com,pedro.costa@company.com
att-004,Ana Oliveira,all,collections,finance,ana;ao;cobranca,+5511999990004,ana.oliveira@company.com,ana.oliveira@company.onmicrosoft.com,ana.oliveira@company.com
att-005,Carlos Souza,whatsapp,sales,commercial,carlos;cs,+5511999990005,carlos.souza@company.com,carlos.souza@company.onmicrosoft.com,carlos.souza@gmail.com
Column Reference
| Column | Description | Example |
|---|---|---|
| id | Unique attendant ID | att-001 |
| name | Display name | João Silva |
| channel | Preferred channels (all, whatsapp, web, teams) | all |
| preferences | Specialization area | sales, support, technical |
| department | Department for routing | commercial, engineering |
| aliases | Semicolon-separated nicknames for matching | joao;js;silva |
| phone | WhatsApp number (E.164 format) | +5511999990001 |
| email | Email address for notifications | joao@company.com |
| teams | Microsoft Teams UPN | joao@company.onmicrosoft.com |
| google | Google Workspace email | joao@company.com |
The system can find an attendant by any identifier - phone, email, Teams UPN, Google account, name, or alias.
Scripts
start.bas - Intelligent Routing
The main entry point analyzes every customer message and decides routing:
' Analyze sentiment immediately
sentiment = ANALYZE SENTIMENT session.id, message
' Track frustration
IF sentiment.overall = "negative" THEN
frustration_count = frustration_count + 1
END IF
' Auto-transfer on high escalation risk
IF sentiment.escalation_risk = "high" THEN
tips = GET TIPS session.id, message
result = TRANSFER TO HUMAN "support", "urgent", context_summary
END IF
Key behaviors:
- Analyzes sentiment on every message
- Tracks frustration count across conversation
- Auto-transfers on explicit request (“falar com humano”, “talk to human”)
- Auto-transfers when escalation risk is high
- Auto-transfers after 3+ negative messages
- Passes AI tips to attendant during transfer
queue-monitor.bas - Queue Health
Scheduled job that runs every 5 minutes:
SET SCHEDULE "queue-monitor", "*/5 * * * *"
What it does:
- Finds conversations waiting >10 minutes → auto-assigns
- Finds inactive assigned conversations → reminds attendant
- Finds conversations with offline attendants → reassigns
- Detects abandoned conversations → sends follow-up, then resolves
- Generates queue metrics for dashboard
- Alerts supervisor if queue gets long or no attendants online
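A sketch of the auto-assign step, using the queue keywords listed under BASIC Keywords Used below; the item field names (wait_minutes, session_id) are assumptions about the queue object's shape.
' Auto-assign conversations waiting more than 10 minutes (sketch)
queue = GET QUEUE
FOR EACH item IN queue.waiting
    IF item.wait_minutes > 10 THEN
        attendants = GET ATTENDANTS "online"
        IF attendants THEN
            ' Assign to the first online attendant (load balancing omitted)
            ASSIGN CONVERSATION item.session_id, attendants[1].id
        END IF
    END IF
NEXT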
attendant-helper.bas - LLM Assist Tools
Provides AI-powered assistance to human attendants:
' Get tips for current conversation
tips = USE TOOL "attendant-helper", "tips", session_id, message
' Polish a message before sending
polished = USE TOOL "attendant-helper", "polish", session_id, message, "empathetic"
' Get smart reply suggestions
replies = USE TOOL "attendant-helper", "replies", session_id
' Get conversation summary
summary = USE TOOL "attendant-helper", "summary", session_id
' Analyze sentiment with recommendations
sentiment = USE TOOL "attendant-helper", "sentiment", session_id, message
' Check if transfer is recommended
should_transfer = USE TOOL "attendant-helper", "suggest_transfer", session_id
crm-automations.bas - Business Workflows
Scheduled CRM automations:
' Daily follow-ups at 9am weekdays
SET SCHEDULE "follow-ups", "0 9 * * 1-5"
' Daily collections at 8am weekdays
SET SCHEDULE "collections", "0 8 * * 1-5"
' Daily lead nurturing at 10am weekdays
SET SCHEDULE "lead-nurture", "0 10 * * 1-5"
' Weekly pipeline review Friday 2pm
SET SCHEDULE "pipeline-review", "0 14 * * 5"
BASIC Keywords Used
Queue Management
| Keyword | Description | Example |
|---|---|---|
| GET QUEUE | Get queue status and items | queue = GET QUEUE |
| NEXT IN QUEUE | Get next waiting conversation | next = NEXT IN QUEUE |
| ASSIGN CONVERSATION | Assign to attendant | ASSIGN CONVERSATION session_id, "att-001" |
| RESOLVE CONVERSATION | Mark as resolved | RESOLVE CONVERSATION session_id, "Fixed" |
| SET PRIORITY | Change priority | SET PRIORITY session_id, "urgent" |
Attendant Management
| Keyword | Description | Example |
|---|---|---|
| GET ATTENDANTS | List attendants | attendants = GET ATTENDANTS "online" |
| GET ATTENDANT STATS | Get performance metrics | stats = GET ATTENDANT STATS "att-001" |
| SET ATTENDANT STATUS | Change status | SET ATTENDANT STATUS "att-001", "busy" |
LLM Assist
| Keyword | Description | Example |
|---|---|---|
| GET TIPS | Generate AI tips | tips = GET TIPS session_id, message |
| POLISH MESSAGE | Improve message | polished = POLISH MESSAGE text, "empathetic" |
| GET SMART REPLIES | Get reply suggestions | replies = GET SMART REPLIES session_id |
| GET SUMMARY | Conversation summary | summary = GET SUMMARY session_id |
| ANALYZE SENTIMENT | Sentiment analysis | sentiment = ANALYZE SENTIMENT session_id, message |
Customer Journey
| Keyword | Description | Example |
|---|---|---|
| TAG CONVERSATION | Add tags | TAG CONVERSATION session_id, "vip" |
| ADD NOTE | Add internal note | ADD NOTE session_id, "Customer is VIP" |
| GET CUSTOMER HISTORY | Previous interactions | history = GET CUSTOMER HISTORY user_id |
| TRANSFER TO HUMAN | Handoff to human | TRANSFER TO HUMAN "support", "high", context |
Sample Conversations
Auto-Transfer on Frustration
Customer: Olá, preciso de ajuda com meu pedido
Bot: Olá! Claro, ficarei feliz em ajudar com seu pedido.
Qual é o número do pedido?
Customer: #12345
Bot: Encontrei seu pedido #12345. Está em trânsito e deve
chegar amanhã. Posso ajudar com mais alguma coisa?
Customer: Amanhã?! Isso é ridículo! Já deveria ter chegado há 3 dias!
Bot: [Sentiment: negative, Escalation Risk: medium]
Entendo sua frustração. Peço desculpas pelo atraso.
Se preferir falar com um atendente humano, é só me avisar.
Customer: SIM! Quero falar com alguém de verdade AGORA!
Bot: [Auto-transfer triggered]
Estou transferindo você para Maria Santos.
Um momento, por favor.
[Tips sent to Maria:]
⚠️ Customer frustrated - 3 negative messages
🎯 Issue: Delayed order #12345
✅ Offer compensation for delay
Attendant Using LLM Assist
[Customer message arrives]
Customer: não consigo acessar minha conta faz 2 dias!!
[AI Tips appear in attendant UI:]
💡 Tips:
⚠️ Customer frustrated - use empathetic tone
🎯 Intent: Account access issue
✅ Verify account status, offer password reset
[Attendant types response:]
Attendant: oi, vou verificar sua conta
[Clicks ✨ Polish button:]
Polished: "Olá! Entendo como isso pode ser frustrante.
Vou verificar sua conta agora mesmo e resolver
isso para você."
[Attendant sends polished message]
Automation Workflows
Follow-up Sequence
| Day | Action | Template |
|---|---|---|
| 1 | Thank you message | follow_up_thanks |
| 3 | Value proposition | follow_up_value |
| 7 | Special offer (if score ≥50) | follow_up_offer |
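A sketch of the day-1 pass; a real job would also track which step each lead last received. The lead fields follow the CRM schema, and sending a named template with SEND TEMPLATE TO is an assumption (the Marketing template shows it with a file argument instead).
' Day-1 follow-up pass (sketch; run from the daily follow-ups job)
leads = FIND "leads.csv", "status = 'new'"
FOR EACH lead IN leads
    IF DATEADD(lead.created_at, 1, "day") <= NOW() THEN
        SEND TEMPLATE TO lead.phone, "follow_up_thanks"
    END IF
NEXT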
Collections Workflow
| Days Overdue | Action | Escalation |
|---|---|---|
| 0 (due today) | Friendly reminder | WhatsApp template |
| 3 | First notice | WhatsApp + Email |
| 7 | Second notice | + Notify collections team |
| 15 | Final notice + late fees | + Queue for human call |
| 30+ | Send to legal | + Suspend account |
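The 3-day step might look like the sketch below; invoices.csv and its fields are illustrative, and the template name follows the WhatsApp Templates table in the next section.
' 3-day overdue step (sketch; invoices.csv and its fields are illustrative)
overdue = FIND "invoices.csv", "status = 'open' AND days_overdue >= 3"
FOR EACH invoice IN overdue
    SEND TEMPLATE TO invoice.phone, "payment_overdue_3"
    SEND MAIL invoice.email, "Payment overdue", "Invoice " + invoice.id + " is overdue.", []
NEXT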
WhatsApp Templates Required
Configure these in Meta Business Manager:
| Template | Variables | Purpose |
|---|---|---|
| follow_up_thanks | name, interest | 1-day thank you |
| follow_up_value | name, interest | 3-day value prop |
| follow_up_offer | name, discount | 7-day offer |
| payment_due_today | name, invoice_id, amount | Due reminder |
| payment_overdue_3 | name, invoice_id, amount | 3-day overdue |
| payment_overdue_7 | name, invoice_id, amount | 7-day overdue |
| payment_final_notice | name, invoice_id, total | 15-day final |
Metrics & Analytics
The template automatically tracks:
- Queue Metrics: Wait times, queue length, utilization
- Attendant Performance: Resolved count, active conversations
- Sentiment Trends: Per conversation and overall
- Automation Results: Follow-ups sent, collections processed
Access via:
- Dashboard at /suite/analytics/
- API at /api/attendance/insights
- Stored in the queue_metrics and automation_logs tables
Best Practices
1. Configure Sentiment Thresholds
Adjust auto-transfer-threshold based on your tolerance:
- 2 = Very aggressive (transfer quickly)
- 3 = Balanced (default)
- 5 = Conservative (try harder with the bot)
2. Set Business Hours
Configure business-hours-* to avoid sending automated messages at night.
3. Train Your Team
Ensure attendants know the WhatsApp commands:
- /tips - Get AI tips
- /polish <message> - Improve message
- /replies - Get suggestions
- /resolve - Close conversation
4. Monitor Queue Health
Set up alerts for:
- Queue > 10 waiting
- No attendants online during business hours
- Average wait > 15 minutes
See Also
- Transfer to Human - Handoff details
- LLM-Assisted Attendant - AI copilot features
- Sales CRM Template - Full CRM without attendance
- Attendance Queue Module - Queue configuration
Marketing Automation Template (marketing.gbai)
A General Bots template for marketing campaign management, content creation, and multi-channel broadcast messaging.
Overview
The Marketing template provides marketing automation capabilities including campaign management, content ideation, image generation, social media posting, and WhatsApp broadcast messaging. It enables marketing teams to create, schedule, and deliver campaigns through conversational AI.
Features
- Campaign Management - Create and organize marketing campaigns
- Content Ideation - AI-assisted content idea generation
- Image Generation - AI-powered marketing visuals
- Social Media Posting - Direct posting to Instagram and other platforms
- WhatsApp Broadcasts - Mass messaging with template support
- Contact Segmentation - Target specific audience segments
- Template Compliance - META-approved template validation
- Broadcast Logging - Track delivery and engagement
Package Structure
marketing.gbai/
├── marketing.gbdialog/
│ ├── add-new-idea.bas # Content ideation tool
│ ├── broadcast.bas # WhatsApp broadcast messaging
│ ├── get-image.bas # AI image generation
│ ├── post-to-instagram.bas # Instagram posting
│ ├── poster.bas # Marketing poster creation
│ └── campaigns/ # Campaign templates
└── marketing.gbot/
└── config.csv # Bot configuration
Scripts
| File | Description |
|---|---|
| add-new-idea.bas | Generate and save marketing content ideas |
| broadcast.bas | Send WhatsApp broadcasts to contact lists |
| get-image.bas | Generate marketing images with AI |
| post-to-instagram.bas | Post content to Instagram |
| poster.bas | Create marketing posters and visuals |
WhatsApp Broadcast
The broadcast.bas script enables mass WhatsApp messaging with template support:
Parameters
| Parameter | Type | Description | Example |
|---|---|---|---|
| message | STRING | Message with variables | "Olá {name}, confira nossas novidades!" |
| template_file | FILE | Header image for template | header.jpg |
| list_file | FILE | Contact list with phone numbers | contacts.xlsx |
| filter | STRING | Optional filter condition | "Perfil=VIP" |
Template Compliance
The system validates messages for META WhatsApp template approval:
report = LLM "Esta mensagem será aprovada pelo WhatsApp META como Template?
Responda OK se sim, ou explique o problema: " + message
IF report <> "OK" THEN
TALK "Atenção: " + report
END IF
Broadcast Execution
IF filter THEN
list = FIND list_file, filter
ELSE
list = FIND list_file
END IF
index = 1
sent = 0
DO WHILE index <= UBOUND(list)
row = list[index]
SEND TEMPLATE TO row.telefone, template_file
WAIT 0.1 ' Rate limiting
' Log delivery
WITH logEntry
timestamp = NOW()
phone = row.telefone
name = row.name
status = "sent"
END WITH
SAVE "broadcast_log.csv", logEntry
sent = sent + 1
index = index + 1
LOOP
TALK "Broadcast enviado para " + sent + " contatos."
Contact List Format
The contact file should include:
| Column | Required | Description |
|---|---|---|
| telefone | Yes | Phone number with country code |
| name | No | Contact name for personalization |
| email | No | Email address |
| Perfil | No | Segment tag for filtering |
Example contacts.xlsx
telefone,name,email,Perfil
+5511999999999,João Silva,joao@email.com,VIP
+5511888888888,Maria Santos,maria@email.com,Regular
+5511777777777,Pedro Costa,pedro@email.com,VIP
Sample Conversations
Creating a Broadcast
User: Send a broadcast to VIP customers
Bot: I'll help you create a broadcast. What's the message?
User: Olá {name}, aproveite 20% de desconto esta semana!
Bot: Checking template compliance...
✅ Message approved for WhatsApp template.
Do you have a header image?
User: Yes, use promo-banner.jpg
Bot: Which contact list should I use?
User: contacts.xlsx
Bot: 📤 Broadcast Preview:
Message: Olá {name}, aproveite 20% de desconto esta semana!
Image: promo-banner.jpg
Recipients: 127 VIP customers
Send now or schedule?
User: Send now
Bot: 📤 Broadcasting...
✅ Broadcast complete! Sent to 127 contacts.
View delivery report in Analytics.
Content Ideation
User: I need content ideas for our new product launch
Bot: What product are you launching?
User: Wireless headphones
Bot: What platform are you targeting?
User: Instagram
Bot: Here are 5 content ideas for your wireless headphones launch:
1. "Experience Sound Like Never Before 🎧"
Key message: Premium audio quality
CTA: Shop now, link in bio
Hashtags: #WirelessFreedom #AudioExperience #NewLaunch
2. "Cut the Cord, Not the Quality"
Key message: Freedom without compromise
CTA: Pre-order today
Hashtags: #WirelessAudio #TechLife #MusicLovers
3. "Your Music. Your Freedom."
Key message: Lifestyle integration
CTA: Discover more
Hashtags: #OnTheGo #Wireless #Premium
4. "30 Hours of Pure Sound"
Key message: Battery life
CTA: Learn more
Hashtags: #AllDayBattery #WirelessHeadphones
5. "Studio Sound, Everywhere"
Key message: Professional quality
CTA: Experience now
Hashtags: #StudioQuality #Audiophile #TechGear
Would you like me to generate images for any of these?
Image Generation
User: Generate an image for idea 1
Bot: Generating marketing image...
Prompt: "Premium wireless headphones floating in space with
sound waves emanating, modern minimalist style, product
photography, dark background with purple accent lighting"
[Generated Image]
✅ Image generated! Would you like to:
1. Post to Instagram
2. Use in a broadcast
3. Generate variations
4. Download
User: Post to Instagram
Bot: What caption should I use?
User: Experience Sound Like Never Before 🎧 #WirelessFreedom #NewLaunch
Bot: ✅ Posted to Instagram!
Post ID: 12345678
Time: May 15, 2025 at 2:30 PM
Track engagement in Analytics.
Audience Segmentation
Filter Examples
| Filter | Description |
|---|---|
| Perfil=VIP | VIP customers only |
| Perfil=Regular AND compras>5 | Regular customers with 5+ purchases |
| ultima_compra>2024-01-01 | Recent purchasers |
| cidade=São Paulo | Location-based |
| interesse=electronics | Interest-based |
Building Segments
' Create reusable segments
WITH segment
id = "SEG-" + FORMAT(GUID())
name = "VIP São Paulo"
filter = "Perfil=VIP AND cidade=São Paulo"
count = COUNT(FIND "contacts.xlsx", filter)
created_at = NOW()
END WITH
SAVE "segments.csv", segment
Configuration
Configure in marketing.gbot/config.csv:
| Parameter | Description | Example |
|---|---|---|
| Brand Color | Primary brand color | #FF5722 |
| Logo File | Company logo path | logo.png |
| Instagram Account | Connected IG account | @mycompany |
| WhatsApp Business ID | WA Business account | 123456789 |
| Default Template | Default broadcast template | marketing_update |
| Rate Limit | Messages per second | 10 |
| Max Broadcast Size | Maximum recipients | 1000 |
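As a quick sanity check on these defaults: at the 10 messages per second rate limit (matching the WAIT 0.1 in the broadcast loop above), a maximum-size broadcast of 1000 recipients takes roughly 100 seconds to send.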
Analytics & Reporting
Broadcast Analytics
' Get broadcast statistics
broadcast_id = "BROADCAST-20240115-1234"
logs = FIND "broadcast_log.csv", "broadcast_id = '" + broadcast_id + "'"
total_sent = COUNT(logs)
delivered = COUNT(FIND logs, "status = 'delivered'")
read = COUNT(FIND logs, "status = 'read'")
clicked = COUNT(FIND logs, "status = 'clicked'")
TALK "📊 Broadcast Report"
TALK "Total Sent: " + total_sent
TALK "Delivered: " + delivered + " (" + (delivered/total_sent*100) + "%)"
TALK "Read: " + read + " (" + (read/total_sent*100) + "%)"
TALK "Clicked: " + clicked + " (" + (clicked/total_sent*100) + "%)"
Customization
Adding Campaign Types
' campaign-email.bas
PARAM subject AS STRING DESCRIPTION "Email subject line"
PARAM body AS STRING DESCRIPTION "Email body content"
PARAM list_file AS FILE DESCRIPTION "Contact list"
PARAM filter AS STRING DESCRIPTION "Segment filter" OPTIONAL
DESCRIPTION "Send email marketing campaign"
' Generate a campaign identifier once, for the per-contact log entries
campaign_id = "CAMPAIGN-" + FORMAT(GUID())
IF filter THEN
contacts = FIND list_file, filter
ELSE
contacts = FIND list_file
END IF
FOR EACH contact IN contacts
personalized_body = REPLACE(body, "{name}", contact.name)
SEND MAIL contact.email, subject, personalized_body, []
WITH log
campaign_id = campaign_id
contact_email = contact.email
sent_at = NOW()
status = "sent"
END WITH
SAVE "email_campaign_log.csv", log
NEXT
TALK "Email campaign sent to " + UBOUND(contacts) + " recipients."
Social Media Scheduling
' schedule-post.bas
PARAM platform AS STRING LIKE "instagram" DESCRIPTION "Social platform"
PARAM content AS STRING DESCRIPTION "Post content"
PARAM image AS FILE DESCRIPTION "Post image" OPTIONAL
PARAM schedule_time AS STRING DESCRIPTION "When to post"
DESCRIPTION "Schedule social media post"
WITH scheduled_post
id = "POST-" + FORMAT(GUID())
platform = platform
content = content
image = image
scheduled_for = schedule_time
status = "scheduled"
created_at = NOW()
END WITH
SAVE "scheduled_posts.csv", scheduled_post
SET SCHEDULE schedule_time, "execute-scheduled-post.bas"
TALK "Post scheduled for " + schedule_time + " on " + platform
Best Practices
- Template compliance - Always validate templates before broadcast
- Segment wisely - Target relevant audiences to improve engagement
- Rate limiting - Respect platform rate limits to avoid blocks
- Personalization - Use variables for personalized messages
- A/B testing - Test different messages with small segments first
- Timing - Schedule broadcasts for optimal engagement times
- Tracking - Monitor delivery and engagement metrics
- Opt-out handling - Honor unsubscribe requests immediately (see the sketch after this list)
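A minimal sketch of opt-out filtering, assuming a hypothetical unsubscribes.csv maintained by your unsubscribe flow; the file name and its telefone column are illustrative, and template_file is the broadcast template parameter from the broadcast script above:
' Drop unsubscribed contacts before sending (sketch)
contacts = FIND "contacts.xlsx"
FOR EACH contact IN contacts
blocked = FIND "unsubscribes.csv", "telefone=" + contact.telefone
IF NOT blocked THEN
SEND TEMPLATE TO contact.telefone, template_file
END IF
NEXT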
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Template rejected | Non-compliant content | Review META guidelines |
| Low delivery rate | Invalid phone numbers | Clean contact list |
| Broadcast blocked | Rate limit exceeded | Reduce sending speed |
| Images not generating | Invalid prompt | Simplify prompt text |
| Instagram post failed | Auth expired | Reconnect account |
Compliance Notes
- Ensure recipients have opted in to receive marketing messages
- Honor unsubscribe requests within 24 hours
- Follow META WhatsApp Business policies
- Comply with GDPR/LGPD data protection requirements
- Keep records of consent for audit purposes
Use Cases
- Product Launches - Announce new products to customers
- Promotions - Send special offers and discounts
- Events - Promote webinars, sales, and events
- Newsletters - Regular customer communications
- Re-engagement - Win back inactive customers
- Social Media - Automated content posting
Related Templates
- CRM - Customer relationship management
- Contacts - Contact list management
- Broadcast - General message broadcasting
- Analytics - Marketing analytics
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
Template for Creating Templates (template.gbai)
A General Bots meta-template that serves as a starting point for creating new bot templates.
Overview
The Template template (yes, it’s a template for templates!) provides the essential structure and best practices for creating new General Bots templates. Use this as your foundation when building custom templates for specific use cases.
Features
- Standard Structure - Pre-configured folder hierarchy
- Best Practices - Follows General Bots conventions
- Documentation Ready - Includes README template
- Quick Start - Minimal setup required
Package Structure
template.gbai/
├── README.md # Template documentation
├── template.gbdialog/ # Dialog scripts
│ └── send.bas # Example script (placeholder)
├── template.gbdrive/ # File storage
│ └── (your files here)
├── template.gbkb/ # Knowledge base (optional)
│ └── docs/
└── template.gbot/ # Bot configuration
└── config.csv
Creating a New Template
Step 1: Copy the Template
cp -r templates/template.gbai templates/your-template.gbai
Step 2: Rename Internal Folders
Rename all internal folders to match your template name:
cd templates/your-template.gbai
mv template.gbdialog your-template.gbdialog
mv template.gbdrive your-template.gbdrive
mv template.gbot your-template.gbot
mv template.gbkb your-template.gbkb   # if your template includes a KB
Step 3: Configure Your Bot
Edit your-template.gbot/config.csv:
name,value
Bot Name,Your Bot Name
Theme Color,blue
Answer Mode,default
LLM Provider,openai
Step 4: Create Dialog Scripts
Add your BASIC scripts to your-template.gbdialog/:
' start.bas - Main entry point
ADD TOOL "your-tool"
USE KB "your-template.gbkb"
CLEAR SUGGESTIONS
ADD SUGGESTION "option1" AS "First Option"
ADD SUGGESTION "option2" AS "Second Option"
ADD SUGGESTION "help" AS "Get Help"
BEGIN TALK
**Your Bot Name**
Welcome! I can help you with:
• Feature 1
• Feature 2
• Feature 3
What would you like to do?
END TALK
BEGIN SYSTEM PROMPT
You are a helpful assistant for [your use case].
Guidelines:
- Be helpful and concise
- Use the available tools when appropriate
- Ask clarifying questions when needed
END SYSTEM PROMPT
Step 5: Add Tools
Create tool scripts with proper parameters:
' your-tool.bas
PARAM input AS STRING LIKE "example" DESCRIPTION "Description of this parameter"
PARAM optional_param AS STRING DESCRIPTION "Optional parameter" OPTIONAL
DESCRIPTION "What this tool does - this helps the LLM decide when to use it"
' Your implementation here
result = DO_SOMETHING(input)
IF result THEN
RETURN result
ELSE
RETURN {"error": "Something went wrong"}
END IF
Step 6: Add Knowledge Base (Optional)
If your template needs reference documentation:
your-template.gbkb/
└── docs/
├── feature1.md
├── feature2.md
└── faq.md
Step 7: Update README
Replace the README with documentation for your template following the standard format.
Template Checklist
Before publishing your template, ensure:
- All folders renamed to match template name
- `config.csv` configured with appropriate defaults
- `start.bas` provides clear entry point
- All tools have `PARAM` and `DESCRIPTION`
- System prompt guides LLM behavior
- README documents all features
- No hardcoded credentials or secrets
- Error handling implemented
- Example conversations documented
Naming Conventions
| Item | Convention | Example |
|---|---|---|
| Template folder | kebab-case.gbai | my-crm.gbai |
| Dialog scripts | kebab-case.bas | add-contact.bas |
| Tools | kebab-case | search-products |
| Config keys | Title Case | Theme Color |
| Table names | PascalCase | CustomerOrders |
File Templates
config.csv Template
name,value
Bot Name,Your Bot Name
Theme Color,blue
Answer Mode,default
LLM Provider,openai
Admin Email,admin@company.com
start.bas Template
' Register tools
ADD TOOL "tool-name"
' Load knowledge base
USE KB "your-template.gbkb"
' Configure suggestions
CLEAR SUGGESTIONS
ADD SUGGESTION "action" AS "Do Something"
' Welcome message
BEGIN TALK
**Bot Name**
Welcome message here.
END TALK
' System prompt
BEGIN SYSTEM PROMPT
You are a helpful assistant.
Define behavior and guidelines here.
END SYSTEM PROMPT
Tool Template
PARAM required_param AS STRING LIKE "example" DESCRIPTION "What this is"
PARAM optional_param AS STRING DESCRIPTION "Optional input" OPTIONAL
DESCRIPTION "What this tool does"
' Implementation
result = YOUR_LOGIC_HERE
IF result THEN
RETURN result
ELSE
RETURN {"error": "Error message"}
END IF
Best Practices
Dialog Scripts
- Clear entry point - `start.bas` should be the main entry
- Register tools - Use `ADD TOOL` for LLM-callable functions
- Set context - Use `SET CONTEXT` and `BEGIN SYSTEM PROMPT`
- Add suggestions - Guide users with quick actions
- Welcome message - Use `BEGIN TALK` for introduction
Tools
- Descriptive names - Tool names should indicate function
- Parameter examples - Use `LIKE` to show expected format
- Clear descriptions - Help the LLM understand when to use the tool
- Return values - Always return meaningful results
- Error handling - Handle edge cases gracefully
Configuration
- Sensible defaults - Work out of the box
- Document options - Explain each setting
- No secrets - Use environment variables for credentials
- Flexible - Allow customization without code changes
Documentation
- Clear overview - Explain what the template does
- Feature list - Highlight capabilities
- Usage examples - Show common scenarios
- Configuration guide - Document all settings
- Troubleshooting - Address common issues
Publishing Your Template
- Test thoroughly - Verify all features work
- Document completely - Users should be self-sufficient
- Follow conventions - Consistent with other templates
- Update main README - Add to templates list
- Submit PR - Include template and documentation
Sample Conversation
User: How do I create a new template?
Bot: To create a new template:
1. Copy the template.gbai folder
2. Rename it to your-template.gbai
3. Rename internal folders to match
4. Edit config.csv with your settings
5. Create your dialog scripts in .gbdialog
6. Add tools as separate .bas files
7. Update the README
Would you like help with any specific step?
User: How do I add a tool?
Bot: To add a tool, create a .bas file in your .gbdialog folder:
' my-tool.bas
PARAM input AS STRING LIKE "example" DESCRIPTION "Input parameter"
DESCRIPTION "What this tool does"
' Your logic here
result = PROCESS(input)
RETURN result
Then register it in start.bas:
ADD TOOL "my-tool"
The LLM will automatically call your tool when appropriate
based on the DESCRIPTION you provided.
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Tool not found | Not registered | Add ADD TOOL "name" in start.bas |
| Tool not called | Poor description | Improve DESCRIPTION text |
| Wrong folder structure | Names don’t match | Ensure all folders use same prefix |
| Config not loading | Wrong format | Check CSV syntax |
| KB not found | Wrong path | Verify USE KB path matches folder |
Related Resources
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
- Bot Configuration - Configuration options
Use Cases for Custom Templates
- Industry-Specific - Healthcare, legal, finance bots
- Department-Specific - HR, IT, sales assistants
- Process Automation - Workflow-specific bots
- Integration Templates - Connect to specific APIs/systems
- Vertical Solutions - Complete solutions for business needs
See Also
- Templates Reference - Full template list
- Template Samples - Example conversations
- gbdialog Reference - BASIC scripting guide
Chapter 03: Knowledge Base System
Vector search and semantic retrieval for intelligent document querying.
Overview
The Knowledge Base (gbkb) transforms documents into searchable semantic representations, enabling natural language queries against your organization’s content.
Architecture
The pipeline processes documents through extraction, chunking, embedding, and storage to enable semantic search.
Supported Formats
| Format | Features |
|---|---|
| PDF | Text, OCR, tables |
| DOCX | Formatted text, styles |
| HTML | DOM parsing |
| Markdown | GFM, tables, code |
| CSV/JSON | Structured data |
| TXT | Plain text |
Quick Start
' Activate knowledge base
USE KB "company-docs"
' Bot now answers from your documents
TALK "How can I help you?"
Key Concepts
Document Processing
- Extract - Pull text from files
- Chunk - Split into ~500 token segments
- Embed - Generate vectors (BGE model)
- Store - Save to Qdrant
Semantic Search
- Query converted to vector embedding
- Cosine similarity finds relevant chunks
- Top results injected into LLM context
- No explicit search code needed
Storage Requirements
Vector databases need ~3.5x original document size:
- Embeddings: ~2x
- Indexes: ~1x
- Metadata: ~0.5x
Configuration
name,value
embedding-url,http://localhost:8082
embedding-model,bge-small-en-v1.5
rag-hybrid-enabled,true
rag-top-k,10
Chapter Contents
- KB and Tools System - Integration patterns
- Vector Collections - Collection management
- Document Indexing - Processing pipeline
- Semantic Search - Search mechanics
- Episodic Memory - Conversation history and context management
- Semantic Caching - Performance optimization
See Also
- .gbkb Package - Folder structure
- USE KB Keyword - Keyword reference
- Hybrid Search - RAG 2.0
KB and TOOL System Documentation
Overview
The General Bots system provides four essential keywords for managing Knowledge Bases and Tools dynamically during conversation sessions. The USE KB keyword loads and embeds files from .gbkb folders into the vector database. The CLEAR KB keyword removes a knowledge base from the current session. The USE TOOL keyword makes a tool available for the LLM to call. The CLEAR TOOLS keyword removes all tools from the current session. Together, these keywords give you complete control over what information and capabilities your bot has access to at any moment.
Knowledge Base System
What is a KB?
A Knowledge Base is a folder containing documents (using the .gbkb folder structure) that are vectorized, embedded, and stored in a vector database. When users ask questions, the vector database retrieves relevant chunks and excerpts to inject into prompts, giving the LLM context-aware responses based on your specific documentation and data.
Folder Structure
Knowledge bases are organized within your bot’s work directory. The structure places all knowledge base folders inside a .gbkb container that shares your bot’s name. Within this container, you create separate folders for different topics or document collections. Each folder can contain PDF files, markdown documents, plain text files, Word documents, CSV files, and other supported formats.
work/
{bot_name}/
{bot_name}.gbkb/
circular/
document1.pdf
document2.md
document3.txt
comunicado/
info.docx
data.csv
docs/
README.md
guide.pdf
KB Loading Process
When you load a knowledge base, the system goes through several stages to make your documents searchable. First, the system scans the specified .gbkb folder to identify all documents. Then it processes each file by extracting text from PDFs, Word documents, text files, markdown, CSV files, and other supported formats. The extracted text is split into chunks of approximately 1000 characters with overlap between chunks to preserve context at boundaries. Each chunk is then converted into a vector representation using an embedding model. These vectors are stored in the vector database with metadata about their source, enabling fast similarity search. Once this process completes, the knowledge base is ready to answer semantic queries.
Supported File Types
The system supports a variety of document formats. PDF files receive full text extraction using the pdf-extract library. Microsoft Word documents in both DOCX and DOC formats are supported. Plain text files and markdown documents are processed directly. CSV files treat each row as a separate searchable entry. HTML files have their text content extracted while ignoring markup. JSON files are parsed and their structured data becomes searchable.
USE KB Keyword
The USE KB keyword loads a knowledge base folder into your current session. You can load multiple knowledge bases, and all of them become active simultaneously. This allows you to combine different document collections for comprehensive responses.
USE KB "circular"
' The circular KB folder is now loaded and searchable
' All documents in that folder are available for semantic queries
USE KB "comunicado"
' Now both circular and comunicado are active
' The LLM can draw from both collections when responding
CLEAR KB Keyword
The CLEAR KB keyword removes all loaded knowledge bases from the current session. This frees up memory and context space, which is particularly useful when switching between different topics or when you need to ensure the LLM only uses specific information.
CLEAR KB
' All loaded knowledge bases are removed
' Memory is freed and context space is reclaimed
Tool System
What are Tools?
Tools are callable functions that the LLM can invoke to perform specific actions beyond its training data. Tools enable your bot to query databases, call external APIs, process data, execute workflows, and integrate with external systems. When the LLM determines that a tool would help answer a user’s question, it generates a tool call with the appropriate parameters.
Tool Definition
Tools are defined in .bas files that automatically generate MCP and OpenAI-compatible tool definitions. When you create a BASIC file with PARAM declarations and a DESCRIPTION, the system compiles it into a tool specification that the LLM understands.
' weather.bas - becomes a tool automatically
PARAM location AS STRING
PARAM units AS STRING DEFAULT "celsius"
DESCRIPTION "Get current weather for a location"
' Tool implementation
weather_data = GET "https://api.weather.com/v1/current?location=" + location
SET CONTEXT "weather_data", weather_data
TALK "Here's the current weather for " + location
Tool Registration
Tools become available through two mechanisms. Auto-discovery scans all .bas files in your .gbdialog folder (except start.bas) and registers them as tools automatically. Dynamic loading uses the USE TOOL keyword to make external tools available during a session.
USE TOOL Keyword
The USE TOOL keyword makes a specific tool available for the LLM to call. You can enable multiple tools, giving your bot access to various capabilities during a conversation.
USE TOOL "weather"
' The weather tool is now available
USE TOOL "database_query"
' Database querying capability is added
USE TOOL "email_sender"
' The bot can now send emails when appropriate
CLEAR TOOLS Keyword
The CLEAR TOOLS keyword removes all tools from the current session. After clearing, the LLM can no longer call external functions and must rely solely on its training and any loaded knowledge bases.
CLEAR TOOLS
' All tools are disabled
' LLM cannot call external functions
Session Management
Context Lifecycle
Each conversation session follows a predictable lifecycle. When a session starts, the bot has a clean slate with no knowledge bases or tools loaded. During the conversation, you load resources as needed using USE KB and USE TOOL commands. The LLM actively uses these loaded resources to provide informed, capable responses. When the topic changes or resources are no longer needed, you clear them with CLEAR KB and CLEAR TOOLS. When the session ends, automatic cleanup releases all remaining resources.
Best Practices for KB Management
Load only the knowledge bases relevant to the current conversation. Overloading context with unnecessary KBs reduces response quality and increases costs. Clear knowledge bases when switching topics to keep the context focused on what matters. Update your KB files regularly to keep the information current. Monitor token usage because vector search results add tokens to each query.
Best Practices for Tool Management
Enable only the minimum set of tools needed for the current task. Having too many tools available can confuse the LLM about which one to use. Always validate tool responses and check for errors before presenting results to users. Log tool usage for audit purposes and debugging. Consider implementing rate limits to prevent abuse in production environments.
Performance Considerations
Memory usage varies based on your configuration. Each loaded knowledge base typically uses 100-500MB of RAM depending on document count and size. Tools use minimal memory, usually less than 1MB each. Vector search operations add 10-50ms latency to responses. Clear unused resources promptly to free memory for other operations.
Token optimization is important for controlling costs. KB chunks add 500-2000 tokens per query depending on the number of relevant chunks retrieved. Each tool description uses 50-200 tokens. Clearing resources when they are no longer needed reduces token usage. Using specific KB folders rather than loading entire databases improves both performance and relevance.
Implementation Details
Vector Database
The vector database configuration uses one collection per bot instance to maintain isolation. The default embedding model is the locally served bge-small-en-v1.5 (BGE) model, which produces 384-dimensional vectors. Distance calculations use cosine similarity for semantic matching. The index uses HNSW (Hierarchical Navigable Small World) with M=16 and ef=100 for fast approximate nearest neighbor search.
File Processing Pipeline
When USE KB processes files, it follows a systematic pipeline. The system scans the specified directory to identify all files. Text is extracted based on each file’s type using appropriate parsers. The extracted text is cleaned and normalized to remove artifacts. Content is split into chunks of approximately 1000 characters with 200 character overlap to preserve context across boundaries. Embeddings are generated via the configured embedding service for each chunk. The vectors are stored in the vector database along with metadata about their source. Finally, the session context is updated to reflect the newly available knowledge base.
Tool Execution Engine
When USE TOOL prepares a tool for use, it parses the tool definition into a JSON schema that describes parameters and expected behavior. This schema is registered with the LLM context so the model knows the tool is available. The system listens for tool invocations in the LLM’s responses. When a tool call is detected, parameters are validated against the schema. The tool executes its logic, which might involve HTTP requests or function calls. Results return to the LLM for incorporation into the response. All executions are logged for audit purposes.
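As an illustration, the weather.bas definition shown earlier might compile into an OpenAI-style tool specification along the following lines. The exact field layout is an assumption for illustration, not the system’s documented output:
{
  "name": "weather",
  "description": "Get current weather for a location",
  "parameters": {
    "type": "object",
    "properties": {
      "location": { "type": "string" },
      "units": { "type": "string", "default": "celsius" }
    },
    "required": ["location"]
  }
}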
Error Handling
Common Issues
Several error conditions can occur when working with knowledge bases and tools. The KB_NOT_FOUND error indicates that the specified KB folder does not exist, so you should verify the folder name and path. A VECTORDB_ERROR suggests a connection issue with the vector database service that needs investigation. EMBEDDING_FAILED errors typically indicate problems with the embedding API, often related to API keys or rate limits. TOOL_NOT_FOUND means the specified tool is not registered, so verify the tool name matches exactly. TOOL_EXECUTION_ERROR indicates the tool failed during execution, requiring investigation of the tool endpoint or logic. MEMORY_LIMIT errors occur when too many knowledge bases are loaded simultaneously, requiring you to clear unused KBs.
Debugging Approach
Check logs for detailed information about issues. KB loading progress shows which documents are being processed. Embedding generation logs reveal any failures during vectorization. Vector search query logs help diagnose relevance problems. Tool invocation logs show parameter values and execution results. Error details provide stack traces and specific failure reasons.
Examples
Customer Support Bot
This example shows a customer support bot that loads product documentation and FAQs, enables ticket management tools, and provides informed assistance.
' Load product documentation
USE KB "product_docs"
USE KB "faqs"
' Enable support tools
USE TOOL "ticket_system"
USE TOOL "knowledge_search"
' The bot now has access to documentation and can work with tickets
TALK "How can I help you with your support needs today?"
' When the session ends, clean up
CLEAR KB
CLEAR TOOLS
Research Assistant
This example demonstrates a research assistant that can switch between different knowledge base collections depending on the research topic.
' Load research papers for current topic
USE KB "papers_2024"
USE KB "citations"
' Enable research tools
USE TOOL "arxiv_search"
USE TOOL "citation_formatter"
TALK "What research topic would you like to explore?"
' When switching to a different research area
CLEAR KB
USE KB "papers_biology"
Enterprise Integration
This example shows an enterprise bot with access to company policies and integration with internal systems like Active Directory, Jira, and Slack.
' Load company policies
USE KB "hr_policies"
USE KB "it_procedures"
' Enable enterprise integration tools
USE TOOL "active_directory"
USE TOOL "jira_integration"
USE TOOL "slack_notifier"
' The bot can now query AD, work with Jira tickets, and send Slack notifications
' Handle employee requests throughout the conversation
' Clean up at end of shift
CLEAR KB
CLEAR TOOLS
Security Considerations
KB Security
Knowledge base security involves multiple layers of protection. Access control ensures that knowledge bases require proper authorization before loading. Files are encrypted at rest to protect sensitive information. All KB access is logged for audit purposes. Per-session KB separation ensures that one user’s loaded knowledge bases cannot leak to another session.
Tool Security
Tool security protects against misuse and unauthorized access. Authentication requirements ensure tools only execute within valid sessions. Rate limiting prevents tool abuse through excessive calls. Parameter validation sanitizes all inputs before execution. Execution sandboxing isolates tool operations from the core system.
Best Practices
Follow the principle of least privilege by loading only the resources needed for the current task. Conduct regular audits to review KB and tool usage patterns. Ensure sensitive knowledge bases use encrypted storage. Rotate API keys used by tools on a regular schedule. Maintain session isolation by clearing resources between different users.
Configuration
Configuration options for knowledge bases and tools are set in your bot’s config.csv file. The vector database connection settings specify where embeddings are stored. Chunk size and overlap parameters control how documents are split. Embedding model selection determines vector quality and dimension. Tool timeout settings prevent long-running operations from blocking conversations.
Troubleshooting
KB Issues
If a knowledge base is not loading, first verify that the folder exists at the expected path within work/{bot_name}/{bot_name}.gbkb/. Check file permissions to ensure the system can read the documents. Verify the vector database connection is healthy. Review logs for any embedding errors during processing.
If search results are poor quality, consider adjusting the chunk overlap to provide more context at boundaries. Experiment with different chunk sizes for your content type. Ensure your embedding model is appropriate for the content language. Pre-process documents to remove noise and improve text quality before indexing.
Tool Issues
If a tool is not executing, first verify that the tool registration completed successfully by checking logs. Confirm parameter validation rules match the values being passed. Test the tool endpoint directly outside of the bot to isolate the issue. Review execution logs for specific error messages.
If tools are timing out, increase the timeout setting in configuration. Check network connectivity between the bot and tool endpoints. Optimize the tool endpoint to respond faster. Consider adding retry logic for transient failures.
Migration Guide
From File-based to Vector Search
If you are migrating from a file-based knowledge system to vector search, start by exporting your existing files into a clean directory structure. Organize the files into logical .gbkb folders based on topic or department. Run the embedding pipeline by loading each KB with USE KB. Test vector search queries to verify results match expectations. Update your bot logic to use the new KB keywords instead of file operations.
From Static to Dynamic Tools
If you have static function calls that should become dynamic tools, convert each function into a tool definition with PARAM declarations. Create a .bas file with the DESCRIPTION and parameter specifications. Implement the endpoint or handler that the tool will call. Test the tool using USE TOOL and verify it executes correctly. Remove the static function registration from your startup logic.
See Also
Documentation
The Vector Collections page explains how vector search works under the hood. The Document Indexing page covers automatic document processing in detail. The Semantic Search page describes meaning-based retrieval algorithms. The Context Compaction page explains how conversation context is managed. The Caching page covers performance optimization through semantic caching. The Chapter 6 BASIC Reference provides complete dialog scripting documentation. The Chapter 9 API and Tools reference covers tool integration in depth.
Further Reading
The Pragmatismo blog post on BASIC LLM Tools explains how to extend LLMs with custom tools. The MCP is the new API article covers modern tool integration patterns. The Beyond Chatbots post discusses using knowledge bases effectively for sophisticated applications.
Next Chapter
Continue to Chapter 4 on User Interface to learn about creating bot interfaces that present your knowledge base and tool capabilities to users effectively.
Vector Collections
This chapter explains how BotServer organizes knowledge into vector collections, the searchable units that power semantic retrieval. Understanding how collections work helps you structure documents effectively and optimize the knowledge your bots can access.
From Folders to Collections
Vector collections emerge automatically from the folder structure within your .gbkb directory. Each folder you create becomes a distinct collection, indexed separately and activated independently. This direct mapping between physical organization and logical collections makes knowledge management intuitive—organize files into folders by topic, and those folders become the collections you reference in your scripts.
When BotServer encounters a .gbkb folder, it scans for documents in supported formats including PDF, DOCX, TXT, HTML, and Markdown. Each file’s content is extracted, split into manageable chunks, converted to vector embeddings, and stored in the vector database. The folder name becomes the collection identifier you use with the USE KB keyword.
This automatic process means no manual indexing configuration is required. Add files to a folder, and they become searchable. Remove files, and they disappear from search results. The system tracks file changes through hash comparisons, triggering reindexing only when content actually changes.
The Indexing Pipeline
Understanding the indexing pipeline helps diagnose issues and optimize performance. When a folder is processed, the system first detects which files are new or modified since the last indexing run. This incremental approach avoids reprocessing unchanged content.
For each file requiring processing, text extraction pulls readable content from the document regardless of its format. PDF extraction handles complex layouts, DOCX processing unwraps the underlying XML, and plain text formats are read directly. The extracted text preserves paragraph structure and meaningful breaks.
The chunking phase splits long documents into smaller pieces suitable for embedding and retrieval. Each chunk contains approximately 500 tokens with overlap between adjacent chunks to preserve context across boundaries. This sizing balances granularity—enabling precise matches—against coherence—keeping related information together.
Embedding generation converts each text chunk into a numerical vector representation. BotServer uses the BGE embedding model by default, producing 384-dimensional vectors that capture semantic meaning. These embeddings enable the similarity comparisons that power semantic search.
Finally, the vectors and their associated metadata are stored in the vector database, organized by collection. Each entry includes the embedding vector, the original text chunk, the source file path, and position information enabling reconstruction of context.
Working with Collections
Activating a collection for use in conversations requires only the USE KB statement with the collection name matching the folder. Once activated, the collection becomes part of the knowledge available when answering questions.
Multiple collections can be active simultaneously, and the system searches across all of them when looking for relevant content. This capability allows bots to draw on diverse knowledge sources. A comprehensive assistant might activate employee policies, product documentation, and procedural guides, answering questions that span any combination of these areas.
The CLEAR KB keyword deactivates collections, either removing all active collections at once or targeting specific ones by name. Clearing collections frees memory and focuses search results on remaining active knowledge. Scripts that handle diverse topics might activate and clear collections as the conversation shifts between subject areas.
Collections operate at the session level, meaning activation persists until the session ends or the collection is explicitly cleared. Users can ask follow-up questions that build on retrieved knowledge without requiring reactivation between each query.
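For example, a bot that handles both HR and IT questions might shift its active collections as the conversation moves between areas (a minimal sketch; the folder names are illustrative):
USE KB "hr-policies"
' HR questions now draw on this collection
' ...later, when the topic changes...
CLEAR KB
USE KB "it-procedures"
' Subsequent questions draw on IT content only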
Website Indexing
Beyond static documents, collections can include content crawled from websites. The USE WEBSITE keyword registers a URL for crawling, with the retrieved content becoming searchable alongside document-based collections.
For content that changes over time, scheduled crawling keeps the collection current. A script with SET SCHEDULE can periodically re-crawl websites, ensuring that the bot’s knowledge reflects recent updates. This approach works well for documentation sites, knowledge bases, or any web content relevant to your bot’s domain.
Website content goes through the same indexing pipeline as documents—text extraction, chunking, embedding, and storage. The resulting collection is indistinguishable in use from document-based collections.
How Search Utilizes Collections
When a user asks a question and collections are active, the search process finds relevant content automatically. The system embeds the query using the same model that indexed the documents, ensuring that queries and content exist in the same semantic space.
Vector similarity search identifies chunks whose embeddings are closest to the query embedding. The system retrieves the top matches from each active collection, then combines and ranks them by relevance. This process typically completes in milliseconds even for large collections.
The most relevant chunks become part of the context provided to the language model when generating a response. The model sees both the user’s question and the retrieved information, enabling it to produce answers grounded in your organization’s actual documentation.
This entire process happens transparently. Developers don’t write search queries or handle result sets. Users don’t know that retrieval is occurring. The system simply provides knowledgeable responses informed by the activated collections.
Embedding Configuration
The embedding model determines how meaning is captured in vectors and significantly influences search quality. BotServer uses a locally-running BGE model by default, configured through the embedding URL and model path settings in config.csv.
The default model provides good general-purpose performance for English content. Organizations with specialized vocabulary or multilingual requirements might benefit from alternative models. The embedding infrastructure supports any compatible model, allowing customization for specific domains.
Changing embedding models requires reindexing existing collections since embeddings from different models aren’t comparable. Plan model changes carefully, accounting for the reprocessing time required for large document collections.
Collection Management Practices
Effective collection organization follows the principle of coherent groupings. Each folder should contain documents about a related topic area, enabling targeted activation. Overly broad collections that mix unrelated content produce noisier search results than focused collections containing cohesive material.
Clear naming conventions help scripts remain readable and maintainable. Collection names should indicate their content clearly enough that someone reading a script understands what knowledge is being activated without examining the folder contents.
Regular content maintenance keeps collections valuable. Remove outdated documents that might produce incorrect answers. Update files when information changes. Schedule website re-crawls frequently enough that cached content doesn’t become stale.
Monitoring collection usage helps identify optimization opportunities. If certain collections are rarely activated, consider whether they should exist separately or merge into related collections. If search results frequently miss relevant content, examine whether documents are organized in ways that match how users think about topics.
Performance Considerations
Collection size affects both memory usage and search performance. Larger collections require more storage for their embeddings and take longer to search, though the impact is usually modest given vector database optimizations. Very large collections might benefit from subdivision into more focused subcollections.
Active collection count influences context-building overhead. Each active collection contributes potential results that must be ranked and filtered. Activating only relevant collections for each conversation keeps search focused and efficient.
Embedding generation represents the primary indexing cost. Initial indexing of large document sets takes time proportional to total content size. Incremental updates process only changed files, making ongoing maintenance much faster than initial setup.
Caching at multiple levels improves performance for common patterns. Frequently accessed chunks remain in memory. Repeated queries benefit from result caching. The system automatically manages these caches without requiring configuration.
Summary
Vector collections bridge the gap between static documents and dynamic conversation knowledge. The automatic indexing pipeline transforms folder contents into searchable collections without requiring manual configuration. Simple activation through USE KB makes knowledge available, while the underlying vector search finds relevant content based on meaning rather than keywords. Thoughtful organization of documents into focused collections maximizes the value of this powerful capability.
Document Indexing
Documents in .gbkb folders are indexed automatically. No manual configuration required.
Automatic Triggers
Indexing occurs when:
- Files added to `.gbkb` folders
- Files modified or updated
- `USE KB` called for a collection
- `USE WEBSITE` registers URLs for crawling
Processing Pipeline
Document → Extract Text → Chunk → Embed → Store in Qdrant
| Stage | Description |
|---|---|
| Extract | Pull text from PDF, DOCX, HTML, MD, TXT, CSV |
| Chunk | Split into ~500 token segments with 50 token overlap |
| Embed | Generate vectors using BGE model |
| Store | Save to Qdrant with metadata |
Supported File Types
| Format | Notes |
|---|---|
| PDF | Full text extraction, OCR for scanned docs |
| DOCX | Microsoft Word documents |
| TXT/MD | Plain text and Markdown |
| HTML | Web pages (text only) |
| CSV/JSON | Structured data |
Website Indexing
Schedule regular crawls for web content:
SET SCHEDULE "0 2 * * *" ' Daily at 2 AM
USE WEBSITE "https://docs.example.com"
Schedule Examples
| Pattern | Frequency |
|---|---|
"0 * * * *" | Hourly |
"*/30 * * * *" | Every 30 minutes |
"0 0 * * 0" | Weekly (Sunday) |
"0 0 1 * *" | Monthly (1st) |
Configuration
In config.csv:
name,value
embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf
Using Indexed Content
USE KB "documentation"
' All documents now searchable
' LLM uses this knowledge automatically
Troubleshooting
| Issue | Solution |
|---|---|
| Documents not found | Check file is in .gbkb folder, verify USE KB called |
| Slow indexing | Large PDFs take time; consider splitting documents |
| Outdated content | Set up scheduled crawls for web content |
See Also
- Knowledge Base System - Architecture overview
- Semantic Search - How search works
- Vector Collections - Collection management
Semantic Search
Semantic search finds relevant content by meaning, not just keywords. When a user asks “How many days off do I get?”, the system matches documents about “vacation policy” or “PTO allowance” even though the words differ.
How It Works
- Query embedding - Convert question to vector using same model as documents
- Similarity search - Find document chunks with closest embeddings (cosine distance)
- Result selection - Take top-k results above relevance threshold
- Context injection - Add retrieved text to LLM prompt
Automatic Integration
Semantic search requires no explicit coding. Just activate knowledge bases:
USE KB "policies"
USE KB "products"
' Now all user questions automatically search both collections
TALK "How can I help you?"
The system handles query embedding, vector search, ranking, and context assembly transparently.
Search Pipeline Details
| Stage | Operation | Default |
|---|---|---|
| Embedding | Convert query to vector | BGE model |
| Search | Vector similarity lookup | Qdrant |
| Distance | Cosine similarity | 0.0-1.0 |
| Top-k | Results returned | 5 |
| Threshold | Minimum relevance | 0.7 |
Multiple Collections
When multiple KBs are active, the system searches all and combines best results:
USE KB "hr-docs" ' Active
USE KB "it-docs" ' Active
USE KB "finance" ' Active
' Query searches all three, returns best matches regardless of source
Use CLEAR KB to deactivate collections when switching topics.
Performance
- Cold search: 100-200ms (first query)
- Warm search: 20-50ms (cached embeddings)
- Indexing: One-time cost per document
Optimizations:
- Embedding cache for repeated queries
- HNSW index for fast vector search
- Only active collections consume resources
Optimizing Quality
Document factors:
- Clear, descriptive text produces better matches
- Use vocabulary similar to how users ask questions
- Avoid jargon-heavy content when possible
Collection factors:
- Focused collections (one topic) beat catch-all collections
- Fewer active collections = less noise in results
- Split large document sets by domain area
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| No results | Collection not active | Call USE KB "name" |
| Wrong results | Too many collections | Clear irrelevant KBs |
| Missing matches | Document not indexed | Check file is in .gbkb folder |
| Poor relevance | Content mismatch | Review document quality |
Configuration
Semantic search uses sensible defaults. Two settings affect context:
name,value
episodic-memory-history,2 # Previous exchanges to include
episodic-memory-threshold,4 # When to compress older context
See Also
- Hybrid Search - Combining semantic + keyword search
- Document Indexing - How documents are processed
- Vector Collections - Technical vector DB details
- USE KB - Keyword reference
Episodic Memory
Episodic memory automatically manages conversation history to stay within LLM token limits while preserving important information through intelligent summarization. This system handles context compaction transparently, ensuring conversations remain coherent without manual intervention.
Overview
Large Language Models have fixed context windows (e.g., 8K, 32K, 128K tokens). Long conversations can exceed these limits, causing truncation or errors. Episodic memory solves this by:
- Monitoring conversation length
- Summarizing older exchanges when thresholds are reached
- Keeping recent messages in full detail
- Storing summaries as “episodic memory” for continuity
Configuration
Episodic memory is controlled by parameters in config.csv:
name,value
episodic-memory-enabled,true
episodic-memory-threshold,4
episodic-memory-history,2
episodic-memory-model,fast
episodic-memory-max-episodes,100
episodic-memory-retention-days,365
episodic-memory-auto-summarize,true
Parameter Reference
| Parameter | Default | Type | Description |
|---|---|---|---|
| episodic-memory-enabled | true | Boolean | Enable/disable episodic memory system |
| episodic-memory-threshold | 4 | Integer | Number of exchanges before compaction triggers |
| episodic-memory-history | 2 | Integer | Recent exchanges to keep in full detail |
| episodic-memory-model | fast | String | Model for generating summaries (fast, quality, or model name) |
| episodic-memory-max-episodes | 100 | Integer | Maximum episode summaries per user |
| episodic-memory-retention-days | 365 | Integer | Days to retain episode summaries |
| episodic-memory-auto-summarize | true | Boolean | Automatically summarize when threshold reached |
How It Works
Context Compaction Process
- Monitor: System tracks message count since last summary
- Trigger: When count reaches `episodic-memory-threshold`, compaction starts
- Summarize: Older messages are summarized using the configured LLM
- Preserve: Last `episodic-memory-history` exchanges remain in full
- Store: Summary saved with role “episodic” for future context
Example Timeline
With defaults (episodic-memory-threshold=4, episodic-memory-history=2):
| Exchange | Action | Context State |
|---|---|---|
| 1-2 | Normal | Messages 1-2 in full |
| 3-4 | Normal | Messages 1-4 in full |
| 5 | Compaction | Summary of 1-2 + Messages 3-5 in full |
| 6-7 | Normal | Summary + Messages 3-7 in full |
| 8 | Compaction | Summary of 1-5 + Messages 6-8 in full |
Automatic Behavior
The system automatically:
- Tracks conversation length
- Triggers compaction when exchanges exceed `episodic-memory-threshold`
- Summarizes older messages using the configured LLM
- Keeps only the last `episodic-memory-history` exchanges in full
- Stores the summary as an “episodic memory” for future context
The scheduler runs every 60 seconds, checking all active sessions and processing those that exceed the threshold.
Tuning Guidelines
High-Context Conversations
For complex discussions requiring more history:
name,value
episodic-memory-history,5
episodic-memory-threshold,10
Token-Constrained Environments
For smaller context windows or cost optimization:
name,value
episodic-memory-history,1
episodic-memory-threshold,2
Disable Compaction
Set threshold to 0 to disable automatic compaction:
name,value
episodic-memory-threshold,0
Extended Retention
For long-term memory across sessions:
name,value
episodic-memory-max-episodes,500
episodic-memory-retention-days,730
Use Case Recommendations
| Use Case | History | Threshold | Rationale |
|---|---|---|---|
| FAQ Bot | 1 | 2 | Questions are independent |
| Customer Support | 2 | 4 | Some context needed |
| Technical Discussion | 4 | 8 | Complex topics require history |
| Therapy/Coaching | 5 | 10 | Continuity is critical |
| Long-term Assistant | 3 | 6 | Balance memory and context |
Token Savings
Compaction significantly reduces token usage:
| Scenario | Without Compaction | With Compaction | Savings |
|---|---|---|---|
| 10 exchanges | ~5,000 tokens | ~2,000 tokens | 60% |
| 20 exchanges | ~10,000 tokens | ~3,000 tokens | 70% |
| 50 exchanges | ~25,000 tokens | ~5,000 tokens | 80% |
Actual savings depend on message length and summary quality.
Summary Storage
Summaries are stored with special role identifiers:
- Role `episodic` or `compact` marks summary messages
- Summaries include key points from compacted exchanges
- Original messages are not deleted, just excluded from active context
- Episodes are searchable for context retrieval across sessions
Benefits
- Automatic management - No manual intervention needed
- Token efficiency - Stay within model context limits
- Context preservation - Important information kept via summaries
- Relevant context - Recent exchanges kept in full detail
- Cost savings - Fewer tokens = lower API costs
- Long-term memory - Episode storage enables recall across sessions
Interaction with Caching
Episodic memory works alongside semantic caching:
- Caching: Reuses responses for similar queries (see Semantic Caching)
- Episodic Memory: Manages conversation length over time
Both features reduce costs and improve performance independently.
Best Practices
- Start with defaults - Work well for most use cases
- Monitor token usage - Adjust if hitting context limits
- Consider conversation type - Support vs complex discussion
- Test different values - Find optimal balance for your users
- Set retention appropriately - Balance memory vs privacy requirements
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| Context too long | Threshold too high | Lower episodic-memory-threshold |
| Lost context | History too low | Increase episodic-memory-history |
| Summaries missing info | Model limitations | Use quality instead of fast |
| No compaction occurring | Threshold is 0 or disabled | Set positive threshold, enable feature |
| Old episodes not deleted | Retention too long | Lower episodic-memory-retention-days |
See Also
- Semantic Caching - Response caching system
- Configuration Parameters - Full parameter reference
- LLM Configuration - Model settings
Caching
BotServer includes automatic caching to improve response times and reduce redundant processing, including semantic caching for LLM responses using an in-memory cache component.
Features
The caching system provides exact match caching for identical prompts and semantic similarity matching to find and reuse responses for semantically similar prompts. Configurable TTL settings control how long cached responses remain valid. Caching can be enabled or disabled on a per-bot basis through configuration. Embedding-based similarity uses local embedding models for semantic matching, and comprehensive statistics and monitoring track cache hits, misses, and performance metrics.
How Caching Works
Caching in BotServer is controlled by configuration parameters in config.csv. The system automatically caches LLM responses and manages conversation history.
When enabled, the semantic cache operates through a straightforward process. When a user asks a question, the system checks if a semantically similar question was asked before. If the similarity exceeds the threshold (typically 0.95), it returns the cached response. Otherwise, it generates a new response and caches it for future queries.
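The flow can be pictured with this sketch. EMBED, CACHE_FIND_SIMILAR, and CACHE_PUT are hypothetical names used only to illustrate the logic, which runs inside BotServer rather than in your scripts:
' Illustrative only - not actual gbdialog keywords
query_vector = EMBED(user_message)
hit = CACHE_FIND_SIMILAR(query_vector, 0.95) ' llm-cache-threshold
IF hit THEN
TALK hit.response
ELSE
response = LLM user_message
CACHE_PUT(query_vector, response, 3600) ' llm-cache-ttl seconds
TALK response
END IF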
Configuration
Basic Cache Settings
From default.gbai/default.gbot/config.csv:
llm-cache,false # Enable/disable LLM response caching
llm-cache-ttl,3600 # Cache time-to-live in seconds
llm-cache-semantic,true # Use semantic similarity for cache matching
llm-cache-threshold,0.95 # Similarity threshold for cache hits
Configuration Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| llm-cache | boolean | false | Enable/disable LLM response caching |
| llm-cache-ttl | integer | 3600 | Time-to-live for cached entries (in seconds) |
| llm-cache-semantic | boolean | true | Enable semantic similarity matching |
| llm-cache-threshold | float | 0.95 | Similarity threshold for semantic matches (0.0-1.0) |
Embedding Service Configuration
For semantic similarity matching, ensure your embedding service is configured:
embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf
Conversation History Management
The system manages conversation context through these parameters:
episodic-memory-history,2 # Number of previous messages to include in context
episodic-memory-threshold,4 # Compact conversation after N exchanges
The episodic-memory-history setting keeps the last 2 exchanges in the conversation context, providing continuity without excessive token usage. The episodic-memory-threshold setting triggers summarization or removal of older messages after 4 exchanges to save tokens while preserving essential context.
Cache Storage
Architecture
The caching system uses a multi-level approach for optimal performance, combining fast in-memory access with configurable persistence options.
Cache Key Structure
The cache uses a multi-level key structure where exact matches use a hash of the exact prompt while semantic matches store embedding vectors with a semantic index for similarity comparison.
Cache Component Features
The cache component provides fast in-memory storage with sub-millisecond response times. Automatic expiration handles TTL-based cache invalidation without manual intervention. Distributed caching enables sharing the cache across multiple bot instances for consistent performance. Persistence options offer optional disk persistence for cache durability across restarts.
Example Usage
Basic Caching
' Caching happens automatically when enabled
USE KB "policies"
' First user asks: "What's the vacation policy?"
' System generates response and caches it
' Second user asks: "Tell me about vacation rules"
' System finds semantic match (>0.95 similarity) and returns cached response
Tool Response Caching
' Tool responses can also be cached
USE TOOL "weather-api"
' First request: "What's the weather in NYC?"
' Makes API call, caches response for 1 hour
' Second request within TTL: "NYC weather?"
' Returns cached response without API call
Cache Management
The cache operates automatically based on your configuration settings. Cache entries are managed through TTL expiration and memory policies without requiring manual intervention.
Best Practices
When to Enable Caching
Enable caching for FAQ bots with repetitive questions, knowledge base queries where the same information is requested frequently, API-heavy integrations where external calls are expensive, and high-traffic bots where response latency impacts user experience.
Disable caching for real-time data queries where freshness is critical, personalized responses that should vary per user, time-sensitive information that changes frequently, and development or testing environments where you need to see actual responses.
Tuning Cache Parameters
TTL settings should match your data freshness requirements. Use short TTL values around 300 seconds for news, weather, and stock prices. Medium TTL values around 3600 seconds work well for general knowledge and FAQs. Long TTL values around 86400 seconds suit static documentation and policies.
Similarity threshold affects matching precision. High thresholds of 0.95 or above provide strict matching with fewer false positives. Medium thresholds between 0.85 and 0.95 balance coverage and accuracy. Low thresholds below 0.85 enable broad matching but risk returning incorrect responses.
Memory Management
The cache component automatically manages memory through LRU (Least Recently Used) eviction policies that remove the oldest accessed entries first. Configurable memory limits prevent unbounded growth. Automatic key expiration cleans up entries that have exceeded their TTL.
Performance Impact
Typical performance improvements with caching enabled:
| Metric | Without Cache | With Cache | Improvement |
|---|---|---|---|
| Response Time | 2-5s | 50-200ms | 10-100x faster |
| API Calls | Every request | First request only | 90%+ reduction |
| Token Usage | Full context | Cached response | 95%+ reduction |
| Cost | $0.02/request | $0.001/request | 95% cost saving |
Troubleshooting
Cache Not Working
If caching isn’t working as expected, verify that the cache service is running and accessible. Confirm caching is enabled in your config with llm-cache,true. Check that the TTL hasn’t expired for entries you expect to be cached. Review the similarity threshold to ensure it isn’t set too high for your use case.
Clear Cache
Cache is managed automatically through TTL expiration and eviction policies. To clear the cache manually, restart the cache component or use the admin API endpoint /api/admin/cache/clear.
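For example, from a shell (a minimal sketch assuming the endpoint accepts an authenticated POST):
# Clear the semantic cache via the admin API
curl -X POST http://localhost:8080/api/admin/cache/clear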
Summary
The semantic caching system in BotServer provides intelligent response caching that delivers 10-100x faster responses and cuts API costs by 90% or more. Response quality is maintained through semantic matching that understands query intent rather than requiring exact matches. The system scales automatically with the cache component to handle increasing load. Configure caching based on your bot's needs, monitor performance metrics, and tune parameters for optimal results.
Chapter 04: User Interface
The General Bots UI system is built with HTMX and server-side rendering.
UI Modes
| Mode | Description | Use Case |
|---|---|---|
| default.gbui | Full desktop suite | Complete productivity |
| single.gbui | Simple chat widget | Embedded chat |
| console | Terminal interface | Development/testing |
Architecture
- HTMX - Dynamic updates without JavaScript frameworks
- Server-Side Rendering - Fast, SEO-friendly pages
- Minimal JS - No build process required
Quick Access
http://localhost:8080 → Main interface
http://localhost:8080/chat → Chat app
http://localhost:8080/drive → File manager
http://localhost:8080/console → Terminal mode
Suite Applications
| App | Purpose |
|---|---|
| Chat | AI assistant conversations |
| Drive | File management |
| Tasks | To-do lists |
| Mail | Email client |
| Calendar | Scheduling |
| Meet | Video calls |
| Paper | AI writing |
| Research | AI search |
Chapter Contents
- Suite User Manual - End-user guide
- UI Structure - Component layout
- default.gbui - Full desktop mode
- single.gbui - Chat widget mode
- Console Mode - Terminal interface
- HTMX Architecture - Technical details
- Suite Applications - App documentation
- How-To Tutorials - Step-by-step guides
See Also
- .gbtheme Package - Styling and themes
- .gbui Structure - Package format
General Bots Suite - User Manual
The Complete Productivity Workspace
AI-native productivity suite for modern teams
Welcome to General Bots Suite
General Bots Suite is your all-in-one workspace that combines communication, productivity, and AI assistance. All your productivity tools in one place - simple and powerful.
What makes it special:
- Everything works together seamlessly
- AI helps you with every task
- No complex setup required
- Works in your browser
Part I: Getting Started
Opening the Suite
- Open your web browser (Chrome, Firefox, Safari, or Edge)
- Go to your General Bots address (example: http://your-company.bot:8080)
- The Suite loads automatically - you'll see the workspace in seconds
Your First Look
When the Suite opens, you see:
The Apps Menu
Click the nine-dot grid (⋮⋮⋮) in the top right to see all applications:
| Icon | App | What It Does |
|---|---|---|
| 💬 | Chat | Talk with your AI assistant |
| 📁 | Drive | Store and manage your files |
| ✓ | Tasks | Manage your to-do lists |
| ✉ | Mail | Send and receive emails |
| 📅 | Calendar | Schedule meetings and events |
| 🎥 | Meet | Video calls and meetings |
| 📝 | Paper | Write documents with AI help |
| 🔍 | Research | AI-powered search |
| 📊 | Analytics | View reports and statistics |
Part II: Chat - Your AI Assistant
What Chat Does
Chat is your personal AI assistant. Ask questions, get help with tasks, or have a conversation - just like talking to a helpful colleague.
How to Use Chat
Sending a Message:
- Click in the message box at the bottom
- Type your question or request
- Press Enter or click the ↑ button
Example Conversations:
You: What meetings do I have today?
Bot: You have 2 meetings scheduled:
• 10:00 AM - Team Standup
• 2:00 PM - Project Review
You: Remind me to call John tomorrow at 3pm
Bot: Done! I've created a reminder for tomorrow at 3:00 PM
to call John.
You: Write a professional email to decline a meeting
Bot: Here's a draft:
Subject: Unable to Attend Meeting
Dear [Name],
Thank you for the invitation. Unfortunately, I have
a scheduling conflict and won't be able to attend.
Would it be possible to reschedule or receive a
summary of the discussion?
Best regards
Voice Input
- Click the 🎤 microphone button
- Speak your message
- Click again to stop
- Your words appear as text automatically
Quick Suggestions
Below the message box, you’ll see suggestion chips - common actions you can click:
[📊 Show my tasks] [📧 Check email] [📅 Today's schedule] [❓ Help]
Keyboard Shortcuts for Chat
| Shortcut | Action |
|---|---|
| Enter | Send message |
| Shift+Enter | New line (without sending) |
| ↑ Arrow | Edit last message |
| / | Show command menu |
Part III: Drive - File Management
What Drive Does
Drive is your file storage - like Google Drive or OneDrive. Store documents, images, spreadsheets, and any file you need.
The Drive Interface
Creating and Uploading
Upload Files:
- Click + New button
- Select Upload Files
- Choose files from your computer
- Or: Drag files directly into Drive
Create New Folder:
- Click + New
- Select New Folder
- Type the folder name
- Press Enter
Working with Files
Open a file: Double-click it
Select files: Click the checkbox beside the file name
Multiple selection: Hold Ctrl (or Cmd on Mac) and click files
Right-click menu options:
- 📂 Open
- ⬇️ Download
- ✏️ Rename
- 📋 Copy
- 📁 Move to…
- ⭐ Add to Starred
- 🔗 Share
- 🗑 Delete
View Options
| Button | View | Best For |
|---|---|---|
| ⊞ | Grid view | Images and visual files |
| ≡ | List view | Documents and details |
Keyboard Shortcuts for Drive
| Shortcut | Action |
|---|---|
| Ctrl+U | Upload files |
| Ctrl+N | New folder |
| Delete | Move to trash |
| Ctrl+C | Copy |
| Ctrl+V | Paste |
| Enter | Open selected |
Part IV: Tasks - To-Do Management
What Tasks Does
Tasks helps you track what needs to be done. Create to-do lists, set due dates, and mark items complete.
The Tasks Interface
Adding a Task
- Type your task in the input box
- (Optional) Select a category: Work, Personal, Shopping, Health
- (Optional) Set a due date
- Click + Add or press Enter
Pro tip: Be specific! Instead of “Work on project”, write “Draft introduction section for project proposal”
Task Priorities
| Color | Priority | When to Use |
|---|---|---|
| 🔴 Red | High | Must do today |
| 🟡 Yellow | Medium | Important but not urgent |
| 🟢 Green | Low | Can wait |
Completing Tasks
- Click the checkbox to mark a task done
- Completed tasks move to the “Completed” tab
- Feel the satisfaction! ✓
Filter Tabs
| Tab | Shows |
|---|---|
| All | Everything |
| Active | Tasks not yet done |
| Completed | Finished tasks |
| Priority | High-priority tasks only |
Creating Tasks from Chat
In Chat, just say:
You: Create a task to review the budget by Friday
Bot: Task created: "Review the budget" - Due: Friday
Part V: Mail - Email Management
What Mail Does
Mail connects to your email accounts so you can read, write, and organize emails without leaving the Suite.
The Mail Interface
Reading Email
- Click on Mail in the Apps menu
- Click any email in the list to read it
- The full email appears on the right
Composing Email
- Click ✏ Compose
- Fill in the fields:
- To: recipient’s email
- Subject: what it’s about
- Body: your message
- Click Send
AI-Assisted Writing:
You: Help me write an email to reschedule tomorrow's meeting
Bot: Here's a draft:
To: [recipient]
Subject: Request to Reschedule Meeting
Hi [Name],
I hope this message finds you well. Would it be
possible to reschedule our meeting tomorrow?
I have an unexpected conflict.
Please let me know what times work for you
later this week.
Thank you for understanding.
Email Folders
| Folder | Purpose |
|---|---|
| Inbox | New and unread messages |
| Sent | Emails you’ve sent |
| Drafts | Unfinished emails |
| Trash | Deleted emails (emptied after 30 days) |
Email Actions
| Button | Action |
|---|---|
| Reply | Respond to sender |
| Reply All | Respond to everyone |
| Forward | Send to someone else |
| Delete | Move to Trash |
| Archive | Remove from Inbox but keep |
Part VI: Calendar - Scheduling
What Calendar Does
Calendar shows your schedule, meetings, and events. Plan your day, week, or month at a glance.
The Calendar Interface
Creating an Event
Method 1: Click and Create
- Click on a day/time slot
- Enter event details
- Click Save
Method 2: Ask the AI
You: Schedule a team meeting for next Tuesday at 2pm
Bot: Event created:
📅 Team Meeting
🕐 Tuesday, March 19 at 2:00 PM
⏱ Duration: 1 hour
Event Details
When creating an event, you can set:
- Title - What the event is
- Date & Time - When it happens
- Duration - How long it lasts
- Location - Where (room or video link)
- Attendees - Who to invite
- Reminder - When to notify you
- Repeat - Daily, weekly, monthly
Calendar Views
| View | Shows | Best For |
|---|---|---|
| Day | Hour by hour | Detailed daily planning |
| Week | 7 days | Seeing your week ahead |
| Month | Full month | Long-term planning |
Keyboard Navigation
| Key | Action |
|---|---|
| ← → | Previous/Next period |
| T | Jump to Today |
| D | Day view |
| W | Week view |
| M | Month view |
Part VII: Meet - Video Calls
What Meet Does
Meet lets you have video calls with one person or many. Share your screen, record meetings, and get AI transcriptions.
Starting a Meeting
Start Instant Meeting:
- Click Meet in Apps menu
- Click Start Meeting
- Share the link with others
Schedule for Later:
You: Schedule a video call with the team for tomorrow at 10am
Bot: Meeting scheduled:
🎥 Team Video Call
📅 Tomorrow at 10:00 AM
🔗 Link: meet.bot/abc-defg-hij
Shall I send invites to the team?
The Meeting Interface
Meeting Controls
| Button | Function |
|---|---|
| 🎤 Mute | Turn microphone on/off |
| 📹 Video | Turn camera on/off |
| 🖥 Share | Share your screen |
| 🔴 Record | Record the meeting |
| 📝 Transcribe | Get live captions |
| 💬 Chat | Open meeting chat |
| 👥 Participants | See who’s in the call |
| 📞 End | Leave the meeting |
Screen Sharing
- Click 🖥 Share
- Choose what to share:
- Entire Screen - Everything you see
- Window - One application
- Tab - One browser tab
- Click Share
- Click Stop Sharing when done
AI Features in Meetings
Live Transcription:
- Enable with the 📝 Transcribe button
- Words appear as people speak
- Great for accessibility and note-taking
Meeting Summary: After the meeting, ask:
You: Summarize today's project meeting
Bot: Meeting Summary:
Duration: 45 minutes
Participants: You, John, Sarah
Key Points:
• Project deadline moved to April 15
• John will handle client communication
• Sarah completing design by Friday
Action Items:
• [You] Review budget proposal
• [John] Send client update
• [Sarah] Share design mockups
Part VIII: Paper - AI Writing
What Paper Does
Paper is your writing space with AI assistance. Write documents, notes, reports - and let AI help you write better.
The Paper Interface
Creating a Document
- Click + New in the sidebar
- Choose a template:
- Blank - Start fresh
- Meeting Notes - Pre-formatted for meetings
- To-Do List - Checkbox format
- Research - Sections for sources
Formatting Toolbar
| Button | Function | Shortcut |
|---|---|---|
| B | Bold | Ctrl+B |
| I | Italic | Ctrl+I |
| U | Underline | Ctrl+U |
| H1 | Heading 1 | Ctrl+1 |
| H2 | Heading 2 | Ctrl+2 |
| • | Bullet list | Ctrl+Shift+8 |
| ― | Numbered list | Ctrl+Shift+7 |
| 🔗 | Insert link | Ctrl+K |
| 📷 | Insert image | - |
AI Writing Assistant ✨
Click the AI ✨ button or type /ai for AI help:
Commands:
/ai improve → Make the text better
/ai shorter → Make it more concise
/ai longer → Expand with more detail
/ai formal → Make it professional
/ai friendly → Make it casual
/ai translate → Translate to another language
/ai summarize → Create a summary
Example:
You wrote: "The thing we need to do is make the stuff better"
/ai formal
AI suggests: "Our objective is to enhance the quality of
our deliverables to meet higher standards."
Auto-Save
Paper saves automatically as you type. Look for:
- “Saving…” - Currently saving
- “Saved” - All changes saved
- “Offline” - Will save when connected
Part IX: Research - AI Search
What Research Does
Research is like having a research assistant. Search the web, your documents, and knowledge bases - then get AI-synthesized answers.
The Research Interface
Search Modes
| Mode | Icon | Searches |
|---|---|---|
| All | 🌐 | Everything |
| Academic | 📚 | Research papers, journals |
| Code | 💻 | Documentation, code examples |
| Internal | 🏠 | Your company’s knowledge base |
Using Research
- Type your question in the search box
- Select a focus mode (optional)
- Press Enter
- Read the AI-synthesized answer
- Click sources to see original content
Collections
Save important searches and sources:
- Click + New Collection
- Name it (e.g., “Q1 Project Research”)
- Add sources by clicking Save to Collection
- Access anytime from the sidebar
Pro Tips
Be specific:
- ❌ “marketing”
- ✅ “B2B SaaS marketing strategies for startups under 50 employees”
Use follow-up questions:
Search: What is machine learning?
Follow-up: How is it different from deep learning?
Follow-up: What are practical business applications?
Part X: Analytics - Reports & Insights
What Analytics Does
Analytics shows you reports about usage, conversations, and performance. Understand how the bot is being used and what’s working.
The Analytics Interface
Key Metrics
| Metric | What It Means |
|---|---|
| Messages | Total conversations |
| Success Rate | % of questions answered well |
| Avg Response Time | How fast the bot replies |
| Users | Number of people using the bot |
| Popular Topics | What people ask about most |
Time Ranges
Select different periods to analyze:
- Last Hour
- Last 6 Hours
- Last 24 Hours
- Last 7 Days
- Last 30 Days
- Custom Range
Exporting Data
Click Export to download reports as:
- CSV - For spreadsheets
- PDF - For sharing
- JSON - For developers
Part XI: Designer - Visual Dialog Builder
What Designer Does
Designer lets you create bot conversations visually. Drag and drop blocks to build dialogs without coding.
The Designer Interface
Building a Dialog
Step 1: Drag Blocks
- Drag from Toolbox to Canvas
- Blocks snap to grid
Step 2: Connect Blocks
- Drag from output port (●) to input port
- Lines show conversation flow
Step 3: Configure Properties
- Click a block
- Edit settings in Properties panel
Step 4: Export
- Click Export to .bas
- Save your dialog file
Block Types
| Block | Icon | Purpose | Example |
|---|---|---|---|
| TALK | 💬 | Bot speaks | “Welcome! How can I help?” |
| HEAR | 👂 | Wait for user input | Store response in name |
| SET | 📝 | Set a variable | total = price * quantity |
| IF | 🔀 | Make decisions | If age > 18 then… |
| FOR | 🔄 | Repeat for items | For each item in cart… |
| SWITCH | 🔃 | Multiple choices | Switch on category… |
| CALL | 📞 | Call another dialog | Call “checkout” |
| SEND MAIL | 📧 | Send email | Send confirmation |
| SAVE | 💾 | Save data | Save to database |
| WAIT | ⏱ | Pause | Wait 5 seconds |
Example: Simple Greeting Dialog
The Designer canvas shows flow diagrams like the one in the interface above. A simple greeting dialog flows from a TALK node (“What’s your name?”) to a HEAR node (capturing the name as a string variable) to another TALK node (“Nice to meet you, {name}!”).
Generated Code:
TALK "What's your name?"
HEAR name AS STRING
TALK "Nice to meet you, " + name + "!"
Keyboard Shortcuts in Designer
| Shortcut | Action |
|---|---|
| Ctrl+S | Save |
| Ctrl+O | Open file |
| Ctrl+Z | Undo |
| Ctrl+Y | Redo |
| Ctrl+C | Copy block |
| Ctrl+V | Paste block |
| Delete | Delete selected |
| Escape | Deselect |
Part XII: Sources - Prompts & Templates
What Sources Does
Sources is your library of prompts, templates, tools, and AI models. Find and use pre-built components to extend your bot.
The Sources Interface
Tabs Explained
| Tab | Contains | Use For |
|---|---|---|
| Prompts | Pre-written AI instructions | Starting conversations |
| Templates | Complete bot packages | Full solutions |
| MCP Servers | External tool connections | Integrations |
| LLM Tools | AI functions | Extending capabilities |
| Models | AI model options | Choosing AI provider |
Using a Prompt
- Browse or search prompts
- Click on a prompt card
- Click Use to apply it
- Customize if needed
Installing a Template
- Go to Templates tab
- Find a template (e.g., “CRM Contacts”)
- Click Install
- Configure settings
- Template is now active
Available Models
| Model | Provider | Best For |
|---|---|---|
| Claude Sonnet 4.5 | Anthropic | General tasks, coding |
| Claude Opus 4.5 | Anthropic | Complex analysis |
| Gemini Pro | Google | Long documents |
| Llama 3.3 | Meta | Open source, privacy |
Part XIII: Tools - System Utilities
Compliance Scanner
What It Checks:
- Hardcoded passwords
- Exposed API keys
- SQL injection risks
- Deprecated keywords
- Security best practices
Part XIV: Settings
Accessing Settings
- Click your avatar (top right)
- Select Settings
Setting Categories
Profile:
- Display name
- Avatar image
- Email address
- Language preference
Notifications:
- Email notifications
- Desktop alerts
- Sound preferences
Appearance:
- Theme (Light/Dark/Auto)
- Accent color
- Font size
Privacy:
- Data retention
- Conversation history
- Usage analytics
Connections:
- Email accounts
- Calendar sync
- Cloud storage
Part XV: Keyboard Shortcuts Reference
Global Shortcuts
| Shortcut | Action |
|---|---|
| Alt+1 | Open Chat |
| Alt+2 | Open Drive |
| Alt+3 | Open Tasks |
| Alt+4 | Open Mail |
| Alt+5 | Open Calendar |
| Escape | Close dialog/menu |
| / | Focus search |
| Ctrl+K | Command palette |
Common Shortcuts
| Shortcut | Action |
|---|---|
| Ctrl+S | Save |
| Ctrl+Z | Undo |
| Ctrl+Y | Redo |
| Ctrl+C | Copy |
| Ctrl+V | Paste |
| Ctrl+A | Select all |
| Ctrl+F | Find |
Part XVI: Tips & Best Practices
Daily Workflow
Morning:
- Open Suite
- Check Chat for overnight messages
- Review Tasks for the day
- Check Calendar for meetings
During Work:
- Use Chat for quick questions
- Upload files to Drive
- Update Tasks as you complete them
- Take notes in Paper
End of Day:
- Mark completed tasks done
- Archive old emails
- Review tomorrow’s calendar
Productivity Tips
In Chat:
- Be specific in your questions
- Use follow-up questions
- Say “summarize” for long responses
In Drive:
- Use folders to organize
- Star important files
- Use search instead of browsing
In Tasks:
- Break big tasks into smaller ones
- Set realistic due dates
- Use categories to organize
In Mail:
- Process emails once
- Archive instead of delete
- Use AI for drafting
Getting Help
Ask the Bot:
You: How do I upload a file?
You: What keyboard shortcuts are there?
You: Help me with tasks
Resources:
- This manual
- In-app help (click ?)
- Support team
Appendix A: Troubleshooting
Common Issues
Suite won’t load:
- Refresh the page (F5 or Ctrl+R)
- Clear browser cache
- Try a different browser
Files won’t upload:
- Check file size (max 100MB)
- Check internet connection
- Try a smaller file first
Bot not responding:
- Wait a few seconds
- Refresh the page
- Check internet connection
Video/audio not working:
- Allow camera/microphone in browser
- Check device permissions
- Try different browser
Error Messages
| Message | Solution |
|---|---|
| “Connection lost” | Check internet, refresh page |
| “File too large” | Reduce file size |
| “Permission denied” | Contact administrator |
| “Session expired” | Log in again |
Appendix B: Glossary
| Term | Definition |
|---|---|
| Bot | AI assistant that responds to your messages |
| Dialog | A conversation flow or script |
| HTMX | Technology that makes pages interactive |
| KB | Knowledge Base - stored information |
| MCP | Model Context Protocol - tool connections |
| Suite | The complete workspace application |
| Template | Pre-built bot configuration |
© General Bots - Built with ❤️ and AI
For the latest documentation, visit the online manual
Admin vs User Views
The General Bots Suite separates functionality into two distinct interfaces: the User View for personal productivity and the Admin View for organization management. This separation ensures users only see features relevant to their role while administrators have access to system-wide controls.
Overview
| View | Access | Purpose |
|---|---|---|
| User View | All authenticated users | Personal settings, files, tasks, calendar |
| Admin View | Users with admin role | Organization management, user provisioning, DNS |
User View
The User View is the default interface for all authenticated users. It provides access to personal productivity tools and settings.
Accessing User Settings
- Click your avatar in the top-right corner
- Select Settings
User Settings Sections
Profile
- Display name and avatar
- Email address
- Language and timezone
Security
- Change password
- Two-factor authentication (2FA)
- Active sessions management
- Trusted devices
Appearance
- Theme selection (dark, light, blue, purple, green, orange)
- Accent color
- Font size preferences
Notifications
- Email notification preferences
- Desktop alerts
- Sound settings
Storage
- View storage quota usage
- Manage connected storage providers
- Clear cache
Integrations
- API keys for external access
- Webhook configurations
- Connected OAuth providers
Privacy
- Data visibility settings
- Online status preferences
- Data export and account deletion
User API Endpoints
All user endpoints use the /api/user/ prefix:
| Endpoint | Method | Description |
|---|---|---|
| /api/user/profile | GET, PUT | User profile data |
| /api/user/password | POST | Change password |
| /api/user/security/2fa/status | GET | 2FA status |
| /api/user/security/2fa/enable | POST | Enable 2FA |
| /api/user/security/sessions | GET | Active sessions |
| /api/user/notifications/preferences | GET, PUT | Notification settings |
| /api/user/storage | GET | Storage quota |
| /api/user/api-keys | GET, POST, DELETE | API key management |
| /api/user/webhooks | GET, POST, DELETE | Webhook management |
| /api/user/data/export | POST | Request data export |
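For example, reading your own profile from the command line (a minimal sketch assuming JWT bearer authentication):
# Fetch the current user's profile
curl -H "Authorization: Bearer $TOKEN" http://localhost:8080/api/user/profile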
Admin View
The Admin View provides organization-wide management capabilities. Access requires the admin role.
Accessing Admin Panel
- Click your avatar in the top-right corner
- Select Admin Panel
If you don’t see “Admin Panel”, you don’t have administrator privileges.
Admin Panel Sections
Dashboard
- Quick statistics (users, groups, bots, storage)
- System health overview
- Recent activity feed
- Quick action buttons
Users
- View all organization users
- Create new users
- Edit user details and roles
- Disable or delete accounts
- Reset user passwords
Groups
- Create and manage groups
- Assign users to groups
- Set group permissions
- Manage group invitations
Bots
- View deployed bots
- Bot configuration management
- Usage statistics per bot
DNS
- Register custom hostnames
- Manage DNS records
- SSL certificate status
Audit Log
- View all system events
- Filter by user, action, or date
- Export audit reports
Organization Billing (Admin-level)
- Organization subscription status
- Usage across all users
- Payment methods for organization
- Invoice history
Admin API Endpoints
All admin endpoints use the /api/admin/ prefix and require admin role:
| Endpoint | Method | Description |
|---|---|---|
| /api/admin/dashboard | GET | Dashboard statistics |
| /api/admin/users | GET, POST | List/create users |
| /api/admin/users/:id | GET, PUT, DELETE | Manage specific user |
| /api/admin/groups | GET, POST | List/create groups |
| /api/admin/groups/:id | GET, PUT, DELETE | Manage specific group |
| /api/admin/bots | GET | List organization bots |
| /api/admin/dns | GET, POST, DELETE | DNS management |
| /api/admin/audit | GET | Audit log entries |
| /api/admin/stats/* | GET | Various statistics |
| /api/admin/health | GET | System health status |
| /api/admin/activity/recent | GET | Recent activity feed |
Permission Levels
The system uses role-based access control (RBAC):
| Role | User View | Admin View | Description |
|---|---|---|---|
| guest | Limited | ❌ | Read-only chat access |
| user | ✅ | ❌ | Standard user features |
| manager | ✅ | Partial | Can view monitoring |
| admin | ✅ | ✅ | Full system access |
Checking User Role
In BASIC scripts, check the user’s role:
role = user.role
IF role = "admin" THEN
TALK "Welcome, administrator!"
ELSE
TALK "Welcome, " + user.name
END IF
Desktop App Considerations
When running the Suite as a desktop application (via Tauri), additional features become available:
Desktop-Only Features
- Local file system access
- Rclone-based file synchronization
- System tray integration
- Native notifications
Sync Feature
The desktop app can sync local folders with cloud Drive using rclone:
- Configure remote in Settings → Storage → Sync
- Select local folder to sync
- Start/stop sync from Drive sidebar
Note: Sync controls (/files/sync/start, /files/sync/stop) communicate with the local rclone process on the desktop. These features are not available in the web-only version.
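As a sketch, the desktop app drives those controls with plain HTTP calls (the POST method is an assumption):
# Start and stop local folder sync (desktop app only)
curl -X POST http://localhost:8080/files/sync/start
curl -X POST http://localhost:8080/files/sync/stop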
Security Best Practices
For Users
- Enable 2FA on your account
- Review active sessions regularly
- Use strong, unique passwords
- Revoke unused API keys
For Administrators
- Follow principle of least privilege
- Review audit logs regularly
- Rotate service account credentials
- Monitor for unusual activity
- Keep user list current (remove departed employees)
Related Documentation
- Permissions Matrix - Detailed permission definitions
- User Authentication - Login and session management
- REST Endpoints - Complete API reference
- Suite User Manual - End-user guide
UI Structure
The BotServer UI system provides two interface implementations designed for different deployment scenarios. Choose the right interface based on your use case and performance requirements.
Directory Layout
ui/
├── suite/ # Full-featured interface
│ ├── index.html
│ ├── base.html
│ ├── home.html
│ ├── default.gbui
│ ├── single.gbui
│ ├── designer.html
│ ├── editor.html
│ ├── settings.html
│ ├── js/
│ │ ├── htmx-app.js
│ │ ├── theme-manager.js
│ │ └── vendor/
│ ├── css/
│ │ ├── app.css
│ │ ├── apps-extended.css
│ │ ├── components.css
│ │ └── global.css
│ ├── public/
│ ├── partials/
│ ├── auth/
│ ├── attendant/
│ ├── chat/
│ │ ├── chat.html
│ │ ├── chat.css
│ │ └── projector.html
│ ├── drive/
│ ├── mail/
│ ├── tasks/
│ ├── calendar/
│ ├── meet/
│ ├── paper/
│ ├── research/
│ ├── analytics/
│ ├── sources/
│ ├── tools/
│ └── monitoring/
│
└── minimal/ # Lightweight interface
├── index.html
└── js/
Suite Interface
The Suite interface (ui/suite/) delivers a comprehensive, desktop-class experience with multi-application integration. It includes Chat, Drive, Tasks, and Mail modules in a unified workspace.
The Suite interface provides multi-application integration with seamless navigation between modules, rich interactions for complex workflows, and responsive design that adapts across desktop, tablet, and mobile form factors. Customizable GBUI templates allow you to choose between default.gbui for the full layout or single.gbui for a chat-focused experience. Tauri integration enables native desktop packaging for distribution outside the browser.
The Suite interface is best suited for enterprise deployments requiring full functionality, power users working with multiple services simultaneously, desktop application distribution via Tauri builds, and multi-service integrations where context switching between modules matters.
You can access the Suite interface via web at http://localhost:8080/suite or as a desktop application through the Tauri build using the --desktop flag.
Minimal Interface
The Minimal interface (ui/minimal/) prioritizes speed and simplicity. It loads fast, uses minimal resources, and focuses on essential chat interactions.
This lightweight interface provides core chat and basic interactions only, fast loading with minimal dependencies, and low resource usage suitable for constrained environments. The design supports easy embedding into existing applications and takes a mobile-first approach to responsive layout.
The Minimal interface excels for mobile web access, embedded chatbots in external websites, low-bandwidth environments, quick access terminals and kiosks, and scenarios where simplicity matters more than features.
Access the Minimal interface at the root URL http://localhost:8080 where it is served by default, explicitly at http://localhost:8080/minimal, or embedded via iframe or WebView in your own applications.
Configuration
Server Configuration
UI paths are configured in several locations throughout the codebase.
The main server configuration in src/main.rs sets the static path:
let static_path = std::path::Path::new("./ui/suite");
The UI server module at src/core/ui_server/mod.rs defines its own path:
let static_path = PathBuf::from("./ui/suite");
For Tauri desktop builds, tauri.conf.json specifies the frontend distribution:
{
"build": {
"frontendDist": "./ui/suite"
}
}
Routing
Both interfaces can be served simultaneously with different routes:
Router::new()
    .route("/", get(serve_minimal))
    .route("/minimal", get(serve_minimal))
    .route("/suite", get(serve_suite))
The minimal interface serves at root by default, providing faster loading for most users who need quick chat interactions.
API Compliance
The Minimal UI implements full compliance with the Bot Core API. Both interfaces support the same backend endpoints, ensuring consistent functionality regardless of which interface you choose.
| Endpoint | Method | Purpose |
|---|---|---|
| /ws | WebSocket | Real-time messaging |
| /api/auth | GET | Authentication |
| /api/sessions | GET/POST | Session management |
| /api/sessions/current/message | POST | Send message (current session) |
| /api/sessions/current/history | GET | Message history (current session) |
| /api/sessions/:id | GET | Session details |
| /api/sessions/:id/history | GET | Message history by ID |
| /api/sessions/:id/start | POST | Start session |
| /api/sessions/:id/end | POST | End session |
| /api/voice/start | POST | Voice input start |
| /api/voice/stop | POST | Voice input stop |
| /api/voice/status | GET | Voice status |
Note: The frontend uses /api/sessions/current/* endpoints, which resolve to the active session automatically.
Both interfaces use the same WebSocket message types for communication. TEXT (1) handles regular text messages, VOICE (2) handles voice messages, CONTINUE (3) continues interrupted responses, CONTEXT (4) manages context changes, and SYSTEM (5) delivers system messages.
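A client can mirror those codes in a small constant map (a sketch; the numeric values come from the list above, while the object name is an arbitrary choice):
// WebSocket message type codes shared by both interfaces
const MessageType = { TEXT: 1, VOICE: 2, CONTINUE: 3, CONTEXT: 4, SYSTEM: 5 };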
Performance Characteristics
Suite Interface
| Metric | Typical Value |
|---|---|
| Initial load | ~500KB |
| Time to interactive | ~1.5s |
| Memory usage | ~80MB |
| Best for | Full productivity |
Minimal Interface
| Metric | Typical Value |
|---|---|
| Initial load | ~50KB |
| Time to interactive | ~200ms |
| Memory usage | ~20MB |
| Best for | Quick interactions |
Browser Support
Both interfaces support modern browsers with full functionality:
| Browser | Minimum Version | WebSocket | Voice |
|---|---|---|---|
| Chrome | 90+ | ✅ | ✅ |
| Firefox | 88+ | ✅ | ✅ |
| Safari | 14+ | ✅ | ✅ |
| Edge | 90+ | ✅ | ✅ |
| Mobile Chrome | 90+ | ✅ | ✅ |
| Mobile Safari | 14+ | ✅ | ✅ |
Switching Interfaces
Users can switch between interfaces by navigating to the appropriate URL. For programmatic switching, update ui_server/mod.rs to change the default:
// Serve minimal at root (default)
match fs::read_to_string("ui/minimal/index.html")
// Or serve suite at root
match fs::read_to_string("ui/suite/index.html")
Troubleshooting
If you encounter 404 errors, clear your browser cache, rebuild the project with cargo clean && cargo build, and verify the files exist in the ui/suite/ or ui/minimal/ directories.
For Tauri build failures, check that tauri.conf.json has the correct frontendDist path and ensure ui/suite/index.html exists.
When static files aren’t loading, verify the ServeDir configuration in the router and check that subdirectories (js, css, public) exist with their expected contents.
Debug commands can help diagnose issues:
# Verify UI structure
ls -la ui/suite/
ls -la ui/minimal/
# Test interfaces
curl http://localhost:8080/
curl http://localhost:8080/suite/
# Check static file serving
curl http://localhost:8080/js/htmx-app.js
Customization
GBUI Templates
The Suite interface uses GBUI templates for layout customization. The default.gbui template provides the full multi-app layout with sidebar navigation, while single.gbui offers a streamlined chat-focused view. Edit these files to customize the interface structure without modifying core code.
CSS Theming
Both interfaces support CSS customization through their respective stylesheets. The Suite interface provides more extensive theming options through CSS custom properties, allowing you to adjust colors, spacing, and typography to match your brand.
Future Enhancements
Planned improvements include dynamic UI selection based on device capabilities to automatically serve the most appropriate interface, progressive enhancement from minimal to suite as users need additional features, service worker implementation for offline support, and WebAssembly components for high-performance features that require client-side computation.
See Also
- default.gbui Reference - Full desktop template
- single.gbui Reference - Simple chat template
- Console Mode - Terminal interface
- Monitoring Dashboard - System observability
single.gbui - Simplified Chat Interface
The single.gbui template provides a streamlined, single-page chat interface focused on conversation without distractions.
Overview
Location: ui/suite/single.gbui
This minimalist chat interface delivers a clean, focused chat experience with WebSocket real-time messaging, dark mode support, mobile-responsive design, and fast loading under 50KB.
Features
Core Components
The interface consists of four main components. The header displays the bot name, status, and connection indicator with minimal branding. The messages area provides an auto-scrolling message list with clear user and bot message distinction, timestamps, and smooth animations. The input area offers a single-line text input with a send button, Enter key support, and auto-focus on load. The typing indicator shows a three-dot animation when the bot is processing a response.
Design Philosophy
The single.gbui template embraces minimalism by eliminating unnecessary UI elements. Speed is prioritized so the interface loads instantly and works on slow connections. Accessibility features include keyboard navigation and screen reader support. Visual clarity comes from a clear hierarchy that guides users naturally through the conversation.
Responsive Behavior
Desktop
On desktop displays, the interface uses a centered container with 800px maximum width for comfortable reading, ample whitespace, and optimal line lengths for extended conversations.
Mobile
On mobile devices, the layout expands to full width with larger touch targets meeting the 44px minimum requirement. The input remains bottom-aligned and adjusts appropriately when the virtual keyboard appears.
Styling
The interface uses minimal inline CSS for maximum performance:
/* Core styles only */
body {
font-family: system-ui, -apple-system, sans-serif;
margin: 0;
height: 100vh;
display: flex;
flex-direction: column;
}
.chat-container {
flex: 1;
display: flex;
flex-direction: column;
max-width: 800px;
margin: 0 auto;
width: 100%;
}
Dark Mode
Automatic dark mode activates based on system preference:
@media (prefers-color-scheme: dark) {
:root {
--background: #111827;
--text: #f9fafb;
--message-bot: #374151;
}
}
WebSocket Integration
Connection handling is simplified for reliability:
const ws = new WebSocket('ws://localhost:8080/ws');
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
addMessage(data.content, 'bot');
};
function sendMessage() {
const message = input.value.trim();
if (message) {
ws.send(JSON.stringify({
type: 'message',
content: message
}));
addMessage(message, 'user');
input.value = '';
}
}
Use Cases
Embedded Widget
The single.gbui template is perfect for embedding in existing websites:
<iframe src="http://localhost:8080/ui/suite/single.gbui"
width="400"
height="600">
</iframe>
Kiosk Mode
The interface works well for public terminals with no navigation elements, focus on conversation, and easy reset between users.
Mobile-First
Optimization for mobile devices includes fast loading, minimal data usage, and touch-friendly controls.
Customization
Change Colors
Edit the CSS variables to match your brand:
:root {
--primary: #3b82f6; /* Your brand color */
--background: #ffffff; /* Background */
--text: #1f2937; /* Text color */
}
Modify Welcome Message
Update the initial bot message in the HTML:
<div class="message bot">
<div class="message-content">
Your custom welcome message here
</div>
</div>
Add Logo
Insert a logo in the header:
<header class="header">
<img src="logo.png" alt="Logo" height="32">
<span>Bot Name</span>
</header>
Performance
The single.gbui template achieves first paint in under 100ms and becomes interactive within 200ms. Total size stays under 50KB with no external dependencies since everything is inline.
Accessibility
The template uses semantic HTML structure throughout, ARIA labels on interactive elements, full keyboard navigation support, proper focus management, and high contrast mode support for users who need it.
Browser Support
The interface works on all modern browsers including Chrome 90+, Firefox 88+, Safari 14+, Edge 90+, and their mobile counterparts. It degrades gracefully on older browsers, maintaining core functionality.
See Also
- default.gbui - Full-featured interface
- Console Mode - Terminal interface
- Chapter 5: Themes - Custom styling
- Chapter 6: BASIC - Dialog scripting
Next Step
For terminal users, see Console Mode.
Console Mode
The BotServer console mode provides a text-based interface for monitoring your bot’s operation directly in the terminal.
Starting Console Mode
# Start BotServer with console UI
./botserver --console
Console Interface
The console displays real-time information about your running BotServer instance:
╔════════════════════════════════════════════════════════════╗
║ BotServer Console ║
╠════════════════════════════════════════════════════════════╣
║ Status: Running ║
║ Uptime: 2h 34m 12s ║
║ Port: 8080 ║
║ ║
║ Components: ║
║ PostgreSQL: ✓ Connected ║
║ Cache: ✓ Connected ║
║ Storage: ✓ Connected ║
║ Vectors: ✓ Connected ║
║ ║
║ Active Sessions: 12 ║
║ Messages Today: 1,234 ║
║ ║
║ Press 'q' to quit, 'r' to refresh ║
╚════════════════════════════════════════════════════════════╝
Console Features
Status Overview
The status overview displays the server’s current state including whether it is running or stopped, an uptime counter showing how long the server has been active, the port the server is listening on, and health checks for all connected components.
Session Information
Session information provides visibility into current activity with a count of active sessions, the total number of messages processed today, and recent activity indicators that show when the last interactions occurred.
Component Status
Real-time status monitoring covers all infrastructure components including database connectivity to PostgreSQL, cache service status, storage availability for file operations, and vector database connection status for semantic search functionality.
Keyboard Controls
| Key | Action |
|---|---|
| q | Quit console mode |
| r | Force refresh display |
| c | Clear console |
| h | Show help |
Console Output
The console provides basic logging output showing timestamped events as they occur:
[2024-01-15 10:23:45] Server started on port 8080
[2024-01-15 10:23:46] Database connected
[2024-01-15 10:23:47] Cache initialized
[2024-01-15 10:23:48] Storage mounted
[2024-01-15 10:24:01] New session: abc123
[2024-01-15 10:24:15] Message processed
Using Console Mode
Development
Console mode is particularly useful during development for monitoring component initialization, tracking connection status, observing error messages as they occur, and watching session activity in real time.
Production
In production environments, console mode helps with quick status checks when you need immediate visibility, basic monitoring of system health, and troubleshooting connection issues without accessing the web interface.
Limitations
Console mode provides basic monitoring only and is not intended for detailed analytics. For comprehensive data analysis, query PostgreSQL directly for session data. System logs contain detailed error information for debugging. The cache service provides its own statistics interface. Application logs offer the most complete picture for troubleshooting complex issues.
Terminal Requirements
Console mode supports any terminal with basic text output capabilities. UTF-8 support is recommended to properly render box drawing characters. A minimum width of 80 columns is recommended for optimal display. The console works over SSH connections, making it suitable for remote server monitoring.
Tips
Console mode operates in read-only fashion and does not accept bot commands. For interactive bot testing, use the web interface available at http://localhost:8080. The display refreshes automatically every few seconds to show current status. Output is buffered for performance to avoid slowing down the server during high activity periods.
Troubleshooting
Console Not Updating
If the console stops updating, check terminal compatibility with your emulator, ensure the process has proper permissions to write to the terminal, and verify that all components are running and responsive.
Display Issues
Display problems can often be resolved by trying a different terminal emulator. Check that your terminal encoding is set to UTF-8 for proper character rendering. If text appears cut off, resize the terminal window to provide adequate width for the display.
Summary
Console mode provides a simple, lightweight way to monitor BotServer status without needing a web browser. It’s ideal for quick checks and basic monitoring, but for full functionality including interactive bot testing and detailed analytics, use the web interface.
Monitoring Dashboard
The Monitoring Dashboard is the live operations homepage for your General Bots deployment. It provides real-time visibility into system health, active sessions, and resource utilization through an animated, interactive SVG visualization.
Live System Architecture
The dashboard displays BotServer at the center orchestrating all interactions, with animated data packets flowing between components:
- Left Side (Data Layer): PostgreSQL, Qdrant vector database, and MinIO storage
- Right Side (Services): BotModels AI, Cache, and Vault security
- Center: BotServer core with pulsing rings indicating activity
- Top: Real-time metrics panels for sessions, messages, and response time
- Bottom: Resource utilization bars and activity ticker
Accessing the Dashboard
The monitoring dashboard is the default homepage when accessing Suite:
http://localhost:8080/monitoring
Or from within Suite:
- Click the apps menu (⋮⋮⋮)
- Select Monitoring
Real-Time Metrics
Active Sessions Panel
Displays current conversation sessions:
Active Sessions: 12
Peak Today: 47
Avg Duration: 8m 32s
Trend: ↑ +3 in last hour
Messages Panel
Shows message throughput.
Resource Utilization
Real-time system resources:
| Resource | Current | Threshold |
|---|---|---|
| CPU | 65% | Warning > 80% |
| Memory | 72% | Warning > 85% |
| GPU | 45% | Warning > 90% |
| Disk | 28% | Warning > 90% |
Service Health Status
Each service has a status indicator:
| Service | Status | Health Check |
|---|---|---|
| PostgreSQL | 🟢 Running | Connection pool, query latency |
| Qdrant | 🟢 Running | Vector count, search time |
| MinIO | 🟢 Running | Storage usage, object count |
| BotModels | 🟢 Running | Token usage, response time |
| Cache | 🟢 Running | Hit rate, memory usage |
| Vault | 🟢 Running | Seal status, policy count |
Status Indicators
| Status | Color | Animation |
|---|---|---|
| Running | 🟢 Green | Gentle pulse |
| Warning | 🟡 Amber | Fast pulse |
| Stopped | 🔴 Red | No animation |
Live Data Endpoints
The dashboard pulls real data from these HTMX endpoints:
| Endpoint | Interval | Data |
|---|---|---|
| /api/monitoring/metric/sessions | 5s | Session count, trend |
| /api/monitoring/metric/messages | 10s | Message count, rate |
| /api/monitoring/metric/response_time | 10s | Avg response time |
| /api/monitoring/resources/bars | 15s | CPU, memory, GPU, disk |
| /api/monitoring/services/status | 30s | Service health JSON |
| /api/monitoring/activity/latest | 5s | Activity ticker text |
| /api/monitoring/bots/active | 30s | Active bot list |
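For instance, the sessions panel needs only a single polling element (a sketch using the endpoint and interval from the table above):
<!-- Sessions metric refreshes every 5 seconds -->
<div hx-get="/api/monitoring/metric/sessions"
     hx-trigger="load, every 5s"
     hx-swap="innerHTML">
</div>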
API Access
Full Status Endpoint
GET /api/monitoring/status
Returns complete system status:
{
"sessions": {
"active": 12,
"peak_today": 47,
"avg_duration_seconds": 512
},
"messages": {
"today": 1234,
"this_hour": 89,
"avg_response_ms": 1200
},
"resources": {
"cpu_percent": 65,
"memory_percent": 72,
"gpu_percent": 45,
"disk_percent": 28
},
"services": {
"postgresql": "running",
"qdrant": "running",
"cache": "running",
"drive": "running",
"botmodels": "running",
"vault": "running"
}
}
Active Bots Endpoint
GET /api/monitoring/bots
Returns list of deployed bots with metrics:
{
"bots": [
{
"name": "default",
"status": "active",
"sessions_today": 34,
"messages_today": 567,
"avg_response_ms": 980
},
{
"name": "support",
"status": "active",
"sessions_today": 12,
"messages_today": 234,
"avg_response_ms": 1100
}
]
}
Historical Data
GET /api/monitoring/history?period=24h
Returns time-series data for charting.
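For example:
# Fetch 24 hours of time-series metrics
curl "http://localhost:8080/api/monitoring/history?period=24h"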
Prometheus Export
GET /api/monitoring/prometheus
Returns metrics in Prometheus format for external monitoring systems.
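A minimal Prometheus scrape configuration pointing at this endpoint might look like the following (job name and target are examples):
scrape_configs:
  - job_name: botserver
    metrics_path: /api/monitoring/prometheus
    static_configs:
      - targets: ["localhost:8080"]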
View Modes
Toggle between two views using the grid button or press V:
Live View (Default)
The animated SVG visualization showing the complete system topology with flowing data packets. This is the recommended view for operations dashboards.
Grid View
Traditional panel-based layout with detailed metrics:
- Sessions Panel: Active, peak, average duration
- Messages Panel: Counts, rates, response times
- Resources Panel: Progress bars with thresholds
- Services Panel: Health status for each component
- Bots Panel: List of active bots with metrics
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| V | Toggle between Live and Grid view |
| R | Refresh all metrics immediately |
| F | Toggle fullscreen mode |
| ? | Show keyboard shortcuts |
Alert Configuration
Configure alert thresholds in config.csv:
name,value
alert-cpu-threshold,80
alert-memory-threshold,85
alert-disk-threshold,90
alert-response-time-ms,5000
alert-email,admin@example.com
When thresholds are exceeded:
- Service status turns amber/red
- Alert notification sent to configured email
- Activity ticker shows alert message
Console Mode Monitoring
For terminal-based monitoring or headless servers:
./botserver --console --monitor
Output:
[MONITOR] 2025-01-15 14:32:00
Sessions: 12 active (peak: 47)
Messages: 1,234 today (89/hour)
CPU: 65% | MEM: 72% | GPU: 45%
Services: 6/6 running
Latest: User enrolled in Computer Science course
Component Health Details
| Component | Metrics Monitored | Warning Signs |
|---|---|---|
| PostgreSQL | Connection count, query rate, replication lag | > 80 connections, queries > 100ms |
| Qdrant | Vector count, search latency, memory | > 50ms search, > 80% memory |
| MinIO | Storage usage, object count, bandwidth | > 80% storage, high error rate |
| BotModels | Token usage, response latency, queue depth | > 2s response, queue > 10 |
| Vault | Seal status, policy count, auth failures | Sealed, repeated auth failures |
| Cache | Hit rate, memory usage, evictions | < 80% hit rate, frequent evictions |
Best Practices
- Keep monitoring visible — Use a dedicated screen or dashboard monitor for operations
- Set appropriate thresholds — Configure alerts before issues become critical
- Watch data flow — Animated packets indicate active communication between components
- Monitor trends — The session trend indicator (↑/↓) shows direction of change
- Use historical data — Query /api/monitoring/history for trend analysis
- Enable Prometheus export — Integrate with existing monitoring infrastructure
Troubleshooting
Dashboard Not Loading
- Check browser console for errors
- Verify /api/monitoring/status returns data
- Ensure WebSocket connection is established
- Refresh the page
Metrics Showing “–”
- Wait 5-10 seconds for initial data load
- Check network tab for failed API requests
- Verify all services are running
- Check BotServer logs for errors
Animations Stuttering
- Close unused browser tabs
- Enable hardware acceleration in browser settings
- Use Grid view for lower resource usage
- Check if system CPU is overloaded
Service Showing Red
- Check service-specific logs in botserver-stack/logs/
- Verify Vault is unsealed
- Check database connection limits
- Restart the affected service
See Also
- Console Mode — Terminal-based interface
- HTMX Architecture — How real-time updates work
- Suite Manual — Complete user guide
- Analytics App — Business metrics and reporting
HTMX Architecture
Overview
General Bots Suite uses HTMX for its user interface - a modern approach that delivers the interactivity of a single-page application without the complexity of JavaScript frameworks like React, Vue, or Angular.
Why HTMX?
- Simpler code, easier maintenance
- Server-rendered HTML (fast, SEO-friendly)
- Progressive enhancement
- No build step required
- Smaller payload than SPA frameworks
How HTMX Works
Traditional Web vs HTMX
Traditional (Full Page Reload):
User clicks → Browser requests full page → Server returns entire HTML → Browser replaces everything
HTMX (Partial Update):
User clicks → HTMX requests fragment → Server returns HTML snippet → HTMX updates only that part
Core Concept
HTMX extends HTML with attributes that define:
- What triggers the request (hx-trigger)
- Where to send it (hx-get, hx-post)
- What to update (hx-target)
- How to update it (hx-swap)
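Putting all four together on one element (a sketch that reuses the /api/tasks endpoint from the component patterns below):
<button hx-get="/api/tasks"
        hx-trigger="click"
        hx-target="#task-list"
        hx-swap="innerHTML">
  Refresh tasks
</button>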
HTMX Attributes Reference
Request Attributes
| Attribute | Purpose | Example |
|---|---|---|
| hx-get | GET request to URL | hx-get="/api/tasks" |
| hx-post | POST request | hx-post="/api/tasks" |
| hx-put | PUT request | hx-put="/api/tasks/1" |
| hx-patch | PATCH request | hx-patch="/api/tasks/1" |
| hx-delete | DELETE request | hx-delete="/api/tasks/1" |
Trigger Attributes
| Attribute | Purpose | Example |
|---|---|---|
| hx-trigger | Event that triggers request | hx-trigger="click" |
| | Load on page | hx-trigger="load" |
| | Periodic polling | hx-trigger="every 5s" |
| | Keyboard event | hx-trigger="keyup changed delay:300ms" |
Target & Swap Attributes
| Attribute | Purpose | Example |
|---|---|---|
| hx-target | Element to update | hx-target="#results" |
| hx-swap | How to insert content | hx-swap="innerHTML" |
| | | hx-swap="outerHTML" |
| | | hx-swap="beforeend" |
| | | hx-swap="afterbegin" |
Suite Architecture
File Structure
ui/suite/
├── index.html # Main entry point with navigation
├── base.html # Base template
├── home.html # Home page
├── default.gbui # Full desktop layout
├── single.gbui # Simple chat layout
├── designer.html # Visual dialog designer
├── editor.html # Code editor
├── settings.html # User settings
├── css/
│ ├── app.css # Application styles
│ ├── apps-extended.css # Extended app styles
│ ├── components.css # UI components
│ └── global.css # Global styles
├── js/
│ ├── htmx-app.js # HTMX application logic
│ ├── theme-manager.js # Theme switching
│ └── vendor/ # Third-party libraries
├── partials/ # Reusable HTML fragments
├── auth/ # Authentication views
├── attendant/ # Attendant interface
├── chat/
│ ├── chat.html # Chat component
│ ├── chat.css # Chat styles
│ └── projector.html # Projector view
├── drive/ # File manager
├── tasks/ # Task manager
├── mail/ # Email client
├── calendar/ # Calendar view
├── meet/ # Video meetings
├── paper/ # Document editor
├── research/ # AI search
├── analytics/ # Dashboards
├── sources/ # Prompts & templates
├── tools/ # Developer tools
└── monitoring/ # System monitoring
Loading Pattern
The Suite uses lazy loading - components load only when needed:
<!-- Main navigation in index.html -->
<a href="#chat"
data-section="chat"
hx-get="/ui/suite/chat/chat.html"
hx-target="#main-content"
hx-swap="innerHTML">
Chat
</a>
When user clicks “Chat”:
- HTMX requests /ui/suite/chat/chat.html
- Server returns the Chat HTML fragment
- HTMX inserts it into #main-content
- Only Chat code loads, not entire app
Component Patterns
1. Load on Page View
<!-- Tasks load immediately when component is shown -->
<div id="task-list"
hx-get="/api/tasks"
hx-trigger="load"
hx-swap="innerHTML">
<div class="loading">Loading tasks...</div>
</div>
2. Form Submission
<!-- Add task form -->
<form hx-post="/api/tasks"
hx-target="#task-list"
hx-swap="afterbegin"
hx-on::after-request="this.reset()">
<input type="text" name="text" placeholder="New task..." required>
<button type="submit">Add</button>
</form>
Flow:
- User types task, clicks Add
- HTMX POSTs form data to /api/tasks
- Server creates task, returns HTML for new task item
- HTMX inserts at beginning of #task-list
- Form resets automatically
3. Click Actions
<!-- Task item with actions -->
<div class="task-item" id="task-123">
<input type="checkbox"
hx-patch="/api/tasks/123"
hx-vals='{"completed": true}'
hx-target="#task-123"
hx-swap="outerHTML">
<span>Review quarterly report</span>
<button hx-delete="/api/tasks/123"
hx-target="#task-123"
hx-swap="outerHTML"
hx-confirm="Delete this task?">
🗑
</button>
</div>
4. Search with Debounce
<!-- Search input with 300ms delay -->
<input type="text"
name="q"
placeholder="Search..."
hx-get="/api/search"
hx-trigger="keyup changed delay:300ms"
hx-target="#search-results"
hx-indicator="#search-spinner">
<span id="search-spinner" class="htmx-indicator">🔄</span>
<div id="search-results"></div>
Flow:
- User types in search box
- After 300ms of no typing, HTMX sends request
- Spinner shows during request
- Results replace #search-results content
5. Real-time Updates (WebSocket)
<!-- Chat with WebSocket -->
<div id="chat-app" hx-ext="ws" ws-connect="/ws">
<div id="messages"
hx-get="/api/sessions/current/history"
hx-trigger="load"
hx-swap="innerHTML">
</div>
<form ws-send>
<input name="content" type="text">
<button type="submit">Send</button>
</form>
</div>
Flow:
- WebSocket connects on load
- History loads via HTMX GET
- New messages sent via WebSocket (ws-send)
- Server pushes updates to all connected clients
6. Polling for Updates
<!-- Analytics that refresh every 30 seconds -->
<div class="metric-card"
hx-get="/api/analytics/messages/count"
hx-trigger="load, every 30s"
hx-swap="innerHTML">
<!-- Content updates automatically -->
</div>
7. Infinite Scroll
<!-- File list with infinite scroll -->
<div id="file-list">
<!-- Files here -->
<div hx-get="/api/files?page=2"
hx-trigger="revealed"
hx-swap="afterend">
Loading more...
</div>
</div>
API Response Patterns
Server Returns HTML Fragments
The server doesn’t return JSON - it returns ready-to-display HTML:
Request:
GET /api/tasks
Response:
<div class="task-item" id="task-1">
<input type="checkbox">
<span>Review quarterly report</span>
</div>
<div class="task-item" id="task-2">
<input type="checkbox">
<span>Update documentation</span>
</div>
Swap Strategies
| Strategy | Effect |
|---|---|
| innerHTML | Replace contents of target |
| outerHTML | Replace entire target element |
| beforeend | Append inside target (at end) |
| afterbegin | Prepend inside target (at start) |
| beforebegin | Insert before target |
| afterend | Insert after target |
| delete | Delete target element |
| none | Don't swap (for side effects) |
CSS Integration
Loading Indicators
/* Hidden by default */
.htmx-indicator {
display: none;
}
/* Shown during request */
.htmx-request .htmx-indicator {
display: inline-block;
}
/* Or when indicator IS the requesting element */
.htmx-request.htmx-indicator {
display: inline-block;
}
Transition Effects
/* Fade in new content */
.htmx-settling {
opacity: 0;
}
.htmx-swapping {
opacity: 0;
transition: opacity 0.2s ease-out;
}
JavaScript Integration
HTMX Events
// After any HTMX swap
document.body.addEventListener('htmx:afterSwap', (e) => {
console.log('Content updated:', e.detail.target);
});
// Before request
document.body.addEventListener('htmx:beforeRequest', (e) => {
console.log('Sending request to:', e.detail.pathInfo.path);
});
// After request completes
document.body.addEventListener('htmx:afterRequest', (e) => {
if (e.detail.successful) {
console.log('Request succeeded');
} else {
console.error('Request failed');
}
});
// On WebSocket message
document.body.addEventListener('htmx:wsAfterMessage', (e) => {
console.log('Received:', e.detail.message);
});
Triggering HTMX from JavaScript
// Trigger an HTMX request programmatically
htmx.trigger('#task-list', 'load');
// Make an AJAX request
htmx.ajax('GET', '/api/tasks', {
target: '#task-list',
swap: 'innerHTML'
});
// Process new HTMX content
htmx.process(document.getElementById('new-content'));
Designer Page Architecture
The visual dialog designer uses a hybrid approach:
Canvas Management (JavaScript)
// State managed in JavaScript
const state = {
nodes: new Map(), // Node data
connections: [], // Connections between nodes
zoom: 1, // Canvas zoom level
pan: { x: 0, y: 0 } // Canvas position
};
File Operations (HTMX)
<!-- Load file via HTMX -->
<button hx-get="/api/v1/designer/files"
hx-target="#file-list-content">
Open File
</button>
<!-- Save via HTMX -->
<button hx-post="/api/v1/designer/save"
hx-include="#designer-data">
Save
</button>
Drag-and-Drop (JavaScript)
// Toolbox items are draggable
toolboxItems.forEach(item => {
item.addEventListener('dragstart', (e) => {
e.dataTransfer.setData('nodeType', item.dataset.nodeType);
});
});
// Canvas handles drop
canvas.addEventListener('drop', (e) => {
const nodeType = e.dataTransfer.getData('nodeType');
createNode(nodeType, e.clientX, e.clientY);
});
Performance Considerations
1. Minimize Request Size
Return only what’s needed:
<!-- Good: Return just the updated row -->
<tr id="row-123">...</tr>
<!-- Bad: Return entire table -->
<table>...</table>
2. Use Appropriate Triggers
<!-- Don't poll too frequently -->
hx-trigger="every 30s" <!-- Good for dashboards -->
hx-trigger="every 1s" <!-- Too frequent! -->
<!-- Debounce user input -->
hx-trigger="keyup changed delay:300ms" <!-- Good -->
hx-trigger="keyup" <!-- Too many requests -->
3. Lazy Load Heavy Content
<!-- Load tab content only when tab is clicked -->
<div role="tabpanel"
hx-get="/api/heavy-content"
hx-trigger="intersect once">
</div>
4. Use hx-boost for Navigation
<!-- Boost all links in nav -->
<nav hx-boost="true">
<a href="/page1">Page 1</a> <!-- Now uses HTMX -->
<a href="/page2">Page 2</a>
</nav>
Security
CSRF Protection
HTMX can be configured to include a CSRF token on every request:
<meta name="csrf-token" content="abc123...">
// Configure HTMX to send CSRF token
document.body.addEventListener('htmx:configRequest', (e) => {
e.detail.headers['X-CSRF-Token'] = document.querySelector('meta[name="csrf-token"]').content;
});
Content Security
- Server validates all inputs
- HTML is sanitized before rendering
- Authentication checked on every request
Comparison: HTMX vs React
| Aspect | HTMX | React |
|---|---|---|
| Learning Curve | Low (HTML attributes) | High (JSX, hooks, state) |
| Bundle Size | ~14KB | ~40KB + app code |
| Build Step | None | Required |
| Server Load | More (renders HTML) | Less (returns JSON) |
| Client Load | Less | More |
| SEO | Excellent | Requires SSR |
| Complexity | Simple | Complex |
| Best For | Content sites, dashboards | Complex SPAs, offline apps |
Further Reading
Suite Applications
Individual app documentation for General Bots Suite
Each application in the Suite has its own dedicated documentation with:
- Flow diagrams (SVG with light/dark theme support)
- Interface layouts
- HTMX integration patterns
- API endpoints
- CSS classes
- JavaScript handlers
- Keyboard shortcuts
Core Applications
| App | Description | Documentation |
|---|---|---|
| 🖥️ Suite | Full desktop interface | suite.md |
| 💬 Chat | AI-powered conversation assistant | chat.md |
| 📁 Drive | Cloud file storage and management | drive.md |
| ✓ Tasks | To-do lists with priorities | tasks.md |
| 📧 Mail | Email client | mail.md |
| 📅 Calendar | Scheduling and events | calendar.md |
| 🎥 Meet | Video conferencing | meet.md |
| 🎬 Player | Media viewer | player.md |
Productivity Applications
| App | Description | Documentation |
|---|---|---|
| 📝 Paper | AI-assisted document writing | paper.md |
| 🔍 Research | AI-powered search and discovery | research.md |
| 📊 Analytics | Reports and dashboards | analytics.md |
Developer Tools
| App | Description | Documentation |
|---|---|---|
| 🎨 Designer | Visual dialog builder | designer.md |
| 📚 Sources | Prompts, templates, and models | sources.md |
| 🛡️ Compliance | Security scanner | compliance.md |
System Components
| Component | Description | Location |
|---|---|---|
| 🔐 Auth | Authentication views | ui/suite/auth/ |
| 👤 Attendant | Attendant interface | ui/suite/attendant/ |
| 🧩 Partials | Reusable HTML fragments | ui/suite/partials/ |
| 🔧 Tools | Developer utilities | ui/suite/tools/ |
| 📈 Monitoring | System monitoring dashboard | ui/suite/monitoring/ |
App Launcher
The Suite features a Google-style app launcher accessible from the header:
Accessing Apps
- Click the grid icon (⋮⋮⋮) in the top-right corner
- Select an app from the dropdown menu
- App loads in the main content area
Keyboard Shortcuts
| Shortcut | App |
|---|---|
| Alt+1 | Chat |
| Alt+2 | Drive |
| Alt+3 | Tasks |
| Alt+4 | Mail |
| Alt+5 | Calendar |
| Alt+6 | Meet |
Architecture Overview
All Suite apps follow the same patterns:
HTMX Loading
Apps are loaded lazily when selected:
<a href="#chat"
data-section="chat"
hx-get="/ui/suite/chat/chat.html"
hx-target="#main-content"
hx-swap="innerHTML">
Chat
</a>
Component Structure
Each app is a self-contained HTML fragment:
app-name/
├── app-name.html # Main component
├── app-name.css # Styles (optional)
└── app-name.js # JavaScript (optional)
API Integration
Apps communicate with the backend via REST APIs:
<div hx-get="/api/v1/app/data"
hx-trigger="load"
hx-swap="innerHTML">
Loading...
</div>
Real-Time Updates
WebSocket support for live data:
<div hx-ext="ws" ws-connect="/ws">
<!-- Real-time content -->
</div>
Creating Custom Apps
To add a new app to the Suite:
- Create the component in ui/suite/your-app/
- Add a navigation entry in index.html
- Define API endpoints in your Rust backend
- Document the app in this folder
Template
<!-- ui/suite/your-app/your-app.html -->
<div class="your-app-container" id="your-app">
<header class="your-app-header">
<h2>Your App</h2>
</header>
<main class="your-app-content"
hx-get="/api/v1/your-app/data"
hx-trigger="load"
hx-swap="innerHTML">
<div class="htmx-indicator">Loading...</div>
</main>
</div>
<style>
.your-app-container {
display: flex;
flex-direction: column;
height: 100%;
}
</style>
See Also
- Suite Manual - Complete user guide
- HTMX Architecture - Technical details
- UI Structure - File organization
- Chapter 10: REST API - API reference
Suite - Full Desktop Interface
Complete productivity suite with integrated applications
Overview
The Suite provides a complete desktop interface with multiple integrated applications for web, desktop, and mobile platforms. It serves as the primary interface for General Bots, combining AI-powered chat with productivity tools.
Features
Multi-Application Layout
The Suite includes integrated applications:
| App | Purpose |
|---|---|
| 💬 Chat | AI assistant conversations |
| 📁 Drive | File management |
| ✓ Tasks | To-do lists |
| 📧 Mail | Email client |
| 📅 Calendar | Scheduling |
| 🎥 Meet | Video calls |
| 🎬 Player | Media viewer |
| 📝 Paper | AI writing |
| 🔍 Research | AI search |
| 📊 Analytics | Dashboards |
Responsive Design
The Suite adapts to any screen size:
- Desktop (>1024px): Full multi-panel layout with persistent navigation
- Tablet (768-1024px): Collapsible sidebar with touch-optimized controls
- Mobile (<768px): Single column with bottom navigation and swipe gestures
Theme Support
- Light and dark mode
- Custom color schemes via .gbtheme
- Automatic system preference detection
Navigation
Header Bar
The header provides quick access to:
- Logo and branding
- App selector grid
- Theme switcher
- User menu and settings
App Launcher
Click the grid icon (⋮⋮⋮) to access all applications in a dropdown menu.
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Alt+1 | Chat |
| Alt+2 | Drive |
| Alt+3 | Tasks |
| Alt+4 | Mail |
| Alt+5 | Calendar |
| Alt+6 | Meet |
| Esc | Close menus |
Usage
As Desktop App
Used automatically when running with --desktop:
./botserver --desktop
# Opens Suite in native window
As Web Interface
Default template for browser access:
./botserver
# Browse to http://localhost:8080
# Loads Suite interface
As Mobile PWA
Install as Progressive Web App:
- Open in mobile browser
- Add to home screen
- Launches as standalone app
Example Workflows
Quick Task Creation
File Sharing
Meeting Scheduling
Performance
| Metric | Target |
|---|---|
| Initial Load | < 200KB |
| WebSocket Latency | < 50ms |
| Touch Response | 60fps animations |
| Offline Support | Service worker caching |
Browser Support
- Chrome/Edge 90+
- Firefox 88+
- Safari 14+
- Mobile browsers (iOS Safari, Chrome Mobile)
Configuration
Customize Suite behavior in config.csv:
key,value
theme-color1,#0d2b55
theme-color2,#e3f2fd
theme-title,My Company Suite
theme-logo,https://example.com/logo.svg
suite-default-app,chat
suite-sidebar-collapsed,false
See Also
- Chat App - AI assistant
- Drive App - File management
- Tasks App - Task management
- HTMX Architecture - Technical details
- Theme Customization - Styling
Chat - AI Assistant
Your intelligent conversation partner
Overview
Chat is the heart of General Bots Suite - your AI-powered assistant that understands context, remembers conversations, and helps you get things done. Built with WebSocket for real-time communication and HTMX for seamless updates.
Features
Real-Time Messaging
Messages are sent and received instantly via WebSocket connection.
Voice Input
Click the microphone button to speak your message:
- Click 🎤 to start recording
- Speak your message clearly
- Click again to stop
- Message converts to text automatically
Quick Suggestions
Pre-built action chips for common requests:
| Chip | Action |
|---|---|
| 📊 Tasks | Show your task list |
| 📧 Check mail | Display unread emails |
| 📅 Schedule | Today’s calendar |
| ❓ Help | Available commands |
Message History
- Auto-loads previous messages on page open
- Scroll up to load older messages
- Click “Scroll to bottom” button to return to latest
Markdown Support
Bot responses support full Markdown rendering:
- Bold and italic text
- Inline code snippets and code blocks
- Bullet and numbered lists
- Links and images
- Tables
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Enter | Send message |
| Shift+Enter | New line (without sending) |
| ↑ (Up arrow) | Edit last message |
| / | Open command menu |
| Escape | Cancel current action |
API Endpoints
WebSocket Connection
ws://your-server:8080/ws
Message Types:
- TEXT (1) - Regular text messages
- VOICE (2) - Voice messages
- CONTINUE (3) - Continue interrupted responses
- CONTEXT (4) - Context changes
- SYSTEM (5) - System messages
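As a minimal sketch, a raw WebSocket client could send a typed text message as follows; the JSON envelope shape ({ type, content }) is an assumption, so adapt it to the server's actual wire format:
// Connect to the chat WebSocket and send a TEXT (1) message.
const ws = new WebSocket('ws://your-server:8080/ws');
const TEXT = 1; // type code from the list above

ws.addEventListener('open', () => {
  // Envelope field names are assumptions, not a confirmed protocol.
  ws.send(JSON.stringify({ type: TEXT, content: 'Hello, bot!' }));
});

ws.addEventListener('message', (event) => {
  console.log('Bot says:', event.data);
});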
REST Endpoints
| Endpoint | Method | Description |
|---|---|---|
/api/sessions | GET | List sessions |
/api/sessions/current/message | POST | Send message (current session) |
/api/sessions/current/history | GET | Get chat history (current session) |
/api/sessions/:id | GET | Get session details |
/api/sessions/:id/history | GET | Get chat history by ID |
/api/sessions/:id/start | POST | Start session |
/api/sessions/:id/end | POST | End session |
/api/voice/start | POST | Start voice recording |
/api/voice/stop | POST | Stop voice recording |
/api/voice/status | GET | Get voice status |
Note: The frontend uses /api/sessions/current/* endpoints, which resolve to the active session automatically.
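For illustration, a sketch of posting a message to the active session and refreshing the history pane; the content field name follows the chat form shown earlier, and the history endpoint is assumed to return a ready-to-display HTML fragment per the HTMX pattern:
async function sendMessage(text) {
  // Send to the active session (resolved server-side).
  await fetch('/api/sessions/current/message', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ content: text })
  });

  // Reload history; assumed to be an HTML fragment, not JSON.
  const history = await fetch('/api/sessions/current/history');
  document.getElementById('messages').innerHTML = await history.text();
}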
Example Conversations
Getting Information
Creating Tasks
Drafting Emails
Accessibility
- Full keyboard navigation
- Screen reader announcements for new messages
- High contrast mode support
- Adjustable font sizes
- ARIA labels on all interactive elements
Troubleshooting
Messages Not Sending
- Check connection status indicator
- Verify WebSocket is connected
- Try refreshing the page
- Check browser console for errors
Voice Not Working
- Allow microphone permissions in browser
- Check device microphone settings
- Try a different browser
- Ensure HTTPS connection (required for voice)
History Not Loading
- Check network connection
- Verify API endpoint is accessible
- Clear browser cache
- Check for JavaScript errors
See Also
- HTMX Architecture — How Chat uses HTMX
- Suite Manual — Complete user guide
- Tasks App — Create tasks from chat
- Mail App — Email integration
Drive - File Management
Your cloud storage workspace
Overview
Drive is your personal cloud storage within General Bots Suite. Upload, organize, and share files with a familiar interface. Built with HTMX for smooth interactions and SeaweedFS for reliable object storage.
Features
Upload Files
Drag and Drop:
- Drag files from your computer
- Drop anywhere in the file area
- Upload progress shows automatically
Click to Upload:
- Click + New button
- Select Upload Files or Upload Folder
- Choose files from file picker
File Operations
| Action | How to Access |
|---|---|
| Open | Double-click file |
| Download | Right-click > Download |
| Rename | Right-click > Rename |
| Copy | Right-click > Copy |
| Move | Right-click > Move to |
| Star | Right-click > Star |
| Share | Right-click > Share |
| Delete | Right-click > Delete |
View Modes
| Mode | Description |
|---|---|
| Grid | Large thumbnails with previews |
| List | Detailed table with columns |
Navigation
- Breadcrumb: Click any folder in the path to jump back
- Sidebar: Quick access to My Drive, Starred, Recent, Trash
- Search: Find files by name or content
Labels & Organization
| Label | Icon | Use For |
|---|---|---|
| Work | 🔵 | Professional files |
| Personal | 🟢 | Private documents |
| Projects | 🟡 | Project-specific files |
| Custom | 🟣 | Create your own |
File Sync (Desktop Only)
The desktop app provides bidirectional file synchronization between your local machine and cloud Drive using rclone.
Requirements:
- General Bots desktop app (Tauri)
- rclone installed on your system
Setup:
- Install rclone: https://rclone.org/install/
- Open Drive in the desktop app
- Click Settings → Sync
- Configure your sync folder (default: ~/GeneralBots)
- Click Start Sync
Sync Controls: Located in the Drive sidebar under “Sync Status”
| Control | Description |
|---|---|
| Start | Begin synchronization |
| Stop | Stop current sync |
| Status | Shows idle, syncing, or error |
Sync Modes:
| Mode | Description |
|---|---|
| Push | Local → Cloud only |
| Pull | Cloud → Local only |
| Bisync | Bidirectional (default) |
Note: Sync features are only available in the desktop app. Web users see an “unavailable” status as sync requires local filesystem access.
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Enter | Open selected file |
| Delete | Move to trash |
| Ctrl+C | Copy selected |
| Ctrl+V | Paste |
| Ctrl+X | Cut selected |
| Ctrl+A | Select all |
| F2 | Rename selected |
| Ctrl+F | Search files |
| Ctrl+U | Upload files |
| ↑ / ↓ | Navigate files |
Managing Files via Chat
Finding Files
Sharing Files
Creating Folders
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
/api/drive/list | GET | List files |
/api/drive/upload | POST | Upload file |
/api/drive/download/:path | GET | Download file |
/api/drive/delete/:path | DELETE | Delete file |
/api/drive/move | POST | Move/rename file |
/api/drive/copy | POST | Copy file |
/api/drive/mkdir | POST | Create folder |
/api/drive/share | POST | Share file |
Query Parameters
| Parameter | Values | Default |
|---|---|---|
| path | Folder path | / |
| sort | name, size, modified | name |
| order | asc, desc | asc |
| view | grid, list | grid |
| filter | starred, recent, trash | none |
Response Format
{
"path": "/Projects/2024",
"files": [
{
"name": "Report.pdf",
"type": "file",
"size": 245000,
"modified": "2024-03-15T10:30:00Z",
"starred": false,
"shared": true
},
{
"name": "Documents",
"type": "folder",
"modified": "2024-03-14T09:00:00Z",
"starred": true
}
],
"storage": {
"used": 4500000000,
"total": 10737418240
}
}
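For illustration, a minimal fetch against the list endpoint using the query parameters and response shape above:
// List a folder and log its contents (run inside an async context).
async function listFolder(path = '/') {
  const params = new URLSearchParams({ path, sort: 'name', order: 'asc' });
  const res = await fetch(`/api/drive/list?${params}`);
  const data = await res.json();

  for (const item of data.files) {
    console.log(item.type === 'folder'
      ? `[dir] ${item.name}`
      : `${item.name} (${item.size} bytes)`);
  }
  console.log(`Storage used: ${data.storage.used} of ${data.storage.total}`);
}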
File Type Icons
| Type | Extensions | Icon |
|---|---|---|
| Document | .pdf, .doc, .docx | 📄 |
| Spreadsheet | .xls, .xlsx, .csv | 📊 |
| Presentation | .ppt, .pptx | 📽️ |
| Image | .jpg, .png, .gif, .svg | 🖼️ |
| Video | .mp4, .webm, .mov | 🎬 |
| Audio | .mp3, .wav, .ogg | 🎵 |
| Archive | .zip, .tar, .gz | 📦 |
| Code | .js, .py, .rs, .html | 💻 |
| Folder | - | 📁 |
Storage Backend
Drive uses SeaweedFS for object storage:
- Scalable: Grows with your needs
- Redundant: Data replicated across nodes
- Fast: Optimized for small and large files
- S3 Compatible: Works with standard S3 tools
Configure storage in config.csv:
key,value
drive-server,http://localhost:9000
drive-bucket,bot-files
drive-quota-gb,10
Troubleshooting
Upload Fails
- Check file size (default limit: 100MB)
- Verify storage quota isn’t exceeded
- Check network connection
- Try smaller files or compress first
Files Not Displaying
- Refresh the page
- Check folder path is correct
- Verify file permissions
- Clear browser cache
Sharing Not Working
- Verify recipient email address
- Check sharing permissions
- Ensure file isn’t in Trash
See Also
- Suite Manual - Complete user guide
- Admin vs User Views - Permission levels
- Chat App - Upload files via chat
- Player App - View files in Player
- Storage API - API reference
Tasks - To-Do Management
Track what needs to be done
Overview
Tasks is your to-do list manager within General Bots Suite. Create tasks, set priorities, organize by category, and track your progress. Built with HTMX for instant updates without page reloads.
Features
Adding Tasks
Quick Add:
- Type task description in the input box
- Press Enter or click + Add
With Details:
- Type task description
- Select a category (optional)
- Pick a due date (optional)
- Click + Add
Priority Levels
| Priority | Icon | When to Use |
|---|---|---|
| High | 🔴 | Must do today |
| Medium | 🟡 | Important but not urgent |
| Low | 🟢 | Can wait |
| None | ⚪ | No deadline |
Categories
| Category | Icon |
|---|---|
| Work | 💼 |
| Personal | 🏠 |
| Shopping | 🛒 |
| Health | ❤️ |
| Custom | 🏷️ |
Filter Tabs
| Tab | Shows |
|---|---|
| All | All tasks |
| Active | Uncompleted tasks |
| Completed | Done tasks |
| Priority | High priority only |
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Enter | Add task (when in input) |
| Space | Toggle task completion |
| Delete | Remove selected task |
| Tab | Move to next field |
| Escape | Cancel editing |
| ↑ / ↓ | Navigate tasks |
Creating Tasks from Chat
Managing Tasks via Chat
Setting Reminders
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
/api/tasks | GET | List all tasks |
/api/tasks | POST | Create new task |
/api/tasks/:id | GET | Get single task |
/api/tasks/:id | PATCH | Update task |
/api/tasks/:id | DELETE | Delete task |
/api/tasks/stats | GET | Get task statistics |
Query Parameters
| Parameter | Values | Default |
|---|---|---|
| filter | all, active, completed, priority | all |
| category | work, personal, shopping, health | none |
| sort | created, dueDate, priority, text | created |
| order | asc, desc | desc |
Request Body (Create/Update)
{
"text": "Review quarterly report",
"category": "work",
"dueDate": "2024-03-20",
"priority": "high",
"completed": false
}
Response Format
{
"id": 123,
"text": "Review quarterly report",
"category": "work",
"dueDate": "2024-03-20",
"priority": "high",
"completed": false,
"createdAt": "2024-03-18T10:30:00Z",
"updatedAt": "2024-03-18T10:30:00Z"
}
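As a quick sketch, creating a task from JavaScript with the request body above:
// Create a task and log the id from the response.
async function createTask(text) {
  const res = await fetch('/api/tasks', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      text,
      category: 'work',
      priority: 'high',
      completed: false
    })
  });
  const task = await res.json(); // matches the response format above
  console.log('Created task', task.id);
  return task;
}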
Integration with Calendar
Tasks with due dates automatically appear in your Calendar view, helping you visualize your workload across days and weeks.
Troubleshooting
Tasks Not Saving
- Check network connection
- Verify API endpoint is accessible
- Check browser console for errors
- Try refreshing the page
Filters Not Working
- Click the filter tab again
- Check if tasks exist for that filter
- Clear browser cache
Stats Not Updating
- Reload the page
- Check for JavaScript errors in console
See Also
- Suite Manual - Complete user guide
- Chat App - Create tasks from chat
- Calendar App - View tasks in calendar
- Tasks API - API reference
Mail - Email Client
Your intelligent inbox
Overview
Mail is the email application in General Bots Suite. Read, compose, and organize your emails with AI assistance. Mail helps you write better emails, find important messages, and stay on top of your inbox without the clutter.
Features
Folders
| Folder | Description |
|---|---|
| 📥 Inbox | Incoming messages |
| ⭐ Starred | Important emails |
| 📤 Sent | Sent messages |
| 📝 Drafts | Unsent drafts |
| 🗑️ Trash | Deleted emails |
Labels
| Label | Icon | Use For |
|---|---|---|
| Urgent | 🔴 | Time-sensitive |
| Personal | 🟢 | Private emails |
| Work | 🔵 | Professional |
| Finance | 🟡 | Bills & money |
| Custom | 🟣 | Create your own |
Email Actions
| Action | Description |
|---|---|
| Reply | Respond to sender |
| Reply All | Respond to all recipients |
| Forward | Send to someone else |
| Archive | Remove from inbox |
| Star | Mark as important |
| Label | Organize with labels |
| Delete | Move to trash |
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| C | Compose new email |
| R | Reply |
| A | Reply all |
| F | Forward |
| E | Archive |
| S | Star/unstar |
| Delete | Move to trash |
| Ctrl+Enter | Send email |
| Escape | Close/cancel |
| ↑ / ↓ | Navigate emails |
Email via Chat
Checking Your Inbox
Reading an Email
Composing an Email
AI-Assisted Writing
Searching Emails
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
/api/email/inbox | GET | List inbox messages |
/api/email/send | POST | Send email |
/api/email/draft | POST | Save draft |
/api/email/:id | GET | Read email |
/api/email/:id | DELETE | Delete email |
/api/email/:id/star | POST | Toggle star |
/api/email/:id/label | POST | Add label |
/api/email/search | GET | Search emails |
Query Parameters
| Parameter | Values | Default |
|---|---|---|
| folder | inbox, sent, drafts, trash, starred | inbox |
| label | Label name | none |
| unread | true, false | none |
| limit | 1-100 | 25 |
| offset | Number | 0 |
Send Email Request
{
"to": ["john@company.com"],
"cc": [],
"bcc": [],
"subject": "Project Update",
"body": "Hi John,\n\nHere's the latest update...",
"attachments": ["file-id-123"]
}
Email Response
{
"id": "msg-456",
"from": "sarah@company.com",
"to": ["you@company.com"],
"subject": "Q2 Report Review",
"body": "Hi,\n\nPlease review the attached...",
"date": "2025-05-15T10:32:00Z",
"read": false,
"starred": true,
"labels": ["work"],
"attachments": [
{
"id": "att-789",
"name": "Q2-Report.pdf",
"size": 2457600
}
]
}
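A minimal sketch of sending an email with the request shape above (run inside an async context):
await fetch('/api/email/send', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    to: ['john@company.com'],
    cc: [],
    bcc: [],
    subject: 'Project Update',
    body: 'Hi John,\n\nHere is the latest update...',
    attachments: [] // Drive file ids, when attaching
  })
});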
Configuration
Configure email in config.csv:
key,value
smtp-server,smtp.gmail.com
smtp-port,587
imap-server,imap.gmail.com
imap-port,993
email-from,Your Name <you@gmail.com>
Note: Use app-specific passwords for Gmail, not your main password.
Troubleshooting
Emails Not Loading
- Check internet connection
- Verify email credentials
- Check IMAP settings
- Refresh the page
Send Fails
- Check recipient address
- Verify SMTP settings
- Check attachment size (max 25MB)
- Try again in a moment
Missing Emails
- Check spam/junk folder
- Verify filters aren’t hiding emails
- Check trash folder
- Sync may take a few minutes
See Also
- Suite Manual - Complete user guide
- Chat App - Send quick emails via chat
- Email API - API reference
- SEND MAIL Keyword - BASIC integration
Calendar - Scheduling
Your personal scheduling assistant
Overview
Calendar is your scheduling hub in General Bots Suite. Create events, manage appointments, schedule meetings, and let the AI help you find the perfect time. Calendar syncs with your other apps so you never miss an important date.
Features
Views
| View | Description |
|---|---|
| Day | Hourly breakdown of single day |
| Week | 7-day overview |
| Month | Full month grid |
| Agenda | List of upcoming events |
Calendars
| Calendar | Icon | Use For |
|---|---|---|
| Personal | 🟢 | Private appointments |
| Work | 🔵 | Professional meetings |
| Team | 🟣 | Shared team events |
| Holidays | 🔴 | Public holidays |
Event Types
| Type | Icon | Description |
|---|---|---|
| Meeting | 👥 | Group meetings |
| Call | 📞 | Phone/video calls |
| Reminder | 🔔 | Personal reminders |
| All-Day | 📅 | Full day events |
| Recurring | 🔄 | Repeating events |
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| C | Create new event |
| T | Go to today |
| D | Day view |
| W | Week view |
| M | Month view |
| ← / → | Previous / Next period |
| Delete | Delete selected event |
| Enter | Open event details |
Scheduling via Chat
Creating an Event
Checking Your Schedule
Finding Available Time
Rescheduling Events
Setting Reminders
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
/api/calendar/events | GET | List events |
/api/calendar/events | POST | Create event |
/api/calendar/events/:id | GET | Get event details |
/api/calendar/events/:id | PATCH | Update event |
/api/calendar/events/:id | DELETE | Delete event |
/api/calendar/availability | GET | Check free/busy |
Query Parameters
| Parameter | Values | Default |
|---|---|---|
| start | ISO date | Today |
| end | ISO date | +30 days |
| calendar | Calendar name | All |
| view | day, week, month | month |
Create Event Request
{
"title": "Team Meeting",
"start": "2025-05-16T14:00:00Z",
"end": "2025-05-16T15:00:00Z",
"calendar": "work",
"attendees": ["sarah@company.com"],
"location": "Conference Room A",
"reminder": 15,
"recurrence": null
}
Event Response
{
"id": "evt-123",
"title": "Team Meeting",
"start": "2025-05-16T14:00:00Z",
"end": "2025-05-16T15:00:00Z",
"calendar": "work",
"attendees": [
{
"email": "sarah@company.com",
"status": "accepted"
}
],
"location": "Conference Room A",
"reminder": 15,
"created": "2025-05-15T10:30:00Z"
}
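For example, a sketch of checking free/busy before creating an event; the start and end parameter names are an assumption carried over from the events query table above:
// Check availability for a one-day window (async context).
const params = new URLSearchParams({
  start: '2025-05-16T00:00:00Z',
  end: '2025-05-17T00:00:00Z'
});
const availability = await fetch(`/api/calendar/availability?${params}`)
  .then(r => r.json());
console.log(availability); // inspect free/busy slots before booking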
Integration with Tasks
Tasks with due dates automatically appear on your calendar. When you complete a task, it’s marked as done on the calendar too.
Troubleshooting
Events Not Syncing
- Refresh the calendar
- Check internet connection
- Verify calendar is enabled in sidebar
- Wait a few minutes for sync
Can’t Create Events
- Verify you have write access to the calendar
- Check that start time is before end time
- Ensure date is not in the past
Missing Invitations
- Check spam/junk folder in email
- Verify attendee email addresses
- Check notification settings
See Also
- Suite Manual - Complete user guide
- Tasks App - Task integration
- Meet App - Video meetings
- Calendar API - API reference
Meet - Video Calls
Your virtual meeting room
Overview
Meet is the video conferencing app in General Bots Suite. Host video calls, share your screen, collaborate in real-time, and let the AI take notes for you. Meet integrates seamlessly with Calendar so joining meetings is just one click away.
Features
Meeting Controls
| Control | Description |
|---|---|
| 🎤 Mute | Toggle microphone |
| 📹 Video | Toggle camera |
| 🖥️ Share | Share screen |
| ✋ Raise Hand | Get attention |
| 💬 Chat | In-meeting chat |
| 👥 Participants | View attendees |
| ⚙️ Settings | Audio/video settings |
| 📞 Leave | End call |
Meeting Features
| Feature | Description |
|---|---|
| Screen Sharing | Share your entire screen or specific window |
| Recording | Record meetings for later review |
| Chat | Send messages during the meeting |
| Reactions | Quick emoji reactions |
| Raise Hand | Queue to speak |
| Breakout Rooms | Split into smaller groups |
| AI Notes | Automatic meeting notes |
| Transcription | Live captions |
Layouts
| Layout | Best For |
|---|---|
| Gallery | Group discussions |
| Speaker | Presentations |
| Sidebar | Screen sharing |
| Spotlight | Featured speaker |
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| M | Toggle mute |
| V | Toggle video |
| S | Share screen |
| H | Raise/lower hand |
| C | Toggle chat |
| P | Toggle participants |
| R | Start/stop recording |
| Space | Push to talk (when muted) |
| Escape | Exit fullscreen |
Meetings via Chat
Starting a Meeting
Inviting Participants
Scheduling a Meeting
Getting Meeting Notes
Recording Access
API Endpoints
| Endpoint | Method | Description |
|---|---|---|
/api/meet/rooms | GET | List meeting rooms |
/api/meet/rooms | POST | Create room |
/api/meet/rooms/:id | GET | Get room details |
/api/meet/rooms/:id/join | POST | Join meeting |
/api/meet/rooms/:id/leave | POST | Leave meeting |
/api/meet/rooms/:id/invite | POST | Invite participants |
/api/meet/recordings | GET | List recordings |
/api/meet/recordings/:id | GET | Get recording |
Create Room Request
{
"name": "Team Sync",
"scheduled": "2025-05-16T15:00:00Z",
"duration": 60,
"participants": ["sarah@company.com", "john@company.com"],
"settings": {
"recording": true,
"transcription": true,
"waitingRoom": false
}
}
Room Response
{
"id": "room-abc123",
"name": "Team Sync",
"url": "https://meet.gb/abc-123",
"host": "you@company.com",
"scheduled": "2025-05-16T15:00:00Z",
"status": "scheduled",
"participants": [
{
"email": "sarah@company.com",
"status": "invited"
}
],
"settings": {
"recording": true,
"transcription": true
}
}
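As a sketch, creating a room with the request above and printing the join URL from the response (async context):
const room = await fetch('/api/meet/rooms', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'Team Sync', duration: 60 })
}).then(r => r.json());

console.log('Join at:', room.url); // e.g. https://meet.gb/abc-123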
Configuration
Configure Meet in config.csv:
key,value
meet-provider,livekit
meet-server-url,wss://localhost:7880
meet-recording-enabled,true
meet-transcription-enabled,true
meet-max-participants,50
Requirements
Browser Support
- Chrome 90+
- Firefox 88+
- Safari 14+
- Edge 90+
Hardware
- Webcam (optional)
- Microphone
- Speakers or headphones
- Stable internet connection
Permissions
Allow browser access to:
- Camera
- Microphone
- Screen sharing (when needed)
Troubleshooting
No Audio/Video
- Check browser permissions
- Verify correct device selected in settings
- Try refreshing the page
- Check if another app is using the camera
Poor Quality
- Check internet connection speed
- Close other bandwidth-heavy applications
- Try disabling HD video
- Move closer to your router
Can’t Join Meeting
- Verify the meeting link is correct
- Check if meeting has started
- Ensure you’re not blocked by waiting room
- Try a different browser
Recording Not Working
- Verify recording is enabled for the room
- Check storage quota
- Ensure you have host permissions
See Also
- Suite Manual - Complete user guide
- Calendar App - Schedule meetings
- Chat App - Quick calls from chat
- Calls API - API reference
Player - Media Viewer
Integrated viewing for documents, audio, video, and presentations
Overview
Player enables users to view and interact with various file types without leaving the conversation:
- Documents: PDF, DOCX, TXT, MD
- Presentations: PPTX, ODP
- Audio: MP3, WAV, OGG, M4A
- Video: MP4, WEBM, OGV
- Images: PNG, JPG, GIF, SVG, WEBP
Accessing Player
From Chat
When a bot shares a file, click the preview to open in Player:
From Drive
Navigate to Drive tab and click any supported file to open in Player.
Direct URL
Access files directly:
/player/{bot_id}/{file_path}
Controls by File Type
Document Controls
| Control | Action |
|---|---|
| Previous / Next | Navigate pages |
| Zoom in / out | Adjust view size |
| Download | Download original |
| Search | Search in document |
| Thumbnails | Page thumbnails |
Audio Controls
| Control | Action |
|---|---|
| Play / Pause | Control playback |
| Rewind / Forward | Skip 10 seconds |
| Volume | Volume slider |
| Loop | Loop toggle |
| Download | Download file |
Video Controls
| Control | Action |
|---|---|
| Play / Pause | Control playback |
| Skip | Skip backward / forward |
| Volume | Volume control |
| Fullscreen | Enter fullscreen |
| Speed | Playback speed |
| Picture-in-picture | Floating window |
| Download | Download file |
Presentation Controls
| Control | Action |
|---|---|
| Previous / Next | Navigate slides |
| Fullscreen | Presentation mode |
| Overview | Slide overview |
| Notes | Speaker notes (if available) |
| Download | Download original |
Keyboard Shortcuts
| Key | Action |
|---|---|
| Space | Play/Pause (audio/video) or Next (slides) |
| ← / → | Previous / Next |
| ↑ / ↓ | Volume up / down |
| F | Fullscreen toggle |
| M | Mute toggle |
| Esc | Exit fullscreen / Close player |
| + / - | Zoom in / out |
| Home / End | Go to start / end |
BASIC Integration
Share Files with Player Preview
Supported Formats
Documents
| Format | Extension | Notes |
|---|---|---|
| PDF | .pdf | Full support with text search |
| Word | .docx | Converted to viewable format |
| Text | .txt | Plain text with syntax highlighting |
| Markdown | .md | Rendered with formatting |
| HTML | .html | Sanitized rendering |
Presentations
| Format | Extension | Notes |
|---|---|---|
| PowerPoint | .pptx | Full slide support |
| OpenDocument | .odp | Converted to slides |
| PDF | .pdf | Treated as slides |
Audio
| Format | Extension | Notes |
|---|---|---|
| MP3 | .mp3 | Universal support |
| WAV | .wav | Uncompressed audio |
| OGG | .ogg | Open format |
| M4A | .m4a | AAC audio |
| FLAC | .flac | Lossless audio |
Video
| Format | Extension | Notes |
|---|---|---|
| MP4 | .mp4 | H.264/H.265 |
| WebM | .webm | VP8/VP9 |
| OGV | .ogv | Theora |
Images
| Format | Extension | Notes |
|---|---|---|
| PNG | .png | Lossless with transparency |
| JPEG | .jpg, .jpeg | Compressed photos |
| GIF | .gif | Animated support |
| SVG | .svg | Vector graphics |
| WebP | .webp | Modern format |
Configuration
Configure Player behavior in config.csv:
key,value
player-autoplay,false
player-default-volume,80
player-video-quality,auto
player-preload,metadata
player-allow-download,true
player-max-file-size-mb,100
API Access
Get File for Player
GET /api/drive/{bot_id}/files/{file_path}?preview=true
Stream Media
GET /api/drive/{bot_id}/stream/{file_path}
Supports HTTP Range requests for seeking.
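For example, a sketch of requesting only the first chunk of a media file with a Range header; the bot id and file path here are placeholders:
// Expect 206 Partial Content when the server honors the range (async context).
const res = await fetch('/api/drive/mybot/stream/videos/demo.mp4', {
  headers: { Range: 'bytes=0-1048575' } // first 1 MiB
});
console.log(res.status, res.headers.get('Content-Range'));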
Get Thumbnail
GET /api/drive/{bot_id}/thumbnail/{file_path}
Security
- Files are served through authenticated endpoints
- User permissions respected for file access
- Downloads can be disabled per bot
- Watermarking available for sensitive documents
Performance
- Lazy loading for large documents
- Adaptive streaming for video
- Thumbnail generation for previews
- Client-side caching for repeat views
Mobile Support
Player is fully responsive:
- Touch gestures for navigation
- Pinch-to-zoom for documents
- Swipe for slides
- Native fullscreen support
See Also
- Drive App - File management
- Drive Integration - File storage configuration
- Storage API - File management API
Paper - AI Writing
Your intelligent document editor
Overview
Paper is the AI-powered writing app in General Bots Suite. Create documents, reports, letters, and more with help from your AI assistant. Paper understands context, suggests improvements, and helps you write faster and better.
Features
Creating a New Document
Method 1: Click New Document
- Click + New Doc in the left sidebar
- Start typing immediately
- Document auto-saves as you work
Method 2: From Template
- Click + New Doc
- Select From Template
- Choose a template:
| Template | Best For |
|---|---|
| Blank | Starting from scratch |
| Report | Business reports with sections |
| Letter | Formal correspondence |
| Meeting Notes | Agenda and action items |
| Checklist | Task tracking |
| Email Template | Reusable email formats |
| Project Proposal | Pitches and proposals |
| Resume/CV | Job applications |
| Invoice | Billing documents |
Method 3: Ask the Bot
Formatting Text
Use the toolbar or keyboard shortcuts to format your text:
| Format | Toolbar | Shortcut |
|---|---|---|
| Bold | B | Ctrl+B |
| Italic | I | Ctrl+I |
| Underline | U | Ctrl+U |
| Strikethrough | S | Ctrl+Shift+S |
| Heading 1 | H1 | Ctrl+1 |
| Heading 2 | H2 | Ctrl+2 |
| Heading 3 | H3 | Ctrl+3 |
| Bullet list | • | Ctrl+Shift+8 |
| Numbered list | ≡ | Ctrl+Shift+7 |
| Checklist | ☐ | Ctrl+Shift+9 |
| Link | 🔗 | Ctrl+K |
AI Writing Assistant ✨
Paper’s AI assistant helps you write better and faster. Click the ✨ AI button or type / to access AI features.
Generate Content:
Improve Writing Options:
| Option | What It Does |
|---|---|
| 📝 Improve writing | General quality enhancement |
| ✂️ Make it shorter | Condense without losing meaning |
| 📖 Make it longer | Expand with more detail |
| 🎯 Make it clearer | Simplify complex language |
| 💼 Make it professional | Formal business tone |
| 😊 Make it friendly | Casual, approachable tone |
| 🔧 Fix grammar | Correct errors |
| 🌐 Translate | Convert to another language |
Summarize:
Generate Ideas:
Document Organization
Folders:
Organize your documents into folders:
- Right-click in the sidebar
- Select New Folder
- Name your folder
- Drag documents into it
Search Documents:
Find documents quickly:
- Press Ctrl+P or click the search icon
- Type the document name or content
- Press Enter to open
Collaboration
Share a Document:
- Click Share button
- Enter email addresses
- Set permissions
- Click Send
Permissions Explained:
| Permission | Can View | Can Comment | Can Edit |
|---|---|---|---|
| View | ✅ | ❌ | ❌ |
| Comment | ✅ | ✅ | ❌ |
| Edit | ✅ | ✅ | ✅ |
Export Options
Export your documents to different formats:
- Click Export ▼
- Choose a format:
| Format | Best For |
|---|---|
| PDF | Printing, sharing final versions |
| Word (.docx) | Editing in Microsoft Word |
| Markdown (.md) | Technical documentation |
| Plain Text (.txt) | Simple text without formatting |
| HTML | Web publishing |
Export Options:
- Include headers and footers
- Include comments
- Include page numbers
Version History
Paper automatically saves versions of your document:
- Click ⚙️ → Version History
- See all saved versions
- Click to preview
- Restore if needed
Keyboard Shortcuts
Text Formatting
| Shortcut | Action |
|---|---|
| Ctrl+B | Bold |
| Ctrl+I | Italic |
| Ctrl+U | Underline |
| Ctrl+Shift+S | Strikethrough |
| Ctrl+1 | Heading 1 |
| Ctrl+2 | Heading 2 |
| Ctrl+3 | Heading 3 |
| Ctrl+0 | Normal text |
Lists & Structure
| Shortcut | Action |
|---|---|
| Ctrl+Shift+7 | Numbered list |
| Ctrl+Shift+8 | Bullet list |
| Ctrl+Shift+9 | Checklist |
| Tab | Indent |
| Shift+Tab | Outdent |
Editing
| Shortcut | Action |
|---|---|
| Ctrl+Z | Undo |
| Ctrl+Y | Redo |
| Ctrl+C | Copy |
| Ctrl+X | Cut |
| Ctrl+V | Paste |
| Ctrl+A | Select all |
| Ctrl+F | Find |
| Ctrl+H | Find and replace |
Navigation
| Shortcut | Action |
|---|---|
| Ctrl+P | Quick open document |
| Ctrl+S | Save (auto-saves anyway) |
| Ctrl+N | New document |
| Ctrl+W | Close document |
| Escape | Close dialog/menu |
AI Features
| Shortcut | Action |
|---|---|
| / | Open AI command menu |
| Ctrl+Shift+A | AI improve selection |
| Ctrl+Shift+G | Generate content |
Tips & Tricks
Writing Tips
💡 Use headings to organize your document - makes it scannable
💡 Write first, edit later - don’t let perfectionism slow you down
💡 Use AI to overcome writer’s block - ask for ideas or outlines
💡 Break long paragraphs into shorter ones for readability
Productivity Tips
💡 Use templates for recurring documents (reports, meeting notes)
💡 Learn keyboard shortcuts - much faster than clicking
💡 Use / commands for quick AI assistance
💡 Set up folders to keep documents organized
AI Tips
💡 Be specific when asking AI for help - better prompts = better results
💡 Use “Make it shorter” for concise professional writing
💡 Ask for multiple versions and pick the best one
💡 Use AI to check grammar before sharing important documents
Troubleshooting
Document not saving
Possible causes:
- Internet connection lost
- Browser storage full
- Session expired
Solution:
- Check internet connection
- Copy your text as a backup (Ctrl+A, Ctrl+C)
- Refresh the page
- Log in again if prompted
- Paste your text back if needed
Formatting not working
Possible causes:
- Text not selected
- Format not supported in current context
- Browser compatibility issue
Solution:
- Select the text first, then apply formatting
- Try a different format
- Use keyboard shortcuts instead of toolbar
- Try a different browser
AI features not responding
Possible causes:
- AI service temporarily unavailable
- Network timeout
- Request too long
Solution:
- Wait a few seconds and try again
- Try a shorter text selection
- Refresh the page
- Check if other AI features work
Can’t share document
Possible causes:
- No sharing permissions
- Invalid email address
- Document not saved
Solution:
- Check if you’re the document owner
- Verify email addresses are correct
- Wait for document to save (check status bar)
- Contact administrator if sharing is restricted
Export fails
Possible causes:
- Document too large
- Special characters causing issues
- Browser blocking download
Solution:
- Try exporting a smaller section first
- Remove any unusual characters or images
- Check browser download settings
- Try a different export format
User Storage
Paper documents are stored in your personal .gbusers folder within the bot’s .gbdrive storage. This ensures your documents are private and accessible only to you.
Storage Structure
mybot.gbai/
mybot.gbdrive/
users/
your.email@example.com/ # Your user folder
papers/
current/ # Working documents (auto-saved)
untitled-1.md
meeting-notes.md
named/ # Saved documents
quarterly-report/
document.md
metadata.json
project-proposal/
document.md
metadata.json
exports/ # Exported files (PDF, DOCX, etc.)
quarterly-report.pdf
project-proposal.docx
Storage Types
| Type | Location | Description |
|---|---|---|
| Current | papers/current/ | Auto-saved working documents. These are drafts being actively edited. |
| Named | papers/named/{name}/ | Explicitly saved documents with metadata. Each gets its own folder. |
| Exports | exports/ | Generated export files (PDF, Word, HTML, etc.) |
Auto-Save Behavior
Paper auto-saves your work every 30 seconds to papers/current/. When you explicitly save with a title:
- The document moves from current/ to named/{title}/
- A metadata file is created with title, timestamps, and word count
- The original draft in current/ is removed
Accessing Your Documents
Your documents follow you across sessions and devices. As long as you’re logged in with the same email or phone number, you’ll see all your documents.
From the UI:
- Documents appear in the sidebar automatically
- Search finds documents by title
- Recent documents shown first
From BASIC scripts:
' Read your document
content = READ USER DRIVE "papers/named/my-report/document.md"
' List your papers
papers = LIST USER DRIVE "papers/named/"
Storage Limits
Default limits per user (configurable by administrator):
| Setting | Default | Description |
|---|---|---|
| Total storage | 100 MB | Maximum storage per user |
| File size | 5 MB | Maximum single document |
| File count | 500 | Maximum number of documents |
BASIC Integration
Control Paper from your bot dialogs:
Create a Document
doc = CREATE DOCUMENT "Project Notes"
doc.content = "Meeting notes from " + TODAY
SAVE DOCUMENT doc
TALK "Document created: " + doc.id
Generate Content with AI
HEAR topic AS TEXT "What should I write about?"
content = GENERATE TEXT "Write a brief introduction about " + topic
doc = CREATE DOCUMENT topic
doc.content = content
SAVE DOCUMENT doc
TALK "I've created a document about " + topic
TALK "Here's a preview:"
TALK LEFT(content, 200) + "..."
Export a Document
HEAR docName AS TEXT "Which document should I export?"
doc = FIND DOCUMENT docName
IF doc IS NOT NULL THEN
pdf = EXPORT DOCUMENT doc AS "PDF"
TALK "Here's your PDF:"
SEND FILE pdf
ELSE
TALK "Document not found"
END IF
Search Documents
HEAR query AS TEXT "What are you looking for?"
results = SEARCH DOCUMENTS query
IF COUNT(results) > 0 THEN
TALK "I found " + COUNT(results) + " documents:"
FOR EACH doc IN results
TALK "- " + doc.title
NEXT
ELSE
TALK "No documents found matching '" + query + "'"
END IF
Summarize a Document
HEAR docName AS TEXT "Which document should I summarize?"
doc = FIND DOCUMENT docName
IF doc IS NOT NULL THEN
summary = SUMMARIZE doc.content
TALK "Summary of '" + doc.title + "':"
TALK summary
ELSE
TALK "Document not found"
END IF
See Also
- Drive App - Store and organize files
- Mail App - Email your documents
- Research App - Research topics for your writing
- How To: Add Documents to Knowledge Base
Research - AI Search
Your intelligent research assistant
Overview
Research is the AI-powered search and discovery app in General Bots Suite. Find information from the web, your documents, and databases using natural language. Research understands your questions, finds relevant sources, and presents organized answers with citations.
Features
Basic Search
Just type your question in natural language:
Example Questions:
| Category | Examples |
|---|---|
| 📊 Business | “What are our sales numbers for Q1 2025?” |
| 📚 Knowledge | “How does photosynthesis work?” |
| 🔍 Research | “Compare React vs Vue for web development” |
| 📋 Documents | “What does our employee handbook say about PTO?” |
Search Sources
Choose where to search:
| Source | What It Searches | Best For |
|---|---|---|
| All | Everything available | General questions |
| Web | Internet/public websites | Current events, general knowledge |
| Documents | Your uploaded files (.gbkb) | Company policies, internal info |
| Database | Connected databases | Business data, reports |
| Custom | Specific sources you choose | Focused research |
AI-Powered Answers
Research doesn’t just find links—it reads, understands, and summarizes:
Follow-Up Questions
Continue your research with follow-up questions:
Source Citations
Every answer includes citations so you can verify:
| Source Type | Information Provided |
|---|---|
| Internal Documents | File name, location, relevant pages |
| Web Sources | URL, retrieval date, site name |
| Database | Table name, query used |
Actions available:
- View Document - Open the source file
- Open Link - Navigate to web source
- Copy Citation - Copy formatted citation
Research History
Access your previous searches:
- Click History in the top right
- Browse or search past queries
- Click to revisit any search
History is organized by:
- Today - Recent searches
- Yesterday - Previous day
- Last Week - Older searches
Export Results
Save your research for later use:
| Format | Best For |
|---|---|
| Sharing, printing | |
| Markdown | Documentation |
| Word | Reports, editing |
| Copy to Paper | Continue writing |
Export Options:
- Include answer
- Include sources with citations
- Include search query
- Include timestamp
Advanced Search
Use operators for more precise searches:
| Operator | Example | What It Does |
|---|---|---|
| "" | "exact phrase" | Find exact match |
| AND | solar AND wind | Both terms required |
| OR | solar OR wind | Either term |
| NOT | energy NOT nuclear | Exclude term |
| site: | site:company.com | Search specific site |
| type: | type:pdf | Search specific file type |
| date: | date:2025 | Filter by date |
| in: | in:documents | Search specific source |
Examples:
"quarterly report" AND sales date:2025 - Finds documents with exact phrase “quarterly report” AND the word “sales” from 2025
project proposal NOT draft type:pdf - Finds PDF files about project proposals, excluding drafts
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| / | Focus search box |
| Enter | Search |
| Ctrl+Enter | Search in new tab |
| Escape | Clear search / close panel |
| ↑ / ↓ | Navigate results |
| Ctrl+C | Copy answer |
| Ctrl+S | Save/export results |
| H | Open history |
| Tab | Cycle through sources |
| 1-5 | Jump to source N |
Tips & Tricks
Better Search Results
💡 Be specific - “Q1 2025 sales revenue by region” works better than “sales”
💡 Use natural language - Ask questions like you would ask a colleague
💡 Try different phrasings - If results aren’t great, rephrase your question
💡 Use follow-ups - Build on previous searches for deeper research
Finding Documents
💡 Mention the document type - “Find the PDF about vacation policy”
💡 Reference dates - “Meeting notes from last Tuesday”
💡 Name departments - “HR policies about sick leave”
Web Research
💡 Be current - Add “2025” or “latest” for recent information
💡 Compare sources - Research shows multiple sources for verification
💡 Check citations - Click through to verify important information
Troubleshooting
No results found
Possible causes:
- Query too specific
- Information not in knowledge base
- Typo in search terms
Solution:
- Try broader search terms
- Search “All” sources instead of one
- Check spelling
- Try different phrasing
- Upload relevant documents to knowledge base
Wrong or irrelevant results
Possible causes:
- Ambiguous query
- Outdated documents in KB
- Source selection too broad
Solution:
- Be more specific in your question
- Use quotes for exact phrases
- Select specific source (Documents only, Web only)
- Use advanced operators
Sources not loading
Possible causes:
- Document was moved or deleted
- Web page no longer available
- Permission issues
Solution:
- Check if document exists in Drive
- Try opening the web link directly
- Ask administrator about permissions
- Use cached/saved version if available
Search is slow
Possible causes:
- Searching many sources
- Large knowledge base
- Complex query
Solution:
- Select specific source instead of “All”
- Be more specific to narrow results
- Wait for indexing to complete (if recent uploads)
- Check network connection
AI answer seems incorrect
Possible causes:
- Outdated information in sources
- AI misinterpreted question
- Conflicting information in sources
Solution:
- Always verify with cited sources
- Rephrase your question
- Ask for clarification: “Are you sure about X?”
- Check multiple sources for accuracy
BASIC Integration
Use Research in your bot dialogs:
Basic Search
HEAR question AS TEXT "What would you like to know?"
result = SEARCH question
TALK result.answer
TALK "Sources:"
FOR EACH source IN result.sources
TALK "- " + source.title
NEXT
Search Specific Sources
' Search only documents
result = SEARCH "vacation policy" IN "documents"
' Search only web
result = SEARCH "latest AI news" IN "web"
' Search specific knowledge base
result = SEARCH "product specs" IN "products.gbkb"
Research with Follow-up
TALK "What would you like to research?"
HEAR topic AS TEXT
result = SEARCH topic
TALK result.answer
HEAR followUp AS TEXT "Any follow-up questions? (or 'done')"
WHILE followUp <> "done"
result = SEARCH followUp WITH CONTEXT result
TALK result.answer
HEAR followUp AS TEXT "Any more questions? (or 'done')"
WEND
TALK "Research complete!"
Export Research
HEAR query AS TEXT "What should I research?"
result = SEARCH query
' Export as PDF
pdf = EXPORT RESEARCH result AS "PDF"
SEND FILE pdf
' Or copy to Paper
doc = CREATE DOCUMENT "Research: " + query
doc.content = result.answer + "\n\nSources:\n" + result.citations
SAVE DOCUMENT doc
TALK "Research saved to Paper"
Automated Research Report
topics = ["market trends", "competitor analysis", "customer feedback"]
report = ""
FOR EACH topic IN topics
result = SEARCH topic + " 2025"
report = report + "## " + topic + "\n\n"
report = report + result.answer + "\n\n"
NEXT
doc = CREATE DOCUMENT "Weekly Research Report"
doc.content = report
SAVE DOCUMENT doc
TALK "Research report created with " + COUNT(topics) + " topics"
See Also
- Paper App - Write documents based on your research
- Drive App - Upload documents to knowledge base
- Chat App - Ask quick questions
- How To: Add Documents to Knowledge Base
Analytics - Dashboards
Your business intelligence center
Overview
Analytics is the data visualization and reporting app in General Bots Suite. Track key metrics, build custom dashboards, generate reports, and get AI-powered insights about your business. Analytics turns your data into actionable information.
Features
Dashboard Overview
Dashboards are collections of widgets that display your data visually.
Default Dashboards:
| Dashboard | What It Shows |
|---|---|
| Overview | Key metrics across all areas |
| Sales | Revenue, deals, pipeline |
| Marketing | Campaigns, leads, conversion |
| Support | Tickets, response time, satisfaction |
| HR | Headcount, hiring, retention |
Creating a Dashboard
Step 1: Click “+ New” in the sidebar
Fill in the dashboard details:
- Dashboard Name - A descriptive title (e.g., “Q2 Performance”)
- Description - Optional context for the dashboard
- Template - Start blank, use a template, or copy from existing
Step 2: Add Widgets
Click + Widget and choose a visualization type.
Widget Types
Numbers:
- Number - Single metric display
- Comparison - Metric with percentage change
- Progress - Goal tracking with progress bar
Charts:
- Line - Trends over time
- Bar - Category comparisons
- Area - Volume visualization
- Pie - Proportional breakdown
Tables & Lists:
- Table - Data grid with sorting
- Leaderboard - Ranked list
- List - Simple bullet items
Special:
- Geography - Map visualization
- Heatmap - Intensity grid
- AI Summary - AI-generated insights
Configuring Widgets
After selecting a widget type, configure the data source:
| Setting | Description |
|---|---|
| Title | Widget display name |
| Source | Database or data connection |
| Table | Specific table to query |
| X-Axis | Horizontal dimension |
| Y-Axis | Vertical measure |
| Group By | Time period or category |
| Aggregate | Sum, Count, Average, etc. |
| Filters | Conditions to apply |
Key Metric Cards
Display important numbers with context:
| Element | Purpose |
|---|---|
| Value | The main metric number |
| Trend Arrow | Up/down indicator |
| Comparison | vs last period |
| Progress Bar | Visual goal tracking |
| Goal | Target value |
Color Indicators:
| Color | Meaning |
|---|---|
| 🟢 Green (▲) | Positive trend / On target |
| 🔴 Red (▼) | Negative trend / Below target |
| 🟡 Yellow (─) | No change / Needs attention |
| ⚪ Gray | No comparison available |
AI Insights ✨
Let AI analyze your data and surface insights:
Insight Types:
| Category | What It Shows |
|---|---|
| 📈 Trends | Patterns and momentum in your data |
| ⚠️ Alerts | Issues that need attention |
| 💡 Recommendations | Suggested actions |
| 🎯 Predictions | Forecasts based on current data |
Ask the AI:
Reports
Generate and schedule reports:
Creating a Report:
- Click + New under Reports
- Select report type
- Configure data and format
- Schedule delivery (optional)
Report Options:
| Setting | Options |
|---|---|
| Content | Dashboard, AI insights, raw data |
| Date Range | Last 7/30/90 days, quarter, custom |
| Format | PDF, Interactive Web, Excel, PowerPoint |
| Schedule | Daily, Weekly, Monthly |
| Recipients | Email addresses for delivery |
Data Sources
Connect Analytics to various data sources:
| Source Type | Examples |
|---|---|
| Databases | PostgreSQL, MySQL, SQLite |
| Files | Excel, CSV, JSON |
| APIs | REST endpoints, GraphQL |
| Apps | CRM, Support, Calendar data |
| Bot Data | Conversation logs, user data |
Adding a Data Source:
- Go to Settings → Data Sources
- Click + Add Source
- Select source type
- Enter connection details
- Test and save
Sharing Dashboards
Share dashboards with your team:
- Click Share on any dashboard
- Set permissions (View, Edit, Owner)
- Copy link or invite by email
Permission Levels:
| Level | Can Do |
|---|---|
| View | See dashboard, apply filters |
| Edit | Modify widgets, change layout |
| Owner | Full control, manage sharing |
Link Sharing:
- Off - Only specific people can access
- On - Anyone with link can view
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| R | Refresh dashboard |
| F | Toggle fullscreen |
| E | Edit mode |
| N | New widget |
| D | Duplicate widget |
| Delete | Delete selected widget |
| Ctrl+S | Save dashboard |
| Ctrl+P | Print / Export PDF |
| Ctrl+F | Find / Filter |
| / | Quick search |
| ← → | Navigate dashboards |
| Escape | Exit edit mode |
Tips & Tricks
Dashboard Design
💡 Keep it simple - 5-7 widgets per dashboard is optimal
💡 Most important metrics at top - Follow the F-pattern reading
💡 Use consistent colors - Same metric = same color across widgets
💡 Group related widgets - Keep sales metrics together
Data Tips
💡 Set up daily sync for data sources that change frequently
💡 Use filters to let viewers customize their view
💡 Add comparison periods (vs last month, vs last year)
💡 Include goals/targets to show progress
AI Tips
💡 Ask “why” questions - AI excels at explaining trends
💡 Request predictions for planning
💡 Use AI for anomaly detection - “What’s unusual this month?”
💡 Generate executive summaries before board meetings
Troubleshooting
Dashboard not loading
Possible causes:
- Data source disconnected
- Query timeout
- Permission issues
Solution:
- Check data source status in Settings
- Reduce date range or add filters
- Verify you have dashboard access
- Refresh the page
Data not updating
Possible causes:
- Sync schedule not running
- Source data hasn’t changed
- Cache showing old data
Solution:
- Click Refresh on the dashboard
- Check data source sync status
- Go to Settings → Clear cache
- Verify source data has new records
Charts showing wrong numbers
Possible causes:
- Filter applied incorrectly
- Wrong aggregation method
- Date range mismatch
Solution:
- Check widget filters
- Verify aggregation (Sum vs Count vs Average)
- Confirm date range matches expectations
- Edit widget and review query
Export not working
Possible causes:
- Dashboard too large
- Browser blocking download
- Permission restrictions
Solution:
- Try exporting individual widgets
- Check browser download settings
- Use a different export format
- Contact administrator for permissions
BASIC Integration
Use Analytics in your bot dialogs:
Query Metrics
revenue = GET METRIC "total_revenue" FOR "this month"
lastMonth = GET METRIC "total_revenue" FOR "last month"
growth = ((revenue - lastMonth) / lastMonth) * 100
TALK "Revenue this month: $" + FORMAT(revenue, "#,##0")
TALK "Growth: " + FORMAT(growth, "#0.0") + "%"
Generate Reports
HEAR period AS TEXT "Which period? (weekly/monthly/quarterly)"
report = GENERATE REPORT "Sales Summary" FOR period
TALK "Here's your " + period + " sales report:"
SEND FILE report.pdf
TALK "Key highlights:"
TALK report.summary
Get AI Insights
insights = GET INSIGHTS FOR "Sales Dashboard"
TALK "Here are today's insights:"
FOR EACH insight IN insights.trends
    TALK "📈 " + insight
NEXT
TALK "Alerts:"
FOR EACH alert IN insights.alerts
    TALK "⚠️ " + alert
NEXT
Create Dashboard Widget
widget = NEW OBJECT
widget.type = "line_chart"
widget.title = "Daily Active Users"
widget.source = "bot_analytics"
widget.xAxis = "date"
widget.yAxis = "active_users"
widget.dateRange = "last 30 days"
ADD WIDGET widget TO "Overview Dashboard"
TALK "Widget added successfully"
Scheduled Reports
' This dialog runs on a schedule
report = GENERATE REPORT "Weekly Metrics" FOR "last 7 days"
recipients = ["ceo@company.com", "team@company.com"]
FOR EACH recipient IN recipients
    SEND MAIL recipient, "Weekly Metrics Report - " + TODAY,
        "Please find attached the weekly metrics report.", [report.pdf]
NEXT
LOG "Weekly report sent to " + COUNT(recipients) + " recipients"
See Also
- Research App - Deep dive into data questions
- Paper App - Create reports from insights
- How To: Monitor Your Bot
- Talk to Data Template
Designer - Visual Builder
Your no-code bot building studio
Overview
Designer is the visual bot builder in General Bots Suite. Create conversation flows, design user interfaces, and build automations without writing code. Designer uses a drag-and-drop interface that makes bot development accessible to everyone.
Features
Creating a New Flow
Step 1: Open Designer
- Click the apps menu (⋮⋮⋮)
- Select Designer
- Click + New Flow
Step 2: Configure Flow
| Setting | Description |
|---|---|
| Flow Name | Descriptive title (e.g., “Customer Support”) |
| Description | Brief explanation of what the flow does |
| Start from | Blank canvas, Template, or Import from file |
Step 3: Add Components
Drag components from the left panel onto the canvas.
Step 4: Connect Components
Click and drag from one component’s output to another’s input.
Component Types
Communication Components
| Component | Icon | Purpose |
|---|---|---|
| Talk | 💬 | Send a message to the user |
| Hear | 👂 | Wait for user input |
| Ask | ❓ | Ask a question and capture response |
| Show | 🖼️ | Display an image, card, or media |
| Menu | 📋 | Show clickable options |
Talk Component Options:
- Message text with variations (AI picks randomly)
- Use AI to personalize
- Include typing indicator
- Delay before sending
Ask Component Options:
- Question text
- Variable name to save response
- Expected type: Text, Number, Email, Phone, Date, Yes/No, Multiple Choice
- Validation message for invalid input
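Behind the scenes, an Ask component compiles to the HEAR keyword with a type (see View Generated Code below). A minimal sketch, assuming the HEAR ... AS type syntax used elsewhere in this guide:
' Ask for a typed value; invalid input triggers the validation message
HEAR email AS EMAIL "What's your email address?"
TALK "Thanks! We'll reach you at " + email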
Logic Components
| Component | Icon | Purpose |
|---|---|---|
| Branch | 🔀 | Conditional logic (if/else) |
| Loop | 🔄 | Repeat actions |
| Switch | 🔃 | Multiple conditions |
| Wait | ⏱️ | Pause execution |
| End | 🏁 | End the flow |
Branch Configuration:
- Set condition using variable comparisons
- Add multiple AND/OR conditions
- TRUE and FALSE output paths
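In generated code, a Branch becomes an IF/THEN/ELSE block; the variable and threshold below are illustrative:
' TRUE path and FALSE path of a Branch
IF orderTotal > 100 THEN
    TALK "You qualify for free shipping!"
ELSE
    TALK "Spend a little more to unlock free shipping."
END IF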
Action Components
| Component | Icon | Purpose |
|---|---|---|
| Action | ⚡ | Execute a BASIC keyword |
| API Call | 🌐 | Call external API |
| Database | 🗄️ | Query or update data |
| Email | ✉️ | Send an email |
| Set Variable | 📝 | Store a value |
Action Error Handling:
- Stop flow and show error
- Continue to error path
- Retry N times
AI Components
| Component | Icon | Purpose |
|---|---|---|
| AI Chat | 🤖 | Natural language conversation |
| Search KB | 🔍 | Search knowledge base |
| Generate | ✨ | Generate text with AI |
| Classify | 🏷️ | Categorize user input |
| Extract | 📤 | Extract data from text |
Classify Example Categories:
- support - Customer needs help with a problem
- sales - Customer interested in buying
- billing - Payment or invoice questions
- feedback - Customer giving feedback
- other - Anything else
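These categories feed the CLASSIFY keyword in the generated code (see View Generated Code below); a minimal sketch:
' Route the conversation by detected intent
HEAR userMessage AS TEXT
intent = CLASSIFY userMessage INTO ["support", "sales", "billing", "feedback", "other"]
IF intent = "billing" THEN
    TALK "Let me pull up your billing details..."
END IF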
Working with the Canvas
Navigation
| Action | How To |
|---|---|
| Pan | Click and drag on empty space |
| Zoom in | Scroll up or click [+] |
| Zoom out | Scroll down or click [-] |
| Fit to screen | Click [⌖] or press F |
| Select multiple | Hold Shift and click |
| Box select | Hold Ctrl and drag |
Canvas Controls
| Control | Purpose |
|---|---|
| [+] [-] | Zoom in/out |
| [⌖] | Fit to view |
| Grid | Show/hide grid |
| Snap | Snap to grid |
| Auto | Auto-arrange components |
Using Variables
Variables store information during the conversation.
System Variables (read-only):
| Variable | Description |
|---|---|
| {{user.name}} | User’s display name |
| {{user.email}} | User’s email address |
| {{user.phone}} | User’s phone number |
| {{channel}} | Current channel (web, whatsapp, etc.) |
| {{today}} | Today’s date |
| {{now}} | Current date and time |
| {{botName}} | Name of this bot |
Flow Variables: Variables you create using Ask or Set Variable components.
Reference variables with double curly braces: {{variableName}}
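For example, a Talk component message can mix system and flow variables (orderNumber here is a hypothetical flow variable captured earlier by an Ask component):
Hi {{user.name}}! Your order {{orderNumber}} ships today, {{today}}.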
Testing Your Flow
Preview Mode:
- Click Preview button
- A chat window opens
- Test the conversation
- Watch the flow highlight active steps
The Preview panel shows:
- Flow visualization with active step highlighted
- Test conversation area
- Current variable values
- Clear and Reset buttons
Deploying Your Flow
When your flow is ready:
- Click Deploy
- Choose deployment options:
- Production or Staging only
- Immediate or Scheduled
- Configure triggers:
- Specific phrases (e.g., “help”, “support”)
- As default fallback
- On schedule
- Review changes since last deploy
- Confirm deployment
Templates
Start faster with pre-built templates:
| Template | Description |
|---|---|
| 📋 FAQ Bot | Answer common questions from knowledge base |
| 🎫 Support | Ticket creation and tracking |
| 💰 Sales | Lead capture and qualification |
| 📅 Appointment | Schedule meetings and appointments |
| 📝 Feedback | Collect customer feedback |
| 🚀 Onboarding | New user welcome and setup guide |
Keyboard Shortcuts
Canvas
| Shortcut | Action |
|---|---|
| Space + Drag | Pan canvas |
| Ctrl + + | Zoom in |
| Ctrl + - | Zoom out |
| Ctrl + 0 | Reset zoom |
| F | Fit to screen |
| G | Toggle grid |
| Delete | Delete selected |
| Ctrl + D | Duplicate selected |
| Ctrl + Z | Undo |
| Ctrl + Y | Redo |
Components
| Shortcut | Action |
|---|---|
| T | Add Talk component |
| H | Add Hear component |
| A | Add Ask component |
| B | Add Branch component |
| E | Edit selected component |
| Ctrl + C | Copy component |
| Ctrl + V | Paste component |
| Ctrl + X | Cut component |
Flow
| Shortcut | Action |
|---|---|
| Ctrl + S | Save flow |
| Ctrl + P | Preview flow |
| Ctrl + Enter | Deploy flow |
| Ctrl + E | Export flow |
| Ctrl + I | Import flow |
Tips & Tricks
Design Tips
💡 Keep flows simple - Break complex flows into smaller sub-flows
💡 Use descriptive names - “Ask for Email” is better than “Step 3”
💡 Add comments - Right-click any component to add notes
💡 Test often - Preview after every few changes
Organization Tips
💡 Use folders to organize related flows
💡 Version your flows - Save before major changes
💡 Use templates for consistent starting points
💡 Color-code paths - Use colors for different intents
Performance Tips
💡 Minimize API calls - Cache results when possible
💡 Use AI classification early - Route users quickly
💡 Set timeouts - Don’t let flows hang indefinitely
💡 Handle errors - Always add error paths
Troubleshooting
Flow not triggering
Possible causes:
- Flow not deployed
- Trigger words not matching
- Another flow has priority
Solution:
- Click Deploy and confirm it’s active
- Check trigger configuration
- Review flow priority in settings
- Test with exact trigger phrases
Variables not working
Possible causes:
- Typo in variable name
- Variable not set yet in flow
- Wrong scope
Solution:
- Check spelling matches exactly (case-sensitive)
- Ensure variable is set before being used
- Use Preview mode to watch variable values
- Check the Variables panel for current values
Component errors
Possible causes:
- Missing required configuration
- Invalid connection
- Action failed
Solution:
- Click the red error icon for details
- Fill in all required fields
- Check that connections make logical sense
- Review error logs in Preview mode
Preview not matching production
Possible causes:
- Changes not deployed
- Different data in production
- External service differences
Solution:
- Deploy latest changes
- Test with same data as production
- Check API connections are identical
- Review production logs
BASIC Integration
Designer flows generate BASIC code. You can view and customize it.
View Generated Code
Right-click any component and select “View Code”:
' Generated from "Customer Support" flow
TALK "Hello! How can I help you today?"
HEAR userMessage AS TEXT
intent = CLASSIFY userMessage INTO ["support", "sales", "billing", "other"]
IF intent = "support" THEN
    TALK "I'm sorry to hear you're having issues!"
    HEAR orderNumber AS TEXT "What's your order number?"
    result = SEARCH KB "order " + orderNumber
    TALK result.answer
ELSE IF intent = "sales" THEN
    ' ... sales flow
END IF
Mix Designer and Code
Use the Code component to add custom BASIC:
' Custom calculation
discount = 0
IF userType = "premium" THEN
    discount = orderTotal * 0.15
ELSE IF orderTotal > 100 THEN
    discount = orderTotal * 0.05
END IF
finalPrice = orderTotal - discount
See Also
- Sources App - Manage prompts and templates
- Chat App - Test your flows
- How To: Write Your First Dialog
- BASIC Keywords Reference
Sources - Prompts & Templates
Your bot configuration hub
Overview
Sources is the configuration center in General Bots Suite. Manage your bots, prompts, templates, and knowledge bases all in one place. Sources is where you create new bots, customize their behavior, and organize the content that powers your AI assistant.
Features
Managing Bots
Creating a New Bot
- Click + New Bot in the top right
- Fill in the bot details:
| Field | Description |
|---|---|
| Bot ID | Unique identifier (lowercase, numbers, hyphens only) |
| Display Name | User-friendly name shown in chat |
| Description | Brief explanation of what the bot does |
| Start from | Blank, Template, or Clone existing |
Bot Settings
Click the ⚙️ icon on any bot to configure:
General Settings:
| Setting | Description |
|---|---|
| Display Name | Name shown to users |
| Welcome Message | First message when conversation starts |
| Language | Primary language for the bot |
| Timezone | Bot’s timezone for date/time operations |
| Status | Live, Draft, or Maintenance |
Status Options:
- Live - Bot is active and responding
- Draft - Bot is hidden from users
- Maintenance - Shows maintenance message
AI Settings
| Setting | Description |
|---|---|
| Provider | AI provider (OpenAI, Azure, etc.) |
| Model | Model to use (GPT-5, Claude Sonnet 4.5, local GGUF, etc.) |
| Temperature | Creativity level (0 = focused, 1 = creative) |
| Max Tokens | Maximum response length |
| System Prompt | Bot personality and instructions |
| Knowledge Base | Connected .gbkb for answers |
Managing Prompts
Prompts are reusable text templates for AI interactions.
Prompt Types:
| Type | Purpose |
|---|---|
| System Prompt | Bot personality/behavior |
| Task Prompt | Specific task instructions |
| Template | Reusable text with variables |
Creating a Prompt:
- Click + New Prompt
- Enter a name (e.g., “support-agent”)
- Select type
- Write prompt content with optional {{variables}}
- Save and link to bots
Example Prompt:
You are a friendly and professional customer support agent
for {{company_name}}.
## Your Personality
- Be warm and empathetic
- Use simple, clear language
- Be patient and thorough
## Guidelines
- Always verify customer identity before sharing account info
- If unsure, search the knowledge base
- Escalate complex issues to human agents
- Never make promises about refunds or compensation
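A prompt like this can also be used from a dialog. A minimal sketch with the GET PROMPT and GENERATE WITH PROMPT keywords shown under BASIC Integration below (assumes userMessage was captured earlier with HEAR):
' Load the prompt above and answer with it
prompt = GET PROMPT "support-agent"
reply = GENERATE WITH PROMPT prompt, userMessage
TALK reply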
Managing Templates
Templates are pre-built bot packages you can reuse.
Installed Templates:
| Template | Description |
|---|---|
| 📋 CRM | Full CRM with leads, contacts |
| 📋 Support | Ticket management and customer service |
| 📋 FAQ | Answer common questions from KB |
Available Templates:
| Template | Description |
|---|---|
| 📋 HR | Employee self-service |
| 📋 Analytics | Dashboard and metrics |
| 📋 Compliance | LGPD, GDPR compliance |
Template Contents:
Templates include:
- Dialog scripts (.bas files)
- Bot configuration
- Knowledge base documentation
- Sample conversations
Managing Knowledge Bases
Knowledge bases store documents that your bot can search for answers.
| Field | Description |
|---|---|
| Documents | Count of uploaded files |
| Size | Total storage used |
| Last Indexed | When content was last processed |
| Used By | Bots connected to this KB |
Uploading Documents:
- Open the knowledge base
- Click Upload or drag files
- Organize into folders
- Click Reindex to process new content
Supported Formats:
- PDF, DOCX, TXT, MD
- CSV, XLSX
- HTML, JSON
Import and Export
Exporting a Bot
- Click ⚙️ on the bot
- Select Export
- Choose what to include:
- Bot configuration
- Dialog scripts (.bas files)
- Prompts
- Knowledge base (optional, large)
- Conversation history (optional)
- Select format: .gbai, ZIP, or JSON
Importing a Bot
- Click Import at the top
- Drop file or browse (supported: .gbai, .zip)
- Choose:
- Create new bot, or
- Replace existing bot
- Configure merge options for prompts and KB
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| Ctrl+N | New bot |
| Ctrl+S | Save changes |
| Ctrl+E | Export selected |
| Ctrl+I | Import |
| Delete | Delete selected |
| Ctrl+D | Duplicate |
| F2 | Rename |
| / | Search |
| Enter | Open selected |
| Escape | Close dialog |
Tips & Tricks
Bot Management
💡 Use descriptive names - “customer-support-v2” is better than “bot1”
💡 Keep prompts separate - Reuse prompts across multiple bots
💡 Version your exports - Export before major changes
💡 Test in Draft mode - Don’t go Live until fully tested
Prompt Writing
💡 Be specific - Clear instructions give better results
💡 Use examples - Show the AI what good responses look like
💡 Set boundaries - Define what the bot should NOT do
💡 Use variables - Make prompts reusable with {{placeholders}}
Knowledge Base
💡 Organize in folders - Group related documents together
💡 Keep documents current - Remove outdated information
💡 Use clear filenames - “refund-policy-2025.pdf” not “doc1.pdf”
💡 Reindex after changes - New content isn’t searchable until indexed
Troubleshooting
Bot not responding
Possible causes:
- Bot is in Draft mode
- AI provider not configured
- No dialogs or prompts set up
Solution:
- Check bot status is “Live”
- Verify AI settings have valid API key
- Ensure at least a start.bas dialog exists
- Check error logs in Analytics
Knowledge base not finding answers
Possible causes:
- Documents not indexed
- Document format not supported
- Query doesn’t match content
Solution:
- Click “Reindex” and wait for completion
- Convert documents to supported formats
- Check document actually contains the information
- Try different phrasing
Import fails
Possible causes:
- File corrupted
- Incompatible version
- Duplicate bot ID
Solution:
- Try re-exporting from source
- Check General Bots version compatibility
- Choose “Create new bot” instead of replace
- Rename bot ID if duplicate
Prompts not applying
Possible causes:
- Prompt not linked to bot
- Variable not defined
- Syntax error in prompt
Solution:
- Check AI Settings → System Prompt selection
- Verify all {{variables}} have values
- Test prompt with “Test” button
- Check for unclosed brackets or quotes
BASIC Integration
Access Sources data from dialogs:
Get Bot Configuration
config = GET BOT CONFIG
TALK "Bot name: " + config.displayName
TALK "Language: " + config.language
Use Prompts
' Load a prompt template
prompt = GET PROMPT "summarize"
' Use with variables
summary = GENERATE WITH PROMPT prompt, content
TALK summary
Search Knowledge Base
HEAR question AS TEXT "What would you like to know?"
results = SEARCH KB question IN "support.gbkb"
IF COUNT(results) > 0 THEN
    TALK results[0].answer
    TALK "Source: " + results[0].source
ELSE
    TALK "I couldn't find information about that."
END IF
List Available Bots
bots = GET BOTS
TALK "Available bots:"
FOR EACH bot IN bots
    IF bot.status = "live" THEN
        TALK "● " + bot.displayName
    ELSE
        TALK "○ " + bot.displayName + " (draft)"
    END IF
NEXT
See Also
- Designer App - Visual flow builder
- Drive App - Upload KB documents
- How To: Create Your First Bot
- How To: Add Documents to Knowledge Base
Compliance - Security Scanner
Your privacy and security guardian
Overview
Compliance is the security and privacy management app in General Bots Suite. Monitor data handling, manage consent, respond to data subject requests, and ensure your bots comply with regulations like LGPD, GDPR, and CCPA. Compliance helps you protect user data and maintain trust.
Features
Compliance Dashboard
The dashboard gives you an at-a-glance view of your compliance status:
| Metric | Description |
|---|---|
| Overall Score | Percentage score with color indicator |
| Open Requests | Pending data subject requests |
| Data Breaches | Count in last 90 days |
| Consent Rate | Percentage of users with active consent |
Score Breakdown by Area:
- Data Protection
- Consent Management
- Access Controls
- Data Retention
- Breach Response
- Documentation
Score Meanings:
| Score | Status | Action Needed |
|---|---|---|
| 90-100% | ✓ Excellent | Maintain current practices |
| 70-89% | ⚠ Good | Address minor issues |
| 50-69% | ⚠ Fair | Prioritize improvements |
| Below 50% | ✗ Poor | Immediate action required |
Security Scanner
Automatically scan your bots and data for compliance issues.
Running a Scan
- Click Scan Now in the top right
- Select scan type:
- Quick - Basic checks (5 minutes)
- Full - Complete audit (30 minutes)
- Custom - Select specific areas
- Choose scan targets:
- All bots
- Knowledge bases
- User data
- Conversation logs
- External integrations
- Click Start Scan
Scan Results
Results are categorized by severity:
| Severity | Icon | Description |
|---|---|---|
| Critical | ✗ | Requires immediate attention |
| Warning | ⚠ | Should be addressed soon |
| Passed | ✓ | No issues found |
Common Issues Found:
- Unencrypted PII in logs
- Consent records needing renewal
- Missing retention policies
- Missing privacy policy links
Data Subject Requests (DSR)
Handle user requests for their data rights.
Request Types
| Type | Icon | Description | Deadline |
|---|---|---|---|
| Data Access | 📥 | User wants copy of their data | 15-30 days |
| Data Deletion | 🗑️ | User wants data erased | 15-30 days |
| Data Portability | 📤 | User wants data in machine format | 15-30 days |
| Rectification | ✏️ | User wants to correct data | 15-30 days |
| Processing Objection | ✋ | User objects to data processing | Immediate |
| Consent Withdrawal | 🚫 | User withdraws consent | Immediate |
Processing a Request
- Verify user identity
- Review data found:
- User Profile
- Conversation History
- Consent Records
- Activity Logs
- Generate data package (for access requests)
- Send to user or complete deletion
- Mark request as complete
Consent Management
Track and manage user consent.
Consent Types:
| Type | Required | Description |
|---|---|---|
| Terms of Service | Yes | Agreement to terms and conditions |
| Marketing | No | Promotional communications |
| Analytics | No | Usage data collection |
| Third-Party Sharing | No | Sharing with partners |
Consent Record Information:
- User ID and email
- Consent status (given/denied/withdrawn)
- Timestamp
- Collection method (web, chat, email)
- IP address and browser info
Data Mapping
See where personal data is stored:
| Category | Data Types | Storage Locations | Retention |
|---|---|---|---|
| Personal Identifiers | Names, emails, phones | Users table, conversation logs | 3 years |
| Communication Data | Messages, attachments | Conversation logs, MinIO, Qdrant | 1 year |
| Behavioral Data | Page views, clicks | Analytics events, preferences | 90 days |
Policy Management
Manage your compliance policies:
Policy Types:
- Privacy Policy
- Data Retention Policy
- Cookie Policy
Data Retention Rules:
| Data Type | Retention | Action |
|---|---|---|
| Conversation logs | 1 year | Auto-delete |
| User profiles | 3 years | Anonymize |
| Analytics data | 90 days | Auto-delete |
| Consent records | 5 years | Archive |
| Audit logs | 7 years | Archive |
Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| S | Start scan |
| R | View reports |
| D | Open data map |
| P | View policies |
| N | New request |
| / | Search |
| Ctrl+E | Export report |
| Escape | Close dialog |
Tips & Tricks
Staying Compliant
💡 Schedule regular scans - Weekly scans catch issues early
💡 Set up alerts - Get notified of critical issues immediately
💡 Document everything - Keep records of all compliance decisions
💡 Train your team - Everyone should understand data handling rules
Handling Requests
💡 Respond quickly - Start processing within 24 hours
💡 Verify identity - Confirm requestor is the data subject
💡 Be thorough - Check all data sources before responding
💡 Keep records - Document how each request was handled
Data Protection
💡 Minimize data collection - Only collect what you need
💡 Enable encryption - Protect data at rest and in transit
💡 Use anonymization - Remove PII when possible
💡 Regular audits - Review who has access to what data
Troubleshooting
Scan finds false positives
Possible causes:
- Pattern matching too aggressive
- Test data flagged as real PII
- Encrypted data misidentified
Solution:
- Review and dismiss false positives
- Add test data locations to exclusion list
- Configure scan sensitivity in settings
- Report issues to improve detection
DSR deadline approaching
Possible causes:
- Complex request requiring manual review
- Data spread across multiple systems
- Identity verification pending
Solution:
- Prioritize the request immediately
- Use automated data collection tools
- Contact user if verification needed
- Document reason if extension required
Consent not recording
Possible causes:
- Consent widget not configured
- JavaScript error on page
- Database connection issue
Solution:
- Check consent configuration in settings
- Test consent flow in preview mode
- Check error logs for issues
- Verify database connectivity
Data not deleting automatically
Possible causes:
- Retention policy not applied
- Scheduled job not running
- Data referenced by other records
Solution:
- Verify policy is active and applied to bot
- Check scheduled job status in settings
- Review dependencies that prevent deletion
- Manually delete if needed
BASIC Integration
Use Compliance features in your dialogs:
Check Consent
hasConsent = CHECK CONSENT user.id FOR "marketing"
IF hasConsent THEN
    TALK "I can send you our newsletter!"
ELSE
    TALK "Would you like to receive our newsletter?"
    HEAR response AS BOOLEAN
    IF response THEN
        RECORD CONSENT user.id FOR "marketing"
        TALK "Great! You're now subscribed."
    END IF
END IF
Request Data Access
TALK "I can help you access your personal data."
HEAR email AS EMAIL "Please confirm your email address"
IF email = user.email THEN
    request = CREATE DSR REQUEST
        TYPE "access"
        USER user.id
        EMAIL email
    TALK "Your request #" + request.id + " has been submitted."
    TALK "You'll receive your data within 15 days."
ELSE
    TALK "Email doesn't match. Please contact support."
END IF
Delete User Data
TALK "Are you sure you want to delete all your data?"
TALK "This action cannot be undone."
HEAR confirm AS BOOLEAN
IF confirm THEN
    request = CREATE DSR REQUEST
        TYPE "deletion"
        USER user.id
    TALK "Deletion request submitted: #" + request.id
    TALK "Your data will be deleted within 30 days."
ELSE
    TALK "No problem. Your data remains safe."
END IF
Log Compliance Event
' Log when sensitive data is accessed
LOG COMPLIANCE EVENT
    TYPE "data_access"
    USER user.id
    DATA_TYPE "order_history"
    REASON "User requested order status"
    BOT "support"
TALK "Here's your order history..."
API Endpoint: /api/compliance
The Compliance API allows programmatic access to compliance features.
Endpoints Summary
| Endpoint | Method | Description |
|---|---|---|
| /api/compliance/scan | POST | Start a compliance scan |
| /api/compliance/scan/{id} | GET | Get scan results |
| /api/compliance/dsr | POST | Create DSR request |
| /api/compliance/dsr/{id} | GET | Get DSR status |
| /api/compliance/consent | POST | Record consent |
| /api/compliance/consent/{userId} | GET | Get user consent |
| /api/compliance/report | GET | Generate compliance report |
Authentication
All endpoints require API key authentication:
Authorization: Bearer your-api-key
Example: Check User Consent
GET /api/compliance/consent/usr_abc123
Response:
{
"userId": "usr_abc123",
"consents": [
{
"type": "terms_of_service",
"status": "given",
"timestamp": "2025-01-15T10:32:00Z"
},
{
"type": "marketing",
"status": "withdrawn",
"timestamp": "2025-03-22T15:15:00Z"
}
]
}
See Also
- Compliance API Reference - Full API documentation
- Analytics App - Monitor compliance metrics
- Sources App - Configure bot policies
- How To: Monitor Your Bot
Compliance API Reference
Programmatic access to privacy and compliance features
Overview
The Compliance API allows you to programmatically manage data subject requests, consent records, and compliance scanning. Use this API to integrate privacy features into your applications or automate compliance workflows.
Base URL: https://your-server.com/api/compliance
Authentication
All API requests require authentication using a Bearer token:
Authorization: Bearer your-api-key
Get your API key from Settings → API Keys → Create New Key with compliance scope.
Endpoints
Data Subject Requests (DSR)
List All Requests
GET /api/compliance/dsr
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| status | string | Filter by status: pending, processing, completed, rejected |
| type | string | Filter by type: access, deletion, rectification, portability, objection |
| from | date | Start date (YYYY-MM-DD) |
| to | date | End date (YYYY-MM-DD) |
| limit | number | Results per page (default: 20, max: 100) |
| offset | number | Pagination offset |
Example Request:
GET /api/compliance/dsr?status=pending&limit=10
Example Response:
{
"total": 7,
"limit": 10,
"offset": 0,
"requests": [
{
"id": "DSR-2025-0142",
"type": "access",
"status": "pending",
"userId": "usr_abc123",
"email": "john.doe@email.com",
"submittedAt": "2025-05-13T10:30:00Z",
"dueDate": "2025-05-28T10:30:00Z",
"assignee": null
},
{
"id": "DSR-2025-0141",
"type": "deletion",
"status": "processing",
"userId": "usr_def456",
"email": "sarah@company.com",
"submittedAt": "2025-05-10T14:15:00Z",
"dueDate": "2025-05-25T14:15:00Z",
"assignee": "admin@company.com"
}
]
}
Get Single Request
GET /api/compliance/dsr/{id}
Example Response:
{
"id": "DSR-2025-0142",
"type": "access",
"status": "pending",
"userId": "usr_abc123",
"email": "john.doe@email.com",
"name": "John Doe",
"submittedAt": "2025-05-13T10:30:00Z",
"dueDate": "2025-05-28T10:30:00Z",
"assignee": null,
"message": "I would like a copy of all my data",
"verifiedAt": "2025-05-13T10:35:00Z",
"dataFound": {
"profile": true,
"conversations": true,
"consents": true,
"activityLogs": true
},
"history": [
{
"action": "created",
"timestamp": "2025-05-13T10:30:00Z",
"actor": "system"
},
{
"action": "verified",
"timestamp": "2025-05-13T10:35:00Z",
"actor": "system"
}
]
}
Create Request
POST /api/compliance/dsr
Request Body:
| Field | Type | Required | Description |
|---|---|---|---|
| type | string | Yes | access, deletion, rectification, portability, objection |
| email | string | Yes | User’s email address |
| userId | string | No | User ID if known |
| message | string | No | User’s message/reason |
| skipVerification | boolean | No | Skip email verification (default: false) |
Example Request:
POST /api/compliance/dsr
Content-Type: application/json
{
"type": "access",
"email": "john.doe@email.com",
"message": "Please provide all my personal data"
}
Example Response:
{
"id": "DSR-2025-0143",
"type": "access",
"status": "pending_verification",
"email": "john.doe@email.com",
"submittedAt": "2025-05-15T14:00:00Z",
"dueDate": "2025-05-30T14:00:00Z",
"verificationSent": true
}
Update Request Status
PATCH /api/compliance/dsr/{id}
Request Body:
| Field | Type | Description |
|---|---|---|
| status | string | processing, completed, rejected |
| assignee | string | Email of person handling request |
| notes | string | Internal notes |
| rejectionReason | string | Required if status is rejected |
Example Request:
PATCH /api/compliance/dsr/DSR-2025-0142
Content-Type: application/json
{
"status": "processing",
"assignee": "admin@company.com"
}
Complete Request (with data package)
POST /api/compliance/dsr/{id}/complete
Request Body:
| Field | Type | Description |
|---|---|---|
| notifyUser | boolean | Send completion email (default: true) |
| dataPackageUrl | string | URL to downloadable data (for access/portability) |
| expiresAt | datetime | When download link expires |
Example Request:
POST /api/compliance/dsr/DSR-2025-0142/complete
Content-Type: application/json
{
"notifyUser": true,
"dataPackageUrl": "https://secure.company.com/data/abc123.zip",
"expiresAt": "2025-06-15T00:00:00Z"
}
Consent Management
Get User Consent
GET /api/compliance/consent/{userId}
Example Response:
{
"userId": "usr_abc123",
"email": "john.doe@email.com",
"consents": [
{
"type": "terms_of_service",
"status": "given",
"version": "2.3",
"timestamp": "2025-01-15T10:32:00Z",
"method": "web_form",
"ip": "192.168.1.100"
},
{
"type": "marketing",
"status": "given",
"timestamp": "2025-01-15T10:32:00Z",
"method": "web_form"
},
{
"type": "analytics",
"status": "withdrawn",
"timestamp": "2025-03-22T15:15:00Z",
"method": "preference_center"
}
]
}
Record Consent
POST /api/compliance/consent
Request Body:
| Field | Type | Required | Description |
|---|---|---|---|
| userId | string | Yes | User identifier |
| email | string | Yes | User’s email |
| type | string | Yes | Consent type (e.g., marketing, analytics) |
| status | string | Yes | given or withdrawn |
| method | string | No | How consent was collected |
| ip | string | No | User’s IP address |
| userAgent | string | No | User’s browser |
Example Request:
POST /api/compliance/consent
Content-Type: application/json
{
"userId": "usr_abc123",
"email": "john.doe@email.com",
"type": "marketing",
"status": "given",
"method": "chatbot",
"ip": "192.168.1.100"
}
Example Response:
{
"success": true,
"consentId": "con_xyz789",
"userId": "usr_abc123",
"type": "marketing",
"status": "given",
"timestamp": "2025-05-15T14:30:00Z"
}
Withdraw Consent
DELETE /api/compliance/consent/{userId}/{type}
Example Request:
DELETE /api/compliance/consent/usr_abc123/marketing
Example Response:
{
"success": true,
"userId": "usr_abc123",
"type": "marketing",
"status": "withdrawn",
"timestamp": "2025-05-15T14:35:00Z"
}
List Consent Types
GET /api/compliance/consent-types
Example Response:
{
"consentTypes": [
{
"id": "terms_of_service",
"name": "Terms of Service",
"required": true,
"description": "Agreement to terms and conditions",
"currentVersion": "2.3"
},
{
"id": "marketing",
"name": "Marketing Communications",
"required": false,
"description": "Receive promotional emails and offers"
},
{
"id": "analytics",
"name": "Analytics & Improvement",
"required": false,
"description": "Help us improve by analyzing usage patterns"
}
]
}
Compliance Scanning
Start a Scan
POST /api/compliance/scan
Request Body:
| Field | Type | Description |
|---|---|---|
| type | string | quick, full, or custom |
| targets | array | For custom: ["bots", "kb", "users", "logs"] |
| botId | string | Scan specific bot only |
Example Request:
POST /api/compliance/scan
Content-Type: application/json
{
"type": "full",
"targets": ["bots", "kb", "users", "logs"]
}
Example Response:
{
"scanId": "scan_20250515_001",
"status": "running",
"type": "full",
"startedAt": "2025-05-15T14:45:00Z",
"estimatedDuration": "30 minutes"
}
Get Scan Status
GET /api/compliance/scan/{scanId}
Example Response (In Progress):
{
"scanId": "scan_20250515_001",
"status": "running",
"progress": 45,
"currentStep": "Scanning conversation logs",
"startedAt": "2025-05-15T14:45:00Z"
}
Example Response (Complete):
{
"scanId": "scan_20250515_001",
"status": "completed",
"progress": 100,
"startedAt": "2025-05-15T14:45:00Z",
"completedAt": "2025-05-15T15:12:00Z",
"summary": {
"totalChecks": 148,
"passed": 145,
"warnings": 2,
"critical": 1
},
"issues": [
{
"severity": "critical",
"type": "unencrypted_pii",
"description": "Unencrypted PII found in conversation logs",
"location": "support-bot/logs/2025-05-10",
"affectedRecords": 23,
"recommendation": "Enable automatic PII redaction"
},
{
"severity": "warning",
"type": "consent_expiring",
"description": "Consent records older than 2 years",
"affectedUsers": 12,
"recommendation": "Send consent renewal requests"
}
]
}
Get Latest Scan Results
GET /api/compliance/scan/latest
Returns the most recent completed scan results.
Reports
Generate Compliance Report
POST /api/compliance/report
Request Body:
| Field | Type | Description |
|---|---|---|
| type | string | summary, detailed, audit |
| period | string | last_30_days, last_90_days, year, custom |
| from | date | Start date for custom period |
| to | date | End date for custom period |
| format | string | json, pdf, csv |
Example Request:
POST /api/compliance/report
Content-Type: application/json
{
"type": "summary",
"period": "last_30_days",
"format": "json"
}
Example Response:
{
"reportId": "rpt_20250515_001",
"generatedAt": "2025-05-15T15:00:00Z",
"period": {
"from": "2025-04-15",
"to": "2025-05-15"
},
"summary": {
"overallScore": 92,
"dsrRequests": {
"received": 15,
"completed": 12,
"pending": 3,
"averageResponseDays": 8.5
},
"consentRate": 94.2,
"dataBreaches": 0,
"scansPerformed": 4,
"issuesFound": 7,
"issuesResolved": 5
}
}
Download Report
GET /api/compliance/report/{reportId}/download
Returns the report file in the requested format.
Data Deletion
Delete User Data
DELETE /api/compliance/user/{userId}/data
Query Parameters:
| Parameter | Type | Description |
|---|---|---|
| scope | string | all, conversations, profile, analytics |
| confirm | boolean | Must be true to execute |
Example Request:
DELETE /api/compliance/user/usr_abc123/data?scope=all&confirm=true
Example Response:
{
"success": true,
"userId": "usr_abc123",
"deletedAt": "2025-05-15T15:30:00Z",
"scope": "all",
"itemsDeleted": {
"profile": 1,
"conversations": 45,
"consents": 3,
"activityLogs": 234
},
"retainedForLegal": {
"auditLogs": 15
}
}
Error Responses
All errors follow this format:
{
"error": {
"code": "ERROR_CODE",
"message": "Human readable message",
"details": {}
}
}
Common Error Codes:
| Code | HTTP Status | Description |
|---|---|---|
| UNAUTHORIZED | 401 | Invalid or missing API key |
| FORBIDDEN | 403 | API key lacks required scope |
| NOT_FOUND | 404 | Resource not found |
| VALIDATION_ERROR | 400 | Invalid request parameters |
| RATE_LIMITED | 429 | Too many requests |
| INTERNAL_ERROR | 500 | Server error |
Rate Limits
| Endpoint | Limit |
|---|---|
| All endpoints | 100 requests/minute |
| Scan endpoints | 5 requests/hour |
| Report generation | 10 requests/hour |
Rate limit headers are included in responses:
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1621234567
Webhooks
Configure webhooks to receive real-time notifications.
Available Events:
| Event | Description |
|---|---|
| dsr.created | New DSR submitted |
| dsr.completed | DSR marked complete |
| dsr.due_soon | DSR due within 3 days |
| consent.changed | User consent updated |
| scan.completed | Compliance scan finished |
| issue.critical | Critical issue detected |
Webhook Payload Example:
POST https://your-server.com/webhook
Content-Type: application/json
X-Signature: sha256=...
{
"event": "dsr.created",
"timestamp": "2025-05-15T14:00:00Z",
"data": {
"id": "DSR-2025-0143",
"type": "access",
"email": "user@example.com"
}
}
See Also
- Compliance App - User interface guide
- How To: Configure Compliance
- BASIC Compliance Keywords
How To… Tutorials
📖 Step-by-Step Guides for General Bots Suite
Clear instructions for common tasks, inspired by classic computer manuals
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ╔═══════════════════════════════════════════════════════════════╗ │
│ ║ ║ │
│ ║ ██╗ ██╗ ██████╗ ██╗ ██╗ ████████╗ ██████╗ ║ │
│ ║ ██║ ██║██╔═══██╗██║ ██║ ╚══██╔══╝██╔═══██╗ ║ │
│ ║ ███████║██║ ██║██║ █╗ ██║ ██║ ██║ ██║ ║ │
│ ║ ██╔══██║██║ ██║██║███╗██║ ██║ ██║ ██║ ║ │
│ ║ ██║ ██║╚██████╔╝╚███╔███╔╝ ██║ ╚██████╔╝ ██╗ ║ │
│ ║ ╚═╝ ╚═╝ ╚═════╝ ╚══╝╚══╝ ╚═╝ ╚═════╝ ╚═╝ ║ │
│ ║ ║ │
│ ╚═══════════════════════════════════════════════════════════════╝ │
│ │
│ Your Guide to General Bots Suite │
│ │
└─────────────────────────────────────────────────────────────────────────┘
About These Tutorials
These tutorials are designed like the classic computer manuals of the early 1990s — clear, numbered steps with visual diagrams showing exactly what to do. Each tutorial follows the same format:
- Objective — What you’ll accomplish
- Time Required — How long it takes
- Prerequisites — What you need before starting
- Steps — Numbered instructions with screenshots
- Troubleshooting — Common problems and solutions
- Next Steps — What to learn next
🚀 Getting Started
| Tutorial | Description | Time |
|---|---|---|
| Create Your First Bot | Set up a working bot from scratch | 10 min |
| Your First Conversation | Talk to your bot and understand responses | 5 min |
| Understanding the Interface | Navigate the Suite like a pro | 10 min |
💬 Chat & Conversations
| Tutorial | Description | Time |
|---|---|---|
| Ask Questions Effectively | Get better answers from your AI assistant | 5 min |
| Use Voice Input | Talk instead of type | 3 min |
| Create Chat Shortcuts | Set up quick commands | 10 min |
| Export Conversations | Save chat history for reference | 5 min |
📁 File Management (Drive)
| Tutorial | Description | Time |
|---|---|---|
| Upload Your First File | Add documents to Drive | 3 min |
| Organize with Folders | Create a logical file structure | 10 min |
| Share Files Securely | Grant access to team members | 5 min |
| Search for Documents | Find files instantly | 5 min |
📚 Knowledge Base
| Tutorial | Description | Time |
|---|---|---|
| Add Documents to Knowledge Base | Teach your bot from files | 15 min |
| Import a Website | Crawl and learn from web pages | 10 min |
| Create FAQ Responses | Define question-answer pairs | 15 min |
| Manage Collections | Organize knowledge by topic | 10 min |
🔧 BASIC Dialogs
| Tutorial | Description | Time |
|---|---|---|
| Write Your First Dialog | Create a simple conversation script | 20 min |
| Use HEAR and TALK | Gather and display information | 10 min |
| Store User Information | Remember data between conversations | 15 min |
| Call External APIs | Connect to web services | 20 min |
| Send Automated Messages | Schedule broadcasts and reminders | 15 min |
📱 Messaging Channels
| Tutorial | Description | Time |
|---|---|---|
| Connect WhatsApp | Set up WhatsApp Business integration | 30 min |
| Configure Email | Enable email conversations | 15 min |
| Set Up SMS | Add text message support | 15 min |
| Embed Web Chat | Add chat to your website | 10 min |
📊 Analytics & Monitoring
| Tutorial | Description | Time |
|---|---|---|
| View Bot Statistics | Understand usage metrics | 10 min |
| Monitor Live Sessions | Watch conversations in real-time | 10 min |
| Create Custom Reports | Build dashboards for insights | 20 min |
| Export Analytics Data | Download metrics for external analysis | 10 min |
🎨 Customization
| Tutorial | Description | Time |
|---|---|---|
| Change Your Bot’s Theme | Customize colors and appearance | 10 min |
| Add a Custom Logo | Brand your bot interface | 5 min |
| Create Custom Cards | Design rich message layouts | 20 min |
| Modify the Welcome Message | Personalize the first interaction | 5 min |
🏢 Templates
| Tutorial | Description | Time |
|---|---|---|
| Install the CRM Template | Set up customer relationship management | 20 min |
| Use the HR Template | Deploy employee self-service | 15 min |
| Configure Compliance Bot | Enable privacy request handling | 25 min |
| Build from Template | Customize a template for your needs | 30 min |
🔒 Security & Administration
| Tutorial | Description | Time |
|---|---|---|
| Set Up User Authentication | Enable secure login | 20 min |
| Configure Permissions | Control who can do what | 15 min |
| Enable Audit Logging | Track all system activities | 10 min |
| Backup Your Bot | Protect your configuration and data | 15 min |
🐛 Troubleshooting
| Tutorial | Description | Time |
|---|---|---|
| Debug Dialog Errors | Fix common BASIC script problems | 15 min |
| Resolve Connection Issues | Troubleshoot network problems | 10 min |
| Fix Knowledge Base Gaps | Improve bot answers | 20 min |
| Performance Optimization | Make your bot faster | 15 min |
Quick Reference Card
┌─────────────────────────────────────────────────────────────────────────┐
│ QUICK REFERENCE CARD │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ KEYBOARD SHORTCUTS │
│ ───────────────── │
│ Ctrl + Enter ........ Send message │
│ Ctrl + K ............ Quick command palette │
│ Ctrl + / ............ Toggle voice input │
│ Ctrl + N ............ New item (context-aware) │
│ Ctrl + S ............ Save current work │
│ Escape .............. Close dialog/panel │
│ │
│ COMMON BASIC KEYWORDS │
│ ──────────────────── │
│ TALK "message" ...... Display message to user │
│ HEAR variable ....... Wait for user input │
│ SET BOT MEMORY ...... Store bot-wide data │
│ GET USER MEMORY ..... Retrieve user-specific data │
│ USE KB "name" ....... Activate knowledge base │
│ SEND MAIL ........... Send email notification │
│ │
│ WHERE TO GET HELP │
│ ───────────────── │
│ • Type "help" in Chat for assistance │
│ • Press F1 anywhere for context help │
│ • Visit community.pragmatismo.com for forums │
│ • Email support@pragmatismo.com for enterprise support │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Tips for Following Tutorials
✅ Before You Start
- Bookmark this page — You’ll return to it often
- Have the Suite open — Follow along step-by-step
- Take notes — Write down customizations you make
- Don’t skip steps — Each step builds on the previous
✅ While Following Steps
- Read the entire step first — Then perform the action
- Match the screenshots — If your screen looks different, stop and check
- Use exact values — Type what the tutorial shows, then customize later
- Check results — Verify each step worked before moving on
✅ If Something Goes Wrong
- Don’t panic — Most problems have simple solutions
- Re-read the step — You may have missed something
- Check Troubleshooting — Each tutorial has a help section
- Ask for help — The Chat assistant can guide you
Icon Legend
Throughout these tutorials, you’ll see these indicators:
| Icon | Meaning |
|---|---|
| 💡 | Tip — Helpful suggestion to work more efficiently |
| ⚠️ | Warning — Important caution to avoid problems |
| 📝 | Note — Additional information or context |
| ✅ | Checkpoint — Verify your progress before continuing |
| 🔧 | Configuration — Settings you may need to adjust |
Version Information
These tutorials are written for:
- General Bots Suite version 5.0+
- Browser: Chrome, Firefox, Safari, or Edge (latest versions)
- Last Updated: 2025
If you’re using an older version, some screens may look different.
“The best way to learn is to do.”
Start with Create Your First Bot →
How To: Create Your First Bot
Tutorial 1 of the Getting Started Series
Follow these simple steps to create a working bot in 10 minutes
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ 🤖 CREATE YOUR FIRST BOT │ │
│ │ │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │ Step │───▶│ Step │───▶│ Step │───▶│ Step │ │ │
│ │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ │
│ │ │ Access │ │ Create │ │Configure│ │ Test │ │ │
│ │ │ Suite │ │ Bot │ │ Bot │ │ Bot │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Objective
By the end of this tutorial, you will have:
- Created a new bot instance
- Configured basic settings
- Written a simple greeting
- Tested your bot by talking to it
Time Required
⏱️ 10 minutes
Prerequisites
Before you begin, make sure you have:
- Access to General Bots Suite (URL provided by your administrator)
- A web browser (Chrome, Firefox, Safari, or Edge)
- Administrator or Bot Creator permissions
Step 1: Access the Suite
1.1 Open Your Browser
Launch your preferred web browser by clicking its icon.
┌─────────────────────────────────────────────────────────────────────────┐
│ 🌐 Browser [─][□][×]│
├─────────────────────────────────────────────────────────────────────────┤
│ ← → ↻ │ https://your-company.bot:8080 │ ☆ │ │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Loading... │
│ │
└─────────────────────────────────────────────────────────────────────────┘
1.2 Navigate to Your General Bots URL
Type your General Bots address in the address bar and press Enter.
💡 Tip: Your URL will look something like:
- http://localhost:8080 (development)
- https://bots.yourcompany.com (production)
- https://app.pragmatismo.cloud (cloud hosted)
1.3 Log In (If Required)
If you see a login screen:
- Enter your username or email
- Enter your password
- Click Sign In
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ┌────────────────────────────┐ │
│ │ 🤖 General Bots │ │
│ │ │ │
│ │ Username: │ │
│ │ ┌────────────────────┐ │ │
│ │ │ admin@company.com │ │ │
│ │ └────────────────────┘ │ │
│ │ │ │
│ │ Password: │ │
│ │ ┌────────────────────┐ │ │
│ │ │ •••••••••••• │ │ │
│ │ └────────────────────┘ │ │
│ │ │ │
│ │ ┌────────────────────┐ │ │
│ │ │ Sign In ──► │ │ │
│ │ └────────────────────┘ │ │
│ │ │ │
│ └────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
✅ Checkpoint: You should now see the General Bots Suite interface.
Step 2: Create a New Bot
2.1 Open the Apps Menu
Click the nine-dot grid icon (⋮⋮⋮) in the top-right corner of the screen.
┌─────────────────────────────────────────────────────────────────────────┐
│ 🤖 General Bots [⋮⋮⋮] ◄── Click here │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ │
2.2 Select “Sources”
From the apps menu that appears, click Sources.
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ┌───────────────────┐ │
│ │ 💬 Chat │ │
│ │ 📁 Drive │ │
│ │ ✓ Tasks │ │
│ │ ✉ Mail │ │
│ │ 📝 Paper │ │
│ │ 📊 Analytics │ │
│ │ ▶ 📋 Sources ◀───┼─── Click here │
│ │ 🎨 Designer │ │
│ │ ⚙️ Settings │ │
│ └───────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
2.3 Click “New Bot”
In the Sources application, locate and click the New Bot button.
┌─────────────────────────────────────────────────────────────────────────┐
│ Sources │
├─────────────────────────────────────────────────────────────────────────┤
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Templates │ │ Prompts │ │ Bots │ ◄── Active Tab │
│ └──────────────┘ └──────────────┘ └──────────────┘ │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Your Bots ┌─────────────────┐ │
│ ───────── │ ➕ New Bot │ ◄── Click │
│ └─────────────────┘ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ No bots yet. Create your first bot! │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
2.4 Enter Bot Details
A dialog box will appear. Fill in the following fields:
| Field | What to Enter | Example |
|---|---|---|
| Bot Name | A unique identifier (no spaces) | mycompany |
| Display Name | Friendly name shown to users | My Company Assistant |
| Description | What your bot does | Helps employees find information |
| Template | Starting point (select from dropdown) | default |
┌─────────────────────────────────────────────────────────────────────────┐
│ Create New Bot [×] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Bot Name * │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ mycompany │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ ⚠️ Use lowercase letters, numbers, and hyphens only │
│ │
│ Display Name * │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ My Company Assistant │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ Description │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Helps employees find information and complete tasks │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ Template │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ default [▼] │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────┐ ┌──────────────────┐ │
│ │ Cancel │ │ Create Bot ──► │ │
│ └──────────┘ └──────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
2.5 Click “Create Bot”
Click the Create Bot button to create your bot.
💡 Tip: The bot creation process takes a few seconds. You’ll see a progress indicator.
✅ Checkpoint: Your new bot should appear in the bot list.
Step 3: Configure Basic Settings
3.1 Open Bot Settings
Click on your new bot to select it, then click Settings (or the ⚙️ icon).
┌─────────────────────────────────────────────────────────────────────────┐
│ Your Bots │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 🤖 mycompany [⚙️] │◄──│
│ │ My Company Assistant │ │
│ │ Status: ● Active │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
│
Click the ⚙️ icon
3.2 Set the Welcome Message
Find the Welcome Message field and enter a friendly greeting:
Welcome Message:
┌─────────────────────────────────────────────────────────────────────────┐
│ Hello! 👋 I'm your Company Assistant. I can help you with: │
│ │
│ • Finding documents and information │
│ • Answering questions about policies │
│ • Creating tasks and reminders │
│ │
│ How can I help you today? │
└─────────────────────────────────────────────────────────────────────────┘
3.3 Configure AI Model (Optional)
If you have API keys for AI services, configure them:
| Setting | Description | Example Value |
|---|---|---|
| LLM Provider | AI service to use | anthropic |
| Model | Specific model | claude-sonnet-4.5 |
| API Key | Your API key | sk-... |
⚠️ Warning: Keep your API keys secret. Never share them.
3.4 Save Settings
Click the Save button to save your configuration.
┌─────────────────────────────────────────────────────────────────────────┐
│ Bot Settings [×] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ [General] [AI Model] [Channels] [Advanced] │
│ │
│ ───────────────────────────────────────────────────────────────── │
│ │
│ ┌────────────────────┐ │
│ │ 💾 Save │◄────│
│ └────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Click Save
✅ Checkpoint: Your settings are saved. The bot is ready to test.
Step 4: Test Your Bot
4.1 Open Chat
Click the Chat app from the Apps Menu (⋮⋮⋮).
4.2 Select Your Bot
If you have multiple bots, select yours from the bot dropdown:
┌─────────────────────────────────────────────────────────────────────────┐
│ 💬 Chat [mycompany ▼] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 🤖 My Company Assistant │ │
│ │ │ │
│ │ Hello! 👋 I'm your Company Assistant. I can help │ │
│ │ you with: │ │
│ │ │ │
│ │ • Finding documents and information │ │
│ │ • Answering questions about policies │ │
│ │ • Creating tasks and reminders │ │
│ │ │ │
│ │ How can I help you today? │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
├─────────────────────────────────────────────────────────────────────────┤
│ Type your message... [↑] │
└─────────────────────────────────────────────────────────────────────────┘
4.3 Send a Test Message
Type a simple message and press Enter:
You: Hello!
4.4 Verify the Response
Your bot should respond! If it does, congratulations — your bot is working!
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 👤 You │ │
│ │ Hello! │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 🤖 My Company Assistant │ │
│ │ Hello! How can I assist you today? │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
✅ Checkpoint: Your bot responds to messages. Setup complete!
🎉 Congratulations!
You have successfully created your first bot! Here’s what you accomplished:
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ✓ Accessed General Bots Suite │
│ ✓ Created a new bot instance │
│ ✓ Configured basic settings │
│ ✓ Tested the bot with a conversation │
│ │
│ Your bot "mycompany" is now ready to use! │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Troubleshooting
Problem: “Create Bot” button is disabled
Cause: Required fields are empty or invalid.
Solution:
- Check that Bot Name contains only lowercase letters, numbers, and hyphens
- Ensure Display Name is not empty
- Verify a template is selected
Problem: Bot doesn’t respond
Cause: AI model not configured or API key invalid.
Solution:
- Open bot settings
- Verify AI model configuration
- Check that API key is correct
- Ensure you have API credits remaining
Problem: “Permission denied” error
Cause: Your account doesn’t have bot creation rights.
Solution:
- Contact your administrator
- Request “Bot Creator” or “Administrator” role
Problem: Page won’t load
Cause: Network or server issue.
Solution:
- Check your internet connection
- Try refreshing the page (F5 or Ctrl+R)
- Clear browser cache
- Contact your system administrator
What You Learned
In this tutorial, you learned:
| Concept | Description |
|---|---|
| Bot Instance | A unique bot with its own configuration |
| Bot Name | Technical identifier used internally |
| Display Name | Friendly name shown to users |
| Template | Pre-built starting point for your bot |
| Welcome Message | First message users see |
Next Steps
Now that you have a working bot, continue learning:
| Next Tutorial | What You’ll Learn |
|---|---|
| Your First Conversation | Understanding how conversations work |
| Add Knowledge Base Documents | Teaching your bot from files |
| Write Your First Dialog | Creating custom conversation flows |
Quick Reference
Bot Naming Rules
- ✅ mycompany — Good
- ✅ hr-assistant — Good
- ✅ support2024 — Good
- ❌ My Company — No spaces
- ❌ HR_Bot — No underscores
- ❌ Support@2024 — No special characters
Essential Settings Checklist
- Bot Name (unique identifier)
- Display Name (user-friendly)
- Welcome Message (first impression)
- AI Model (for responses)
- Language (for localization)
Tutorial 1 of 30 • Back to How-To Index • Next: Your First Conversation →
How To: Write Your First Dialog
Tutorial 5 of the BASIC Dialogs Series
Create a simple conversation script in 20 minutes
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ 📝 WRITE YOUR FIRST DIALOG │ │
│ │ │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │ Step │───▶│ Step │───▶│ Step │───▶│ Step │ │ │
│ │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ │
│ │ │ Create │ │ Write │ │ Test │ │ Enhance │ │ │
│ │ │ File │ │ Code │ │ Dialog │ │ Logic │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Objective
By the end of this tutorial, you will have:
- Created a .bas dialog file
- Written code using TALK and HEAR keywords
- Used conditional logic (IF/THEN/ELSE)
- Stored and retrieved user information
- Tested your dialog in the chat interface
Time Required
⏱️ 20 minutes
Prerequisites
Before you begin, make sure you have:
- A working bot (see Create Your First Bot)
- Access to the Designer or Drive app
- Basic understanding of the chat interface
What is a Dialog?
A dialog is a conversation script written in BASIC that controls how your bot talks with users. Think of it like a script for a play — you write what the bot should say and how it should respond to the user.
┌─────────────────────────────────────────────────────────────────────────┐
│ HOW DIALOGS WORK │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ User says: "Hello" │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Dialog Script │ ◄── Your BASIC code runs here │
│ │ (greeting.bas) │ │
│ └────────┬────────┘ │
│ │ │
│ ▼ │
│ Bot says: "Hi there! What's your name?" │
│ │ │
│ ▼ │
│ User says: "Sarah" │
│ │ │
│ ▼ │
│ Bot says: "Nice to meet you, Sarah!" │
│ │
└─────────────────────────────────────────────────────────────────────────┘
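The exchange in the diagram above takes only three lines of BASIC; you will build exactly this file in the steps below:
TALK "Hi there! What's your name?"
HEAR name
TALK "Nice to meet you, " + name + "!"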
Step 1: Create the Dialog File
1.1 Open the Drive App
Click the Apps Menu (⋮⋮⋮) and select Drive.
┌─────────────────────────────────────────────────────────────────────────┐
│ 📁 Drive │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ 📂 mycompany.gbai │
│ ├── 📂 mycompany.gbdialog ◄── Dialog files go here │
│ ├── 📂 mycompany.gbot │
│ ├── 📂 mycompany.gbkb │
│ └── 📂 mycompany.gbdrive │
│ │
└─────────────────────────────────────────────────────────────────────────┘
1.2 Navigate to the Dialog Folder
Double-click mycompany.gbai, then mycompany.gbdialog.
1.3 Create a New File
Click New File (or press Ctrl+N) and name it:
greeting.bas
⚠️ Warning: The file must end with .bas to be recognized as a dialog.
┌─────────────────────────────────────────────────────────────────────────┐
│ New File [×] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ File Name: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ greeting.bas │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ Location: mycompany.gbai / mycompany.gbdialog / │
│ │
│ ┌──────────┐ ┌──────────────────┐ │
│ │ Cancel │ │ Create ──► │ │
│ └──────────┘ └──────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
✅ Checkpoint: You should see greeting.bas in your dialog folder.
Step 2: Write the Dialog Code
2.1 Open the File for Editing
Double-click greeting.bas to open it in the editor.
2.2 Write Your First Line
Type the following code:
TALK "Hello! Welcome to our service. 👋"
This is the simplest possible dialog — the bot just says one thing.
2.3 Add User Input
Now let’s ask for the user’s name:
TALK "Hello! Welcome to our service. 👋"
TALK "What is your name?"
HEAR name
TALK "Nice to meet you, " + name + "!"
Let’s break this down:
| Line | What It Does |
|---|---|
| `TALK "..."` | Bot displays a message |
| `HEAR name` | Bot waits for user input, stores it in `name` |
| `"..." + name + "..."` | Combines text with the variable |
2.4 The Complete First Dialog
Here’s your complete greeting.bas:
' ============================================
' GREETING DIALOG
' A friendly welcome conversation
' ============================================
' Greet the user
TALK "Hello! Welcome to our service. 👋"
' Ask for their name
TALK "What is your name?"
HEAR name
' Respond with their name
TALK "Nice to meet you, " + name + "!"
TALK "How can I help you today?"
💡 Tip: Lines starting with ' are comments — they’re ignored by the bot but help you understand the code.
┌─────────────────────────────────────────────────────────────────────────┐
│ 📝 greeting.bas [Save] ⌘S │
├─────────────────────────────────────────────────────────────────────────┤
│ 1 │ ' ============================================ │
│ 2 │ ' GREETING DIALOG │
│ 3 │ ' A friendly welcome conversation │
│ 4 │ ' ============================================ │
│ 5 │ │
│ 6 │ ' Greet the user │
│ 7 │ TALK "Hello! Welcome to our service. 👋" │
│ 8 │ │
│ 9 │ ' Ask for their name │
│ 10 │ TALK "What is your name?" │
│ 11 │ HEAR name │
│ 12 │ │
│ 13 │ ' Respond with their name │
│ 14 │ TALK "Nice to meet you, " + name + "!" │
│ 15 │ TALK "How can I help you today?" │
│ │ │
└─────────────────────────────────────────────────────────────────────────┘
2.5 Save the File
Press Ctrl+S or click the Save button.
✅ Checkpoint: Your dialog file is saved and ready to test.
Step 3: Test Your Dialog
3.1 Open Chat
Click the Apps Menu (⋮⋮⋮) and select Chat.
3.2 Trigger the Dialog
Type the command to run your dialog:
/greeting
Or simply type something that matches “greeting” — the system will recognize it.
3.3 Have the Conversation
Watch your dialog run:
┌─────────────────────────────────────────────────────────────────────────┐
│ 💬 Chat │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 👤 You │ │
│ │ /greeting │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 🤖 Bot │ │
│ │ Hello! Welcome to our service. 👋 │ │
│ │ What is your name? │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 👤 You │ │
│ │ Sarah │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 🤖 Bot │ │
│ │ Nice to meet you, Sarah! │ │
│ │ How can I help you today? │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
├─────────────────────────────────────────────────────────────────────────┤
│ Type your message... [↑] │
└─────────────────────────────────────────────────────────────────────────┘
✅ Checkpoint: Your dialog runs and responds correctly!
Step 4: Enhance with Logic
Now let’s make our dialog smarter with conditional logic.
4.1 Add Input Validation
Update your dialog to handle different types of input:
' ============================================
' GREETING DIALOG (Enhanced)
' A friendly welcome with input validation
' ============================================
TALK "Hello! Welcome to our service. 👋"
TALK "What is your name?"
HEAR name
' Check if name was provided
IF name = "" THEN
TALK "I didn't catch your name. That's okay!"
name = "friend"
END IF
TALK "Nice to meet you, " + name + "!"
4.2 Add Menu Options
Let’s give the user choices:
' ============================================
' GREETING DIALOG (Full Version)
' Welcome with menu options
' ============================================
TALK "Hello! Welcome to our service. 👋"
TALK "What is your name?"
HEAR name
IF name = "" THEN
name = "friend"
END IF
TALK "Nice to meet you, " + name + "!"
TALK ""
TALK "How can I help you today?"
TALK "1. Learn about our services"
TALK "2. Contact support"
TALK "3. Check my account"
TALK ""
TALK "Please type 1, 2, or 3:"
HEAR choice
SELECT CASE choice
CASE "1"
TALK "Great! We offer AI-powered automation for businesses."
TALK "Would you like to schedule a demo?"
CASE "2"
TALK "I'll connect you with our support team."
TALK "Please describe your issue:"
HEAR issue
TALK "Thank you. A support agent will contact you about: " + issue
CASE "3"
TALK "To check your account, I'll need to verify your identity."
TALK "Please enter your email address:"
HEAR email
TALK "Looking up account for: " + email
CASE ELSE
TALK "I didn't understand that choice."
TALK "Please type 1, 2, or 3 next time."
END SELECT
TALK ""
TALK "Is there anything else I can help with, " + name + "?"
4.3 Understanding SELECT CASE
┌─────────────────────────────────────────────────────────────────────────┐
│ SELECT CASE EXPLAINED │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ User types: "2" │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ SELECT CASE choice │ │
│ │ ┌─────────────┐ │ │
│ │ │ CASE "1" │──▶ Skip (not matched) │ │
│ │ └─────────────┘ │ │
│ │ ┌─────────────┐ │ │
│ │ │ CASE "2" ★ │──▶ EXECUTE! ───▶ "I'll connect you..." │ │
│ │ └─────────────┘ │ │
│ │ ┌─────────────┐ │ │
│ │ │ CASE "3" │──▶ Skip (not checked after match) │ │
│ │ └─────────────┘ │ │
│ │ ┌─────────────┐ │ │
│ │ │ CASE ELSE │──▶ Skip (only runs if nothing matched) │ │
│ │ └─────────────┘ │ │
│ │ END SELECT │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
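In skeleton form, the structure is: compare one value against several CASE branches, run the first match only, and fall through to CASE ELSE when nothing matches. Because users type text, menu choices are written as strings ("1", not 1):
HEAR choice
SELECT CASE choice
CASE "1"
    TALK "Option one."        ' Runs only when choice is exactly "1"
CASE "2"
    TALK "Option two."
CASE ELSE
    TALK "Fallback for anything else."
END SELECT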
Step 5: Remember User Information
5.1 Store User Data
Use SET USER MEMORY to remember information between conversations:
' After getting the name
SET USER MEMORY "name", name
' Later, in another dialog, retrieve it:
savedName = GET USER MEMORY "name"
IF savedName <> "" THEN
TALK "Welcome back, " + savedName + "!"
ELSE
TALK "Hello! I don't think we've met before."
END IF
5.2 Store Bot-Wide Data
Use SET BOT MEMORY for data that applies to all users:
' Store a bot-wide counter
visitorCount = GET BOT MEMORY "visitor_count"
IF visitorCount = "" THEN
visitorCount = 0
END IF
visitorCount = visitorCount + 1
SET BOT MEMORY "visitor_count", visitorCount
TALK "You are visitor number " + visitorCount + " today!"
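Memory is not limited to names and counters. Here is a small sketch that records when the user last visited, using the FORMAT and NOW functions shown in the support dialog later in this tutorial (the date pattern here is illustrative):
' Record the current visit time
SET USER MEMORY "last_visit", FORMAT(NOW, "yyyy-MM-dd HH:mm")
' In a later conversation:
lastVisit = GET USER MEMORY "last_visit"
IF lastVisit <> "" THEN
TALK "Your last visit was " + lastVisit + "."
END IF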
Complete Example: Support Request Dialog
Here’s a complete, practical dialog you can use as a template:
' ============================================
' SUPPORT REQUEST DIALOG
' Collects support ticket information
' ============================================
' Check if we know this user
userName = GET USER MEMORY "name"
IF userName = "" THEN
TALK "Hello! I'm here to help you create a support request."
TALK "First, what's your name?"
HEAR userName
SET USER MEMORY "name", userName
ELSE
TALK "Welcome back, " + userName + "!"
END IF
' Get contact information
TALK "What email should we use to contact you?"
HEAR AS email email
IF email = "" THEN
TALK "I'll need an email to send you updates."
HEAR AS email email
END IF
' Get issue category
TALK ""
TALK "What type of issue are you experiencing?"
TALK ""
TALK "1. 🔧 Technical problem"
TALK "2. 💳 Billing question"
TALK "3. 📦 Order status"
TALK "4. ❓ General question"
TALK ""
HEAR category
SELECT CASE category
CASE "1"
categoryName = "Technical"
TALK "I'm sorry you're having technical difficulties."
CASE "2"
categoryName = "Billing"
TALK "I can help with billing questions."
CASE "3"
categoryName = "Orders"
TALK "Let me check on your order."
CASE ELSE
categoryName = "General"
TALK "I'll make sure the right team sees this."
END SELECT
' Get description
TALK ""
TALK "Please describe your issue in detail:"
HEAR description
' Get urgency
TALK ""
TALK "How urgent is this?"
TALK "1. 🔴 Critical - I can't work"
TALK "2. 🟡 High - Affecting my work"
TALK "3. 🟢 Normal - When you get a chance"
HEAR urgency
SELECT CASE urgency
CASE "1"
urgencyLevel = "Critical"
CASE "2"
urgencyLevel = "High"
CASE ELSE
urgencyLevel = "Normal"
END SELECT
' Confirm ticket
TALK ""
TALK "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
TALK "📋 SUPPORT REQUEST SUMMARY"
TALK "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
TALK "Name: " + userName
TALK "Email: " + email
TALK "Category: " + categoryName
TALK "Urgency: " + urgencyLevel
TALK "Issue: " + description
TALK "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
TALK ""
TALK "Should I submit this request? (yes/no)"
HEAR confirm
IF confirm = "yes" OR confirm = "Yes" OR confirm = "YES" THEN
' Here you would typically save to a database
' For now, just confirm
TALK "✅ Your support request has been submitted!"
TALK "Ticket ID: SR-" + FORMAT(NOW, "yyyyMMddHHmm")
TALK "You'll receive a confirmation email at " + email
TALK "Our team typically responds within 24 hours."
ELSE
TALK "No problem! Your request was not submitted."
TALK "Feel free to start over when you're ready."
END IF
TALK ""
TALK "Is there anything else I can help with?"
🎉 Congratulations!
You’ve written your first dialog! Here’s what you learned:
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ✓ Created a .bas dialog file │
│ ✓ Used TALK to display messages │
│ ✓ Used HEAR to get user input │
│ ✓ Combined text with variables │
│ ✓ Used IF/THEN/ELSE for decisions │
│ ✓ Used SELECT CASE for menus │
│ ✓ Stored data with SET USER MEMORY │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Troubleshooting
Problem: Dialog doesn’t start
Cause: File name or location is incorrect.
Solution:
- Verify the file name ends with `.bas`
- Confirm the file is in the `.gbdialog` folder
- Check there are no syntax errors
Problem: “Unexpected token” error
Cause: Syntax error in your code.
Solution:
- Check all strings have opening and closing quotes
- Verify IF statements have matching END IF
- Ensure SELECT CASE has END SELECT
Problem: Variable is empty
Cause: User skipped the HEAR prompt.
Solution:
- Add validation: `IF variable = "" THEN`
- Provide a default value
- Ask again if needed
Problem: Bot doesn’t remember data
Cause: Not using memory keywords correctly.
Solution:
- Use `SET USER MEMORY "key", value` to save
- Use `GET USER MEMORY "key"` to retrieve
- Ensure key names match exactly (case-sensitive)
Quick Reference
Essential Keywords
| Keyword | Purpose | Example |
|---|---|---|
| `TALK` | Display message | `TALK "Hello!"` |
| `HEAR` | Get user input | `HEAR name` |
| `HEAR AS type` | Get typed input | `HEAR AS email emailVar` |
| `SET` | Set variable | `SET x = 5` |
| `IF/THEN/ELSE` | Conditional | `IF x > 5 THEN ... END IF` |
| `SELECT CASE` | Menu choice | `SELECT CASE x ... END SELECT` |
| `SET USER MEMORY` | Save user data | `SET USER MEMORY "key", value` |
| `GET USER MEMORY` | Load user data | `x = GET USER MEMORY "key"` |
| `SET BOT MEMORY` | Save bot data | `SET BOT MEMORY "key", value` |
| `GET BOT MEMORY` | Load bot data | `x = GET BOT MEMORY "key"` |
Common Patterns
Greeting with memory:
name = GET USER MEMORY "name"
IF name = "" THEN
TALK "What's your name?"
HEAR name
SET USER MEMORY "name", name
ELSE
TALK "Welcome back, " + name + "!"
END IF
Menu with validation:
TALK "Choose: 1, 2, or 3"
HEAR choice
IF choice <> "1" AND choice <> "2" AND choice <> "3" THEN
TALK "Invalid choice, using default."
choice = "1"
END IF
Loop for retries:
attempts = 0
valid = FALSE
WHILE valid = FALSE AND attempts < 3
TALK "Enter your email:"
HEAR AS email input
IF input <> "" THEN
valid = TRUE
END IF
attempts = attempts + 1
WEND
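One detail this pattern leaves open: if all three attempts fail, valid is still FALSE after WEND, so handle that case explicitly rather than continuing with an empty value:
IF valid = FALSE THEN
TALK "I couldn't get a valid email after 3 tries."
TALK "Feel free to restart this dialog when you're ready."
END IF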
Next Steps
| Next Tutorial | What You’ll Learn |
|---|---|
| Store User Information | Advanced memory patterns |
| Call External APIs | Connect to web services |
| Send Automated Messages | Scheduled broadcasts |
Best Practices
- Comment your code — Use `'` for explanations
- Validate all input — Never assume users type correctly
- Provide defaults — Handle empty responses gracefully
- Use clear prompts — Tell users exactly what to type
- Confirm important actions — Ask before submitting forms
- Use spaces in keywords — `SET BOT MEMORY`, not `SET_BOT_MEMORY`
- Test thoroughly — Try all menu options and edge cases
Tutorial 5 of 30 • Back to How-To Index • Next: Store User Information →
How To: Add Documents to Knowledge Base
Tutorial 3 of the Knowledge Base Series
Teach your bot from files in 15 minutes
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ 📚 ADD DOCUMENTS TO KNOWLEDGE BASE │ │
│ │ │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │ Step │───▶│ Step │───▶│ Step │───▶│ Step │ │ │
│ │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ │
│ │ │Prepare │ │ Upload │ │ Index │ │ Test │ │ │
│ │ │ Docs │ │ Files │ │ KB │ │ KB │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Objective
By the end of this tutorial, you will have:
- Prepared documents for the knowledge base
- Uploaded files to your bot’s `.gbkb` folder
- Indexed documents for semantic search
- Tested that your bot can answer questions from the documents
Time Required
⏱️ 15 minutes
Prerequisites
Before you begin, make sure you have:
- A working bot (see Create Your First Bot)
- Access to the Drive app
- Documents to upload (PDF, Word, Text, or Markdown files)
What is a Knowledge Base?
A Knowledge Base (KB) is a collection of documents that your bot uses to answer questions. When a user asks something, the bot searches through these documents to find relevant information.
┌─────────────────────────────────────────────────────────────────────────┐
│ HOW KNOWLEDGE BASE WORKS │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ User asks: "What is our refund policy?" │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 🔍 Semantic Search │ │
│ │ Searches through all documents in the knowledge base │ │
│ └────────────────────────┬────────────────────────────────────┘ │
│ │ │
│ ┌───────────────────┼───────────────────┐ │
│ ▼ ▼ ▼ │
│ ┌─────────┐ ┌─────────┐ ┌─────────┐ │
│ │policies │ │ FAQ │ │ terms │ │
│ │ .pdf │ │ .docx │ │ .md │ │
│ └────┬────┘ └─────────┘ └─────────┘ │
│ │ │
│ ▼ Found match! │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ "Refunds are available within 30 days of purchase..." │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ Bot answers with context from the document │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Supported File Formats
| Format | Extension | Best For |
|---|---|---|
| PDF | .pdf | Manuals, reports, official documents |
| Word | .docx, .doc | Policies, procedures, articles |
| Text | .txt | Simple content, FAQs |
| Markdown | .md | Technical documentation |
| Excel | .xlsx, .xls | FAQs, structured data |
| PowerPoint | .pptx | Training materials |
| HTML | .html | Web content |
Step 1: Prepare Your Documents
1.1 Gather Your Files
Collect the documents you want your bot to learn from. Good candidates include:
- ✅ Product manuals
- ✅ FAQ documents
- ✅ Company policies
- ✅ Help articles
- ✅ Training materials
1.2 Review Document Quality
Before uploading, check that your documents:
| Check | Why It Matters |
|---|---|
| Text is selectable | Scanned images can’t be indexed |
| Content is accurate | Bot will repeat incorrect info |
| Information is current | Outdated docs confuse users |
| No sensitive data | Protect confidential information |
⚠️ Warning: The bot will use exactly what’s in your documents. Remove any outdated or incorrect information first.
1.3 Organize Files (Optional)
For large knowledge bases, organize files into folders by topic:
mycompany.gbkb/
├── 📁 products/
│ ├── product-guide.pdf
│ └── specifications.docx
├── 📁 policies/
│ ├── refund-policy.pdf
│ └── privacy-policy.md
├── 📁 support/
│ ├── faq.docx
│ └── troubleshooting.pdf
└── 📁 training/
└── onboarding-guide.pptx
✅ Checkpoint: You have documents ready to upload.
Step 2: Upload Files to Knowledge Base
2.1 Open the Drive App
Click the Apps Menu (⋮⋮⋮) and select Drive.
2.2 Navigate to Your Bot’s KB Folder
Navigate to your bot’s knowledge base folder:
📂 mycompany.gbai
└── 📂 mycompany.gbkb ◄── Open this folder
┌─────────────────────────────────────────────────────────────────────────┐
│ 📁 Drive │
├─────────────────────────────────────────────────────────────────────────┤
│ 📂 mycompany.gbai │
│ ├── 📂 mycompany.gbdialog │
│ ├── 📂 mycompany.gbot │
│ ├── 📂 mycompany.gbkb ◄── Knowledge base folder │
│ │ └── (your documents go here) │
│ └── 📂 mycompany.gbdrive │
│ │
└─────────────────────────────────────────────────────────────────────────┘
2.3 Upload Your Documents
Method A: Drag and Drop
- Open your file explorer
- Select the documents you want to upload
- Drag them into the Drive window
Method B: Upload Button
- Click the Upload button (📤)
- Select files from your computer
- Click Open
┌─────────────────────────────────────────────────────────────────────────┐
│ 📁 Drive > mycompany.gbai > mycompany.gbkb │
├─────────────────────────────────────────────────────────────────────────┤
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ 📤 Upload │ │ 📁 New Folder │ │
│ └─────────────────┘ └─────────────────┘ │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ 📄 company-faq.pdf 2.3 MB Just now │
│ 📄 product-manual.docx 1.1 MB Just now │
│ 📄 refund-policy.pdf 0.5 MB Just now │
│ │
│ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ │
│ ↑ Drag files here to upload │
│ │
└─────────────────────────────────────────────────────────────────────────┘
2.4 Wait for Upload to Complete
You’ll see a progress indicator for each file. Wait until all uploads finish.
💡 Tip: Upload time depends mainly on file size and your connection speed; large documents may take a few minutes.
✅ Checkpoint: Your documents appear in the .gbkb folder.
Step 3: Index the Knowledge Base
After uploading, you need to index the documents so the bot can search them.
3.1 Automatic Indexing
In most cases, indexing happens automatically when files are uploaded. Look for:
- A “Processing…” indicator
- Files changing from gray to normal color
- A completion notification
3.2 Manual Indexing (If Needed)
If automatic indexing doesn’t start, trigger it manually:
From Chat:
/reindex
From a BASIC Dialog:
' Clear and rebuild the knowledge base
CLEAR KB
USE KB "mycompany"
3.3 Check Indexing Status
You can check how many documents are indexed:
From Chat:
/kb stats
Expected Output:
┌─────────────────────────────────────────────────────────────────────────┐
│ 📊 Knowledge Base Statistics │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Collection: mycompany │
│ Documents: 3 │
│ Vectors: 847 │
│ Status: ● Ready │
│ Last Index: 2 minutes ago │
│ │
└─────────────────────────────────────────────────────────────────────────┘
✅ Checkpoint: Documents are indexed and ready to search.
Step 4: Test the Knowledge Base
4.1 Open Chat
Click the Apps Menu (⋮⋮⋮) and select Chat.
4.2 Ask a Question from Your Documents
Type a question that can be answered by your uploaded documents:
You: What is the refund policy?
4.3 Verify the Response
The bot should answer using information from your documents:
┌─────────────────────────────────────────────────────────────────────────┐
│ 💬 Chat │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 👤 You │ │
│ │ What is the refund policy? │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ 🤖 Bot │ │
│ │ │ │
│ │ Based on our refund policy document: │ │
│ │ │ │
│ │ Customers may request a full refund within 30 days of │ │
│ │ purchase. After 30 days, refunds are prorated based on │ │
│ │ usage. To request a refund, contact support@company.com │ │
│ │ with your order number. │ │
│ │ │ │
│ │ 📄 Source: refund-policy.pdf │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
4.4 Test Different Questions
Try several questions to ensure the knowledge base is working:
| Test Question | Expected Source |
|---|---|
| “How do I return a product?” | refund-policy.pdf |
| “What are the product specs?” | product-manual.docx |
| “How do I contact support?” | company-faq.pdf |
✅ Checkpoint: Your bot answers questions using the uploaded documents!
🎉 Congratulations!
You’ve successfully added documents to your knowledge base! Here’s what you accomplished:
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ✓ Prepared documents for upload │
│ ✓ Uploaded files to the .gbkb folder │
│ ✓ Indexed documents for semantic search │
│ ✓ Tested that the bot can answer from documents │
│ │
│ Your bot can now answer questions from your documents! │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Troubleshooting
Problem: Bot doesn’t find information from documents
Cause: Documents may not be indexed yet.
Solution:
- Check indexing status with `/kb stats`
- Wait a few minutes for processing to complete
- Try `/reindex` to force re-indexing
Problem: Bot gives wrong information
Cause: Document contains outdated or incorrect content.
Solution:
- Review the source document
- Update or replace the incorrect document
- Re-index the knowledge base
Problem: “No relevant information found”
Cause: Question doesn’t match document content well enough.
Solution:
- Try rephrasing the question
- Use keywords that appear in your documents
- Check that the document actually contains the answer
Problem: Upload fails
Cause: File too large or unsupported format.
Solution:
- Check file size (max 50MB per file)
- Verify file format is supported
- Try converting to PDF if format issues persist
Problem: PDF text not extracted
Cause: PDF contains scanned images, not selectable text.
Solution:
- Use OCR software to convert image-based PDFs
- Or recreate the document as a text-based PDF
- Consider using Word format instead
Best Practices
Document Organization
┌─────────────────────────────────────────────────────────────────────────┐
│ RECOMMENDED KB STRUCTURE │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ mycompany.gbkb/ │
│ │ │
│ ├── 📁 policies/ ◄── Company policies │
│ │ ├── refund-policy.pdf │
│ │ ├── privacy-policy.pdf │
│ │ └── terms-of-service.pdf │
│ │ │
│ ├── 📁 products/ ◄── Product documentation │
│ │ ├── product-guide.pdf │
│ │ ├── user-manual.pdf │
│ │ └── specifications.xlsx │
│ │ │
│ ├── 📁 support/ ◄── Support resources │
│ │ ├── faq.docx │
│ │ └── troubleshooting.pdf │
│ │ │
│ └── 📁 internal/ ◄── Internal documentation │
│ ├── processes.docx │
│ └── guidelines.pdf │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Content Guidelines
- Be specific — Clear, detailed content produces better answers
- Use headings — Helps the bot find relevant sections
- Include keywords — Use terms users are likely to search for
- Update regularly — Keep documents current
- Remove duplicates — Avoid conflicting information
Naming Conventions
| ✅ Good Names | ❌ Bad Names |
|---|---|
| `refund-policy-2024.pdf` | `doc1.pdf` |
| `product-manual-v2.docx` | `final final (2).docx` |
| `employee-handbook.pdf` | `new document.pdf` |
Advanced: Using KB in Dialogs
You can reference the knowledge base in your BASIC dialogs:
' Activate a specific knowledge base
USE KB "mycompany"
' Ask the user what they want to know
TALK "What would you like to know about?"
HEAR question
' The bot will automatically search the KB and respond
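To confirm the collection is loaded before inviting questions, you can also read its statistics from within a dialog. A minimal sketch, assuming `KB STATISTICS` (see the Quick Reference below) returns a value that can be concatenated into a message:
' Sketch: report the knowledge base status inside a dialog
USE KB "mycompany"
stats = KB STATISTICS
TALK "Knowledge base loaded: " + stats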
Multiple Knowledge Bases
You can have different knowledge bases for different purposes:
' Switch between knowledge bases based on topic
TALK "Are you asking about Products or Policies?"
HEAR topic
IF topic = "Products" THEN
USE KB "products"
ELSE IF topic = "Policies" THEN
USE KB "policies"
END IF
TALK "What would you like to know?"
HEAR question
Next Steps
| Next Tutorial | What You’ll Learn |
|---|---|
| Import a Website | Crawl web pages into your KB |
| Create FAQ Responses | Define question-answer pairs |
| Manage Collections | Organize knowledge by topic |
Quick Reference
Chat Commands
| Command | Description |
|---|---|
| `/kb stats` | Show knowledge base statistics |
| `/reindex` | Rebuild the search index |
| `/kb list` | List all KB collections |
BASIC Keywords
| Keyword | Description | Example |
|---|---|---|
| `USE KB` | Activate a KB | `USE KB "mycompany"` |
| `CLEAR KB` | Clear current KB | `CLEAR KB` |
| `KB STATISTICS` | Get KB info | `stats = KB STATISTICS` |
File Size Limits
| File Type | Max Size |
|---|---|
| PDF | 50 MB |
| Word | 25 MB |
| Excel | 25 MB |
| Text/MD | 10 MB |
Tutorial 3 of 30 • Back to How-To Index • Next: Import a Website →
How To: Connect WhatsApp
Tutorial 5 of the Channels Series
Connect your bot to WhatsApp in 20 minutes
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ 📱 CONNECT WHATSAPP TO YOUR BOT │ │
│ │ │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │ Step │───▶│ Step │───▶│ Step │───▶│ Step │ │ │
│ │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ │
│ │ │ Meta │ │ Create │ │Configure│ │ Test │ │ │
│ │ │ Account │ │ App │ │ Bot │ │ Channel │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Objective
By the end of this tutorial, you will have:
- Created a Meta Business account
- Set up a WhatsApp Business App
- Connected WhatsApp to your General Bots instance
- Tested the connection with a real message
Time Required
⏱️ 20 minutes
Prerequisites
Before you begin, make sure you have:
- A working bot (see Create Your First Bot)
- A phone number for WhatsApp Business (cannot be used with regular WhatsApp)
- A Facebook account
- Administrator access to General Bots
Understanding WhatsApp Integration
┌─────────────────────────────────────────────────────────────────────────┐
│ HOW WHATSAPP INTEGRATION WORKS │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ User sends message on WhatsApp │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ WhatsApp Cloud API │ │
│ │ (Meta's servers receive message) │ │
│ └────────────────────────┬────────────────────────────────────┘ │
│ │ │
│ │ Webhook │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ General Bots Server │ │
│ │ (Your bot processes the message) │ │
│ └────────────────────────┬────────────────────────────────────┘ │
│ │ │
│ │ API Call │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ WhatsApp Cloud API │ │
│ │ (Sends reply to user) │ │
│ └────────────────────────┬────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ User receives bot response on WhatsApp │
│ │
└─────────────────────────────────────────────────────────────────────────┘
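A practical consequence of this flow: dialogs are channel-agnostic. The same .bas file you tested in web chat runs unchanged over WhatsApp, because TALK and HEAR are delivered through whichever channel the message arrived on:
' greeting.bas behaves identically in web chat and WhatsApp
TALK "Hello! Welcome to our service. 👋"
TALK "What is your name?"
HEAR name
TALK "Nice to meet you, " + name + "!"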
Step 1: Set Up Meta Business Account
1.1 Go to Meta for Developers
Open your browser and navigate to:
https://developers.facebook.com
┌─────────────────────────────────────────────────────────────────────────┐
│ 🌐 Browser [─][□][×]│
├─────────────────────────────────────────────────────────────────────────┤
│ ← → ↻ │ https://developers.facebook.com │ ☆ │ │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Meta for Developers │
│ │
│ ┌─────────────────────┐ │
│ │ Log In │ │
│ └─────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
1.2 Log In with Facebook
- Click Log In
- Enter your Facebook credentials
- Click Log In
1.3 Create a Meta Business Account (If Needed)
If you don’t have a business account:
- Go to https://business.facebook.com
- Click Create Account
- Enter your business name
- Enter your name and business email
- Click Submit
💡 Note: You can use your personal Facebook account, but a business account is recommended for production use.
✅ Checkpoint: You should now be logged into Meta for Developers.
Step 2: Create a WhatsApp App
2.1 Go to My Apps
Click My Apps in the top navigation.
┌─────────────────────────────────────────────────────────────────────────┐
│ Meta for Developers [My Apps ▼] [👤 Account] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ My Apps │
│ ─────── │
│ │
│ ┌─────────────────────────┐ │
│ │ + Create App │ ◄── Click here │
│ └─────────────────────────┘ │
│ │
│ You don't have any apps yet. │
│ │
└─────────────────────────────────────────────────────────────────────────┘
2.2 Create a New App
- Click Create App
- Select Business as the app type
- Click Next
┌─────────────────────────────────────────────────────────────────────────┐
│ Create an App [×] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Select an app type: │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Consumer │ │ ● Business │ │ Gaming │ │
│ │ │ │ ◄── Select │ │ │ │
│ │ For consumer │ │ │ │ For game │ │
│ │ apps │ │ For business │ │ integrations │ │
│ │ │ │ integrations │ │ │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ │
│ [Next] │
│ │
└─────────────────────────────────────────────────────────────────────────┘
2.3 Fill In App Details
| Field | What to Enter | Example |
|---|---|---|
| App Name | Your bot’s name | My Company Bot |
| App Contact Email | Your email | admin@company.com |
| Business Account | Select or create | My Company |
┌─────────────────────────────────────────────────────────────────────────┐
│ Add App Details [×] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ App Name: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ My Company Bot │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ App Contact Email: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ admin@company.com │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ Business Account: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ My Company [▼] │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ [Create App] │
│ │
└─────────────────────────────────────────────────────────────────────────┘
- Click Create App
- Complete the security check if prompted
2.4 Add WhatsApp to Your App
- In the app dashboard, scroll to Add Products
- Find WhatsApp and click Set Up
┌─────────────────────────────────────────────────────────────────────────┐
│ Add Products to Your App │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Messenger │ │ 📱 WhatsApp │ │ Instagram │ │
│ │ │ │ │ │ │ │
│ │ [Set Up] │ │ [Set Up] ◄── │ │ [Set Up] │ │
│ │ │ │ Click here │ │ │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
✅ Checkpoint: WhatsApp should now appear in your app’s left sidebar.
Step 3: Configure WhatsApp Settings
3.1 Get Your API Credentials
In the left sidebar, click WhatsApp → API Setup.
You’ll see:
- Phone number ID - Identifies your WhatsApp number
- WhatsApp Business Account ID - Your business account
- Temporary access token - For testing (expires in 24 hours)
┌─────────────────────────────────────────────────────────────────────────┐
│ WhatsApp > API Setup │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ STEP 1: Select Phone Numbers │
│ ──────────────────────────── │
│ │
│ From: [Test Number - 15550001234 ▼] │
│ │
│ To: (Add a recipient phone number for testing) │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ +1 555 123 4567 │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ───────────────────────────────────────────────────────────────────── │
│ │
│ STEP 2: Send Messages with the API │
│ ────────────────────────────────── │
│ │
│ Temporary Access Token: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ EAAGm0PX4ZCp... [Copy] │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ ⚠️ This token expires in 24 hours. Use System User for production. │
│ │
│ Phone Number ID: 123456789012345 [Copy] │
│ WhatsApp Business Account ID: 987654321098765 [Copy] │
│ │
└─────────────────────────────────────────────────────────────────────────┘
📝 Write down these values - You’ll need them in the next step:
- Phone Number ID: _______________
- Access Token: _______________
3.2 Create a Permanent Access Token
For production, you need a permanent token:
- Go to Business Settings → System Users
- Click Add to create a system user
- Name it (e.g., “WhatsApp Bot”)
- Set role to Admin
- Click Generate Token
- Select your app and the `whatsapp_business_messaging` permission
- Click Generate Token
💡 Important: Save this token securely! You won’t be able to see it again.
3.3 Configure the Webhook
The webhook tells Meta where to send incoming messages.
- In the left sidebar, click WhatsApp → Configuration
- Under Webhook, click Edit
┌─────────────────────────────────────────────────────────────────────────┐
│ Webhook Configuration [×] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ Callback URL: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ https://your-bot-server.com/webhook/whatsapp │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ Verify Token: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ your-custom-verify-token-here │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ⚠️ Your server must respond to Meta's verification request │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Verify and Save │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Enter these values:
| Field | Value |
|---|---|
| Callback URL | https://your-server.com/webhook/whatsapp |
| Verify Token | A secret string you create (e.g., my_bot_verify_123) |
- Click Verify and Save
3.4 Subscribe to Webhook Events
After verifying, select which events to receive:
┌─────────────────────────────────────────────────────────────────────────┐
│ Webhook Fields │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ☑ messages ◄── Required! Receive incoming messages │
│ ☐ message_template_status_update │
│ ☐ phone_number_name_update │
│ ☐ phone_number_quality_update │
│ ☑ account_review_update │
│ ☐ account_update │
│ ☐ business_capability_update │
│ ☐ flows │
│ ☑ security │
│ ☑ message_echoes │
│ │
└─────────────────────────────────────────────────────────────────────────┘
At minimum, select:
- messages (required - to receive user messages)
✅ Checkpoint: Webhook should show as “Active” with a green indicator.
Step 4: Configure General Bots
4.1 Open Bot Settings
- In General Bots, go to Sources
- Click ⚙️ on your bot
- Go to the Channels tab
┌─────────────────────────────────────────────────────────────────────────┐
│ Bot Settings: support [×] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┬──────────┬──────────┬──────────┬──────────┐ │
│ │ General │ AI │*Channels*│ Security │ Advanced │ │
│ └──────────┴──────────┴──────────┴──────────┴──────────┘ │
│ │
│ CONNECTED CHANNELS │
│ ────────────────── │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 🌐 Web Chat Status: ● On │ │
│ │ Embedded widget on your website │ │
│ │ [Configure] │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 📱 WhatsApp Status: ○ Off │ │
│ │ Not configured │ │
│ │ [Configure] ◄── Click here │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
4.2 Enter WhatsApp Credentials
Click Configure for WhatsApp and enter your credentials:
┌─────────────────────────────────────────────────────────────────────────┐
│ WhatsApp Configuration [×] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ META API CREDENTIALS │
│ ──────────────────── │
│ │
│ Phone Number ID: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ 123456789012345 │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ Access Token: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ •••••••••••••••••••••••••••••••••••••• │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ Verify Token: │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ my_bot_verify_123 │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ ⚠️ Must match the token you set in Meta Developer Portal │
│ │
│ ───────────────────────────────────────────────────────────────────── │
│ │
│ WEBHOOK URL (provide this to Meta) │
│ ────────────────────────────────── │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ https://your-server.com/webhook/whatsapp [Copy] │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Test Connection │ │ Save │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
4.3 Test the Connection
- Click Test Connection
- You should see “Connection successful!”
If the test fails, check:
- Token is correct and not expired
- Phone Number ID is correct
- Your server is accessible from the internet
4.4 Save and Enable
- Click Save
- Toggle WhatsApp to On
✅ Checkpoint: WhatsApp should now show Status: ● On
Step 5: Test Your WhatsApp Bot
5.1 Add Test Phone Number
In Meta Developer Portal:
- Go to WhatsApp → API Setup
- Under “To”, add your phone number
- Click Send to receive a test message
5.2 Send a Test Message
- Open WhatsApp on your phone
- Message the bot’s number (the test number from Meta)
- Send: “Hello”
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ WhatsApp │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ My Company Bot │ │
│ │ +1 555 000 1234 │ │
│ │ │ │
│ │ ───────────────────────────────────────────────────────────── │ │
│ │ │ │
│ │ ┌───────────────┐ │ │
│ │ │ Hello │ │ │
│ │ │ 10:30 │ │ │
│ │ └───────────────┘ │ │
│ │ │ │
│ │ ┌─────────────────────────────────────────┐ │ │
│ │ │ Hello! How can I help you today? │ │ │
│ │ │ 10:30 │ │ │
│ │ └─────────────────────────────────────────┘ │ │
│ │ │ │
│ │ ┌───────────────┐ │ │
│ │ │ I need help │ │ │
│ │ │ with my order │ │ │
│ │ │ 10:31 │ │ │
│ │ └───────────────┘ │ │
│ │ │ │
│ │ ┌─────────────────────────────────────────┐ │ │
│ │ │ I'd be happy to help with your order! │ │ │
│ │ │ What's your order number? │ │ │
│ │ │ 10:31 │ │ │
│ │ └─────────────────────────────────────────┘ │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Type a message... 📎 │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
5.3 Verify in General Bots
Check that the conversation appears in Analytics:
- Go to Analytics → Sessions
- You should see a new session with channel “WhatsApp”
🎉 Congratulations!
Your bot is now connected to WhatsApp! Users can message your WhatsApp Business number and receive responses from your bot.
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ✓ SUCCESS! │
│ │
│ Your WhatsApp bot is live! │
│ │
│ ┌───────────┐ ┌───────────┐ ┌──────────┐ │
│ │ │ │ │ │ │ │
│ │ 📱 │ ────────▶ │ 🤖 │ ────────▶ │ 💬 │ │
│ │ WhatsApp │ │ General │ │ Bot │ │
│ │ User │ ◀──────── │ Bots │ ◀──────── │ Response │ │
│ │ │ │ │ │ │ │
│ └───────────┘ └───────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Going to Production
Add a Real Phone Number
The test number has limitations. For production:
- Go to WhatsApp → Phone Numbers in Meta Developer Portal
- Click Add Phone Number
- Enter your business phone number
- Verify via SMS or voice call
- Update the Phone Number ID in General Bots settings
Submit for App Review
Before going live with real customers:
- Go to App Review in Meta Developer Portal
- Request the `whatsapp_business_messaging` permission
- Complete the review process (takes 2-5 business days)
Set Up Message Templates
For proactive messages (not replies), you need approved templates:
- Go to WhatsApp → Message Templates
- Create templates for notifications, alerts, etc.
- Wait for Meta approval (usually 24 hours)
Troubleshooting
Problem: “Webhook verification failed”
Possible causes:
- Verify token doesn’t match
- Server not responding
- HTTPS not configured
Solution:
- Check the verify token matches exactly in both places
- Verify your server is accessible: `curl https://your-server.com/webhook/whatsapp`
- Ensure you have a valid SSL certificate
Problem: Messages not arriving
Possible causes:
- Webhook not subscribed to “messages”
- App in development mode with unlisted numbers
- Access token expired
Solution:
- Check webhook subscriptions include “messages”
- Add test phone numbers in API Setup
- Generate a new access token
Problem: Bot responds slowly
Possible causes:
- Network latency
- Bot processing time
- Meta rate limits
Solution:
- Ensure server is geographically close to users
- Optimize bot dialog processing
- Check rate limit status in Meta dashboard
Problem: “Error 190: Invalid OAuth access token”
Possible causes:
- Token expired (temporary tokens last 24 hours)
- Token was revoked
- Wrong token used
Solution:
- Generate a new System User token (permanent)
- Update the token in General Bots settings
- Verify you’re using the WhatsApp access token, not a Facebook token
Problem: Phone number shows as unverified
Possible causes:
- Verification not completed
- Two-factor authentication issue
- Number already in use
Solution:
- Re-request verification code
- Check business verification status
- Contact Meta support if number was previously registered
What You Learned
In this tutorial, you:
- ✅ Created a Meta Developer account and app
- ✅ Configured WhatsApp Cloud API
- ✅ Set up webhook for incoming messages
- ✅ Connected WhatsApp to General Bots
- ✅ Tested the integration with real messages
Next Steps
Now that WhatsApp is connected, try these:
| Next Tutorial | What You’ll Learn |
|---|---|
| Write Your First Dialog | Create custom conversation flows |
| Add KB Documents | Make your bot smarter |
| Monitor Sessions | Track WhatsApp conversations |
Quick Reference
WhatsApp Message Limits
| Tier | Messages/Day | How to Qualify |
|---|---|---|
| Unverified | 250 | New accounts |
| Verified | 1,000 | Complete business verification |
| Tier 1 | 10,000 | Good quality rating |
| Tier 2 | 100,000 | Maintain quality |
| Tier 3 | Unlimited | High volume, good quality |
Quality Rating
Meta monitors your WhatsApp quality based on:
- User blocks and reports
- Message template quality
- Response time
Keep quality high by:
- Responding to all messages
- Not spamming users
- Using approved templates for outbound messages
Key URLs
| Resource | URL |
|---|---|
| Meta for Developers | https://developers.facebook.com |
| Meta Business Suite | https://business.facebook.com |
| WhatsApp Cloud API Docs | https://developers.facebook.com/docs/whatsapp |
| API Status | https://metastatus.com |
Configuration via config.csv
Configure WhatsApp in your bot’s config.csv:
name,value
whatsapp-phone-number-id,123456789012345
whatsapp-access-token,EAAGm0PX4ZCp...
whatsapp-verify-token,my_bot_verify_123
whatsapp-business-account-id,987654321098765
Note: Do not use environment variables for WhatsApp configuration. All settings belong in config.csv.
See Also
- Chat App - Web chat interface
- Sources App - Bot configuration
- Compliance App - Data privacy for WhatsApp
- BASIC Keywords - WhatsApp-specific keywords
How To: Monitor Your Bot
Tutorial 12 of the Analytics & Monitoring Series
Watch conversations and system health in real-time
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ │ │
│ │ 📊 MONITOR YOUR BOT │ │
│ │ │ │
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ │ │ Step │───▶│ Step │───▶│ Step │───▶│ Step │ │ │
│ │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ │
│ │ │ Access │ │ View │ │ Check │ │ Set │ │ │
│ │ │Dashboard│ │Sessions │ │ Health │ │ Alerts │ │ │
│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ │ │
│ │ │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Objective
By the end of this tutorial, you will have:
- Accessed the monitoring dashboard
- Viewed active sessions and conversations
- Checked system health and resources
- Understood the live system architecture
- Configured alerts for important events
Time Required
⏱️ 10 minutes
Prerequisites
Before you begin, make sure you have:
- A running bot with some activity
- Administrator or Monitor role permissions
- Access to the General Bots Suite
Understanding the System Architecture
Your General Bots deployment is a living system of interconnected components. Understanding how they work together helps you monitor effectively.
Component Overview
| Component | Purpose | Status Indicators |
|---|---|---|
| BotServer | Core application, handles all requests | Response time, active sessions |
| PostgreSQL | Primary database, stores users & config | Connections, query rate |
| Qdrant | Vector database, powers semantic search | Vector count, search latency |
| MinIO | File storage, manages documents | Storage used, object count |
| BotModels | LLM server, generates AI responses | Tokens/hour, model latency |
| Vault | Secrets manager, stores API keys | Sealed status, policy count |
| Cache | Cache layer, speeds up responses | Hit rate, memory usage |
| InfluxDB | Metrics database, stores analytics | Points/sec, retention |
Step 1: Access the Monitoring Dashboard
1.1 Open the Apps Menu
Click the nine-dot grid (⋮⋮⋮) in the top-right corner.
1.2 Select Monitoring
Click Analytics or Monitoring (depending on your configuration).
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ┌───────────────────┐ │
│ │ 💬 Chat │ │
│ │ 📁 Drive │ │
│ │ 📊 Analytics │ ◄── May be here │
│ │ 📈 Monitoring │ ◄── Or here │
│ │ ⚙️ Settings │ │
│ └───────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
1.3 View the Dashboard
The monitoring dashboard displays real-time metrics:
┌─────────────────────────────────────────────────────────────────────────┐
│ 📊 Monitoring Dashboard 🔴 LIVE │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ SESSIONS │ │ MESSAGES │ │ RESPONSE │ │
│ │ │ │ │ │ │ │
│ │ 247 │ │ 12.4K │ │ 1.2s │ │
│ │ ● Active │ │ Today │ │ Average │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ │
│ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ │
│ │
│ SYSTEM RESOURCES │
│ ───────────────── │
│ CPU [████████████████░░░░░░░░░░░░░░] 70% │
│ MEM [████████████████████░░░░░░░░░░] 60% │
│ GPU [████████████░░░░░░░░░░░░░░░░░░] 40% │
│ DISK [████████░░░░░░░░░░░░░░░░░░░░░░] 28% │
│ │
└─────────────────────────────────────────────────────────────────────────┘
✅ Checkpoint: You can see the monitoring dashboard with live metrics.
Step 2: View Active Sessions
2.1 Navigate to Sessions Panel
Look for the Sessions or Conversations section:
┌─────────────────────────────────────────────────────────────────────────┐
│ Active Sessions (247) [Refresh 🔄] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ID │ User │ Channel │ Started │ Messages │
│ ──────────┼───────────────┼───────────┼──────────────┼──────────── │
│ a1b2c3d4 │ +5511999... │ WhatsApp │ 2 min ago │ 12 │
│ e5f6g7h8 │ john@acme... │ Web │ 5 min ago │ 8 │
│ i9j0k1l2 │ +5521888... │ WhatsApp │ 8 min ago │ 23 │
│ m3n4o5p6 │ support@... │ Email │ 15 min ago │ 4 │
│ q7r8s9t0 │ jane@... │ Web │ 18 min ago │ 15 │
│ │
│ ◀ 1 2 3 4 5 ... 25 ▶ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
2.2 View Session Details
Click on a session to see the full conversation:
┌─────────────────────────────────────────────────────────────────────────┐
│ Session: a1b2c3d4 [×] │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ User: +5511999888777 │
│ Channel: WhatsApp │
│ Started: 2024-01-15 14:32:00 │
│ Duration: 2 min 34 sec │
│ Bot: mycompany │
│ │
│ ── Conversation ──────────────────────────────────────────────────────│
│ │
│ [14:32:00] 👤 User: Hello │
│ [14:32:01] 🤖 Bot: Hello! How can I help you today? │
│ [14:32:15] 👤 User: I want to check my order status │
│ [14:32:17] 🤖 Bot: I can help with that! What's your order number? │
│ [14:32:45] 👤 User: ORD-12345 │
│ [14:32:48] 🤖 Bot: Order ORD-12345 is being prepared for shipping... │
│ │
└─────────────────────────────────────────────────────────────────────────┘
2.3 Session Metrics
Understand key session metrics:
| Metric | Description | Good Value |
|---|---|---|
| Active Sessions | Currently open conversations | Depends on load |
| Peak Today | Maximum concurrent sessions | Track trends |
| Avg Duration | Average conversation length | 3-5 minutes typical |
| Messages/Session | Average messages per conversation | 5-10 typical |
✅ Checkpoint: You can view active sessions and their conversations.
Step 3: Check System Health
3.1 View Service Status
The dashboard shows the health of all components:
┌─────────────────────────────────────────────────────────────────────────┐
│ Service Health │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ● PostgreSQL Running v16.2 24/100 connections │
│ ● Qdrant Running v1.9.2 1.2M vectors │
│ ● MinIO Running v2024.01 45.2 GB stored │
│ ● BotModels Running v2.1.0 LLM active │
│ ● Vault Sealed v1.15.0 156 secrets │
│ ● Cache Running v7.2.4 94.2% hit rate │
│ ● InfluxDB Running v2.7.3 2,450 pts/sec │
│ │
│ Legend: ● Running ● Warning ● Stopped │
│ │
└─────────────────────────────────────────────────────────────────────────┘
3.2 Understanding Status Colors
| Color | Status | Action Needed |
|---|---|---|
| 🟢 Green | Healthy/Running | None |
| 🟡 Yellow | Warning/Degraded | Investigate soon |
| 🔴 Red | Error/Stopped | Immediate action |
3.3 Check Resource Usage
Monitor resource utilization to prevent issues:
┌─────────────────────────────────────────────────────────────────────────┐
│ Resource Usage Last 24 Hours │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ CPU Usage │
│ 100%│ ╭──╮ │
│ 75%│ ╭──╮ ╭──╮ │ │ ╭──╮ │
│ 50%│╭──╮│ │╭─╯ ╰─╮╭──╯ ╰──╯ ╰──╮ │
│ 25%│ ╰──╯ ╰╯ ╰────────── │
│ 0%└──────────────────────────────────────────── │
│ 00:00 04:00 08:00 12:00 16:00 20:00 Now │
│ │
│ Memory Usage │
│ 100%│ │
│ 75%│ │
│ 50%│──────────────────────────────────────────── │
│ 25%│ │
│ 0%└──────────────────────────────────────────── │
│ 00:00 04:00 08:00 12:00 16:00 20:00 Now │
│ │
└─────────────────────────────────────────────────────────────────────────┘
3.4 Resource Thresholds
Take action when resources approach these limits:
| Resource | Warning | Critical | Action |
|---|---|---|---|
| CPU | > 80% | > 95% | Scale up or optimize |
| Memory | > 85% | > 95% | Add RAM or reduce cache |
| Disk | > 80% | > 90% | Clean up or add storage |
| GPU | > 90% | > 98% | Queue requests or scale |
✅ Checkpoint: You can view system health and resource usage.
Step 4: Set Up Alerts
4.1 Access Alert Settings
Navigate to Settings > Alerts or Monitoring > Configure Alerts.
4.2 Configure Alert Rules
Set up alerts for important events:
┌─────────────────────────────────────────────────────────────────────────┐
│ Alert Configuration │
├─────────────────────────────────────────────────────────────────────────┤
│ │
│ ☑ CPU Usage │
│ Threshold: [80] % For: [5] minutes │
│ Notify: ☑ Email ☑ Slack ☐ SMS │
│ │
│ ☑ Memory Usage │
│ Threshold: [85] % For: [5] minutes │
│ Notify: ☑ Email ☐ Slack ☐ SMS │
│ │
│ ☑ Response Time │
│ Threshold: [5000] ms For: [3] minutes │
│ Notify: ☑ Email ☑ Slack ☐ SMS │
│ │
│ ☑ Service Down │
│ Services: ☑ PostgreSQL ☑ Qdrant ☑ BotModels │
│ Notify: ☑ Email ☑ Slack ☑ SMS │
│ │
│ ┌─────────────────┐ │
│ │ 💾 Save │ │
│ └─────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────┘
4.3 Configure via config.csv
You can also set alerts in your bot’s configuration file:
name,value
alert-cpu-threshold,80
alert-memory-threshold,85
alert-disk-threshold,90
alert-response-time-ms,5000
alert-email,admin@company.com
alert-slack-webhook,https://hooks.slack.com/...
4.4 Test Alerts
Verify your alerts are working:
- Set a low threshold temporarily (e.g., CPU > 1%)
- Wait for the alert to trigger
- Check your email/Slack for the notification
- Reset the threshold to normal
✅ Checkpoint: Alerts are configured and tested.
🎉 Congratulations!
You can now monitor your bot effectively! Here’s what you learned:
┌─────────────────────────────────────────────────────────────────────────┐
│ │
│ ✓ Accessed the monitoring dashboard │
│ ✓ Viewed active sessions and conversations │
│ ✓ Checked system health and services │
│ ✓ Understood resource usage metrics │
│ ✓ Configured alerts for important events │
│ │
│ You're now equipped to keep your bot healthy! │
│ │
└─────────────────────────────────────────────────────────────────────────┘
Troubleshooting
Problem: Dashboard shows no data
Cause: Monitoring services may not be collecting data.
Solution:
- Check that InfluxDB is running
- Verify the monitoring agent is enabled
- Wait a few minutes for data collection
Problem: Sessions show as “Unknown User”
Cause: User identification not configured.
Solution:
- Enable user tracking in bot settings
- Request user info at conversation start
- Check privacy settings
Problem: Alerts not being sent
Cause: Notification channels not configured correctly.
Solution:
- Verify email/Slack settings
- Check spam folders
- Test webhook URLs manually
Problem: High CPU but few sessions
Cause: Possibly a runaway dialog loop or excessive LLM calls.
Solution:
- Check for infinite loops in dialogs
- Review LLM call frequency
- Restart the bot service
Monitoring API
Access monitoring data programmatically:
Get System Status
GET /api/monitoring/status
Response:
{
"sessions": {
"active": 247,
"peak_today": 312,
"avg_duration_seconds": 245
},
"messages": {
"today": 12400,
"this_hour": 890,
"avg_response_ms": 1200
},
"resources": {
"cpu_percent": 70,
"memory_percent": 60,
"gpu_percent": 40,
"disk_percent": 28
},
"services": {
"postgresql": "running",
"qdrant": "running",
"minio": "running",
"botmodels": "running",
"vault": "sealed",
"redis": "running",
"influxdb": "running"
}
}
Get Historical Metrics
GET /api/monitoring/history?period=24h
Get Session Details
GET /api/monitoring/sessions/{session_id}
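These endpoints can also be consumed from BASIC. The sketch below is illustrative only: it assumes GET returns the parsed JSON shown above, and the alert address is a placeholder.
' Poll monitoring status every 10 minutes and alert on high CPU
SET SCHEDULE "*/10 * * * *"
status = GET "api/monitoring/status"
IF status.resources.cpu_percent > 85 THEN
    SEND MAIL "admin@company.com", "CPU Alert", "CPU at " + status.resources.cpu_percent + "%"
END IF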
Quick Reference
Dashboard Keyboard Shortcuts
| Shortcut | Action |
|---|---|
| R | Refresh data |
| F | Toggle fullscreen |
| S | Show/hide sidebar |
| 1-7 | Switch dashboard tabs |
Important Metrics to Watch
| Metric | Normal | Warning | Critical |
|---|---|---|---|
| Response Time | < 2s | 2-5s | > 5s |
| Error Rate | < 1% | 1-5% | > 5% |
| CPU Usage | < 70% | 70-85% | > 85% |
| Memory Usage | < 75% | 75-85% | > 85% |
| Queue Depth | < 100 | 100-500 | > 500 |
Console Monitoring
For server-side monitoring:
# Start with monitoring output
./botserver --console --monitor
# Output:
# [MONITOR] 2024-01-15 14:32:00
# Sessions: 247 active (peak: 312)
# Messages: 12,400 today (890/hour)
# CPU: 70% | MEM: 60% | GPU: 40%
# Services: 7/7 running
Next Steps
| Next Tutorial | What You’ll Learn |
|---|---|
| Create Custom Reports | Build dashboards for insights |
| Export Analytics Data | Download metrics for analysis |
| Performance Optimization | Make your bot faster |
Tutorial 12 of 30 • Back to How-To Index • Next: Create Custom Reports →
Chapter 05: Themes and Styling
Customize your bot’s appearance with .gbtheme packages.
Overview
Themes control colors, fonts, logos, and overall visual presentation of your bot interface.
Quick Start
# In config.csv
name,value
theme-color1,#0d2b55
theme-color2,#fff9c2
theme-title,My Bot
theme-logo,https://example.com/logo.svg
Theme Structure
mybot.gbai/
└── mybot.gbtheme/
    └── default.css
Configuration Options
| Setting | Description | Example |
|---|---|---|
| theme-color1 | Primary color | #0d2b55 |
| theme-color2 | Secondary color | #fff9c2 |
| theme-title | Bot name in header | My Assistant |
| theme-logo | Logo URL | https://... |
CSS Customization
Create default.css in your .gbtheme folder:
:root {
--primary: #0d2b55;
--secondary: #fff9c2;
}
.chat-header {
background: var(--primary);
}
.user-message {
background: var(--secondary);
}
Chapter Contents
- Theme Structure - File organization
- CSS Customization - Styling reference
See Also
- UI Reference - Interface options
- .gbot Configuration - All settings
Theme Structure
The gbtheme package is simply CSS files that style the bot’s UI. Themes don’t include HTML or JavaScript - they only control appearance.
theme-name.gbtheme/
├── default.css # Main theme file (required)
├── dark.css # Optional dark mode variant
├── print.css # Optional print styles
└── assets/ # Optional theme resources
├── images/
├── fonts/
└── icons/
Design Principles
- CSS-only theming – Themes are pure CSS files, no HTML or JavaScript modifications
- CSS Variables – Use CSS custom properties for colors, spacing, and other values
- Responsive design – Use media queries within your CSS for mobile-first layouts
- Asset locality – Optional assets/ folder for theme-specific images, fonts, and icons
Creating Your Theme
- Create a .gbtheme folder in your bot package
- Add a default.css file with your styles
- Override CSS variables to change colors and spacing
- Add optional assets like fonts or background images
The system automatically picks up any theme placed under @/templates/… when the bot’s configuration (.gbtheme entry in config.csv) points to the folder name.
Theme Loading Process
- Discovery: Bot looks for the theme folder in work/{bot_name}/{bot_name}.gbtheme/
- Validation: Checks for required files (at least one CSS file)
- Registration: Theme becomes available in theme selector
- Activation: User selects theme or bot loads default
- Hot Reload: Changes apply immediately without restart
File Organization Best Practices
CSS File Options
You can have multiple CSS files in your theme:
mybot.gbtheme/
├── default.css # Main theme (loaded automatically)
├── dark.css # Dark mode variant
├── mobile.css # Mobile-specific overrides
└── print.css # Print media styles
Or keep everything in a single file - your choice!
Asset Management
assets/
├── images/
│ ├── logo.svg # Vector graphics preferred
│ ├── bg.webp # Modern formats for performance
│ └── icons/ # Icon set
├── fonts/
│ └── custom.woff2 # Web fonts if needed
└── data/
└── theme.json # Theme metadata
Creating a Custom Theme
Step 1: Create Theme Folder
mkdir -p work/mybot/mybot.gbtheme
Step 2: Create Your CSS
Create default.css with CSS variables:
:root {
/* Brand Colors */
--brand-primary: #your-color;
--brand-secondary: #your-color;
/* Semantic Colors */
--color-success: #10b981;
--color-warning: #f59e0b;
--color-error: #ef4444;
/* Typography */
--font-family: 'Inter', system-ui, sans-serif;
--font-size-base: 16px;
--line-height: 1.5;
/* Spacing Scale */
--space-xs: 0.25rem;
--space-sm: 0.5rem;
--space-md: 1rem;
--space-lg: 2rem;
--space-xl: 4rem;
}
Step 3: Style Components
Add your component styles in the same file:
/* Custom message bubbles */
.message-user {
background: var(--brand-primary);
color: white;
border-radius: 18px 18px 4px 18px;
}
.message-bot {
background: #f3f4f6;
border: 1px solid #e5e7eb;
border-radius: 18px 18px 18px 4px;
}
Using the Theme
Once you’ve created your CSS file, the bot will automatically load it. You can switch between themes using BASIC:
' Switch to a different theme
CHANGE THEME "dark"
' Back to default
CHANGE THEME "default"
Performance Optimization
CSS Loading Strategy
- Critical CSS: Inline essential styles in HTML
- Async Loading: Load non-critical CSS asynchronously
- Minification: Minify CSS for production
- Purging: Remove unused CSS rules
Asset Optimization
- Use SVG for logos and icons
- Implement lazy loading for images
- Serve WebP with fallbacks
- Enable gzip compression
Theme Selection
Themes are switched via BASIC commands, not JavaScript. The system handles the CSS file swapping automatically.
Accessibility Considerations
- Maintain WCAG 2.1 AA contrast ratios
- Support high contrast mode
- Include focus indicators
- Test with screen readers
Advanced Personalization Options
Beyond CSS Theming
While themes handle visual styling, you have more options for deeper UI customization:
- Extend default.gbui – The UI templates in .gbui packages can be modified:
  - Copy the default UI templates to your bot's .gbui folder
  - Modify the HTML structure to fit your needs
  - Add custom components and layouts
  - The system will use your UI instead of the default
- Create Your Own UI Type – Build a completely custom interface:
  - Design your own UI framework
  - Implement custom WebSocket handlers
  - Create unique interaction patterns
  - Full control over the user experience
Join the Community
We encourage you to contribute! The General Bots project welcomes:
- UI Improvements - Submit pull requests with better default UIs
- Theme Collections - Share your creative themes
- Custom UI Types - Develop new interaction paradigms
- Documentation - Help improve these guides
Using General Bots as a Foundation
General Bots is designed to be a starting point for your own projects:
Fork the project → Customize the UI → Build your product
You can:
- Use it as a base for commercial products
- Create industry-specific bot interfaces
- Develop specialized UI frameworks
- Build on top of the core engine
The architecture is intentionally modular - take what you need, replace what you don’t.
Getting Started with UI Development
- Study the default.gbui - Understand the current structure
- Fork the repository - Create your own version
- Experiment freely - The UI layer is independent
- Share your work - Help others learn from your innovations
Remember: The UI is just HTML/CSS/JS talking to the bot via WebSocket. You have complete freedom to reimagine how users interact with your bot!
See Also
- CSS Customization - Detailed CSS guide
- Chapter 4: User Interface - UI templates
- Chapter 6: BASIC - Theme switching in dialogs
- GitHub Repository - Contribute to the project
Next Step
Continue to CSS Customization for detailed styling techniques.
CSS Customization
The gbtheme CSS files define the visual style of the bot UI. They are split into three layers to make them easy to extend.
Files
| File | Role |
|---|---|
| main.css | Core layout, typography, and global variables. |
| components.css | Styles for reusable UI components (buttons, cards, modals). |
| responsive.css | Media queries for mobile, tablet, and desktop breakpoints. |
CSS Variables (in main.css)
:root {
--primary-color: #2563eb;
--secondary-color: #64748b;
--background-color: #ffffff;
--text-color: #1e293b;
--border-radius: 8px;
--spacing-unit: 8px;
}
Changing a variable updates the entire theme without editing individual rules.
Extending the Theme
- Add a new variable – Append to :root and reference it in any selector.
- Override a component – Duplicate the selector in components.css after the original definition; the later rule wins.
- Create a dark mode – Add a @media (prefers-color-scheme: dark) block that redefines the variables.
@media (prefers-color-scheme: dark) {
:root {
--primary-color: #3b82f6;
--background-color: #111827;
--text-color: #f9fafb;
}
}
Best Practices
- Keep the file size small – avoid large image data URIs; store images in assets/.
- Use rem units for font sizes; they scale with the root font-size.
- Limit the depth of nesting; flat selectors improve performance.
All CSS files are loaded in index.html in the order: main.css, components.css, responsive.css.
Component Styling Guide
Message Bubbles
Customize chat message appearance:
/* User messages */
.message-user {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
padding: 12px 16px;
border-radius: 18px 18px 4px 18px;
max-width: 70%;
margin-left: auto;
}
/* Bot messages */
.message-bot {
background: #f7fafc;
color: #2d3748;
padding: 12px 16px;
border-radius: 18px 18px 18px 4px;
max-width: 70%;
border: 1px solid #e2e8f0;
}
/* Typing indicator */
.typing-indicator {
display: inline-flex;
padding: 16px;
background: #edf2f7;
border-radius: 18px;
}
.typing-indicator span {
height: 8px;
width: 8px;
background: #718096;
border-radius: 50%;
margin: 0 2px;
animation: typing 1.4s infinite;
}
Input Field
Style the message input area:
.input-container {
padding: 16px;
background: white;
border-top: 1px solid #e2e8f0;
}
.input-wrapper {
display: flex;
align-items: center;
background: #f7fafc;
border: 2px solid #e2e8f0;
border-radius: 24px;
padding: 8px 16px;
transition: all 0.2s;
}
.input-wrapper:focus-within {
border-color: var(--primary-color);
background: white;
box-shadow: 0 0 0 3px rgba(66, 153, 225, 0.1);
}
.message-input {
flex: 1;
border: none;
background: transparent;
outline: none;
font-size: 16px;
}
.send-button {
background: var(--primary-color);
color: white;
border: none;
border-radius: 50%;
width: 36px;
height: 36px;
display: flex;
align-items: center;
justify-content: center;
cursor: pointer;
transition: transform 0.2s;
}
.send-button:hover {
transform: scale(1.1);
}
.send-button:active {
transform: scale(0.95);
}
Buttons
Consistent button styling:
/* Primary button */
.btn-primary {
background: var(--primary-color);
color: white;
border: none;
padding: 10px 20px;
border-radius: 8px;
font-weight: 500;
cursor: pointer;
transition: all 0.2s;
}
.btn-primary:hover {
filter: brightness(110%);
box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
}
/* Secondary button */
.btn-secondary {
background: transparent;
color: var(--primary-color);
border: 2px solid var(--primary-color);
padding: 8px 18px;
border-radius: 8px;
font-weight: 500;
cursor: pointer;
transition: all 0.2s;
}
.btn-secondary:hover {
background: var(--primary-color);
color: white;
}
/* Icon button */
.btn-icon {
background: transparent;
border: none;
width: 40px;
height: 40px;
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
cursor: pointer;
transition: background 0.2s;
}
.btn-icon:hover {
background: rgba(0, 0, 0, 0.05);
}
Animation Library
Entrance Animations
@keyframes slideInUp {
from {
transform: translateY(20px);
opacity: 0;
}
to {
transform: translateY(0);
opacity: 1;
}
}
@keyframes fadeIn {
from { opacity: 0; }
to { opacity: 1; }
}
@keyframes scaleIn {
from {
transform: scale(0.95);
opacity: 0;
}
to {
transform: scale(1);
opacity: 1;
}
}
/* Apply animations */
.message {
animation: slideInUp 0.3s ease-out;
}
.modal {
animation: scaleIn 0.2s ease-out;
}
Loading States
/* Spinner */
.spinner {
width: 40px;
height: 40px;
border: 3px solid #e2e8f0;
border-top-color: var(--primary-color);
border-radius: 50%;
animation: spin 0.8s linear infinite;
}
@keyframes spin {
to { transform: rotate(360deg); }
}
/* Skeleton loader */
.skeleton {
background: linear-gradient(90deg, #f0f0f0 25%, #e0e0e0 50%, #f0f0f0 75%);
background-size: 200% 100%;
animation: loading 1.5s infinite;
}
@keyframes loading {
0% { background-position: 200% 0; }
100% { background-position: -200% 0; }
}
Responsive Design Patterns
Mobile-First Approach
/* Base mobile styles */
.container {
padding: 16px;
width: 100%;
}
/* Tablet and up */
@media (min-width: 768px) {
.container {
padding: 24px;
max-width: 768px;
margin: 0 auto;
}
}
/* Desktop */
@media (min-width: 1024px) {
.container {
padding: 32px;
max-width: 1024px;
}
}
/* Wide screens */
@media (min-width: 1440px) {
.container {
max-width: 1280px;
}
}
Touch-Friendly Styles
/* Increase touch targets on mobile */
@media (pointer: coarse) {
button, a, input, select {
min-height: 44px;
min-width: 44px;
}
.btn-primary, .btn-secondary {
padding: 12px 24px;
font-size: 16px;
}
}
/* Disable hover effects on touch devices */
@media (hover: none) {
.btn-primary:hover {
filter: none;
box-shadow: none;
}
}
Theme Variants
Dark Mode
@media (prefers-color-scheme: dark) {
:root {
--primary-color: #60a5fa;
--secondary-color: #94a3b8;
--background-color: #0f172a;
--text-color: #f1f5f9;
--border-color: #334155;
}
.message-bot {
background: #1e293b;
color: #f1f5f9;
border-color: #334155;
}
.input-wrapper {
background: #1e293b;
border-color: #334155;
}
}
High Contrast
@media (prefers-contrast: high) {
:root {
--primary-color: #0066cc;
--text-color: #000000;
--background-color: #ffffff;
}
* {
border-width: 2px !important;
}
button:focus, input:focus {
outline: 3px solid #000000 !important;
outline-offset: 2px !important;
}
}
Performance Tips
- Use CSS Variables: Change themes by updating variables, not entire stylesheets
- Minimize Specificity: Keep selectors simple for faster parsing
- Avoid Deep Nesting: Maximum 3 levels deep
- Use Transform/Opacity: For animations instead of layout properties
- Lazy Load Non-Critical CSS: Load theme variations on demand
Browser Compatibility
/* Provide fallbacks for older browsers */
.gradient-bg {
background: #3b82f6; /* Fallback */
background: linear-gradient(135deg, #3b82f6 0%, #8b5cf6 100%);
}
/* Use @supports for progressive enhancement */
@supports (backdrop-filter: blur(10px)) {
.modal-backdrop {
backdrop-filter: blur(10px);
}
}
See Also
- Theme Structure - File organization
- Chapter 4: User Interface - Applying themes to templates
- Chapter 6: BASIC - Dynamic theme switching
Next Step
Return to Chapter 5 Overview or continue to Chapter 6: BASIC Dialogs.
Chapter 06: BASIC + LLM - The Perfect Match
Why BASIC?
In 1964, John Kemeny and Thomas Kurtz created BASIC with a revolutionary idea: programming should be for everyone. Today, General Bots brings this philosophy to the AI era.
With BASIC + LLM, you write:
TALK "What's your name?"
HEAR name
poem = LLM "Create a heartfelt poem for " + name
TALK poem
Not 50 lines of boilerplate.
The Core Keywords
Just SEVEN main keywords power everything:
| Keyword | Purpose | Example |
|---|---|---|
| TALK | Output | TALK "Hello!" |
| HEAR | Input | HEAR name AS NAME |
| USE KB | Knowledge | USE KB "docs" |
| USE TOOL | Functions | USE TOOL "weather" |
| GET | Data | GET "api/users" |
| IF/THEN | Logic | IF age >= 18 THEN ... |
| FOR/NEXT | Loops | FOR i = 1 TO 10 ... |
Your First Tool
In the LLM world, you write tools that AI can use:
' enrollment.bas - An LLM-callable tool
PARAM name AS STRING LIKE "John Smith" DESCRIPTION "Full name"
PARAM email AS STRING LIKE "john@example.com" DESCRIPTION "Email"
PARAM course AS STRING LIKE "Introduction to AI" DESCRIPTION "Course"
DESCRIPTION "Enrolls a student in a course"
SAVE "enrollments.csv", name, email, course, NOW()
TALK "Enrolled " + name + " in " + course
The LLM handles the conversation. You define the action.
Everyone Can Program
You don’t need:
- A computer science degree
- Years of experience
- Understanding of algorithms
You just need:
- An idea
- 10 minutes to learn BASIC
- Creativity
Getting Started
| Time | Goal |
|---|---|
| Minute 1 | TALK "Hello, world!" |
| Minute 5 | Add HEAR and LLM |
| Day 1 | Production-ready bot |
Documentation Guide
| Document | Purpose |
|---|---|
| Basics | Core LLM-first concepts |
| Keywords Reference | Complete keyword list |
| Templates | Real-world examples |
| Universal Messaging | Multi-channel support |
Keyword Categories
- Core: TALK, HEAR
- Context: SET CONTEXT, USE KB
- Memory: GET/SET BOT MEMORY, GET/SET USER MEMORY
- Data: GET, SAVE, FIND
- HTTP: POST, PUT, DELETE
- Files: READ, WRITE, UPLOAD
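A short sketch combining the memory and core categories above (the preferred_language key is illustrative, not a built-in):
' Remember a user preference across sessions
lang = GET USER MEMORY "preferred_language"
IF lang = "" THEN
    TALK "Which language do you prefer?"
    HEAR lang
    SET USER MEMORY "preferred_language", lang
END IF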
The Philosophy
BASIC in General Bots isn’t about controlling conversation flow - it’s about providing tools and context that LLMs use intelligently.
Write simple tools. Let AI handle the complexity.
“The beauty of BASIC lies not in what it can do, but in who it enables to do it.”
Dialog Basics
BASIC dialogs in General Bots are designed for the LLM era - you write tools and context setters, not complex conversation flows.
Core Concepts
| Concept | Description |
|---|---|
| LLM Tools | BASIC scripts that become callable tools for the LLM |
| Context | SET CONTEXT provides knowledge to the LLM |
| Suggestions | ADD SUGGESTION guides conversations |
| Memory | GET/SET BOT/USER MEMORY for persistent data |
LLM-First Example
' Load context from memory
resume = GET BOT MEMORY "announcements"
context = GET BOT MEMORY "company_info"
' Give LLM the context it needs
SET CONTEXT "announcements" AS resume
SET CONTEXT "company" AS context
' Guide the conversation
CLEAR SUGGESTIONS
ADD SUGGESTION "announcements" AS "Show me this week's updates"
ADD SUGGESTION "company" AS "Tell me about the company"
' Start conversation
TALK "What would you like to know?"
Creating LLM Tools
Instead of parsing user input, create tools the LLM can call:
' update-summary.bas - A tool the LLM can invoke
PARAM topic AS STRING LIKE "Q4 Results" DESCRIPTION "Topic to summarize"
PARAM length AS STRING LIKE "brief" DESCRIPTION "brief or detailed"
DESCRIPTION "Creates a summary of the requested topic"
data = GET BOT MEMORY topic
summary = LLM "Summarize this " + length + ": " + data
TALK summary
Traditional vs LLM Approach
| Traditional | LLM + BASIC |
|---|---|
| Parse user input manually | LLM understands naturally |
| Complex IF/ELSE trees | Tools with PARAMs |
| Validate every field | LLM handles validation |
| Design conversation flows | LLM manages conversation |
Tool Pattern Example
' schedule-appointment.bas
PARAM service AS STRING LIKE "consultation" DESCRIPTION "Type of appointment"
PARAM date AS DATE LIKE "tomorrow at 3pm" DESCRIPTION "Preferred date/time"
DESCRIPTION "Schedules an appointment and sends confirmation"
appointment = GET "api/appointments/available" WITH service, date
IF appointment.available THEN
SET BOT MEMORY "last_appointment" AS appointment.id
SEND EMAIL TO user.email WITH appointment.details
TALK "Scheduled your " + service + " for " + date
ELSE
alternatives = GET "api/appointments/suggest" WITH service, date
TALK "That time isn't available. Alternatives: " + alternatives
END IF
Best Practices
| Do | Don’t |
|---|---|
| Write focused tools | Create complex conversation flows |
| Use context wisely | Micromanage the LLM |
| Trust the LLM | Parse user input manually |
| Use suggestions | Force rigid paths |
See Also
- Keywords Reference - Complete keyword list
- Chapter Overview - Philosophy and introduction
- Templates - Real-world examples
Universal Messaging & Multi-Channel
BotServer automatically handles conversations across different channels (Web, WhatsApp, Email, etc.) using the same BASIC scripts. Write once, deploy everywhere.
How It Works
Your BASIC scripts don’t need to know which channel they’re running on. The same TALK and HEAR commands work universally:
TALK "Hello! How can I help you?"
HEAR response
TALK "You said: " + response
This script works identically whether the user is:
- Chatting via web browser
- Messaging on WhatsApp
- Sending emails
- Using Microsoft Teams
Supported Channels
Web (Default)
The primary channel. Users access via browser at http://localhost:8080.
WhatsApp Business
Requires WhatsApp Business API configuration. Messages are automatically formatted for WhatsApp’s constraints.
Email
Bots can receive and respond to emails. Each email thread becomes a conversation session.
Microsoft Teams
Deploy bots directly to Teams channels and direct messages.
Channel Detection
BotServer automatically detects the channel based on the session context. No special code needed:
' This works on ALL channels
TALK "Welcome to our service!"
TALK "What's your name?"
HEAR name
TALK "Nice to meet you, " + name
Channel-Specific Formatting
While your code stays the same, BotServer automatically handles channel-specific formatting:
Web
- Full HTML support
- Rich formatting
- Images and media
- Interactive elements
WhatsApp
- Plain text with emoji
- Media as attachments
- Quick reply buttons
- 1024 character limit per message
Email
- HTML email format
- Subject line handling
- Attachments
- Proper threading
Teams
- Adaptive cards
- @mentions
- Channel vs DM detection
- Teams-specific formatting
Media Handling
Send files and media universally:
' Works on all channels that support files
SEND FILE "report.pdf"
TALK "I've sent you the report."
Each channel handles files appropriately:
- Web: Download link
- WhatsApp: Document attachment
- Email: Email attachment
- Teams: File card
Session Management
Each channel maintains its own session handling:
- Web: Cookie-based sessions
- WhatsApp: Phone number as session ID
- Email: Thread ID as session
- Teams: User/channel context
Configuration
Channel configuration is done in the bot’s config.csv:
channel-web,enabled
channel-whatsapp,enabled
channel-email,enabled
channel-teams,disabled
Best Practices
- Keep messages concise - Some channels have length limits
- Use simple formatting - Not all channels support rich text
- Test on target channels - Ensure your bot works well on each
- Handle media gracefully - Not all channels support all file types
- Consider response times - Email is async, chat is real-time
Channel Limitations
| Channel | Message Length | Media Support | Rich Text | Real-time |
|---|---|---|---|---|
| Web | Unlimited | Full | Yes | Yes |
| WhatsApp | 1024 chars | Images, Docs | Limited | Yes |
| Email | Unlimited | Attachments | HTML | No |
| Teams | 28KB | Full | Adaptive Cards | Yes |
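If a reply might exceed a channel limit, you can trim defensively. A sketch of the idea only: LEFT is assumed here as a standard string helper, and actual message splitting may be handled by the channel adapter.
' Keep replies within WhatsApp's per-message limit
IF LEN(reply) > 1024 THEN
    TALK LEFT(reply, 1000) + "..."
ELSE
    TALK reply
END IF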
Summary
Universal messaging means your BASIC scripts work across all channels without modification. BotServer handles the complexity of channel-specific formatting and delivery, letting you focus on the conversation logic.
BASIC vs Automation Tools: A Practical Comparison
Understanding how General Bots BASIC compares to other automation platforms
Overview
General Bots BASIC provides a conversational-first approach to automation. This chapter compares BASIC with popular automation tools to help you understand when each approach works best.
Comparison Matrix
| Feature | Zapier | n8n | Make | Power Automate | BASIC |
|---|---|---|---|---|---|
| Webhooks | ✅ | ✅ | ✅ | ✅ | ✅ |
| Scheduling | ✅ | ✅ | ✅ | ✅ | ✅ SET SCHEDULE |
| HTTP/REST | ✅ | ✅ | ✅ | ✅ | ✅ |
| GraphQL | ❌ | ✅ | ✅ | ❌ | ✅ |
| SOAP | ❌ | ❌ | ✅ | ✅ | ✅ |
| Database Native | ❌ | ✅ | ✅ | ✅ | ✅ |
| Conversations | ❌ | ❌ | ❌ | ❌ | ✅ |
| WhatsApp Native | Plugin | Plugin | Plugin | Plugin | ✅ Built-in |
| Telegram Native | Plugin | Plugin | Plugin | ❌ | ✅ Built-in |
| Multi-Channel | Limited | Limited | Limited | Limited | ✅ Native |
| LLM Integration | Plugin | Plugin | Plugin | GPT-5 | ✅ Any model |
| Self-Hosted | ❌ | ✅ | ❌ | ❌ | ✅ |
| Open Source | ❌ | ✅ | ❌ | ❌ | ✅ AGPL |
Key Differences
Conversation-First Design
Traditional automation tools focus on backend workflows. BASIC adds interactive conversations:
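For example, a minimal sketch of an interactive step inside a workflow:
TALK "Your order has shipped! Want delivery updates by email?"
HEAR choice
IF choice = "yes" THEN
    SEND MAIL user.email, "Delivery updates enabled", "We'll keep you posted."
END IF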
Multi-Channel Native
The same bot works across all channels without modification:
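For instance, a single line such as:
TALK "Hi! Your January invoice is ready."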
This message reaches users on WhatsApp, Telegram, Web, or any configured channel—same content, adapted formatting.
LLM Model Freedom
BASIC supports any LLM provider:
- OpenAI (GPT-5, o3)
- Anthropic (Claude Sonnet 4.5, Opus 4.5)
- Local models (Llama, Mistral via llama.cpp)
- Groq, DeepSeek, and others
- Any OpenAI-compatible API
Configure in config.csv:
name,value
llm-url,http://localhost:8081
llm-model,model.gguf
When to Use Each Tool
Choose BASIC When You Need
- Interactive workflows - Users participate in the process
- Multi-channel presence - Same bot on WhatsApp, Telegram, Web
- AI-powered conversations - Natural language understanding
- Self-hosted deployment - Full data control
- Open source flexibility - Modify and extend as needed
Choose Traditional Automation When You Need
- Backend-only workflows - No user interaction required
- Visual workflow builders - Prefer drag-and-drop interfaces
- Existing integrations - Specific pre-built connectors
- Team familiarity - Team already knows the tool
Migration Examples
From Zapier
Zapier workflow: Form submission → Slack notification → CRM entry → Welcome email
BASIC equivalent:
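One possible BASIC sketch of that flow, assuming the form payload arrives as a form object (the Slack URL and file name are placeholders):
' Handle a form submission end to end
POST "https://hooks.slack.com/services/...", "New signup: " + form.name
SAVE "crm-leads.csv", form.name, form.email, NOW()
SEND MAIL form.email, "Welcome!", "Thanks for signing up, " + form.name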
From n8n
n8n workflow: Monitor website → Alert on error → Create ticket
BASIC equivalent runs on schedule and notifies immediately:
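One way this could look, with the endpoint and recipient as placeholders:
' Check the site every five minutes and alert on failure
SET SCHEDULE "*/5 * * * *"
status = GET "https://example.com/health"
IF status.ok <> true THEN
    CREATE TASK "Investigate website error"
    SEND MAIL "ops@company.com", "Website down", "Health check failed at " + NOW()
END IF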
Complete Office Suite
BASIC provides built-in capabilities for common office tasks:
| Capability | BASIC Keyword |
|---|---|
| Send email | SEND MAIL |
| Create draft | CREATE DRAFT |
| Schedule meetings | BOOK |
| Manage files | UPLOAD, DOWNLOAD, LIST |
| Create tasks | CREATE TASK |
| Video meetings | CREATE MEETING |
Example: Daily Report Automation
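A sketch of what such a job might look like; the file name, recipient, and data shape are assumptions:
' Send a daily summary at 8:00
SET SCHEDULE "0 8 * * *"
orders = FIND "orders.csv", "date = " + FORMAT(NOW(), "YYYY-MM-DD")
summary = LLM "Write a short daily sales summary: " + TOJSON(orders)
SEND MAIL "team@company.com", "Daily Report", summary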
Getting Started
Quick Start
- Download and run BotServer
- Edit your bot’s
.basfiles - Configure settings in
config.csv - Deploy to any channel
Resources
- Keywords Reference - Complete keyword documentation
- SET SCHEDULE - Automate with schedules
- WEBHOOK - Event-driven automation
- Templates - Ready-to-use examples
Template Variables
Templates support variable substitution using double curly braces {{variable_name}}. Variables are replaced at send time with values from the provided data object.
Built-in Variables
These variables are automatically available in all templates without explicit declaration:
| Variable | Description | Example |
|---|---|---|
| {{recipient}} | Recipient email or phone | john@example.com |
| {{to}} | Alias for recipient | john@example.com |
| {{date}} | Current date (YYYY-MM-DD) | 2025-01-22 |
| {{time}} | Current time (HH:MM) | 14:30 |
| {{datetime}} | Combined date and time | 2025-01-22 14:30 |
| {{year}} | Current year | 2025 |
| {{month}} | Current month name | January |
Custom Variables
Pass custom variables via the variables parameter in SEND TEMPLATE:
WITH vars
.name = "John"
.company = "Acme Corp"
.product = "Pro Plan"
.discount = "20%"
END WITH
SEND TEMPLATE "welcome", "email", "john@example.com", vars
The template content would reference these variables:
Hello {{name}},
Welcome to {{company}}! You've signed up for {{product}}.
As a special offer, use code WELCOME for {{discount}} off your first purchase.
Best regards,
The Team
Channel-Specific Templates
Email Templates
Email templates support automatic Subject: line extraction. Place the subject on the first line:
Subject: Welcome to {{company}}, {{name}}!
Hello {{name}},
Thank you for joining us...
The system extracts the subject line and uses the remainder as the body.
WhatsApp Templates
WhatsApp templates must be pre-approved by Meta. Use numbered placeholders as required by the WhatsApp Business API:
Hello {{1}}, your order {{2}} has shipped. Track at {{3}}
Map variables using numeric keys:
WITH vars
.1 = customer_name
.2 = order_id
.3 = tracking_url
END WITH
SEND TEMPLATE "order-shipped", "whatsapp", phone, vars
SMS Templates
Keep SMS templates under 160 characters for single-segment delivery:
Hi {{name}}, your code is {{code}}. Valid for 10 minutes.
Template Examples
Welcome Email
Subject: Welcome to {{company}}!
Hi {{name}},
Thanks for signing up on {{date}}. Here's what you can do next:
1. Complete your profile
2. Explore our features
3. Join our community
Questions? Reply to this email.
Best,
{{company}} Team
Order Confirmation
Subject: Order #{{order_id}} Confirmed
Hi {{name}},
Your order has been confirmed!
Order: #{{order_id}}
Date: {{date}}
Total: {{total}}
Items:
{{items}}
Shipping to:
{{address}}
Track your order: {{tracking_url}}
Appointment Reminder
Subject: Reminder: {{appointment_type}} tomorrow
Hi {{name}},
This is a reminder of your upcoming appointment:
Date: {{appointment_date}}
Time: {{appointment_time}}
Location: {{location}}
Need to reschedule? Reply to this email or call {{phone}}.
See you soon!
Creating Templates
Via BASIC
CREATE TEMPLATE "welcome", "email", "Welcome {{name}}!", "Hello {{name}}, thank you for joining {{company}}!"
Retrieving Templates
template = GET TEMPLATE "welcome"
TALK "Template body: " + template.body
Variable Extraction
Variables are automatically extracted from template content when the template is created. The system identifies all {{variable}} patterns and stores them for validation. Built-in variables (recipient, date, time, etc.) are excluded from the extraction.
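For illustration, assuming the extracted list is exposed as a variables property on the template object (matching the database column described below):
CREATE TEMPLATE "promo", "email", "Deal for {{name}}", "Hi {{name}}, enjoy {{discount}} off until {{date}}."
template = GET TEMPLATE "promo"
' name and discount are extracted; date is a built-in and excluded
TALK "Variables: " + TOJSON(template.variables)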
Fallback Values
Handle missing variables using NVL in your code:
WITH vars
.name = NVL(user_name, "Friend")
.company = NVL(user_company, "your organization")
END WITH
SEND TEMPLATE "greeting", "email", email, vars
Multi-Channel Delivery
Send the same template to multiple channels in one call:
WITH vars
.name = "John"
.message = "Your appointment is confirmed"
END WITH
SEND TEMPLATE "appointment-confirm", "email,sms,whatsapp", recipient, vars
Or send channel-specific versions:
SEND TEMPLATE "appointment-email", "email", email, vars
SEND TEMPLATE "appointment-sms", "sms", phone, vars
Bulk Sending
Send templates to multiple recipients:
recipients = ["a@example.com", "b@example.com", "c@example.com"]
count = SEND TEMPLATE "newsletter" TO "email" recipients, #{month: "January"}
TALK "Sent to " + count + " recipients"
Best Practices
Keep variable names simple. Use name rather than customer_first_name_from_database. Shorter names are easier to maintain.
Provide fallbacks. Always handle the case where a variable might be missing or empty.
Test templates. Verify all variables populate correctly before deploying to production.
Respect channel limits. SMS has a 160-character single-segment limit. WhatsApp templates require Meta approval.
Personalize thoughtfully. Using {{name}} improves engagement, but avoid over-personalization that feels intrusive.
Include unsubscribe options. Marketing emails should always provide an unsubscribe mechanism.
Database Storage
Templates are stored in the message_templates table:
| Column | Type | Description |
|---|---|---|
| id | UUID | Template identifier |
| bot_id | UUID | Owning bot |
| name | TEXT | Template name |
| channel | TEXT | email/whatsapp/sms/telegram/push |
| subject | TEXT | Email subject (nullable) |
| body | TEXT | Template body |
| variables | JSONB | List of variable names |
| is_active | BOOL | Active status |
See Also
- SEND TEMPLATE Keyword - Full keyword reference
- SET SCHEDULE - Scheduled template delivery
- Universal Messaging - Multi-channel patterns
Template Examples
Templates are pre-built BASIC scripts that demonstrate common use cases and patterns. Each template includes complete code, explanations, and interactive WhatsApp-style sample dialogs showing how the bot behaves in real conversations.
Available Templates
🚀 start.bas
Topic: Basic Greeting & Help Flow
The simplest possible bot - learn BASIC fundamentals with a greeting flow that demonstrates SET, TALK, HEAR, and IF/ELSE.
Perfect for:
- Learning BASIC syntax
- Quick demos
- Starting point for new bots
📋 enrollment.bas
Topic: User Registration & Data Collection
A complete data collection workflow that gathers user information step-by-step, validates inputs, confirms details, and saves the data.
Perfect for:
- Customer onboarding
- Event registrations
- Lead capture forms
- Survey collection
🔐 auth.bas
Topic: Authentication Patterns
Secure user authentication flows including login, registration, password reset, and session management.
Perfect for:
- User login systems
- Account verification
- Password recovery
- Session handling
Template Structure
Each template documentation includes:
- Topic Description - What the template is for
- The Code - Complete, working BASIC script
- Sample Dialogs - WhatsApp-style conversations showing real interactions
- Keywords Used - Quick reference of BASIC keywords
- Customization Ideas - Ways to extend the template
Using Templates
Method 1: Copy and Customize
Copy the template code into your .gbdialog folder and modify it:
' Copy start.bas and customize
SET user_name = "Guest"
TALK "Hello, " + user_name + "! Welcome to My Company."
HEAR user_input
' ... add your logic
Method 2: Include Templates
Use the INCLUDE keyword to use templates as building blocks:
INCLUDE "templates/auth.bas"
' Now use auth functions
CALL authenticate_user()
Method 3: Use as Reference
Study the templates to learn patterns, then write your own:
' Learned from enrollment.bas pattern
PARAM name AS string LIKE "John Doe"
DESCRIPTION "User's full name"
TALK "What's your name?"
HEAR name
' ... continue with your logic
More Templates
The templates/ directory contains 20+ ready-to-use bot configurations:
| Template | Description |
|---|---|
| default.gbai | Basic bot with weather, email, and calculation tools |
| edu.gbai | Educational bot for course management |
| crm.gbai | Customer relationship management |
| announcements.gbai | Broadcast messaging system |
| whatsapp.gbai | WhatsApp Business integration |
| store.gbai | E-commerce bot |
| healthcare.gbai | Healthcare appointment scheduling |
| hr.gbai | Human resources assistant |
| finance.gbai | Financial services bot |
| marketing.gbai | Marketing automation |
| reminder.gbai | Task and reminder management |
| backup.gbai | Automated backup workflows |
| crawler.gbai | Web crawling and data extraction |
Related
- BASIC vs n8n/Zapier/Make - Why BASIC beats drag-and-drop tools
- Keywords Reference - Complete keyword documentation
- Consolidated Examples - More code examples
Start Template
The start template is the simplest possible bot - a greeting flow that demonstrates the core interaction pattern of BASIC: greeting users and responding to their input.
Topic: Basic Greeting & Help Flow
This template is perfect for:
- Learning BASIC fundamentals
- Simple FAQ bots
- Quick demos
- Starting point for more complex bots
The Code
REM Basic greeting and help flow
SET user_name = "Guest"
TALK "Hello, " + user_name + "! How can I help you today?"
HEAR user_input
IF user_input = "help" THEN
TALK "Sure, I can assist with account info, orders, or support."
ELSE
TALK "Sorry, I didn't understand. Type 'help' for options."
END IF
Sample Dialogs
These conversations show how the start template works in real-world scenarios.
Dialog 1: User Asks for Help
Dialog 2: Unknown Input
Dialog 3: Personalized Greeting (Enhanced Version)
When you add user detection, the experience improves:
Keywords Used
| Keyword | Purpose |
|---|---|
| SET | Assign a value to a variable |
| TALK | Send a message to the user |
| HEAR | Wait for and capture user input |
| IF/ELSE | Conditional branching based on input |
How It Works
- Variable Setup: SET creates a variable to hold the user's name
- Greeting: TALK sends the welcome message
- Input Capture: HEAR waits for user response
- Response Logic: IF/ELSE determines what to say back
Enhanced Version
Here’s the same template enhanced with LLM for natural understanding:
REM Smart greeting flow with LLM
SET user_name = "Guest"
TALK "Hello, " + user_name + "! How can I help you today?"
HEAR user_input
' Let LLM understand intent
intent = LLM "Classify this user message into one category: help, account, orders, support, other. Message: " + user_input
SWITCH intent
CASE "help"
TALK "I can assist with account info, orders, or support."
CASE "account"
TALK "Let me pull up your account information..."
CASE "orders"
TALK "I'll check on your recent orders..."
CASE "support"
TALK "Connecting you with our support team..."
DEFAULT
response = LLM "Respond helpfully to: " + user_input
TALK response
END SWITCH
Customization Ideas
Add User Detection
' Get user info if available
user_name = GET BOT MEMORY "user_" + user_id + "_name"
IF user_name = "" THEN
TALK "Hi there! What's your name?"
HEAR user_name
SET BOT MEMORY "user_" + user_id + "_name", user_name
END IF
TALK "Welcome back, " + user_name + "!"
Add Quick Reply Buttons
ADD SUGGESTION "Account Info"
ADD SUGGESTION "My Orders"
ADD SUGGESTION "Get Support"
TALK "What would you like help with?"
HEAR choice
Add Time-Based Greeting
hour = HOUR(NOW())
IF hour < 12 THEN
greeting = "Good morning"
ELSE IF hour < 18 THEN
greeting = "Good afternoon"
ELSE
greeting = "Good evening"
END IF
TALK greeting + ", " + user_name + "!"
Related Templates
- enrollment.bas - Multi-step data collection
- auth.bas - User authentication patterns
Enrollment Template
The enrollment template demonstrates how to build a complete data collection workflow that gathers user information step-by-step, validates inputs, confirms details, and saves the data.
Topic: User Registration & Data Collection
This template is perfect for:
- Customer onboarding flows
- Event registrations
- Lead capture forms
- Survey collection
- Application submissions
The Code
REM Enrollment Tool Example
PARAM name AS string LIKE "Abreu Silva"
DESCRIPTION "Required full name of the individual."
PARAM birthday AS date LIKE "23/09/2001"
DESCRIPTION "Required birth date of the individual in DD/MM/YYYY format."
PARAM email AS string LIKE "abreu.silva@example.com"
DESCRIPTION "Required email address for contact purposes."
PARAM personalid AS integer LIKE "12345678900"
DESCRIPTION "Required Personal ID number of the individual (only numbers)."
PARAM address AS string LIKE "Rua das Flores, 123 - SP"
DESCRIPTION "Required full address of the individual."
DESCRIPTION "This is the enrollment process, called when the user wants to enrol."
REM Start enrollment
TALK "Welcome to the enrollment process! Let's get you registered."
TALK "First, what is your full name?"
HEAR name
TALK "Thank you. What is your birth date? (DD/MM/YYYY)"
HEAR birthday
TALK "What is your email address?"
HEAR email
TALK "Please provide your Personal ID number (numbers only):"
HEAR personalid
TALK "Finally, what is your full address?"
HEAR address
REM Validate and confirm
TALK "Please confirm your details:"
TALK "Name: " + name
TALK "Birth Date: " + birthday
TALK "Email: " + email
TALK "Personal ID: " + personalid
TALK "Address: " + address
TALK "Are these details correct? (yes/no)"
HEAR confirmation
IF confirmation = "yes" THEN
SAVE "enrollments.csv", name, birthday, email, personalid, address
TALK "Thank you! Your enrollment has been successfully submitted."
ELSE
TALK "Let's start over with the correct information."
END IF
Sample Dialogs
These conversations show how the enrollment template works in real-world scenarios.
Dialog 1: Successful Enrollment
Dialog 2: User Corrects Information
Dialog 3: LLM-Assisted Natural Input
When using the LLM, users can provide information naturally:
Keywords Used
| Keyword | Purpose |
|---|---|
| PARAM | Define expected input parameters with types and examples |
| DESCRIPTION | Provide context for LLM understanding |
| TALK | Send messages to the user |
| HEAR | Wait for and capture user input |
| IF/ELSE | Conditional logic for confirmation |
| SAVE | Persist data to CSV file |
How It Works
- Parameter Definition: The PARAM declarations tell the LLM what information to collect
- Step-by-Step Collection: Each HEAR captures one piece of data
- Confirmation Loop: User reviews all data before submission
- Data Persistence: SAVE stores the validated data
Customization Ideas
Add Validation
HEAR email
IF NOT INSTR(email, "@") THEN
TALK "Please enter a valid email address"
HEAR email
END IF
Add to Database Instead of CSV
INSERT "users", name, birthday, email, personalid, address
Send Confirmation Email
SEND MAIL email, "Welcome!", "Your registration is complete, " + name
Related Templates
Authentication Template
The authentication template demonstrates secure user verification flows including login, registration, password validation, and session management.
Topic: User Authentication & Security
This template is perfect for:
- User login systems
- Account verification
- Password recovery flows
- Session management
- Two-factor authentication
The Code
REM Authentication Flow with Retry Logic
PARAM username AS string LIKE "john.doe"
DESCRIPTION "Username or email for authentication"
PARAM password AS string LIKE "********"
DESCRIPTION "User's password (masked input)"
SET max_attempts = 3
SET attempts = 0
TALK "Welcome! Please enter your username:"
HEAR username
LABEL auth_loop
TALK "Enter your password:"
HEAR password AS PASSWORD ' Masked input
' Verify credentials
user = FIND "users", "username='" + username + "'"
IF user = NULL THEN
TALK "Username not found. Would you like to register? (yes/no)"
HEAR register_choice
IF register_choice = "yes" THEN
GOTO registration
ELSE
TALK "Goodbye!"
EXIT
END IF
END IF
IF user.password = HASH(password) THEN
SET BOT MEMORY "authenticated_user", username
SET BOT MEMORY "session_start", NOW()
TALK "Welcome back, " + user.name + "! You are now logged in."
EXIT
ELSE
SET attempts = attempts + 1
IF attempts >= max_attempts THEN
TALK "Too many failed attempts. Your account is temporarily locked."
SEND MAIL user.email, "Security Alert", "Multiple failed login attempts detected."
EXIT
END IF
TALK "Incorrect password. " + (max_attempts - attempts) + " attempts remaining."
GOTO auth_loop
END IF
LABEL registration
TALK "Let's create your account. Enter your email:"
HEAR email
TALK "Create a password (min 8 characters):"
HEAR new_password AS PASSWORD
IF LEN(new_password) < 8 THEN
TALK "Password too short. Please try again."
GOTO registration
END IF
INSERT "users", username, email, HASH(new_password), NOW()
TALK "Account created! You can now log in."
Sample Dialogs
These conversations show how the authentication template works in real-world scenarios.
Dialog 1: Successful Login
Dialog 2: Failed Login with Retry
Dialog 3: Account Locked
Dialog 4: New User Registration
Keywords Used
| Keyword | Purpose |
|---|---|
| PARAM | Define expected input parameters |
| SET | Assign values to variables |
| TALK | Send messages to the user |
| HEAR | Capture user input |
| HEAR AS PASSWORD | Masked password input |
| FIND | Query database for user |
| IF/ELSE | Conditional logic |
| GOTO/LABEL | Flow control for retry loop |
| HASH | Secure password hashing |
| SET BOT MEMORY | Store session data |
| SEND MAIL | Send security alerts |
| INSERT | Create new user record |
| EXIT | End the dialog |
How It Works
- Username Input: Collects the username first
- User Lookup: Checks if user exists in database
- Password Verification: Compares hashed password
- Retry Logic: Allows 3 attempts before lockout
- Session Creation: Stores auth state in bot memory
- Registration: Offers new account creation if user not found
Security Features
Password Hashing
' Never store plain text passwords!
hashed = HASH(password)
INSERT "users", username, email, hashed
Rate Limiting
IF attempts >= max_attempts THEN
SET BOT MEMORY "locked_" + username, NOW()
TALK "Account locked for 15 minutes."
END IF
Two-Factor Authentication
' Send OTP after password verification
otp = RANDOM(100000, 999999)
SET BOT MEMORY "otp_" + username, otp
SEND MAIL email, "Your verification code", "Code: " + otp
TALK "Enter the 6-digit code sent to your email:"
HEAR user_otp
IF user_otp = GET BOT MEMORY "otp_" + username THEN
TALK "Two-factor authentication successful!"
ELSE
TALK "Invalid code."
END IF
Customization Ideas
Add “Forgot Password”
TALK "Forgot your password? (yes/no)"
HEAR forgot
IF forgot = "yes" THEN
reset_token = RANDOM_STRING(32)
SET BOT MEMORY "reset_" + username, reset_token
SEND MAIL user.email, "Password Reset", "Click here: /reset/" + reset_token
TALK "Password reset link sent to your email."
END IF
Session Timeout
session_start = GET BOT MEMORY "session_start"
IF DATEDIFF("minute", session_start, NOW()) > 30 THEN
TALK "Session expired. Please log in again."
SET BOT MEMORY "authenticated_user", ""
END IF
Social Login
TALK "Login with: 1) Password 2) Google 3) GitHub"
HEAR login_method
SWITCH login_method
CASE "2"
' Redirect to OAuth
url = GET "auth/google/redirect"
TALK "Click to login: " + url
CASE "3"
url = GET "auth/github/redirect"
TALK "Click to login: " + url
DEFAULT
' Standard password flow
END SWITCH
Related Templates
- start.bas - Basic greeting flow
- enrollment.bas - Data collection patterns
AI Search Template
The AI Search template provides an intelligent document search bot that uses AI to answer questions based on your uploaded documents. It combines vector search with large language models for accurate, context-aware responses.
Topic: AI-Powered Document Search & Q&A
This template is perfect for:
- Knowledge base assistants
- Document-based customer support
- Internal documentation search
- FAQ automation with source documents
The Code
REM AI Search - Intelligent Document Q&A
REM Uses RAG (Retrieval Augmented Generation) for accurate answers
' Add search tools
ADD TOOL "search-documents"
ADD TOOL "summarize-document"
' Use the knowledge base
USE KB "ai-search.gbkb"
' Set up the AI context
SET CONTEXT "document-search" AS "You are a helpful document search assistant. Answer questions based on the documents in your knowledge base. Always cite your sources when possible. If the answer is not in the documents, say so clearly."
' Clear and add suggestions
CLEAR SUGGESTIONS
ADD SUGGESTION "search" AS "Search documents"
ADD SUGGESTION "summary" AS "Get document summary"
ADD SUGGESTION "help" AS "How to use"
BEGIN TALK
**AI Search Assistant** 🔍
I can help you find information in your documents using AI-powered search.
**What I can do:**
• Search across all uploaded documents
• Answer questions with context
• Summarize long documents
• Find specific information quickly
Just ask me a question or describe what you're looking for.
END TALK
BEGIN SYSTEM PROMPT
You are an AI document search assistant with access to a knowledge base of documents.
When answering questions:
1. Search the knowledge base for relevant information
2. Provide accurate answers based on the documents
3. Cite the source document when possible
4. If information isn't found, clearly state that
5. Offer to search for related topics
Be concise but thorough. Always prioritize accuracy over speed.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the AI Search template works in real-world scenarios.
Dialog 1: Document Search Query
Dialog 2: Information Not Found
Dialog 3: Document Summary Request
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register search and summary tools |
| USE KB | Connect to the knowledge base |
| SET CONTEXT | Define the AI's role and behavior |
| ADD SUGGESTION | Create quick action buttons |
| BEGIN TALK/END TALK | Multi-line welcome message |
| BEGIN SYSTEM PROMPT/END SYSTEM PROMPT | Define AI behavior rules |
How It Works
- Knowledge Base Connection: USE KB loads your documents for semantic search
- Tool Registration: ADD TOOL enables search and summarization capabilities
- Context Setting: SET CONTEXT tells the AI how to behave
- RAG Process: System searches documents, retrieves relevant chunks
- AI Response: LLM generates answer based on retrieved context
Template Structure
ai-search.gbai/
├── ai-search.gbdialog/
│ ├── start.bas # Main entry point
│ └── qr.bas # QR code handler
├── ai-search.gbdrive/
│ └── manuals/ # Folder for PDF documents
│ └── 42LB5800.pdf # Example manual
├── ai-search.gbkb/
│ └── docs/ # Knowledge base documents
│ └── README.md # KB documentation
└── ai-search.gbot/
└── config.csv # Bot configuration
Customization Ideas
Add Document Categories
ADD SUGGESTION "manuals" AS "📚 Product Manuals"
ADD SUGGESTION "policies" AS "📋 Policies"
ADD SUGGESTION "tutorials" AS "🎓 Tutorials"
HEAR category
SWITCH category
CASE "manuals"
USE KB "manuals.gbkb"
CASE "policies"
USE KB "policies.gbkb"
CASE "tutorials"
USE KB "tutorials.gbkb"
END SWITCH
Add Source Citations
SET CONTEXT "search-with-citations" AS "Always include the document name and page number when citing information. Format: [Document Name, Page X]"
Add Search Filters
PARAM search_query AS STRING LIKE "how to reset" DESCRIPTION "What to search for"
PARAM doc_type AS STRING LIKE "manual" DESCRIPTION "Type of document: manual, policy, guide"
DESCRIPTION "Search documents with optional type filter"
IF doc_type <> "" THEN
results = FIND "documents.csv", "type = '" + doc_type + "'"
' Search within filtered results
ELSE
' Search all documents
END IF
Add Follow-up Questions
TALK "Here's what I found about your question..."
TALK response
TALK "Would you like me to:"
ADD SUGGESTION "more" AS "Tell me more"
ADD SUGGESTION "related" AS "Show related topics"
ADD SUGGESTION "new" AS "Ask new question"
HEAR followup
IF followup = "more" THEN
' Provide more detail
ELSE IF followup = "related" THEN
' Show related topics
END IF
Best Practices
- Organize Documents: Keep documents in logical folders within .gbdrive
- Update Regularly: Re-index knowledge base when documents change
- Clear Context: Set a specific context to improve answer relevance
- Handle Missing Info: Always gracefully handle cases where info isn’t found
- Cite Sources: Configure the AI to cite document sources for credibility
Related Templates
- talk-to-data.md - Query structured data with natural language
- crawler.md - Crawl websites to build knowledge bases
Analytics Dashboard Template
The analytics dashboard template provides real-time insights into your knowledge base performance, document statistics, and system health metrics. It uses pre-computed statistics stored in bot memory for fast loading.
Topic: Knowledge Base Analytics & Monitoring
This template is perfect for:
- Monitoring knowledge base growth
- Tracking document indexing status
- System health monitoring
- Capacity planning
The Code
REM Analytics Dashboard Start Dialog
REM Displays pre-computed statistics from update-stats.bas
DESCRIPTION "View knowledge base analytics and statistics"
REM Load pre-computed values from BOT MEMORY
totalDocs = GET BOT MEMORY("analytics_total_docs")
totalVectors = GET BOT MEMORY("analytics_total_vectors")
storageMB = GET BOT MEMORY("analytics_storage_mb")
collections = GET BOT MEMORY("analytics_collections")
docsWeek = GET BOT MEMORY("analytics_docs_week")
docsMonth = GET BOT MEMORY("analytics_docs_month")
growthRate = GET BOT MEMORY("analytics_growth_rate")
healthPercent = GET BOT MEMORY("analytics_health_percent")
lastUpdate = GET BOT MEMORY("analytics_last_update")
REM Set contexts for different report types
SET CONTEXT "overview" AS "Total documents: " + totalDocs + ", Storage: " + storageMB + " MB"
SET CONTEXT "activity" AS "Documents added this week: " + docsWeek + ", Growth rate: " + growthRate + "%"
SET CONTEXT "health" AS "System health: " + healthPercent + "%, Last updated: " + lastUpdate
REM Setup suggestions
CLEAR SUGGESTIONS
ADD SUGGESTION "overview" AS "Show overview"
ADD SUGGESTION "activity" AS "Recent activity"
ADD SUGGESTION "health" AS "System health"
REM Display dashboard
TALK "📊 **Analytics Dashboard**"
TALK ""
TALK "**Knowledge Base Overview**"
TALK "• Documents: " + FORMAT(totalDocs, "#,##0")
TALK "• Vectors: " + FORMAT(totalVectors, "#,##0")
TALK "• Storage: " + FORMAT(storageMB, "#,##0.00") + " MB"
TALK ""
TALK "Ask me about any metric or select a topic above."
Sample Dialogs
These conversations show how the analytics dashboard works in real-world scenarios.
Dialog 1: Viewing Overview Statistics
Dialog 2: Checking System Health
Dialog 3: Statistics Not Yet Computed
Keywords Used
| Keyword | Purpose |
|---|---|
| GET BOT MEMORY | Retrieve pre-computed statistics |
| SET CONTEXT | Provide context for AI responses |
| CLEAR SUGGESTIONS | Reset quick reply options |
| ADD SUGGESTION | Add quick reply buttons |
| TALK | Display formatted statistics |
| FORMAT | Format numbers with separators |
How It Works
- Load Statistics: Pre-computed values are retrieved from bot memory
- Set Contexts: Different contexts are set for overview, activity, and health queries
- Setup UI: Quick reply suggestions are configured
- Display Dashboard: Formatted statistics are shown to the user
The Update Stats Job
Statistics are pre-computed by a scheduled job to ensure fast dashboard loading:
REM update-stats.bas - Scheduled job to compute analytics
SET SCHEDULE "0 * * * *" REM Run every hour
REM Compute statistics
totalDocs = KB DOCUMENTS COUNT()
totalVectors = KB STATISTICS().total_vectors
storageMB = KB STORAGE SIZE() / 1024 / 1024
collections = UBOUND(KB LIST COLLECTIONS())
REM Calculate activity
docsWeek = KB DOCUMENTS ADDED SINCE(NOW() - 7)
docsMonth = KB DOCUMENTS ADDED SINCE(NOW() - 30)
REM Store in bot memory
SET BOT MEMORY "analytics_total_docs", totalDocs
SET BOT MEMORY "analytics_total_vectors", totalVectors
SET BOT MEMORY "analytics_storage_mb", storageMB
SET BOT MEMORY "analytics_collections", collections
SET BOT MEMORY "analytics_docs_week", docsWeek
SET BOT MEMORY "analytics_docs_month", docsMonth
SET BOT MEMORY "analytics_last_update", NOW()
TALK "Analytics updated successfully."
Customization Ideas
Add Export Functionality
ADD TOOL "export-stats"
REM In export-stats.bas
PARAM format AS STRING LIKE "csv" DESCRIPTION "Export format: csv, json, xlsx"
data = []
data.total_docs = GET BOT MEMORY("analytics_total_docs")
data.total_vectors = GET BOT MEMORY("analytics_total_vectors")
data.storage_mb = GET BOT MEMORY("analytics_storage_mb")
IF format = "csv" THEN
SAVE "analytics-export.csv", data
TALK "📥 Analytics exported to analytics-export.csv"
ELSE IF format = "json" THEN
WRITE "analytics-export.json", TOJSON(data)
TALK "📥 Analytics exported to analytics-export.json"
END IF
Add Alerting
REM Check for issues and alert
IF healthPercent < 90 THEN
SEND MAIL "admin@company.com", "System Health Alert", "Health dropped to " + healthPercent + "%"
END IF
IF storageMB > 900 THEN
SEND MAIL "admin@company.com", "Storage Warning", "Storage usage at " + storageMB + " MB"
END IF
Add Trend Visualization
REM Generate a simple trend chart
ADD TOOL "show-trend"
REM Collect historical data
history = FIND "analytics_history.csv", "date > " + FORMAT(NOW() - 30, "YYYY-MM-DD")
REM Create chart
chart = CREATE CHART "line", history, "date", "documents"
TALK chart
Related Templates
- backup.bas - Backup management and monitoring
- start.bas - Basic bot structure
Announcements Template
The announcements template provides a company communication system for sharing weekly updates, circulars, and organizational news through an AI-powered conversational interface.
Topic: Company Announcements & Communications
This template is perfect for:
- Weekly company announcements
- Internal circulars distribution
- Multi-topic news aggregation
- Organizational communications
- Employee information portals
The Code
resume1 = GET BOT MEMORY("resume")
resume2 = GET BOT MEMORY("auxiliom")
resume3 = GET BOT MEMORY("toolbix")
SET CONTEXT "general" AS resume1
SET CONTEXT "auxiliom" AS resume2
SET CONTEXT "toolbix" AS resume3
CLEAR SUGGESTIONS
ADD SUGGESTION "general" AS "Weekly announcements"
ADD SUGGESTION "general" AS "Latest circulars"
ADD SUGGESTION "auxiliom" AS "What is Auxiliom?"
ADD SUGGESTION "auxiliom" AS "Auxiliom services"
ADD SUGGESTION "toolbix" AS "Toolbix features"
ADD SUGGESTION "toolbix" AS "Toolbix for business"
ADD TOOL "change-subject"
TALK resume1
TALK "Ask me about any announcement or circular."
Sample Dialogs
These conversations show how the announcements template works in real-world scenarios.
Dialog 1: Weekly Announcements
Dialog 2: Switching Topics
Dialog 3: Latest Circulars
Keywords Used
| Keyword | Purpose |
|---|---|
| GET BOT MEMORY | Retrieve stored announcement summaries |
| SET CONTEXT | Define contexts for different topics |
| CLEAR SUGGESTIONS | Reset quick reply options |
| ADD SUGGESTION | Create topic-based quick replies |
| ADD TOOL | Register topic switching tool |
| TALK | Display announcements to user |
How It Works
- Load Content: Pre-stored summaries are retrieved from bot memory
- Multi-Context Setup: Different contexts for each announcement topic
- Topic Suggestions: Quick replies organized by topic category
- Dynamic Display: Current announcements shown on start
- Topic Switching: Users can change subjects using the tool
Template Structure
announcements.gbai/
├── announcements.gbdialog/
│ ├── start.bas # Main entry point
│ ├── auth.bas # Admin authentication
│ ├── change-subject.bas # Topic switching
│ └── update-summary.bas # Update announcements
├── announcements.gbkb/
│ ├── auxiliom/ # Auxiliom topic KB
│ ├── news/ # General news KB
│ └── toolbix/ # Toolbix topic KB
└── announcements.gbot/
└── config.csv # Bot configuration
Change Subject Tool: change-subject.bas
PARAM subject AS STRING LIKE "toolbix" DESCRIPTION "Topic to switch to: general, auxiliom, toolbix"
DESCRIPTION "Change the current announcement topic"
subject_lower = LCASE(subject)
IF subject_lower = "general" OR INSTR(subject_lower, "news") > 0 OR INSTR(subject_lower, "announcement") > 0 THEN
resume = GET BOT MEMORY("resume")
SET CONTEXT "current" AS resume
TALK "📰 Switched to **General Announcements**"
TALK resume
ELSE IF subject_lower = "auxiliom" THEN
resume = GET BOT MEMORY("auxiliom")
SET CONTEXT "current" AS resume
TALK "🔧 Switched to **Auxiliom**"
TALK resume
ELSE IF subject_lower = "toolbix" THEN
resume = GET BOT MEMORY("toolbix")
SET CONTEXT "current" AS resume
TALK "🛠️ Switched to **Toolbix**"
TALK resume
ELSE
TALK "Available topics: General Announcements, Auxiliom, Toolbix"
TALK "Which topic would you like?"
END IF
RETURN subject_lower
Update Summary Tool: update-summary.bas
PARAM topic AS STRING LIKE "general" DESCRIPTION "Topic to update"
PARAM content AS STRING DESCRIPTION "New summary content"
DESCRIPTION "Update the announcement summary for a topic (admin only)"
' Verify admin access
IF NOT IS_ADMIN(user_id) THEN
TALK "⚠️ This action requires administrator privileges."
RETURN NULL
END IF
topic_lower = LCASE(topic)
IF topic_lower = "general" THEN
SET BOT MEMORY "resume", content
ELSE IF topic_lower = "auxiliom" THEN
SET BOT MEMORY "auxiliom", content
ELSE IF topic_lower = "toolbix" THEN
SET BOT MEMORY "toolbix", content
ELSE
TALK "Unknown topic. Use: general, auxiliom, or toolbix"
RETURN NULL
END IF
' Log the update
WITH updateLog
timestamp = NOW()
updatedBy = user_id
topicUpdated = topic_lower
contentLength = LEN(content)
END WITH
SAVE "announcement_log.csv", updateLog
TALK "✅ " + topic + " summary updated successfully!"
TALK "Changes are now live."
RETURN topic_lower
Customization Ideas
Add Email Distribution
ADD TOOL "send-announcement"
PARAM announcement AS STRING DESCRIPTION "Announcement to distribute"
PARAM recipients AS STRING LIKE "all" DESCRIPTION "Recipients: all, managers, department name"
' Get recipient list
IF recipients = "all" THEN
employees = FIND "employees.csv"
ELSE IF recipients = "managers" THEN
employees = FIND "employees.csv", "role = 'manager'"
ELSE
employees = FIND "employees.csv", "department = '" + recipients + "'"
END IF
FOR EACH emp IN employees
SEND MAIL emp.email, "Company Announcement", announcement
WAIT 1
NEXT
TALK "📧 Announcement sent to " + UBOUND(employees) + " recipients."
Add Announcement Categories
CLEAR SUGGESTIONS
ADD SUGGESTION "hr" AS "HR Updates"
ADD SUGGESTION "it" AS "IT Announcements"
ADD SUGGESTION "finance" AS "Finance News"
ADD SUGGESTION "events" AS "Upcoming Events"
ADD SUGGESTION "policy" AS "Policy Changes"
ADD SUGGESTION "all" AS "All Announcements"
Add Read Receipts
' Track who has read announcements
WITH readReceipt
userId = user_id
announcementId = current_announcement_id
readAt = NOW()
END WITH
SAVE "read_receipts.csv", readReceipt
' Check read percentage
total = COUNT("employees.csv")
reads = COUNT("read_receipts.csv", "announcementId = '" + current_announcement_id + "'")
percentage = (reads / total) * 100
TALK "📊 " + FORMAT(percentage, "#0") + "% of employees have read this announcement."
Add Scheduled Announcements
PARAM schedule_time AS STRING LIKE "2025-01-20 09:00" DESCRIPTION "When to publish"
PARAM announcement AS STRING DESCRIPTION "Announcement content"
SET SCHEDULE schedule_time
SET BOT MEMORY "resume", announcement
' Notify all employees
employees = FIND "employees.csv"
FOR EACH emp IN employees
TALK TO emp.phone, "📢 New announcement: " + LEFT(announcement, 100) + "..."
NEXT
TALK "Announcement published and distributed."
Best Practices
- Keep It Current: Update announcements regularly
- Organize by Topic: Use clear topic categories
- Summarize: Start with key points, allow drill-down
- Archive Old News: Move outdated items to an archive (see the sketch after this list)
- Track Engagement: Monitor which topics get most questions
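A minimal archiving sketch, assuming announcements live in bot memory as shown above; the archive file name and the newContent variable are illustrative:
' Archive the current general summary before publishing a new one
old = GET BOT MEMORY("resume")
WITH archiveEntry
timestamp = NOW()
topic = "general"
content = old
END WITH
SAVE "announcement_archive.csv", archiveEntry
SET BOT MEMORY "resume", newContent
TALK "Previous announcement archived; new announcement is live."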
Related Templates
- broadcast.bas - Mass messaging to employees
- edu.bas - Educational announcements
- hr-employees.bas - Employee communications
Backup Template
The backup template provides automated file archiving and restoration capabilities, helping you protect important data with scheduled backups and easy recovery options.
Topic: File Backup & Recovery
This template is perfect for:
- Automated data protection
- Scheduled file archiving
- Disaster recovery preparation
- Compliance with data retention policies
The Code
ADD TOOL "backup-to-server"
ADD TOOL "restore-file"
ADD TOOL "list-archived"
ADD TOOL "cleanup-old"
CLEAR SUGGESTIONS
ADD SUGGESTION "backup" AS "Run backup now"
ADD SUGGESTION "list" AS "View archived files"
ADD SUGGESTION "restore" AS "Restore a file"
ADD SUGGESTION "status" AS "Backup status"
SET CONTEXT "backup" AS "You are a backup management assistant. Help users archive files to server storage, restore archived files, and manage backup schedules."
BEGIN TALK
**Backup Manager**
I can help you with:
• Archive files to server storage
• Restore archived files
• View backup history
• Manage backup schedules
Select an option or tell me what you need.
END TALK
BEGIN SYSTEM PROMPT
You are a backup management assistant.
Archive files older than specified days to server storage.
Track all backup operations in log.xlsx.
Support restore operations from archived files.
Maintain MD5 checksums for integrity verification.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the backup template works in real-world scenarios.
Dialog 1: Running a Backup
Dialog 2: Viewing Archived Files
Dialog 3: Restoring a File
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register backup tools for AI to use |
| ADD SUGGESTION | Create quick action buttons |
| SET CONTEXT | Define the bot’s role and capabilities |
| BEGIN TALK | Welcome message block |
| BEGIN SYSTEM PROMPT | AI behavior instructions |
Backup Tool: backup-to-server.bas
PARAM folder AS STRING LIKE "documents" DESCRIPTION "Folder to backup"
PARAM days AS INTEGER LIKE 30 DESCRIPTION "Archive files older than X days"
DESCRIPTION "Archive files older than specified days to server storage"
IF NOT folder THEN
folder = "documents"
END IF
IF NOT days THEN
days = 30
END IF
' Calculate cutoff date
cutoff = DATEADD(NOW(), -days, "days")
' Find files to archive
files = FIND folder, "modified < '" + FORMAT(cutoff, "YYYY-MM-DD") + "'"
IF UBOUND(files) = 0 THEN
TALK "No files found older than " + days + " days."
RETURN 0
END IF
' Create archive name
archiveName = "backup-" + FORMAT(NOW(), "YYYY-MM-DD") + ".zip"
' Compress files
COMPRESS files, archiveName
' Calculate checksums
FOR EACH file IN files
checksum = MD5(file)
WITH logEntry
timestamp = NOW()
filename = file.name
size = file.size
md5 = checksum
archive = archiveName
status = "archived"
END WITH
SAVE "log.xlsx", logEntry
NEXT
' Move to server storage
MOVE archiveName, "server://backups/" + archiveName
TALK "✅ Backup completed: " + UBOUND(files) + " files archived to " + archiveName
RETURN UBOUND(files)
Restore Tool: restore-file.bas
PARAM filename AS STRING LIKE "report.xlsx" DESCRIPTION "Name of file to restore"
PARAM date AS STRING LIKE "2025-01-15" DESCRIPTION "Backup date to restore from" OPTIONAL
DESCRIPTION "Restore a file from archived backups"
' Search for file in backup logs
IF date THEN
results = FIND "log.xlsx", "filename LIKE '%" + filename + "%' AND archive LIKE '%" + date + "%'"
ELSE
results = FIND "log.xlsx", "filename LIKE '%" + filename + "%'"
END IF
IF UBOUND(results) = 0 THEN
TALK "No archived files found matching '" + filename + "'"
RETURN NULL
END IF
IF UBOUND(results) > 1 AND NOT date THEN
TALK "Found " + UBOUND(results) + " versions. Please specify which date:"
FOR EACH result IN results
TALK "• " + result.archive + " (" + FORMAT(result.timestamp, "MMM DD, YYYY") + ")"
NEXT
RETURN results
END IF
' Get the archive
archive = results[1].archive
originalChecksum = results[1].md5
' Download from server
DOWNLOAD "server://backups/" + archive, archive
' Extract the specific file
EXTRACT archive, filename, "restored/"
' Verify checksum
restoredChecksum = MD5("restored/" + filename)
IF restoredChecksum = originalChecksum THEN
TALK "✅ File restored and verified: restored/" + filename
ELSE
TALK "⚠️ Warning: Checksum mismatch. File may be corrupted."
END IF
' Log restoration
WITH logEntry
timestamp = NOW()
action = "restore"
filename = filename
archive = archive
verified = (restoredChecksum = originalChecksum)
END WITH
SAVE "log.xlsx", logEntry
RETURN "restored/" + filename
How It Works
- Tool Registration: ADD TOOL makes backup functions available to the AI
- Quick Actions: ADD SUGGESTION creates one-tap backup options
- Context Setting: Defines the bot as a backup management assistant
- File Scanning: Finds files matching age criteria
- Compression: Creates ZIP archives with checksums
- Logging: Tracks all operations in log.xlsx
- Restoration: Extracts files and verifies integrity
Scheduling Backups
Set up automated backups with scheduled jobs:
PARAM jobname AS STRING DESCRIPTION "Name of the backup job"
IF jobname = "daily backup" THEN
SET SCHEDULE "0 2 * * *" ' Run at 2 AM daily
' Backup documents folder
CALL backup-to-server("documents", 7)
' Backup reports folder
CALL backup-to-server("reports", 30)
' Send confirmation
SEND MAIL "admin@company.com", "Daily Backup Complete", "Backup completed at " + NOW()
END IF
IF jobname = "weekly cleanup" THEN
SET SCHEDULE "0 3 * * 0" ' Run at 3 AM on Sundays
' Remove backups older than 90 days
CALL cleanup-old(90)
SEND MAIL "admin@company.com", "Weekly Cleanup Complete", "Old backups removed"
END IF
Customization Ideas
Add Email Notifications
' After backup completes
SEND MAIL "admin@company.com", "Backup Report",
"Files archived: " + fileCount + "\n" +
"Total size: " + totalSize + " MB\n" +
"Archive: " + archiveName
Add Backup Verification
' Verify backup integrity
FOR EACH entry IN FIND("log.xlsx", "archive = '" + archiveName + "'")
originalFile = GET entry.filename
archivedChecksum = entry.md5
IF MD5(originalFile) <> archivedChecksum THEN
TALK "⚠️ Warning: " + entry.filename + " has changed since backup"
END IF
NEXT
Add Storage Monitoring
' Check available storage
storageUsed = FOLDER_SIZE("server://backups/")
storageLimit = 10000 ' 10 GB in MB
IF storageUsed > storageLimit * 0.9 THEN
TALK "⚠️ Storage is 90% full. Consider cleaning old backups."
SEND MAIL "admin@company.com", "Storage Warning", "Backup storage is almost full"
END IF
Related Templates
- start.bas - Basic greeting flow
- analytics-dashboard.bas - Monitor system metrics
- broadcast.bas - Send notifications to teams
Bank Template
The bank template provides a complete digital banking assistant for financial institutions, enabling customers to manage accounts, transfers, payments, cards, and investments through conversational AI.
Topic: Digital Banking Assistant
This template is perfect for:
- Retail banking customer service
- Account management automation
- Payment and transfer processing
- Card services and support
- Investment inquiries
The Code
ADD TOOL "check-balance"
ADD TOOL "transfer-money"
ADD TOOL "pay-bill"
ADD TOOL "card-services"
ADD TOOL "loan-inquiry"
ADD TOOL "investment-info"
ADD TOOL "transaction-history"
ADD TOOL "open-account"
ADD BOT "fraud-detector" WITH TRIGGER "suspicious, fraud, unauthorized, stolen, hack"
ADD BOT "investment-advisor" WITH TRIGGER "invest, stocks, funds, portfolio, returns, CDB, LCI"
ADD BOT "loan-specialist" WITH TRIGGER "loan, financing, credit, mortgage, empréstimo"
ADD BOT "card-services" WITH TRIGGER "card, credit card, debit card, block card, limit"
USE KB "banking-faq"
CLEAR SUGGESTIONS
ADD SUGGESTION "balance" AS "Check my balance"
ADD SUGGESTION "transfer" AS "Make a transfer"
ADD SUGGESTION "pix" AS "Send PIX"
ADD SUGGESTION "bills" AS "Pay a bill"
ADD SUGGESTION "card" AS "Card services"
ADD SUGGESTION "history" AS "Transaction history"
ADD SUGGESTION "invest" AS "Investment options"
ADD SUGGESTION "loan" AS "Loan information"
SET CONTEXT "You are a professional banking assistant for General Bank. Help customers with accounts, transfers, payments, cards, loans, and investments. Always verify identity before sensitive operations. Be helpful and secure. Never ask for full card numbers or passwords in chat."
BEGIN TALK
**General Bank** - Digital Banking Assistant
Welcome! I can help you with:
• Account balance and statements
• Transfers and PIX
• Bill payments
• Card services
• Investments
• Loans and financing
Select an option below or tell me what you need.
END TALK
BEGIN SYSTEM PROMPT
You are a secure banking assistant.
Security rules:
- Never display full account numbers
- Mask card numbers showing only last 4 digits
- Require confirmation for transactions over $1000
- Log all sensitive operations
- Escalate fraud concerns immediately
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the bank template works in real-world scenarios.
Dialog 1: Check Balance
Dialog 2: PIX Transfer
Dialog 3: Block Lost Card
Dialog 4: Fraud Detection Escalation
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register banking operation tools |
| ADD BOT | Register specialized bots with triggers |
| USE KB | Load banking FAQ knowledge base |
| ADD SUGGESTION | Create quick action buttons |
| SET CONTEXT | Define bot behavior and security rules |
| BEGIN TALK | Welcome message block |
| BEGIN SYSTEM PROMPT | Security instructions for AI |
Multi-Bot Architecture
The bank template uses a multi-bot architecture for specialized handling:
| Bot | Trigger Words | Purpose |
|---|---|---|
| fraud-detector | suspicious, fraud, unauthorized, stolen, hack | Handle security concerns |
| investment-advisor | invest, stocks, funds, portfolio, CDB, LCI | Investment guidance |
| loan-specialist | loan, financing, credit, mortgage | Loan inquiries |
| card-services | card, credit card, debit card, block, limit | Card management |
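Each specialized bot is a regular bot package with its own start script. A minimal sketch of what a hypothetical fraud-detector start script could look like (the wording and email address are illustrative, not part of the template):
SET CONTEXT "fraud" AS "You are a fraud response specialist. Stay calm, gather details about the suspicious activity, and escalate immediately."
CLEAR SUGGESTIONS
ADD SUGGESTION "block" AS "Block my card now"
ADD SUGGESTION "dispute" AS "Dispute a transaction"
BEGIN TALK
**Fraud & Security Desk**
I can block your card right away while we investigate. What happened?
END TALK
' Escalate with context so the fraud team can follow up (address is illustrative)
SEND MAIL "fraud-team@company.com", "Fraud escalation", "User " + USERNAME + " reported suspicious activity."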
Security Features
Built-in Protections
- Data Masking: Account and card numbers are always masked (see the masking sketch below)
- Transaction Limits: Confirmation required for large transactions
- Fraud Escalation: Automatic routing to fraud team for suspicious activity
- Audit Logging: All sensitive operations are logged
- No Sensitive Data: Never asks for passwords or full card numbers
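Masking itself needs only plain string functions. A minimal sketch, assuming RIGHT behaves as in classic BASIC (it is not shown elsewhere in this guide):
' Show only the last 4 digits of a card number
masked = "**** **** **** " + RIGHT(card_number, 4)
TALK "Card ending in " + RIGHT(card_number, 4) + " is active."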
Implementing Security Checks
' Example: Verify identity before sensitive operation
PARAM operation AS STRING
PARAM amount AS INTEGER DESCRIPTION "Transaction amount"
IF operation = "transfer" AND amount > 1000 THEN
TALK "For your security, please confirm your identity."
TALK "Enter the last 4 digits of your CPF:"
HEAR verification
IF NOT VERIFY_IDENTITY(verification) THEN
TALK "Verification failed. Please try again or call support."
RETURN
END IF
END IF
Customization Ideas
Add Investment Products
ADD TOOL "simulate-investment"
ADD TOOL "compare-products"
' In investment flow
products = FIND "investment_products.csv", "risk_level = 'low'"
TALK "Here are our low-risk investment options:"
FOR EACH product IN products
TALK "• " + product.name + " - " + product.rate + "% p.a."
NEXT
Add Bill Payment with Barcode
PARAM barcode AS STRING DESCRIPTION "Bill barcode or PIX copy-paste code"
IF LEN(barcode) = 47 THEN
' Boleto bancário
bill = PARSE_BOLETO(barcode)
TALK "Bill Details:"
TALK "Payee: " + bill.payee
TALK "Amount: R$ " + FORMAT(bill.amount, "#,##0.00")
TALK "Due Date: " + FORMAT(bill.due_date, "DD/MM/YYYY")
ELSE IF INSTR(barcode, "pix") > 0 THEN
' PIX QR Code
pix = PARSE_PIX(barcode)
TALK "PIX Payment: R$ " + FORMAT(pix.amount, "#,##0.00")
END IF
Add Account Statements
PARAM period AS STRING LIKE "last 30 days" DESCRIPTION "Statement period"
' Derive the period start date (assumes "last 30 days"; adjust parsing for other periods)
start_date = FORMAT(DATEADD(NOW(), -30, "days"), "YYYY-MM-DD")
transactions = FIND "transactions.csv", "account_id = '" + account_id + "' AND date >= '" + start_date + "'"
TALK "📋 **Account Statement**"
TALK "Period: " + period
TALK ""
balance = 0
FOR EACH tx IN transactions
IF tx.type = "credit" THEN
balance = balance + tx.amount
TALK "➕ " + tx.description + ": R$ " + FORMAT(tx.amount, "#,##0.00")
ELSE
balance = balance - tx.amount
TALK "➖ " + tx.description + ": R$ " + FORMAT(tx.amount, "#,##0.00")
END IF
NEXT
TALK ""
TALK "**Final Balance:** R$ " + FORMAT(balance, "#,##0.00")
Related Templates
- store.bas - E-commerce with payment integration
- privacy.bas - Data protection compliance
- auth.bas - Authentication patterns
Broadcast Template
The broadcast template enables mass messaging to contact lists, perfect for announcements, marketing campaigns, and bulk notifications through WhatsApp and other channels.
Topic: Mass Messaging & Announcements
This template is perfect for:
- Company-wide announcements
- Marketing campaigns
- Customer notifications
- Event reminders
- Newsletter distribution
The Code
PARAM message AS STRING LIKE "Hello {name}, how are you?" DESCRIPTION "Message to broadcast, supports {name} and {mobile} variables"
PARAM listfile AS STRING LIKE "broadcast.csv" DESCRIPTION "CSV file with contacts (name, mobile columns)"
PARAM filter AS STRING LIKE "status=active" DESCRIPTION "Filter condition for contact list" OPTIONAL
DESCRIPTION "Send broadcast message to a list of contacts from CSV file"
IF NOT listfile THEN
listfile = "broadcast.csv"
END IF
IF filter THEN
list = FIND listfile, filter
ELSE
list = FIND listfile
END IF
IF UBOUND(list) = 0 THEN
TALK "No contacts found in " + listfile
RETURN 0
END IF
index = 1
sent = 0
DO WHILE index <= UBOUND(list)
row = list[index]
msg = REPLACE(message, "{name}", row.name)
msg = REPLACE(msg, "{mobile}", row.mobile)
TALK TO row.mobile, msg
WAIT 5
WITH logEntry
timestamp = NOW()
user = USERNAME
from = FROM
mobile = row.mobile
name = row.name
status = "sent"
END WITH
SAVE "Log.xlsx", logEntry
sent = sent + 1
index = index + 1
LOOP
TALK "Broadcast sent to " + sent + " contacts."
RETURN sent
Sample Dialogs
These conversations show how the broadcast template works in real-world scenarios.
Dialog 1: Simple Broadcast
Dialog 2: Filtered Broadcast
Dialog 3: No Contacts Found
Keywords Used
| Keyword | Purpose |
|---|---|
| PARAM | Define input parameters with descriptions |
| DESCRIPTION | Tool description for AI |
| FIND | Query contacts from CSV file |
| REPLACE | Substitute variables in message template |
| TALK TO | Send message to specific phone number |
| WAIT | Delay between messages (rate limiting) |
| SAVE | Log each message to spreadsheet |
| RETURN | Return count of sent messages |
How It Works
- Load Contacts: FIND retrieves contacts from CSV with optional filter
- Validate List: Checks if contacts were found
- Loop Through Contacts: Iterates through each contact
- Personalize Message: REPLACE substitutes {name} and {mobile}
- Send Message: TALK TO delivers to each phone number
- Rate Limiting: WAIT 5 pauses 5 seconds between messages
- Log Operation: Each send is recorded in Log.xlsx
- Report Results: Returns total messages sent
Contact List Format
Your CSV file should have these columns:
name,mobile,status,segment
John Smith,+5511999999999,active,regular
Maria Garcia,+5521888888888,active,vip
Carlos Santos,+5531777777777,inactive,regular
Ana Lima,+5541666666666,active,vip
| Column | Required | Description |
|---|---|---|
| name | Yes | Contact’s display name |
| mobile | Yes | Phone in international format |
| status | No | For filtering (active/inactive) |
| segment | No | For targeting (vip/regular) |
Customization Ideas
Add Message Templates
ADD TOOL "broadcast"
ADD TOOL "list-templates"
ADD TOOL "create-template"
' Load saved templates
templates = FIND "message_templates.csv"
TALK "Available templates:"
FOR EACH template IN templates
TALK "• " + template.name + ": " + LEFT(template.message, 50) + "..."
NEXT
TALK "Which template would you like to use?"
HEAR templateName
selected = FIND "message_templates.csv", "name = '" + templateName + "'"
message = selected.message
Add Scheduling
PARAM schedule_time AS STRING LIKE "2025-01-20 09:00" DESCRIPTION "When to send (optional)"
IF schedule_time THEN
SET SCHEDULE schedule_time
' Store broadcast details for later
SET BOT MEMORY "scheduled_message", message
SET BOT MEMORY "scheduled_list", listfile
SET BOT MEMORY "scheduled_filter", filter
TALK "📅 Broadcast scheduled for " + schedule_time
TALK "I'll send to " + UBOUND(list) + " contacts at that time."
RETURN 0
END IF
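A companion script can then read the stored details when the schedule fires. A minimal sketch, assuming the memory keys set above and the CALL syntax used elsewhere in this guide:
' Runs when the scheduled time arrives
message = GET BOT MEMORY("scheduled_message")
listfile = GET BOT MEMORY("scheduled_list")
filter = GET BOT MEMORY("scheduled_filter")
CALL broadcast(message, listfile, filter)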
Add Progress Updates
total = UBOUND(list)
checkpoints = [25, 50, 75, 100]
DO WHILE index <= total
' ... send message ...
' Check progress
percent = INT((index / total) * 100)
IF INARRAY(percent, checkpoints) THEN
TALK "📊 Progress: " + percent + "% (" + index + "/" + total + ")"
END IF
index = index + 1
LOOP
Add Opt-Out Handling
' Check if contact has opted out
optouts = FIND "optouts.csv"
DO WHILE index <= UBOUND(list)
row = list[index]
' Skip opted-out contacts
IF FIND("optouts.csv", "mobile = '" + row.mobile + "'") THEN
WITH logEntry
mobile = row.mobile
status = "skipped-optout"
END WITH
SAVE "Log.xlsx", logEntry
index = index + 1
CONTINUE
END IF
' ... send message ...
LOOP
Add Media Support
PARAM image AS STRING LIKE "promo.jpg" DESCRIPTION "Image to include (optional)"
IF image THEN
msg = msg + "\n[Image: " + image + "]"
TALK TO row.mobile, msg, image
ELSE
TALK TO row.mobile, msg
END IF
Best Practices
Message Content
- Personalize: Always use {name} for a personal touch
- Be Concise: Keep messages short and clear
- Clear CTA: Include a clear call-to-action
- Identify Yourself: Make sure recipients know who’s messaging
Compliance
- Consent Required: Only message contacts who opted in
- Easy Opt-Out: Include unsubscribe instructions
- Respect Hours: Don’t send late at night
- Honor Limits: WhatsApp has daily messaging limits
Performance
- Rate Limiting: Keep 5+ second delays to avoid blocks
- Batch Processing: For large lists, consider batching (see the sketch after this list)
- Error Handling: Log and handle failed sends
- Monitor Results: Check logs for delivery issues
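A minimal batching sketch; the batch size and pause length are illustrative, so tune them to your channel's limits:
batchSize = 50
count = 0
index = 1
DO WHILE index <= UBOUND(list)
' ... send message to list[index] ...
count = count + 1
IF count = batchSize THEN
WAIT 60 ' Longer pause between batches to stay under rate limits
count = 0
END IF
index = index + 1
LOOP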
Logging Structure
The Log.xlsx file tracks all broadcast activity:
| Column | Description |
|---|---|
| timestamp | When message was sent |
| user | Who initiated the broadcast |
| from | Sender identifier |
| mobile | Recipient phone number |
| name | Recipient name |
| status | sent/failed/skipped |
| error | Error message if failed |
Related Templates
- announcements.bas - Company announcements system
- whatsapp.bas - WhatsApp-specific features
- store.bas - E-commerce with customer notifications
Default Template
The default template is the starter bot that comes with General Bots, providing essential utility tools like weather forecasts, email sending, SMS messaging, calculations, and translations.
Topic: Starter Bot with Essential Tools
This template is perfect for:
- Quick start with General Bots
- Basic utility functions
- Learning BASIC syntax
- Foundation for custom bots
Available Tools
The default template includes these ready-to-use tools:
| Tool | File | Description |
|---|---|---|
| Weather | weather.bas | Get weather forecasts for any city |
| Send Email | send-email.bas | Send emails to any address |
| Send SMS | send-sms.bas | Send text messages to mobile phones |
| Calculate | calculate.bas | Perform mathematical calculations |
| Translate | translate.bas | Translate text between languages |
The Code: weather.bas
PARAM location AS STRING LIKE "New York" DESCRIPTION "City or location to get weather forecast"
DESCRIPTION "Get current weather forecast for any city or location"
lat = 40.7128
lon = -74.0060
location_lower = LCASE(location)
IF INSTR(location_lower, "new york") > 0 THEN
lat = 40.7128
lon = -74.0060
ELSE IF INSTR(location_lower, "london") > 0 THEN
lat = 51.5074
lon = -0.1278
ELSE IF INSTR(location_lower, "tokyo") > 0 THEN
lat = 35.6762
lon = 139.6503
ELSE IF INSTR(location_lower, "sao paulo") > 0 THEN
lat = -23.5505
lon = -46.6333
END IF
weather_url = "https://api.open-meteo.com/v1/forecast?latitude=" + lat + "&longitude=" + lon + "&current_weather=true"
weather_data = GET weather_url
IF weather_data.current_weather THEN
current = weather_data.current_weather
code = current.weathercode
condition = "Clear"
icon = "☀️"
IF code = 0 THEN
condition = "Clear sky"
icon = "☀️"
ELSE IF code >= 1 AND code <= 3 THEN
condition = "Partly cloudy"
icon = "⛅"
ELSE IF code >= 51 AND code <= 67 THEN
condition = "Rainy"
icon = "🌧️"
ELSE IF code >= 95 AND code <= 99 THEN
condition = "Thunderstorm"
icon = "⛈️"
END IF
TALK icon + " Weather for " + location + ":"
TALK "Temperature: " + current.temperature + "°C"
TALK "Condition: " + condition
TALK "Wind: " + current.windspeed + " km/h"
ELSE
TALK "Could not fetch weather for: " + location
END IF
Sample Dialogs
These conversations show how the default template works in real-world scenarios.
Dialog 1: Weather Forecast
Dialog 2: Send Email
Dialog 3: Translation
Dialog 4: Calculation
Template Structure
default.gbai/
├── default.gbdialog/
│ ├── calculate.bas # Math calculations
│ ├── send-email.bas # Email sending
│ ├── send-sms.bas # SMS messaging
│ ├── translate.bas # Text translation
│ └── weather.bas # Weather forecasts
└── default.gbot/
└── config.csv # Bot configuration
Keywords Used
| Keyword | Purpose |
|---|---|
| PARAM | Define tool parameters |
| DESCRIPTION | Tool description for AI |
| GET | HTTP GET request |
| TALK | Send message to user |
| SEND MAIL | Send email |
| SEND SMS | Send text message |
| INSTR | Find substring position |
| LCASE | Convert to lowercase |
Supported Cities (Weather)
The weather tool includes coordinates for these cities:
- New York, Los Angeles, Chicago (USA)
- London, Paris, Berlin, Madrid (Europe)
- Tokyo, Beijing, Singapore, Mumbai, Dubai (Asia)
- Sydney (Australia)
- São Paulo, Rio de Janeiro (Brazil)
- Toronto (Canada)
Customization Ideas
Add More Cities
ELSE IF INSTR(location_lower, "amsterdam") > 0 THEN
lat = 52.3676
lon = 4.9041
ELSE IF INSTR(location_lower, "moscow") > 0 THEN
lat = 55.7558
lon = 37.6173
END IF
Add Extended Forecast
' Get 7-day forecast
weather_url = weather_url + "&daily=temperature_2m_max,temperature_2m_min&forecast_days=7"
weather_data = GET weather_url
TALK "📅 7-Day Forecast for " + location + ":"
FOR i = 1 TO 7
TALK "Day " + i + ": " + weather_data.daily.temperature_2m_max[i] + "°C / " + weather_data.daily.temperature_2m_min[i] + "°C"
NEXT
Add Email Templates
PARAM template AS STRING LIKE "meeting-reminder" DESCRIPTION "Email template to use"
IF template = "meeting-reminder" THEN
subject = "Meeting Reminder"
body = "Hi {name},\n\nThis is a reminder about our upcoming meeting.\n\nBest regards"
body = REPLACE(body, "{name}", recipient_name)
END IF
SEND MAIL recipient, subject, body
Add SMS Confirmation
PARAM phone AS PHONE DESCRIPTION "Phone number with country code"
PARAM message AS STRING DESCRIPTION "Message to send"
DESCRIPTION "Send SMS with delivery confirmation"
SEND SMS phone, message
TALK "📱 SMS sent to " + phone
TALK "Message: " + LEFT(message, 50) + "..."
' Log the message
WITH smsLog
timestamp = NOW()
recipient = phone
content = message
status = "sent"
END WITH
SAVE "sms_log.csv", smsLog
Using as a Base Template
The default template is designed to be extended. Here’s how to build on it:
1. Copy the Template
cp -r templates/default.gbai packages/my-bot.gbai
2. Add Your Tools
Create new .bas files in the .gbdialog folder for your custom functionality.
3. Add a Start Script
Create start.bas to configure your bot:
ADD TOOL "weather"
ADD TOOL "send-email"
ADD TOOL "send-sms"
ADD TOOL "calculate"
ADD TOOL "translate"
' Add your custom tools
ADD TOOL "my-custom-tool"
CLEAR SUGGESTIONS
ADD SUGGESTION "weather" AS "Check weather"
ADD SUGGESTION "email" AS "Send email"
ADD SUGGESTION "translate" AS "Translate text"
BEGIN TALK
Welcome! I can help you with weather, emails, translations, and more.
END TALK
Related Templates
- start.bas - Basic greeting flow
- broadcast.bas - Mass messaging
- store.bas - E-commerce features
Education Template
The education template provides a comprehensive educational institution assistant that helps students and staff with enrollment, course management, schedules, grades, tuition information, and academic support.
Topic: Educational Institution Assistant
This template is perfect for:
- Universities and colleges
- Online learning platforms
- Training centers
- K-12 schools
- Corporate learning management
The Code
ADD TOOL "enrollment"
ADD TOOL "course-info"
ADD TOOL "schedule"
ADD TOOL "grades"
ADD TOOL "tuition"
ADD TOOL "support"
USE KB "edu.gbkb"
CLEAR SUGGESTIONS
ADD SUGGESTION "enroll" AS "Enroll in a course"
ADD SUGGESTION "courses" AS "View available courses"
ADD SUGGESTION "schedule" AS "My class schedule"
ADD SUGGESTION "grades" AS "Check my grades"
ADD SUGGESTION "tuition" AS "Payment information"
ADD SUGGESTION "help" AS "Academic support"
SET CONTEXT "education" AS "You are an educational institution assistant helping with enrollment, courses, schedules, grades, and academic support. Be helpful and guide students through processes clearly."
BEGIN TALK
**Education Assistant**
Welcome! I can help you with:
• Course enrollment and registration
• Available courses and programs
• Class schedules and calendars
• Grades and transcripts
• Tuition and payment info
• Academic support and advising
Select an option or ask me anything.
END TALK
BEGIN SYSTEM PROMPT
You are an AI assistant for an educational institution.
Be friendly and professional.
Provide clear, accurate assistance.
Reduce administrative workload by handling common inquiries.
Help with enrollment and registration.
Provide course information and prerequisites.
Answer admissions questions.
Guide through registration process.
Explain academic policies.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the education template works in real-world scenarios.
Dialog 1: Course Enrollment
Dialog 2: Check Grades
Dialog 3: Class Schedule
Dialog 4: Tuition Payment
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register enrollment and academic tools |
| USE KB | Load educational knowledge base |
| ADD SUGGESTION | Create quick action buttons |
| SET CONTEXT | Define educational assistant behavior |
| BEGIN TALK | Welcome message block |
| BEGIN SYSTEM PROMPT | AI behavior instructions |
Template Structure
edu.gbai/
├── edu.gbdialog/
│ ├── start.bas # Main entry point
│ └── enrollment.bas # Enrollment workflow
├── edu.gbdata/
│ └── (data tables) # Student/course data
├── edu.gbot/
│ └── config.csv # Bot configuration
└── edu.gbkb/
└── academic-policies.md # Knowledge base
Enrollment Tool: enrollment.bas
PARAM student_id AS STRING DESCRIPTION "Student ID number"
PARAM course_code AS STRING LIKE "CS101" DESCRIPTION "Course code to enroll in"
DESCRIPTION "Enroll a student in a course after checking prerequisites and availability"
' Verify student exists
student = FIND "students.csv", "id = '" + student_id + "'"
IF NOT student THEN
TALK "Student ID not found. Please verify your ID."
RETURN NULL
END IF
' Get course information
course = FIND "courses.csv", "code = '" + course_code + "'"
IF NOT course THEN
TALK "Course " + course_code + " not found."
RETURN NULL
END IF
' Check if already enrolled
existing = FIND "enrollments.csv", "student_id = '" + student_id + "' AND course_code = '" + course_code + "'"
IF existing THEN
TALK "You're already enrolled in " + course_code + "."
RETURN NULL
END IF
' Check prerequisites
IF course.prerequisite <> "" THEN
prereq = FIND "enrollments.csv", "student_id = '" + student_id + "' AND course_code = '" + course.prerequisite + "' AND grade >= 'C'"
IF NOT prereq THEN
TALK "You need to complete " + course.prerequisite + " before enrolling in " + course_code + "."
RETURN NULL
END IF
END IF
' Check availability
enrolled_count = COUNT("enrollments.csv", "course_code = '" + course_code + "' AND term = 'Fall2024'")
IF enrolled_count >= course.capacity THEN
TALK "This course is full. Would you like to join the waitlist?"
HEAR waitlist_choice
IF LOWER(waitlist_choice) = "yes" THEN
WITH waitlist_entry
student_id = student_id
course_code = course_code
timestamp = NOW()
END WITH
SAVE "waitlist.csv", waitlist_entry
TALK "You've been added to the waitlist. We'll notify you if a spot opens."
END IF
RETURN NULL
END IF
' Create enrollment
WITH enrollment
id = GUID()
student_id = student_id
course_code = course_code
term = "Fall2024"
enrollment_date = NOW()
status = "enrolled"
END WITH
SAVE "enrollments.csv", enrollment
' Send confirmation email
SEND MAIL student.email, "Enrollment Confirmed: " + course_code,
"You have been enrolled in " + course.name + ".\n" +
"Schedule: " + course.schedule + "\n" +
"Room: " + course.room + "\n" +
"Instructor: " + course.instructor
TALK "✅ You're enrolled in " + course.name + "!"
TALK "📅 Schedule: " + course.schedule
TALK "🏫 Room: " + course.room
RETURN enrollment.id
Grades Tool: grades.bas
PARAM student_id AS STRING DESCRIPTION "Student ID number"
PARAM term AS STRING LIKE "Fall2024" DESCRIPTION "Academic term" OPTIONAL
DESCRIPTION "Retrieve student grades for current or specified term"
IF NOT term THEN
term = "Fall2024" ' Current term
END IF
' Get student info
student = FIND "students.csv", "id = '" + student_id + "'"
IF NOT student THEN
TALK "Student not found."
RETURN NULL
END IF
' Get enrollments with grades
enrollments = FIND "enrollments.csv", "student_id = '" + student_id + "' AND term = '" + term + "'"
IF UBOUND(enrollments) = 0 THEN
TALK "No courses found for " + term + "."
RETURN NULL
END IF
TALK "📊 **Grades for " + student.name + " - " + term + "**"
TALK ""
total_points = 0
total_credits = 0
FOR EACH enrollment IN enrollments
course = FIND "courses.csv", "code = '" + enrollment.course_code + "'"
grade_display = enrollment.grade
IF grade_display = "" THEN
grade_display = "In Progress"
END IF
TALK "• " + enrollment.course_code + " - " + course.name + ": **" + grade_display + "**"
IF enrollment.grade <> "" THEN
grade_points = GRADE_TO_POINTS(enrollment.grade)
total_points = total_points + (grade_points * course.credits)
total_credits = total_credits + course.credits
END IF
NEXT
IF total_credits > 0 THEN
gpa = total_points / total_credits
TALK ""
TALK "**Term GPA:** " + FORMAT(gpa, "#.00")
IF gpa >= 3.5 THEN
TALK "🌟 Dean's List!"
END IF
END IF
RETURN enrollments
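GRADE_TO_POINTS is referenced above but not shown in the template. A minimal sketch on a standard 4.0 scale (adjust the mapping to your institution's policy):
' grade-to-points.bas - convert a letter grade to GPA points (4.0 scale)
PARAM grade AS STRING LIKE "B+" DESCRIPTION "Letter grade"
points = 0
IF grade = "A" THEN
points = 4.0
ELSE IF grade = "A-" THEN
points = 3.7
ELSE IF grade = "B+" THEN
points = 3.3
ELSE IF grade = "B" THEN
points = 3.0
ELSE IF grade = "C" THEN
points = 2.0
ELSE IF grade = "D" THEN
points = 1.0
END IF
RETURN points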
Customization Ideas
Add Course Recommendations
ADD TOOL "recommend-courses"
' Based on major and completed courses
completed = FIND "enrollments.csv", "student_id = '" + student_id + "' AND grade >= 'C'"
major = student.major
' Find next required courses
requirements = FIND "degree_requirements.csv", "major = '" + major + "'"
recommended = []
FOR EACH req IN requirements
already_done = FILTER(completed, "course_code = '" + req.course_code + "'")
IF UBOUND(already_done) = 0 THEN
' Check if prerequisites met
IF req.prerequisite = "" OR HAS_COMPLETED(student_id, req.prerequisite) THEN
PUSH recommended, req
END IF
END IF
NEXT
TALK "Based on your progress, I recommend these courses for next term:"
FOR EACH course IN FIRST(recommended, 5)
TALK "• " + course.course_code + " - " + course.name
NEXT
Add Academic Calendar Integration
ADD TOOL "important-dates"
dates = FIND "academic_calendar.csv", "date >= '" + FORMAT(NOW(), "YYYY-MM-DD") + "' AND date <= '" + FORMAT(DATEADD(NOW(), 30, "days"), "YYYY-MM-DD") + "'"
TALK "📅 **Upcoming Important Dates:**"
FOR EACH date IN dates
TALK "• " + FORMAT(date.date, "MMM DD") + ": " + date.event
NEXT
Add Advisor Scheduling
ADD TOOL "book-advisor"
PARAM preferred_date AS DATE DESCRIPTION "Preferred date for appointment"
advisor = FIND "advisors.csv", "department = '" + student.major + "'"
available = FIND "advisor_slots.csv", "advisor_id = '" + advisor.id + "' AND date = '" + preferred_date + "' AND booked = false"
IF UBOUND(available) > 0 THEN
TALK "Available times on " + FORMAT(preferred_date, "MMM DD") + ":"
FOR EACH slot IN available
ADD SUGGESTION slot.time AS slot.time
NEXT
HEAR selected_time
' Book the slot that matches the chosen time (the loop variable would only hold the last slot)
chosen = FIND "advisor_slots.csv", "advisor_id = '" + advisor.id + "' AND date = '" + preferred_date + "' AND time = '" + selected_time + "' AND booked = false"
UPDATE "advisor_slots" SET booked = true WHERE id = chosen.id
TALK "✅ Appointment booked with " + advisor.name + " on " + FORMAT(preferred_date, "MMM DD") + " at " + selected_time
SEND MAIL student.email, "Advisor Appointment Confirmed", "Your meeting with " + advisor.name + " is scheduled."
END IF
Add Document Requests
ADD TOOL "request-transcript"
PARAM delivery_method AS STRING LIKE "email" DESCRIPTION "Delivery: email, mail, or pickup"
' Check for holds
holds = FIND "student_holds.csv", "student_id = '" + student_id + "' AND resolved = false"
IF UBOUND(holds) > 0 THEN
TALK "⚠️ There's a hold on your account. Please resolve it before requesting transcripts."
TALK "Hold reason: " + holds[1].reason
RETURN NULL
END IF
' Create transcript request
WITH request
id = GUID()
student_id = student_id
type = "official_transcript"
delivery = delivery_method
status = "processing"
request_date = NOW()
fee = 10.00
END WITH
SAVE "document_requests.csv", request
TALK "✅ Transcript request submitted!"
TALK "📋 Request #: " + request.id
TALK "💰 Fee: $10.00 (added to your account)"
TALK "📬 Delivery: " + delivery_method
TALK "⏱️ Processing time: 3-5 business days"
Related Templates
- start.bas - Basic greeting patterns
- enrollment.bas - Detailed enrollment workflow
- auth.bas - Student authentication
HR Employees Template
The HR Employees template provides a comprehensive employee management system that helps HR teams manage employee records, organizational structure, and personnel information through a conversational interface.
Topic: Employee Management & HR Directory
This template is perfect for:
- HR departments
- People operations teams
- Employee self-service portals
- Organizational management
- Employee directory services
The Code
ADD TOOL "add-employee"
ADD TOOL "update-employee"
ADD TOOL "search-employee"
ADD TOOL "employee-directory"
ADD TOOL "org-chart"
ADD TOOL "emergency-contacts"
USE KB "employees.gbkb"
SET CONTEXT "employee management" AS "You are an HR assistant helping manage employee information. Help with adding new employees, updating records, searching the directory, viewing org charts, and managing emergency contacts. Maintain confidentiality of employee data."
CLEAR SUGGESTIONS
ADD SUGGESTION "directory" AS "Employee directory"
ADD SUGGESTION "add" AS "Add new employee"
ADD SUGGESTION "search" AS "Search employee"
ADD SUGGESTION "org" AS "Organization chart"
ADD SUGGESTION "emergency" AS "Emergency contacts"
BEGIN TALK
**Employee Management System**
I can help you with:
• View employee directory
• Add new employees
• Search for employees
• View organization chart
• Manage emergency contacts
• Generate employee reports
Select an option or tell me what you need.
END TALK
BEGIN SYSTEM PROMPT
You are an HR assistant for the Employee Management System.
Confirm sensitive operations before executing.
Never expose salaries or personal IDs without authorization.
Use professional and helpful language.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the HR Employees template works in real-world scenarios.
Dialog 1: Search Employee
Dialog 2: Add New Employee
Dialog 3: View Organization Chart
Dialog 4: Emergency Contacts
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register employee management tools |
| USE KB | Load HR knowledge base |
| SET CONTEXT | Define HR assistant behavior |
| ADD SUGGESTION | Create quick action buttons |
| BEGIN TALK | Welcome message block |
| BEGIN SYSTEM PROMPT | Confidentiality and behavior rules |
Template Structure
employees.gbai/
├── employees.gbdialog/
│ ├── start.bas # Main entry point
│ ├── add-employee.bas # New employee onboarding
│ ├── update-employee.bas # Update employee records
│ ├── search-employee.bas # Employee search
│ ├── employee-directory.bas # Full directory view
│ ├── org-chart.bas # Organization structure
│ └── emergency-contacts.bas # Emergency contact access
├── employees.gbdata/
│ └── employees.csv # Employee database
├── employees.gbdrive/
│ └── templates/ # Document templates
├── employees.gbkb/
│ ├── hr-policies.md # HR policies
│ └── org-structure.md # Organization info
└── employees.gbot/
└── config.csv # Bot configuration
Search Employee Tool: search-employee.bas
PARAM query AS STRING LIKE "John" DESCRIPTION "Name, department, or title to search for"
PARAM department AS STRING LIKE "Engineering" DESCRIPTION "Filter by department" OPTIONAL
DESCRIPTION "Search for employees by name, department, or title"
' Build search filter
filter = "name LIKE '%" + query + "%' OR title LIKE '%" + query + "%'"
IF department THEN
filter = "(" + filter + ") AND department = '" + department + "'"
END IF
' Execute search
results = FIND "employees.csv", filter
IF UBOUND(results) = 0 THEN
TALK "No employees found matching '" + query + "'"
RETURN NULL
END IF
TALK "🔍 Found " + UBOUND(results) + " employee(s):"
TALK ""
FOR EACH emp IN results
TALK "**" + emp.name + "**"
TALK "📧 " + emp.email
TALK "📞 Ext. " + emp.extension
TALK "💼 " + emp.title
TALK "🏢 " + emp.department
TALK ""
NEXT
RETURN results
Add Employee Tool: add-employee.bas
PARAM name AS STRING LIKE "John Smith" DESCRIPTION "Employee full name"
PARAM title AS STRING LIKE "Software Engineer" DESCRIPTION "Job title"
PARAM department AS STRING LIKE "Engineering" DESCRIPTION "Department name"
PARAM manager AS STRING LIKE "Jane Doe" DESCRIPTION "Manager's name"
PARAM start_date AS DATE LIKE "2025-02-01" DESCRIPTION "Start date"
DESCRIPTION "Add a new employee to the system"
' Generate employee ID
employeeId = "EMP-" + FORMAT(NOW(), "YYYY") + "-" + FORMAT(RANDOM(1000, 9999))
' Generate email
emailName = LOWER(REPLACE(name, " ", "."))
email = emailName + "@company.com"
' Assign extension
extension = FORMAT(RANDOM(4000, 4999))
' Find manager ID
managerRecord = FIND "employees.csv", "name = '" + manager + "'"
IF NOT managerRecord THEN
TALK "⚠️ Manager '" + manager + "' not found. Please verify the name."
RETURN NULL
END IF
' Create employee record
WITH employee
id = employeeId
name = name
email = email
extension = extension
title = title
department = department
manager_id = managerRecord.id
manager_name = manager
start_date = start_date
status = "active"
created_at = NOW()
END WITH
' Save to database
SAVE "employees.csv", employee
' Send welcome email
SEND MAIL email, "Welcome to the Company!",
"Dear " + name + ",\n\n" +
"Welcome to the team! Your employee ID is " + employeeId + ".\n" +
"Your manager is " + manager + ".\n" +
"Start date: " + FORMAT(start_date, "MMMM DD, YYYY") + "\n\n" +
"HR will contact you with onboarding details.\n\n" +
"Best regards,\nHR Team"
' Create IT ticket for equipment
CREATE_TASK "New Employee Setup - " + name,
"Please prepare workstation for new employee:\n" +
"Name: " + name + "\n" +
"Department: " + department + "\n" +
"Start Date: " + FORMAT(start_date, "MMM DD, YYYY"),
"it@company.com"
' Notify manager
SEND MAIL managerRecord.email, "New Team Member: " + name,
"A new team member has been added:\n\n" +
"Name: " + name + "\n" +
"Title: " + title + "\n" +
"Start Date: " + FORMAT(start_date, "MMM DD, YYYY") + "\n\n" +
"Please prepare for their onboarding."
TALK "✅ Employee **" + name + "** added successfully!"
TALK "🆔 ID: " + employeeId
TALK "📧 Email: " + email
TALK "📞 Extension: " + extension
RETURN employee
Org Chart Tool: org-chart.bas
PARAM department AS STRING LIKE "Engineering" DESCRIPTION "Department to show org chart for"
PARAM manager AS STRING DESCRIPTION "Show org chart under specific manager" OPTIONAL
DESCRIPTION "Display organization chart for a department or team"
IF manager THEN
' Get org chart under specific manager
managerRecord = FIND "employees.csv", "name = '" + manager + "'"
IF NOT managerRecord THEN
TALK "Manager not found."
RETURN NULL
END IF
reports = FIND "employees.csv", "manager_id = '" + managerRecord.id + "'"
TALK "👔 **" + manager + "** - " + managerRecord.title
FOR EACH emp IN reports
subReports = COUNT("employees.csv", "manager_id = '" + emp.id + "'")
IF subReports > 0 THEN
TALK "├── 👤 " + emp.name + " (" + emp.title + " - " + subReports + " reports)"
ELSE
TALK "├── 👤 " + emp.name + " (" + emp.title + ")"
END IF
NEXT
ELSE
' Get department org chart
deptHead = FIND "employees.csv", "department = '" + department + "' AND (title LIKE '%Director%' OR title LIKE '%VP%')"
IF NOT deptHead THEN
deptHead = FIND "employees.csv", "department = '" + department + "' AND title LIKE '%Manager%'"
END IF
TALK "🏢 **" + department + " Organization**"
TALK ""
FOR EACH head IN deptHead
TALK "👔 **" + head.title + "** - " + head.name
reports = FIND "employees.csv", "manager_id = '" + head.id + "'"
FOR EACH emp IN reports
subCount = COUNT("employees.csv", "manager_id = '" + emp.id + "'")
IF subCount > 0 THEN
TALK "├── 👤 " + emp.name + " (" + subCount + " reports)"
ELSE
TALK "├── 👤 " + emp.name
END IF
NEXT
TALK ""
NEXT
END IF
totalCount = COUNT("employees.csv", "department = '" + department + "'")
TALK "**Total:** " + totalCount + " employees in " + department
RETURN department
Customization Ideas
Add Employee Self-Service
' Allow employees to update their own info
IF user_id = employee.id THEN
TALK "What would you like to update?"
ADD SUGGESTION "phone" AS "Phone number"
ADD SUGGESTION "address" AS "Address"
ADD SUGGESTION "emergency" AS "Emergency contacts"
ADD SUGGESTION "photo" AS "Profile photo"
HEAR updateChoice
' Only allow non-sensitive updates
IF updateChoice = "phone" THEN
TALK "Enter your new phone number:"
HEAR newPhone
UPDATE "employees.csv" SET phone = newPhone WHERE id = user_id
TALK "✅ Phone number updated!"
END IF
END IF
Add Birthday Reminders
' Scheduled job for birthday notifications
SET SCHEDULE "0 9 * * *" ' Run daily at 9 AM
today = FORMAT(NOW(), "MM-DD")
birthdays = FIND "employees.csv", "FORMAT(birth_date, 'MM-DD') = '" + today + "'"
FOR EACH emp IN birthdays
' Notify their team
manager = FIND "employees.csv", "id = '" + emp.manager_id + "'"
SEND MAIL manager.email, "🎂 Team Birthday Today!",
emp.name + " has a birthday today! Don't forget to wish them well."
' Send birthday message
SEND MAIL emp.email, "🎂 Happy Birthday!",
"Dear " + emp.name + ",\n\nHappy Birthday from all of us!"
NEXT
Add Anniversary Tracking
' Check for work anniversaries
today = FORMAT(NOW(), "MM-DD")
anniversaries = FIND "employees.csv", "FORMAT(start_date, 'MM-DD') = '" + today + "'"
FOR EACH emp IN anniversaries
years = YEAR(NOW()) - YEAR(emp.start_date)
IF years > 0 THEN
SEND MAIL emp.email, "🎉 Happy Work Anniversary!",
"Congratulations on " + years + " years with us!"
' Milestone recognition
IF years = 5 OR years = 10 OR years = 15 OR years = 20 THEN
CREATE_TASK "Milestone Recognition - " + emp.name,
emp.name + " has completed " + years + " years. Please arrange recognition.",
"hr@company.com"
END IF
END IF
NEXT
Add Department Reports
ADD TOOL "department-report"
PARAM department AS STRING DESCRIPTION "Department to generate report for"
DESCRIPTION "Generate a department headcount and demographics report"
employees = FIND "employees.csv", "department = '" + department + "'"
totalCount = UBOUND(employees)
managerCount = 0
avgTenure = 0
FOR EACH emp IN employees
IF INSTR(emp.title, "Manager") > 0 OR INSTR(emp.title, "Director") > 0 THEN
managerCount = managerCount + 1
END IF
avgTenure = avgTenure + DATEDIFF(NOW(), emp.start_date, "years")
NEXT
avgTenure = avgTenure / totalCount
TALK "📊 **" + department + " Department Report**"
TALK ""
TALK "👥 Total Employees: " + totalCount
TALK "👔 Managers: " + managerCount
TALK "📅 Avg. Tenure: " + FORMAT(avgTenure, "#.#") + " years"
TALK ""
TALK "**By Level:**"
' ... additional breakdown
Data Security
The employee management system includes several security features:
- Access Control: Sensitive data requires authorization
- Audit Logging: All access to confidential info is logged (see the sketch after this list)
- Data Masking: Personal IDs and salaries are not exposed
- Emergency Override: Emergency contacts accessible with justification
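A minimal sketch of an authorization check with audit logging before exposing sensitive fields; IS_ADMIN follows the helper used in the announcements template, and the log file name is illustrative:
' Guard sensitive fields and record every access
IF NOT IS_ADMIN(user_id) THEN
TALK "⚠️ Salary and personal ID fields require HR authorization."
RETURN NULL
END IF
WITH accessLog
timestamp = NOW()
accessedBy = user_id
employeeRecord = employee.id
field = "salary"
END WITH
SAVE "hr_access_log.csv", accessLog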
Related Templates
- helpdesk.bas - IT ticket integration
- edu.bas - Training and development
- privacy.bas - Data protection compliance
ERP Template
The ERP (Enterprise Resource Planning) template provides comprehensive inventory management, purchasing, and warehouse operations through a conversational AI interface.
Topic: Enterprise Resource Planning & Inventory
This template is perfect for:
- Warehouse management
- Inventory tracking
- Purchase order processing
- Stock transfers
- Cycle counting and audits
The Code
ADD TOOL "inventory-management"
ADD TOOL "purchasing"
ADD TOOL "erp-jobs"
SET CONTEXT "erp" AS "You are an ERP assistant helping with inventory management, purchasing, and warehouse operations. Help users receive inventory, ship orders, check stock levels, transfer between warehouses, and conduct cycle counts."
CLEAR SUGGESTIONS
ADD SUGGESTION "receive" AS "Receive inventory"
ADD SUGGESTION "ship" AS "Ship order"
ADD SUGGESTION "stock" AS "Check stock"
ADD SUGGESTION "transfer" AS "Transfer stock"
ADD SUGGESTION "count" AS "Cycle count"
ADD SUGGESTION "purchase" AS "Create PO"
BEGIN TALK
**ERP Inventory Manager**
I can help you with:
• Receive inventory from purchase orders
• Ship orders to customers
• Check stock levels across warehouses
• Transfer stock between locations
• Conduct cycle counts
• Create and manage purchase orders
What would you like to do?
END TALK
BEGIN SYSTEM PROMPT
You are an ERP inventory management assistant.
Key operations:
- receive_inventory: Process incoming goods from POs
- ship_inventory: Process outgoing shipments for sales orders
- check_stock: Query inventory levels
- transfer_stock: Move inventory between warehouses
- cycle_count: Physical inventory verification
Always confirm quantities before processing.
Log all transactions for audit trail.
Alert on low stock and reorder points.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the ERP template works in real-world scenarios.
Dialog 1: Receiving Inventory
Dialog 2: Check Stock Levels
Dialog 3: Ship an Order
Dialog 4: Transfer Stock
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register ERP operation tools |
| SET CONTEXT | Define ERP assistant behavior |
| FIND | Query inventory and orders |
| SAVE | Record transactions |
| UPDATE | Modify stock levels |
| SEND MAIL | Notify stakeholders |
Template Structure
erp.gbai/
├── erp.gbdialog/
│ ├── inventory-management.bas # Stock operations
│ ├── purchasing.bas # PO management
│ ├── erp-jobs.bas # Scheduled tasks
│ └── tables.bas # Data structures
└── erp.gbot/
└── config.csv # Configuration
Data Tables
Items Table
| Field | Description |
|---|---|
| id | Unique item identifier |
| item_code | SKU/product code |
| name | Item description |
| category | Product category |
| unit_of_measure | UOM (each, case, etc.) |
| minimum_stock_level | Reorder threshold |
| reorder_point | When to reorder |
| reorder_quantity | How much to order |
| average_cost | Weighted average cost |
| last_cost | Most recent purchase cost |
Inventory Stock Table
| Field | Description |
|---|---|
| item_id | Reference to item |
| warehouse_id | Location |
| quantity_on_hand | Physical count |
| quantity_reserved | Allocated to orders |
| quantity_available | On hand minus reserved |
| last_movement_date | Last transaction |
| last_counted_date | Last physical count |
Inventory Transactions Table
| Field | Description |
|---|---|
| transaction_type | receipt, shipment, transfer, adjustment |
| transaction_number | Unique reference |
| item_id | Item affected |
| warehouse_id | Location |
| quantity | Amount (+/-) |
| unit_cost | Cost per unit |
| reference_type | PO, SO, Transfer |
| reference_id | Source document |
Inventory Management Tool
PARAM action AS STRING LIKE "check_stock" DESCRIPTION "Action: receive_inventory, ship_inventory, check_stock, transfer_stock, cycle_count"
PARAM item_data AS OBJECT LIKE "{po_number: 'PO-123'}" DESCRIPTION "Data object with action-specific parameters"
DESCRIPTION "Manage inventory operations"
user_id = GET "session.user_id"
warehouse_id = GET "session.warehouse_id"
IF action = "receive_inventory" THEN
po_number = item_data.po_number
po = FIND "purchase_orders", "po_number = '" + po_number + "'"
IF NOT po THEN
TALK "Purchase order not found."
RETURN NULL
END IF
po_lines = FIND "purchase_order_lines", "po_id = '" + po.id + "'"
FOR EACH line IN po_lines
item = FIND "items", "id = '" + line.item_id + "'"
TALK "Receiving " + item.name + " - Ordered: " + line.quantity_ordered
TALK "Enter quantity received:"
HEAR qty_received AS INTEGER
' Update stock
stock = FIND "inventory_stock", "item_id = '" + item.id + "' AND warehouse_id = '" + warehouse_id + "'"
IF NOT stock THEN
WITH newStock
item_id = item.id
warehouse_id = warehouse_id
quantity_on_hand = qty_received
END WITH
SAVE "inventory_stock", newStock
ELSE
new_qty = stock.quantity_on_hand + qty_received
UPDATE "inventory_stock" SET quantity_on_hand = new_qty WHERE id = stock.id
END IF
' Create transaction record
WITH transaction
transaction_type = "receipt"
item_id = item.id
warehouse_id = warehouse_id
quantity = qty_received
unit_cost = line.unit_price
reference_type = "purchase_order"
reference_id = po.id
created_at = NOW()
END WITH
SAVE "inventory_transactions", transaction
NEXT
UPDATE "purchase_orders" SET status = "received" WHERE id = po.id
TALK "Purchase order " + po_number + " received."
END IF
IF action = "check_stock" THEN
item_search = item_data.item_search
items = FIND "items", "name LIKE '%" + item_search + "%'"
FOR EACH item IN items
TALK "📦 " + item.name + " (" + item.item_code + ")"
stocks = FIND "inventory_stock", "item_id = '" + item.id + "'"
total = 0
FOR EACH stock IN stocks
warehouse = FIND "warehouses", "id = '" + stock.warehouse_id + "'"
TALK " " + warehouse.name + ": " + stock.quantity_on_hand
total = total + stock.quantity_on_hand
NEXT
TALK " **TOTAL:** " + total
IF total < item.minimum_stock_level THEN
TALK " ⚠️ Below minimum (" + item.minimum_stock_level + ")"
END IF
NEXT
END IF
Scheduled Jobs: erp-jobs.bas
PARAM jobname AS STRING DESCRIPTION "Job to execute"
IF jobname = "low stock alert" THEN
SET SCHEDULE "0 8 * * *" ' Daily at 8 AM
' Find items below reorder point
low_items = SQL "SELECT i.*, s.quantity_on_hand
FROM items i
JOIN inventory_stock s ON i.id = s.item_id
WHERE s.quantity_on_hand <= i.reorder_point"
IF UBOUND(low_items) > 0 THEN
report = "Low Stock Alert\n\n"
FOR EACH item IN low_items
report = report + item.name + ": " + item.quantity_on_hand + " (reorder at " + item.reorder_point + ")\n"
NEXT
SEND MAIL "purchasing@company.com", "Daily Low Stock Alert", report
END IF
END IF
IF jobname = "pending shipments" THEN
SET SCHEDULE "0 7 * * *" ' Daily at 7 AM
pending = FIND "sales_orders", "status = 'ready_to_ship'"
TALK "📦 " + UBOUND(pending) + " orders ready to ship today."
SEND MAIL "warehouse@company.com", "Pending Shipments",
UBOUND(pending) + " orders need to be shipped today."
END IF
Best Practices
- Always Verify Quantities: Confirm counts before processing
- Maintain Audit Trail: Log all inventory movements
- Regular Cycle Counts: Schedule periodic physical inventory (see the sketch after this list)
- Monitor Reorder Points: Act on low stock alerts promptly
- Validate PO/SO Numbers: Check document existence before processing
- Cost Tracking: Maintain accurate cost records for COGS
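A minimal cycle_count branch for the inventory management tool above, assuming the same tables and recording any variance as an adjustment transaction:
IF action = "cycle_count" THEN
item_code = item_data.item_code
item = FIND "items", "item_code = '" + item_code + "'"
stock = FIND "inventory_stock", "item_id = '" + item.id + "' AND warehouse_id = '" + warehouse_id + "'"
TALK "System shows " + stock.quantity_on_hand + " on hand. Enter the counted quantity:"
HEAR counted AS INTEGER
variance = counted - stock.quantity_on_hand
IF variance <> 0 THEN
WITH adjustment
transaction_type = "adjustment"
item_id = item.id
warehouse_id = warehouse_id
quantity = variance
reference_type = "cycle_count"
created_at = NOW()
END WITH
SAVE "inventory_transactions", adjustment
UPDATE "inventory_stock" SET quantity_on_hand = counted WHERE id = stock.id
END IF
TALK "Cycle count recorded. Variance: " + variance
END IF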
Related Templates
- store.bas - E-commerce integration
- talk-to-data.bas - Inventory analytics
- backup.bas - Data backup procedures
IT Helpdesk Template
The IT Helpdesk template provides a complete IT support ticketing system that lets users report problems, track ticket status, and get guidance on common technical issues.
Topic: IT Support & Ticket Management
This template is perfect for:
- Internal IT support desks
- Technical support teams
- MSP (Managed Service Provider) helpdesks
- Customer technical support
- Self-service IT portals
The Code
ADD TOOL "create-ticket"
ADD TOOL "check-ticket-status"
ADD TOOL "my-tickets"
ADD TOOL "update-ticket"
ADD TOOL "close-ticket"
USE KB "helpdesk.gbkb"
SET CONTEXT "it helpdesk" AS "You are an IT helpdesk assistant. Help users create support tickets, check ticket status, and troubleshoot common issues. Gather necessary information before creating tickets: issue description, urgency level, and affected systems."
CLEAR SUGGESTIONS
ADD SUGGESTION "new" AS "Report a problem"
ADD SUGGESTION "status" AS "Check ticket status"
ADD SUGGESTION "password" AS "Reset my password"
ADD SUGGESTION "vpn" AS "VPN issues"
ADD SUGGESTION "email" AS "Email not working"
ADD SUGGESTION "mytickets" AS "View my tickets"
BEGIN TALK
**IT Helpdesk Support**
I can help you with:
• Create a new support ticket
• Check ticket status
• Password resets
• Network and VPN problems
• Email issues
• Hardware and software support
For urgent issues affecting multiple users, mention "urgent" or "critical".
What can I help you with?
END TALK
BEGIN SYSTEM PROMPT
You are an IT Helpdesk support assistant.
Priority levels:
- Critical: System down, security breach, multiple users affected
- High: Single user unable to work, deadline impact
- Medium: Issue with workaround available
- Low: Minor inconvenience, feature requests
Before creating a ticket, collect:
- Clear description of the issue
- When the issue started
- Error messages if any
- Steps already tried
Try to resolve simple issues using the knowledge base before creating tickets.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the IT Helpdesk template works in real-world scenarios.
Dialog 1: Creating a Support Ticket
Dialog 2: Password Reset
Dialog 3: Check Ticket Status
Dialog 4: Critical System Issue
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register ticket management tools |
| USE KB | Load helpdesk knowledge base for troubleshooting |
| SET CONTEXT | Define IT support assistant behavior |
| ADD SUGGESTION | Create common issue shortcuts |
| BEGIN TALK | Welcome message with options |
| BEGIN SYSTEM PROMPT | Priority definitions and guidelines |
Template Structure
helpdesk.gbai/
├── helpdesk.gbdialog/
│ ├── start.bas # Main entry point
│ ├── create-ticket.bas # Ticket creation
│ ├── check-ticket-status.bas # Status lookup
│ ├── my-tickets.bas # User's tickets
│ ├── update-ticket.bas # Ticket updates
│ └── close-ticket.bas # Ticket resolution
├── helpdesk.gbdrive/
│ └── templates/ # Response templates
├── helpdesk.gbkb/
│ ├── common-issues.md # Troubleshooting guides
│ └── security-tips.md # Security best practices
└── helpdesk.gbot/
└── config.csv # Bot configuration
Create Ticket Tool: create-ticket.bas
PARAM description AS STRING LIKE "Computer won't start" DESCRIPTION "Issue description"
PARAM category AS STRING LIKE "hardware" DESCRIPTION "Category: hardware, software, network, email, access"
PARAM priority AS STRING LIKE "medium" DESCRIPTION "Priority: critical, high, medium, low" OPTIONAL
DESCRIPTION "Create a new IT support ticket"
' Get user information
user_email = FROM
user_name = USERNAME
' Auto-detect priority if not provided
IF NOT priority THEN
IF INSTR(LOWER(description), "urgent") > 0 OR INSTR(LOWER(description), "critical") > 0 THEN
priority = "critical"
ELSE IF INSTR(LOWER(description), "can't work") > 0 OR INSTR(LOWER(description), "blocked") > 0 THEN
priority = "high"
ELSE
priority = "medium"
END IF
END IF
' Generate ticket number
ticketNumber = "INC-" + FORMAT(NOW(), "YYYY") + "-" + FORMAT(RANDOM(1000, 9999))
' Set SLA based on priority
SELECT CASE priority
CASE "critical"
slaMinutes = 15
slaText = "15 minutes"
CASE "high"
slaMinutes = 120
slaText = "2 hours"
CASE "medium"
slaMinutes = 480
slaText = "8 hours"
CASE "low"
slaMinutes = 1440
slaText = "24 hours"
END SELECT
' Create ticket record
WITH ticket
id = ticketNumber
user_email = user_email
user_name = user_name
description = description
category = category
priority = priority
status = "open"
sla_due = DATEADD(NOW(), slaMinutes, "minutes")
created_at = NOW()
END WITH
SAVE "tickets.csv", ticket
' Send confirmation email
SEND MAIL user_email, "Ticket Created: " + ticketNumber,
"Your support ticket has been created.\n\n" +
"Ticket: " + ticketNumber + "\n" +
"Issue: " + description + "\n" +
"Priority: " + priority + "\n" +
"Response time: " + slaText
' Notify support team
IF priority = "critical" THEN
SEND MAIL "oncall@company.com", "🚨 CRITICAL: " + ticketNumber,
"Critical ticket requires immediate attention:\n" + description
END IF
TALK "✅ Ticket **" + ticketNumber + "** created!"
TALK "Priority: " + UPPER(priority)
TALK "Expected response: " + slaText
RETURN ticketNumber
My Tickets Tool: my-tickets.bas
PARAM status AS STRING LIKE "open" DESCRIPTION "Filter by status: open, closed, all" OPTIONAL
DESCRIPTION "View your support tickets"
user_email = FROM
IF NOT status OR status = "all" THEN
tickets = FIND "tickets.csv", "user_email = '" + user_email + "'"
ELSE
tickets = FIND "tickets.csv", "user_email = '" + user_email + "' AND status = '" + status + "'"
END IF
IF UBOUND(tickets) = 0 THEN
TALK "You have no " + IIF(status, status, "") + " tickets."
RETURN NULL
END IF
TALK "🎫 **Your Tickets:**"
TALK ""
FOR EACH ticket IN tickets
statusIcon = "🔵"
IF ticket.status = "open" THEN statusIcon = "🟡"
IF ticket.status = "in_progress" THEN statusIcon = "🔵"
IF ticket.status = "resolved" THEN statusIcon = "🟢"
IF ticket.status = "closed" THEN statusIcon = "⚪"
TALK "**" + ticket.id + "** " + statusIcon
TALK "📋 " + LEFT(ticket.description, 50) + "..."
TALK "📊 Status: " + ticket.status
TALK "📅 Created: " + FORMAT(ticket.created_at, "MMM DD, YYYY")
TALK ""
NEXT
RETURN tickets
Customization Ideas
Add Knowledge Base Self-Service
' Before creating a ticket, search KB for solutions
solutions = SEARCH KB description
IF UBOUND(solutions) > 0 THEN
TALK "I found some articles that might help:"
FOR EACH solution IN FIRST(solutions, 3)
TALK "• " + solution.title
NEXT
TALK ""
TALK "Did any of these solve your issue?"
HEAR resolved
IF LOWER(resolved) = "yes" THEN
TALK "Great! Let me know if you need anything else."
RETURN NULL
END IF
END IF
' Continue to ticket creation...
Add Asset Tracking
PARAM asset_tag AS STRING DESCRIPTION "Asset tag of affected equipment"
' Look up asset information
asset = FIND "assets.csv", "tag = '" + asset_tag + "'"
IF asset THEN
ticket.asset_tag = asset_tag
ticket.asset_type = asset.type
ticket.asset_model = asset.model
ticket.warranty_status = asset.warranty_expires > NOW()
IF asset.warranty_expires > NOW() THEN
TALK "ℹ️ This device is under warranty until " + FORMAT(asset.warranty_expires, "MMM DD, YYYY")
END IF
END IF
Add Escalation Rules
' Check if ticket needs escalation
IF ticket.priority = "critical" AND ticket.category = "security" THEN
' Escalate to security team
SEND MAIL "security@company.com", "🔴 Security Incident: " + ticketNumber, description
ticket.escalated_to = "security"
ticket.escalation_time = NOW()
END IF
IF ticket.priority = "critical" AND DATEDIFF(NOW(), ticket.created_at, "minutes") > 30 THEN
' Escalate if no response in 30 minutes
SEND MAIL "it-manager@company.com", "⚠️ SLA Breach Risk: " + ticketNumber,
"Critical ticket approaching SLA breach"
END IF
Add Satisfaction Survey
' When closing ticket
IF action = "close" THEN
ticket.status = "closed"
ticket.closed_at = NOW()
ticket.resolution = resolution
UPDATE "tickets.csv", ticket
TALK "Your ticket has been resolved!"
TALK ""
TALK "How would you rate your support experience?"
ADD SUGGESTION "5" AS "⭐⭐⭐⭐⭐ Excellent"
ADD SUGGESTION "4" AS "⭐⭐⭐⭐ Good"
ADD SUGGESTION "3" AS "⭐⭐⭐ Average"
ADD SUGGESTION "2" AS "⭐⭐ Poor"
ADD SUGGESTION "1" AS "⭐ Very Poor"
HEAR rating
WITH feedback
ticket_id = ticketNumber
rating = rating
timestamp = NOW()
END WITH
SAVE "satisfaction.csv", feedback
TALK "Thank you for your feedback!"
END IF
Priority Matrix
| Priority | Response Time | Resolution Time | Examples |
|---|---|---|---|
| Critical | 15 minutes | 4 hours | System outage, security breach, multiple users down |
| High | 2 hours | 8 hours | Single user unable to work, deadline impact |
| Medium | 8 hours | 24 hours | Issue with workaround available |
| Low | 24 hours | 72 hours | Feature requests, minor inconveniences |
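These targets pair with the sla_due field written by create-ticket.bas. Below is a minimal sketch of a scheduled sweep that flags breached tickets; the 15-minute cron interval and the it-manager@company.com address are assumptions:
SET SCHEDULE "*/15 * * * *" ' Every 15 minutes
overdue = FIND "tickets.csv", "status = 'open' AND sla_due < '" + FORMAT(NOW(), "YYYY-MM-DD HH:mm:ss") + "'"
FOR EACH ticket IN overdue
SEND MAIL "it-manager@company.com", "⏰ SLA Breached: " + ticket.id,
"Ticket " + ticket.id + " (" + ticket.priority + ") has passed its response deadline."
NEXT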
Related Templates
- hr/employees.bas - Employee management integration
- announcements.bas - IT announcements
- backup.bas - Backup and recovery
Privacy Template
The privacy template provides a complete LGPD/GDPR/CCPA-compliant Privacy Rights Center, enabling users to exercise their data protection rights through a conversational interface.
Topic: Data Privacy & Compliance
This template is perfect for:
- LGPD compliance (Brazil)
- GDPR compliance (EU)
- CCPA compliance (California)
- Data subject rights management
- Consent management portals
The Code
ADD TOOL "request-data"
ADD TOOL "export-data"
ADD TOOL "delete-data"
ADD TOOL "manage-consents"
ADD TOOL "rectify-data"
ADD TOOL "object-processing"
USE KB "privacy.gbkb"
CLEAR SUGGESTIONS
ADD SUGGESTION "access" AS "View my data"
ADD SUGGESTION "export" AS "Export my data"
ADD SUGGESTION "delete" AS "Delete my data"
ADD SUGGESTION "consents" AS "Manage consents"
ADD SUGGESTION "correct" AS "Correct my data"
ADD SUGGESTION "object" AS "Object to processing"
SET CONTEXT "privacy rights" AS "You are a Privacy Rights Center assistant helping users exercise their data protection rights under LGPD, GDPR, and CCPA. Help with data access, rectification, erasure, portability, and consent management."
BEGIN TALK
**Privacy Rights Center**
As a data subject, you have the following rights:
1. **Access** - View all data we hold about you
2. **Rectification** - Correct inaccurate data
3. **Erasure** - Request deletion of your data
4. **Portability** - Export your data
5. **Object** - Opt-out of certain processing
6. **Consent** - Review and update your consents
Select an option or describe your request.
END TALK
BEGIN SYSTEM PROMPT
You are a Privacy Rights Center assistant for LGPD/GDPR/CCPA compliance.
Data subject rights:
- Right of Access: View all personal data
- Right to Rectification: Correct inaccurate data
- Right to Erasure: Delete personal data (right to be forgotten)
- Right to Portability: Export data in machine-readable format
- Right to Object: Opt-out of marketing, profiling, etc.
- Consent Management: Review and withdraw consents
Always verify identity before processing sensitive requests.
Log all privacy requests for compliance audit.
Provide clear timelines for request fulfillment.
Escalate complex requests to the Data Protection Officer.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the privacy template works in real-world scenarios.
Dialog 1: Data Access Request
Dialog 2: Data Deletion Request
Dialog 3: Consent Management
Dialog 4: Data Export (Portability)
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register privacy rights tools |
| USE KB | Load privacy policy knowledge base |
| ADD SUGGESTION | Create quick action buttons for rights |
| SET CONTEXT | Define privacy assistant behavior |
| BEGIN TALK | Welcome message with rights summary |
| BEGIN SYSTEM PROMPT | Compliance rules and procedures |
Template Structure
privacy.gbai/
├── privacy.gbdialog/
│ ├── start.bas # Main entry point
│ ├── request-data.bas # Data access requests
│ ├── export-data.bas # Data portability
│ ├── delete-data.bas # Right to erasure
│ ├── manage-consents.bas # Consent management
│ └── rectify-data.bas # Data correction
├── privacy.gbot/
│ └── config.csv # Configuration
├── privacy.gbkb/
│ └── privacy-policy.md # Privacy documentation
└── privacy.gbui/
└── index.html # Web portal UI
Data Subject Rights by Regulation
| Right | LGPD (Brazil) | GDPR (EU) | CCPA (California) |
|---|---|---|---|
| Access | Art. 18 | Art. 15 | §1798.100 |
| Rectification | Art. 18 III | Art. 16 | - |
| Erasure | Art. 18 VI | Art. 17 | §1798.105 |
| Portability | Art. 18 V | Art. 20 | §1798.100 |
| Object | Art. 18 IV | Art. 21 | §1798.120 |
| Consent | Art. 8 | Art. 7 | §1798.135 |
Response Deadlines
| Regulation | Standard | Extended |
|---|---|---|
| LGPD | 15 days | - |
| GDPR | 30 days | 90 days (complex) |
| CCPA | 45 days | 90 days |
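The request tools below hard-code the 15-day LGPD deadline when setting request.deadline. If you serve multiple jurisdictions, a minimal sketch of deriving the deadline from the applicable regulation looks like this (the regulation variable is an assumption your intake step would set):
SELECT CASE regulation
CASE "lgpd"
deadline_days = 15
CASE "gdpr"
deadline_days = 30
CASE "ccpa"
deadline_days = 45
END SELECT
request.deadline = DATEADD(NOW(), deadline_days, "days")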
Request Data Tool: request-data.bas
PARAM request_type AS STRING LIKE "full" DESCRIPTION "Type of data request: full, summary, specific"
DESCRIPTION "Process a data access request (Right of Access)"
' Identify the requester (used for memory keys and email below)
user_id = GET "session.user_id"
user_email = FROM
' Verify identity first
TALK "🔐 To protect your privacy, I need to verify your identity."
TALK "I'll send a verification code to your registered email."
code = FORMAT(RANDOM(100000, 999999))
SET BOT MEMORY "verification_code_" + user_id, code
SET BOT MEMORY "verification_expiry_" + user_id, DATEADD(NOW(), 10, "minutes")
SEND MAIL user_email, "Privacy Request Verification", "Your verification code is: " + code
TALK "Please enter the 6-digit code sent to your email:"
HEAR entered_code
stored_code = GET BOT MEMORY("verification_code_" + user_id)
expiry = GET BOT MEMORY("verification_expiry_" + user_id)
IF entered_code <> stored_code OR NOW() > expiry THEN
TALK "❌ Invalid or expired code. Please try again."
RETURN NULL
END IF
' Log the request for compliance
WITH request
id = "ACC-" + FORMAT(NOW(), "YYYY") + "-" + FORMAT(RANDOM(100000, 999999))
user_id = user_id
type = "access"
status = "processing"
created_at = NOW()
deadline = DATEADD(NOW(), 15, "days")
END WITH
SAVE "privacy_requests.csv", request
' Retrieve user data
userData = FIND "users.csv", "id = '" + user_id + "'"
activityData = FIND "activity_log.csv", "user_id = '" + user_id + "'"
consents = FIND "consents.csv", "user_id = '" + user_id + "'"
TALK "✅ Identity verified. Here's your data:"
TALK ""
TALK "**📋 Personal Information**"
TALK "• Name: " + userData.name
TALK "• Email: " + MASK_EMAIL(userData.email)
TALK "• Account created: " + FORMAT(userData.created_at, "MMM DD, YYYY")
TALK ""
TALK "**📊 Activity Summary**"
TALK "• Total activities: " + UBOUND(activityData)
TALK "• Last activity: " + FORMAT(activityData[1].timestamp, "MMM DD, YYYY")
TALK ""
TALK "**🔔 Consent Status**"
FOR EACH consent IN consents
status_icon = IIF(consent.granted, "✅", "❌")
TALK "• " + consent.purpose + ": " + status_icon
NEXT
TALK ""
TALK "Request ID: **" + request.id + "**"
TALK "Would you like a full export of your data?"
RETURN request.id
Delete Data Tool: delete-data.bas
PARAM confirm AS STRING LIKE "yes" DESCRIPTION "Confirmation to proceed with deletion"
DESCRIPTION "Process a data erasure request (Right to be Forgotten)"
' Identify the requester (used for the request record and email below)
user_id = GET "session.user_id"
user_email = FROM
' Warn about consequences
TALK "⚠️ **Data Deletion Request**"
TALK ""
TALK "This will permanently delete:"
TALK "• Your profile and personal information"
TALK "• Activity history and preferences"
TALK "• Communication history"
TALK ""
TALK "**Note:** Some data may be retained for legal compliance:"
TALK "• Financial records (tax requirements)"
TALK "• Fraud prevention data"
TALK "• Legal dispute documentation"
TALK ""
TALK "Type **DELETE MY DATA** to confirm this irreversible action:"
HEAR confirmation
IF UPPER(confirmation) <> "DELETE MY DATA" THEN
TALK "Deletion cancelled. Your data remains unchanged."
RETURN NULL
END IF
' Create deletion request
WITH request
id = "DEL-" + FORMAT(NOW(), "YYYY") + "-" + FORMAT(RANDOM(100000, 999999))
user_id = user_id
type = "erasure"
status = "pending_verification"
created_at = NOW()
deadline = DATEADD(NOW(), 15, "days")
END WITH
SAVE "privacy_requests.csv", request
' Send verification email
verification_link = "https://privacy.company.com/verify/" + request.id
SEND MAIL user_email, "Confirm Data Deletion Request",
"Click to confirm your data deletion request:\n\n" + verification_link +
"\n\nThis link expires in 24 hours.\n\nRequest ID: " + request.id
TALK "📧 A verification email has been sent."
TALK "Please click the link to confirm your deletion request."
TALK ""
TALK "**Timeline:**"
TALK "• Verification: 24 hours"
TALK "• Processing: 15 business days (LGPD) / 30 days (GDPR)"
TALK ""
TALK "Request ID: **" + request.id + "**"
RETURN request.id
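The template structure also lists manage-consents.bas, which is not shown above. Here is a minimal sketch of what it might contain, reusing the consents.csv columns (user_id, purpose, granted) that request-data.bas reads; the GRANT/WITHDRAW wording is illustrative:
DESCRIPTION "Review and update consent preferences"
user_id = GET "session.user_id"
consents = FIND "consents.csv", "user_id = '" + user_id + "'"
TALK "🔔 **Your Consents**"
FOR EACH consent IN consents
status_icon = IIF(consent.granted, "✅", "❌")
TALK "• " + consent.purpose + ": " + status_icon
NEXT
TALK "Which consent would you like to change?"
HEAR purpose
TALK "Type GRANT or WITHDRAW:"
HEAR choice
new_value = IIF(UPPER(choice) = "GRANT", TRUE, FALSE)
UPDATE "consents.csv" SET granted = new_value WHERE user_id = user_id AND purpose = purpose
TALK "✅ Consent for **" + purpose + "** has been " + IIF(new_value, "granted", "withdrawn") + "."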
Customization Ideas
Add Identity Verification Options
TALK "How would you like to verify your identity?"
ADD SUGGESTION "email" AS "Email verification"
ADD SUGGESTION "sms" AS "SMS verification"
ADD SUGGESTION "id" AS "Upload ID document"
HEAR method
SWITCH method
CASE "email"
' Send email code
CASE "sms"
' Send SMS code
CASE "id"
TALK "Please upload a photo of your government-issued ID."
HEAR id_upload AS FILE
' Process ID verification
END SWITCH
Add DPO Escalation
' For complex requests
IF request_complexity = "high" THEN
TALK "This request requires review by our Data Protection Officer."
TALK "You will be contacted within 5 business days."
SEND MAIL "dpo@company.com", "Privacy Request Escalation",
"Request ID: " + request.id + "\n" +
"Type: " + request.type + "\n" +
"User: " + user_email + "\n" +
"Reason: Complex request requiring DPO review"
END IF
Add Audit Logging
' Log all privacy operations
WITH auditLog
timestamp = NOW()
request_id = request.id
user_id = user_id
action = "data_access"
ip_address = GET_CLIENT_IP()
user_agent = GET_USER_AGENT()
result = "success"
END WITH
SAVE "privacy_audit_log.csv", auditLog
Best Practices
- Always Verify Identity: Never provide data without verification
- Log Everything: Maintain audit trails for compliance
- Clear Timelines: Communicate response deadlines clearly
- Explain Retention: Be transparent about what data is retained and why
- Easy Consent Management: Make it simple to change preferences
- Secure Communications: Use encrypted channels for sensitive data
Related Templates
- auth.bas - Authentication patterns
- bank.bas - Secure financial data handling
- hipaa.bas - Healthcare privacy compliance
Sales Pipeline Template
The sales pipeline template provides a complete CRM (Customer Relationship Management) system for managing deals, tracking opportunities through sales stages, and generating revenue forecasts.
Topic: Sales Pipeline & Deal Management
This template is perfect for:
- Sales teams tracking deals
- Revenue forecasting
- Pipeline management
- Win/loss analysis
- Sales performance reporting
The Code
ADD TOOL "create-deal"
ADD TOOL "update-stage"
ADD TOOL "list-deals"
ADD TOOL "deal-details"
ADD TOOL "pipeline-report"
ADD TOOL "forecast-revenue"
USE KB "sales-pipeline.gbkb"
SET CONTEXT "sales pipeline" AS "You are a sales assistant helping manage the sales pipeline. Help with creating new deals, updating deal stages, viewing pipeline status, generating sales forecasts, and analyzing win/loss rates."
CLEAR SUGGESTIONS
ADD SUGGESTION "newdeal" AS "Create a new deal"
ADD SUGGESTION "pipeline" AS "Show my pipeline"
ADD SUGGESTION "update" AS "Update a deal stage"
ADD SUGGESTION "forecast" AS "View sales forecast"
ADD SUGGESTION "report" AS "Generate pipeline report"
BEGIN TALK
**Sales Pipeline Manager**
I can help you with:
• Create new deals and opportunities
• View and manage your pipeline
• Update deal stages
• Generate sales forecasts
• Pipeline analytics and reports
• Track win/loss rates
Select an option or tell me what you need.
END TALK
BEGIN SYSTEM PROMPT
You are a sales pipeline assistant.
Pipeline stages:
- Lead: Initial contact, not qualified
- Qualified: Budget, authority, need, timeline confirmed
- Proposal: Quote sent
- Negotiation: Active discussions
- Closed Won: Successfully closed
- Closed Lost: Lost or no decision
Always encourage sales reps and provide actionable insights.
Confirm changes before saving.
Use currency format for amounts.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the sales pipeline template works in real-world scenarios.
Dialog 1: Creating a New Deal
Dialog 2: Viewing Pipeline
Dialog 3: Update Deal Stage
Dialog 4: Revenue Forecast
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register pipeline management tools |
| USE KB | Load sales methodology knowledge base |
| SET CONTEXT | Define sales assistant behavior |
| ADD SUGGESTION | Create quick action buttons |
| BEGIN TALK | Welcome message with options |
| BEGIN SYSTEM PROMPT | Sales stage definitions and guidelines |
Pipeline Stages
| Stage | Win Probability | Description |
|---|---|---|
| Lead | 20% | Initial contact, not yet qualified |
| Qualified | 40% | BANT criteria confirmed |
| Proposal | 50% | Quote or proposal sent |
| Negotiation | 80% | Active deal discussions |
| Closed Won | 100% | Deal successfully closed |
| Closed Lost | 0% | Deal lost or abandoned |
Template Structure
sales-pipeline.gbai/
├── sales-pipeline.gbdialog/
│ ├── start.bas # Main entry point
│ ├── create-deal.bas # New deal creation
│ ├── update-stage.bas # Stage progression
│ ├── list-deals.bas # Pipeline view
│ ├── deal-details.bas # Individual deal info
│ ├── pipeline-report.bas # Analytics reports
│ └── forecast-revenue.bas # Revenue forecasting
├── sales-pipeline.gbdrive/
│ └── templates/ # Proposal templates
├── sales-pipeline.gbkb/
│ └── sales-methodology.md # Sales best practices
└── sales-pipeline.gbot/
└── config.csv # Bot configuration
Create Deal Tool: create-deal.bas
PARAM company AS STRING LIKE "Acme Corp" DESCRIPTION "Company or account name"
PARAM value AS NUMBER LIKE 50000 DESCRIPTION "Deal value in dollars"
PARAM close_date AS DATE LIKE "2025-03-31" DESCRIPTION "Expected close date"
PARAM contact AS STRING DESCRIPTION "Primary contact name" OPTIONAL
PARAM notes AS STRING DESCRIPTION "Deal notes" OPTIONAL
DESCRIPTION "Create a new deal in the sales pipeline"
' Generate deal ID
dealId = "DEAL-" + FORMAT(NOW(), "YYYY") + "-" + FORMAT(RANDOM(1000, 9999))
' Get sales rep info
salesRep = USERNAME
salesRepEmail = FROM
' Create deal record
WITH deal
id = dealId
company = company
value = value
expected_close = close_date
contact_name = contact
notes = notes
stage = "lead"
probability = 20
owner = salesRep
owner_email = salesRepEmail
created_at = NOW()
updated_at = NOW()
END WITH
SAVE "deals.csv", deal
' Log activity
WITH activity
deal_id = dealId
type = "created"
description = "Deal created with value $" + FORMAT(value, "#,##0")
user = salesRep
timestamp = NOW()
END WITH
SAVE "deal_activities.csv", activity
TALK "✅ **Deal Created!**"
TALK "🏢 **Company:** " + company
TALK "💰 **Value:** $" + FORMAT(value, "#,##0")
TALK "📅 **Expected Close:** " + FORMAT(close_date, "MMMM DD, YYYY")
TALK "📊 **Stage:** Lead"
TALK "🎫 **Deal ID:** " + dealId
TALK ""
TALK "Good luck! 🍀"
RETURN dealId
Update Stage Tool: update-stage.bas
PARAM deal_id AS STRING LIKE "DEAL-2025-0142" DESCRIPTION "Deal ID or company name"
PARAM new_stage AS STRING LIKE "qualified" DESCRIPTION "New stage: lead, qualified, proposal, negotiation, closed_won, closed_lost"
PARAM reason AS STRING DESCRIPTION "Reason for stage change" OPTIONAL
DESCRIPTION "Update the stage of a deal in the pipeline"
' Find deal
deal = FIND "deals.csv", "id = '" + deal_id + "' OR LOWER(company) LIKE '%" + LOWER(deal_id) + "%'"
IF NOT deal THEN
TALK "Deal not found. Please check the deal ID or company name."
RETURN NULL
END IF
old_stage = deal.stage
new_stage_lower = LOWER(new_stage)
' Set probability based on stage
SELECT CASE new_stage_lower
CASE "lead"
probability = 20
CASE "qualified"
probability = 40
CASE "proposal"
probability = 50
CASE "negotiation"
probability = 80
CASE "closed_won"
probability = 100
CASE "closed_lost"
probability = 0
END SELECT
' Update deal
deal.stage = new_stage_lower
deal.probability = probability
deal.updated_at = NOW()
IF new_stage_lower = "closed_won" THEN
deal.closed_date = NOW()
deal.closed_value = deal.value
ELSE IF new_stage_lower = "closed_lost" THEN
deal.closed_date = NOW()
deal.lost_reason = reason
END IF
UPDATE "deals.csv", deal
' Log activity
WITH activity
deal_id = deal.id
type = "stage_change"
description = "Stage changed: " + old_stage + " → " + new_stage_lower
user = USERNAME
timestamp = NOW()
END WITH
SAVE "deal_activities.csv", activity
' Format stage names
old_stage_display = PROPER(REPLACE(old_stage, "_", " "))
new_stage_display = PROPER(REPLACE(new_stage_lower, "_", " "))
TALK "✅ **Deal Updated!**"
TALK "🏢 **" + deal.company + "**"
TALK "📊 " + old_stage_display + " → **" + new_stage_display + "**"
TALK "💰 $" + FORMAT(deal.value, "#,##0")
IF new_stage_lower = "closed_won" THEN
TALK ""
TALK "🎉 Congratulations on closing the deal!"
ELSE IF new_stage_lower = "closed_lost" THEN
TALK ""
TALK "📝 Deal marked as lost. Keep pushing on the other opportunities!"
ELSE
TALK ""
TALK "Win probability: " + probability + "%"
END IF
RETURN deal.id
Forecast Revenue Tool: forecast-revenue.bas
PARAM period AS STRING LIKE "this quarter" DESCRIPTION "Forecast period: this month, this quarter, this year"
DESCRIPTION "Generate revenue forecast based on pipeline and probabilities"
' Determine date range
IF INSTR(LOWER(period), "month") > 0 THEN
start_date = DATE(YEAR(NOW()), MONTH(NOW()), 1)
end_date = DATEADD(DATEADD(start_date, 1, "month"), -1, "day")
period_name = FORMAT(NOW(), "MMMM YYYY")
ELSE IF INSTR(LOWER(period), "quarter") > 0 THEN
quarter = INT((MONTH(NOW()) - 1) / 3) + 1
start_date = DATE(YEAR(NOW()), (quarter - 1) * 3 + 1, 1)
end_date = DATEADD(DATEADD(start_date, 3, "month"), -1, "day")
period_name = "Q" + quarter + " " + YEAR(NOW())
ELSE
start_date = DATE(YEAR(NOW()), 1, 1)
end_date = DATE(YEAR(NOW()), 12, 31)
period_name = YEAR(NOW())
END IF
' Get deals closing in period
deals = FIND "deals.csv", "expected_close >= '" + FORMAT(start_date, "YYYY-MM-DD") + "' AND expected_close <= '" + FORMAT(end_date, "YYYY-MM-DD") + "' AND stage NOT IN ('closed_won', 'closed_lost')"
' Calculate forecasts by stage
weighted_total = 0
best_case = 0
committed = 0
stages = ["negotiation", "proposal", "qualified", "lead"]
stage_totals = []
FOR EACH stage IN stages
stage_deals = FILTER(deals, "stage = '" + stage + "'")
stage_value = 0
stage_weighted = 0
FOR EACH deal IN stage_deals
stage_value = stage_value + deal.value
stage_weighted = stage_weighted + (deal.value * deal.probability / 100)
NEXT
best_case = best_case + stage_value
weighted_total = weighted_total + stage_weighted
IF stage = "negotiation" THEN
committed = committed + stage_weighted
END IF
stage_totals[stage] = {value: stage_value, weighted: stage_weighted}
NEXT
' Get closed won in period
closed = FIND "deals.csv", "closed_date >= '" + FORMAT(start_date, "YYYY-MM-DD") + "' AND closed_date <= '" + FORMAT(end_date, "YYYY-MM-DD") + "' AND stage = 'closed_won'"
closed_value = 0
FOR EACH deal IN closed
closed_value = closed_value + deal.closed_value
NEXT
' Get quota
quota = GET BOT MEMORY("quota_" + USERNAME)
IF NOT quota THEN quota = 200000
attainment = ((closed_value + weighted_total) / quota) * 100
TALK "📈 **" + period_name + " Revenue Forecast**"
TALK ""
TALK "**By Stage:**"
TALK "• Negotiation (80%): $" + FORMAT(stage_totals["negotiation"].weighted, "#,##0")
TALK "• Proposal (50%): $" + FORMAT(stage_totals["proposal"].weighted, "#,##0")
TALK "• Qualified (40%): $" + FORMAT(stage_totals["qualified"].weighted, "#,##0")
TALK "• Lead (20%): $" + FORMAT(stage_totals["lead"].weighted, "#,##0")
TALK ""
TALK "**Weighted Forecast:** $" + FORMAT(weighted_total, "#,##0")
TALK "**Best Case:** $" + FORMAT(best_case, "#,##0")
TALK "**Committed:** $" + FORMAT(committed, "#,##0")
TALK "**Already Closed:** $" + FORMAT(closed_value, "#,##0")
TALK ""
TALK "**Quota:** $" + FORMAT(quota, "#,##0")
TALK "**Attainment:** " + FORMAT(attainment, "#,##0") + "% (forecasted)"
IF attainment >= 100 THEN
TALK ""
TALK "🎯 You're on track to exceed quota!"
ELSE IF attainment >= 80 THEN
TALK ""
TALK "📊 You're close! Focus on advancing your top deals."
ELSE
TALK ""
TALK "⚠️ You need more pipeline coverage. Time to prospect!"
END IF
RETURN {weighted: weighted_total, best_case: best_case, attainment: attainment}
Customization Ideas
Add Deal Scoring
' Calculate deal score based on various factors
score = 0
' Company size score
IF deal.company_size > 1000 THEN
score = score + 20
ELSE IF deal.company_size > 100 THEN
score = score + 10
END IF
' Budget confirmed
IF deal.budget_confirmed THEN
score = score + 25
END IF
' Decision maker engaged
IF deal.decision_maker THEN
score = score + 25
END IF
' Timeline urgency
IF DATEDIFF(deal.expected_close, NOW(), "days") < 30 THEN
score = score + 20
END IF
' Competitor involved
IF deal.competitor THEN
score = score - 10
END IF
deal.score = score
TALK "Deal Score: " + score + "/100"
Add Activity Tracking
ADD TOOL "log-activity"
PARAM deal_id AS STRING DESCRIPTION "Deal ID"
PARAM activity_type AS STRING LIKE "call" DESCRIPTION "Type: call, email, meeting, demo, proposal"
PARAM notes AS STRING DESCRIPTION "Activity notes"
WITH activity
deal_id = deal_id
type = activity_type
notes = notes
user = USERNAME
timestamp = NOW()
END WITH
SAVE "deal_activities.csv", activity
' Update deal's last activity date
UPDATE "deals.csv" SET last_activity = NOW() WHERE id = deal_id
TALK "✅ Activity logged for deal " + deal_id
Add Win/Loss Analysis
ADD TOOL "win-loss-report"
won = FIND "deals.csv", "stage = 'closed_won' AND closed_date >= '" + start_date + "'"
lost = FIND "deals.csv", "stage = 'closed_lost' AND closed_date >= '" + start_date + "'"
won_count = UBOUND(won)
lost_count = UBOUND(lost)
' Guard against division by zero when no deals closed in the period
IF won_count + lost_count > 0 THEN
win_rate = (won_count / (won_count + lost_count)) * 100
ELSE
win_rate = 0
END IF
won_value = 0
FOR EACH deal IN won
won_value = won_value + deal.value
NEXT
TALK "📊 **Win/Loss Analysis**"
TALK ""
TALK "**Win Rate:** " + FORMAT(win_rate, "#0.0") + "%"
TALK "**Deals Won:** " + won_count + " ($" + FORMAT(won_value, "#,##0") + ")"
TALK "**Deals Lost:** " + lost_count
TALK ""
TALK "**Top Loss Reasons:**"
' Aggregate loss reasons...
Add Email Integration
' Send proposal email from pipeline
ADD TOOL "send-proposal"
PARAM deal_id AS STRING DESCRIPTION "Deal to send proposal for"
deal = FIND "deals.csv", "id = '" + deal_id + "'"
' Generate proposal from template
proposal = FILL "proposal-template.docx", deal
' Send email
SEND MAIL deal.contact_email, "Proposal for " + deal.company,
"Please find attached our proposal.\n\nBest regards,\n" + USERNAME,
proposal
' Update deal stage
deal.stage = "proposal"
deal.proposal_sent = NOW()
UPDATE "deals.csv", deal
TALK "📧 Proposal sent to " + deal.contact_email
TALK "Deal moved to Proposal stage."
Best Practices
- Keep Deals Updated: Update deal stages promptly for accurate forecasting
- Log Activities: Track all customer interactions
- Use BANT: Qualify deals properly before advancing
- Clean Pipeline: Remove stale deals regularly
- Review Weekly: Check pipeline health and forecasts weekly
Related Templates
- crm/contacts.bas - Contact management
- marketing.bas - Lead generation
- store.bas - E-commerce integration
Store Template
The store template provides a complete e-commerce assistant that helps customers browse products, manage shopping carts, and complete purchases through conversational AI.
Topic: E-Commerce & Shopping Assistant
This template is perfect for:
- Online retail stores
- Product catalog browsing
- Shopping cart management
- Order tracking
- Customer support for e-commerce
The Code
ADD TOOL "checkout"
ADD TOOL "search-product"
ADD TOOL "add-to-cart"
ADD TOOL "view-cart"
ADD TOOL "track-order"
ADD TOOL "product-details"
data = FIND "products.csv"
CLEAR SUGGESTIONS
ADD SUGGESTION "products" AS "View products"
ADD SUGGESTION "cart" AS "View my cart"
ADD SUGGESTION "checkout" AS "Checkout"
ADD SUGGESTION "orders" AS "Track my order"
ADD SUGGESTION "help" AS "Shopping help"
SET CONTEXT "store" AS "You are a virtual store sales assistant. Help customers browse products, add items to cart, and complete purchases. Be friendly and helpful. Available products: ${TOJSON(data)}"
BEGIN TALK
**Virtual Store**
Welcome! I can help you with:
• Browse our product catalog
• Add items to your cart
• Complete your purchase
• Track your orders
Select an option or tell me what you're looking for.
END TALK
BEGIN SYSTEM PROMPT
You are a friendly sales assistant in our virtual store.
Welcome customers warmly.
Help them find products.
Provide clear product information.
Guide through purchase process.
Offer assistance when needed.
Product catalog is available in context.
Suggest related products when appropriate.
Confirm items before adding to cart.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the store template works in real-world scenarios.
Dialog 1: Product Search
Dialog 2: Add to Cart and Checkout
Dialog 3: Order Tracking
Dialog 4: Product Recommendations
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register e-commerce tools |
| FIND | Load product catalog from CSV |
| ADD SUGGESTION | Create quick action buttons |
| SET CONTEXT | Define store context with product data |
| BEGIN TALK | Welcome message block |
| BEGIN SYSTEM PROMPT | Sales assistant behavior rules |
Template Structure
store.gbai/
├── store.gbdialog/
│ ├── start.bas # Main entry point
│ └── checkout.bas # Checkout process
├── store.gbdata/
│ └── products.csv # Product catalog
└── store.gbot/
└── config.csv # Bot configuration
Checkout Tool: checkout.bas
PARAM confirm AS STRING LIKE "yes" DESCRIPTION "Confirm order placement"
DESCRIPTION "Complete the purchase and process payment"
' Identify the shopper and get the cart from memory
user_id = GET "session.user_id"
user_email = FROM
cart = GET BOT MEMORY("cart_" + user_id)
IF NOT cart OR UBOUND(cart) = 0 THEN
TALK "Your cart is empty. Add some items first!"
RETURN NULL
END IF
' Calculate totals
subtotal = 0
FOR EACH item IN cart
subtotal = subtotal + (item.price * item.quantity)
NEXT
shipping = 9.99
IF subtotal > 100 THEN
shipping = 0 ' Free shipping over $100
END IF
total = subtotal + shipping
' Show order summary
TALK "📦 **Order Summary**"
TALK ""
FOR EACH item IN cart
TALK item.quantity + "x " + item.name + " - $" + FORMAT(item.price * item.quantity, "#,##0.00")
NEXT
TALK ""
TALK "Subtotal: $" + FORMAT(subtotal, "#,##0.00")
IF shipping = 0 THEN
TALK "Shipping: FREE ✨"
ELSE
TALK "Shipping: $" + FORMAT(shipping, "#,##0.00")
END IF
TALK "**Total: $" + FORMAT(total, "#,##0.00") + "**"
TALK ""
TALK "Type CONFIRM to place your order."
HEAR confirmation
IF UPPER(confirmation) = "CONFIRM" THEN
' Create order
orderNumber = "ORD-" + FORMAT(NOW(), "YYYY-MMDD") + "-" + FORMAT(RANDOM(100, 999))
WITH order
id = orderNumber
user_id = user_id
items = TOJSON(cart)
subtotal = subtotal
shipping = shipping
total = total
status = "confirmed"
created_at = NOW()
END WITH
SAVE "orders.csv", order
' Clear cart
SET BOT MEMORY "cart_" + user_id, []
' Send confirmation email
SEND MAIL user_email, "Order Confirmed - " + orderNumber,
"Thank you for your order!\n\nOrder: " + orderNumber + "\nTotal: $" + total
TALK "✅ **Order Confirmed!**"
TALK "Order #" + orderNumber
TALK "📧 Confirmation sent to your email"
TALK "🚚 Estimated delivery: 3-5 business days"
TALK ""
TALK "Thank you for shopping with us! 🎉"
RETURN orderNumber
ELSE
TALK "Order cancelled. Your cart items are saved."
RETURN NULL
END IF
Add to Cart Tool: add-to-cart.bas
PARAM product_id AS STRING LIKE "PROD001" DESCRIPTION "Product ID to add"
PARAM quantity AS INTEGER LIKE 1 DESCRIPTION "Quantity to add"
DESCRIPTION "Add a product to the shopping cart"
IF NOT quantity THEN
quantity = 1
END IF
' Find product
product = FIND "products.csv", "id = '" + product_id + "'"
IF NOT product THEN
TALK "Sorry, I couldn't find that product. Please try again."
RETURN NULL
END IF
' Get the current cart for this user
user_id = GET "session.user_id"
cart = GET BOT MEMORY("cart_" + user_id)
IF NOT cart THEN
cart = []
END IF
' Check if product already in cart
found = FALSE
FOR i = 1 TO UBOUND(cart)
IF cart[i].product_id = product_id THEN
cart[i].quantity = cart[i].quantity + quantity
found = TRUE
EXIT FOR
END IF
NEXT
' Add new item if not found
IF NOT found THEN
WITH item
product_id = product_id
name = product.name
price = product.price
quantity = quantity
END WITH
cart = APPEND(cart, item)
END IF
' Save cart
SET BOT MEMORY "cart_" + user_id, cart
' Calculate cart total
cartTotal = 0
cartCount = 0
FOR EACH item IN cart
cartTotal = cartTotal + (item.price * item.quantity)
cartCount = cartCount + item.quantity
NEXT
TALK "✅ Added to cart!"
TALK "**" + product.name + "** - $" + FORMAT(product.price, "#,##0.00")
TALK ""
TALK "🛒 Your cart: " + cartCount + " items ($" + FORMAT(cartTotal, "#,##0.00") + ")"
' Suggest related products
IF product.category THEN
related = FIND "products.csv", "category = '" + product.category + "' AND id <> '" + product_id + "'"
IF UBOUND(related) > 0 THEN
TALK ""
TALK "You might also like: **" + related[1].name + "** - $" + FORMAT(related[1].price, "#,##0.00")
END IF
END IF
RETURN cart
Customization Ideas
Add Product Reviews
ADD TOOL "show-reviews"
' In show-reviews.bas
PARAM product_id AS STRING DESCRIPTION "Product to show reviews for"
reviews = FIND "reviews.csv", "product_id = '" + product_id + "'"
IF UBOUND(reviews) = 0 THEN
TALK "No reviews yet for this product."
RETURN NULL
END IF
avgRating = 0
FOR EACH review IN reviews
avgRating = avgRating + review.rating
NEXT
avgRating = avgRating / UBOUND(reviews)
TALK "⭐ **Customer Reviews** (" + FORMAT(avgRating, "#.#") + "/5)"
TALK ""
FOR EACH review IN FIRST(reviews, 3)
TALK "**" + review.author + "** - " + STRING(review.rating, "⭐")
TALK review.comment
TALK ""
NEXT
Add Discount Codes
PARAM code AS STRING DESCRIPTION "Discount code to apply"
discount = FIND "discounts.csv", "code = '" + UPPER(code) + "' AND valid_until >= '" + FORMAT(NOW(), "YYYY-MM-DD") + "'"
IF NOT discount THEN
TALK "Sorry, that code is invalid or expired."
RETURN NULL
END IF
SET BOT MEMORY "discount_" + user_id, discount
TALK "✅ Discount applied!"
TALK "**" + discount.description + "**"
IF discount.type = "percent" THEN
TALK "You'll save " + discount.value + "% on your order!"
ELSE
TALK "You'll save $" + FORMAT(discount.value, "#,##0.00") + " on your order!"
END IF
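The sketch above stores the discount but the checkout flow does not yet read it. Here is a minimal sketch of applying it while computing totals in checkout.bas, assuming the same discount_ memory key:
' In checkout.bas, after calculating the subtotal
discount = GET BOT MEMORY("discount_" + user_id)
IF discount THEN
IF discount.type = "percent" THEN
subtotal = subtotal - (subtotal * discount.value / 100)
ELSE
subtotal = subtotal - discount.value
END IF
TALK "🏷️ Discount applied: " + discount.description
END IF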
Add Wishlist Feature
ADD TOOL "add-to-wishlist"
ADD TOOL "view-wishlist"
' In add-to-wishlist.bas
PARAM product_id AS STRING DESCRIPTION "Product to add to wishlist"
wishlist = GET USER MEMORY("wishlist")
IF NOT wishlist THEN
wishlist = []
END IF
wishlist = APPEND(wishlist, product_id)
SET USER MEMORY "wishlist", wishlist
product = FIND "products.csv", "id = '" + product_id + "'"
TALK "❤️ Added **" + product.name + "** to your wishlist!"
Add Inventory Check
' Before adding to cart, check stock
stock = FIND "inventory.csv", "product_id = '" + product_id + "'"
IF stock.quantity < quantity THEN
IF stock.quantity = 0 THEN
TALK "😔 Sorry, this item is out of stock."
TALK "Would you like to be notified when it's available?"
ELSE
TALK "⚠️ Only " + stock.quantity + " left in stock."
TALK "Would you like to add " + stock.quantity + " instead?"
END IF
RETURN NULL
END IF
Related Templates
- bank.bas - Payment processing integration
- broadcast.bas - Marketing campaigns
- talk-to-data.bas - Sales analytics
Talk to Data Template
The Talk to Data template enables natural language queries against your structured data, transforming plain English questions into SQL queries and visualizations. It’s like having a data analyst available 24/7.
Topic: Natural Language Data Analysis
This template is perfect for:
- Business intelligence dashboards
- Self-service analytics
- Report generation on demand
- Data exploration without SQL knowledge
- Executive summaries and KPI tracking
The Code
ADD TOOL "query-data"
ADD TOOL "create-chart"
ADD TOOL "export-data"
ADD TOOL "notify-latest-orders"
SET ANSWER MODE "sql"
CLEAR SUGGESTIONS
ADD SUGGESTION "products" AS "Top products chart"
ADD SUGGESTION "sales" AS "Sales across years"
ADD SUGGESTION "orders" AS "Latest orders"
ADD SUGGESTION "chart" AS "Create a chart"
ADD SUGGESTION "export" AS "Export data"
SET CONTEXT "talk-to-data" AS "You are a data analyst assistant helping users query and visualize their data. Convert natural language questions into SQL queries and generate charts. Be helpful and suggest visualizations."
BEGIN TALK
**Talk To Data**
I can help you analyze your data with natural language queries.
**Examples:**
• Show me top products in a rainbow colored pie chart
• Sales across years
• Latest orders this month
• Compare revenue by region
Just ask me anything about your data.
END TALK
BEGIN SYSTEM PROMPT
You are a data analysis assistant that converts natural language to SQL queries.
Chart types:
- timeseries: For data over time
- bar: For comparisons
- pie/donut: For proportions
- line: For trends
When users ask about data:
1. Understand the intent
2. Generate appropriate SQL
3. Suggest relevant visualizations
4. Offer to export if needed
Always use LOWER() for text comparisons.
Use LIKE with %% for partial matches.
Return clear, actionable insights.
END SYSTEM PROMPT
Sample Dialogs
These conversations show how the Talk to Data template works in real-world scenarios.
Dialog 1: Simple Data Query
Dialog 2: Creating a Visualization
Dialog 3: Time Series Analysis
Dialog 4: Latest Orders Notification
Keywords Used
| Keyword | Purpose |
|---|---|
| ADD TOOL | Register data query and visualization tools |
| SET ANSWER MODE | Configure SQL query generation mode |
| SET CONTEXT | Define the data analyst role |
| ADD SUGGESTION | Create quick query buttons |
| BEGIN TALK | Welcome message with examples |
| BEGIN SYSTEM PROMPT | Instructions for SQL generation |
How It Works
- Natural Language Input: User asks a question in plain English
- Intent Understanding: AI interprets what data is needed
- SQL Generation: Query is automatically generated
- Data Retrieval: SQL executes against your database
- Visualization: Results are formatted or charted
- Insights: AI provides analysis and recommendations
Query Tool: query-data.bas
PARAM query AS STRING LIKE "top 10 products by revenue" DESCRIPTION "Natural language data query"
PARAM format AS STRING LIKE "table" DESCRIPTION "Output format: table, chart, export" OPTIONAL
DESCRIPTION "Query data using natural language and return results"
' Convert natural language to SQL using AI
sql = LLM "Convert this to SQL for our database: " + query + ". Tables: products, orders, customers, order_items."
' Execute query
results = SQL sql
IF UBOUND(results) = 0 THEN
TALK "No data found for your query. Try rephrasing or ask what data is available."
RETURN NULL
END IF
' Format output based on request
IF format = "chart" OR INSTR(LOWER(query), "chart") > 0 THEN
' Determine chart type
IF INSTR(LOWER(query), "pie") > 0 THEN
chartType = "pie"
ELSE IF INSTR(LOWER(query), "line") > 0 OR INSTR(LOWER(query), "trend") > 0 THEN
chartType = "line"
ELSE IF INSTR(LOWER(query), "bar") > 0 THEN
chartType = "bar"
ELSE
chartType = "bar" ' Default
END IF
chart = CREATE CHART chartType, results
TALK chart
ELSE
' Display as table
TALK TABLE results
END IF
' Offer insights
IF UBOUND(results) > 5 THEN
insight = LLM "Provide a brief insight about this data: " + TOJSON(results)
TALK "💡 **Insight:** " + insight
END IF
RETURN results
Chart Tool: create-chart.bas
PARAM data_query AS STRING LIKE "sales by month" DESCRIPTION "Data to visualize"
PARAM chart_type AS STRING LIKE "bar" DESCRIPTION "Chart type: bar, line, pie, donut, timeseries"
PARAM title AS STRING LIKE "Monthly Sales" DESCRIPTION "Chart title" OPTIONAL
PARAM colors AS STRING LIKE "rainbow" DESCRIPTION "Color scheme: rainbow, blue, green, custom" OPTIONAL
DESCRIPTION "Create a visualization from data query"
' Get the data
results = CALL query-data(data_query, "raw")
IF NOT results THEN
TALK "Could not retrieve data for chart."
RETURN NULL
END IF
' Set chart options
WITH chartOptions
type = chart_type
title = IIF(title, title, data_query)
colorScheme = IIF(colors, colors, "default")
showLegend = TRUE
showValues = TRUE
END WITH
' Generate chart
chart = CREATE CHART chartOptions.type, results, chartOptions
TALK chart
' Provide chart summary
TALK "📊 Chart shows " + UBOUND(results) + " data points."
RETURN chart
Notify Latest Orders: notify-latest-orders.bas
PARAM since AS STRING LIKE "1 hour" DESCRIPTION "Time period for orders" OPTIONAL
PARAM notify AS STRING LIKE "sales@company.com" DESCRIPTION "Email to notify" OPTIONAL
DESCRIPTION "Get latest orders and optionally send notification"
IF NOT since THEN
since = "1 hour"
END IF
' Calculate time filter
cutoff = DATEADD(NOW(), -1, "hours")
IF INSTR(since, "day") > 0 THEN
cutoff = DATEADD(NOW(), -1, "days")
ELSE IF INSTR(since, "week") > 0 THEN
cutoff = DATEADD(NOW(), -7, "days")
END IF
' Query orders
orders = SQL "SELECT * FROM orders WHERE created_at >= '" + FORMAT(cutoff, "YYYY-MM-DD HH:mm:ss") + "' ORDER BY created_at DESC LIMIT 10"
IF UBOUND(orders) = 0 THEN
TALK "No new orders in the last " + since + "."
RETURN NULL
END IF
' Calculate totals
totalRevenue = 0
FOR EACH order IN orders
totalRevenue = totalRevenue + order.total
NEXT
' Display orders
TALK "🛒 **Latest Orders** (Last " + since + ")"
TALK ""
FOR EACH order IN orders
timeAgo = DATEDIFF(NOW(), order.created_at, "minutes")
TALK "**#" + order.order_number + "** - " + timeAgo + " min ago"
TALK "Customer: " + order.customer_name + " | $" + FORMAT(order.total, "#,##0.00") + " | " + order.status
TALK ""
NEXT
TALK "**Summary:** " + UBOUND(orders) + " orders, $" + FORMAT(totalRevenue, "#,##0.00") + " revenue"
' Send notification if requested
IF notify THEN
emailBody = "New orders in the last " + since + ":\n\n"
emailBody = emailBody + "Total Orders: " + UBOUND(orders) + "\n"
emailBody = emailBody + "Total Revenue: $" + FORMAT(totalRevenue, "#,##0.00")
SEND MAIL notify, "Order Update - " + UBOUND(orders) + " new orders", emailBody
TALK "📧 Notification sent to " + notify
END IF
RETURN orders
Setting Up Your Data
Connecting to Data Sources
The Talk to Data template works with various data sources:
' CSV files
data = FIND "sales.csv"
' Excel files
data = FIND "reports.xlsx", "Sheet1"
' SQL databases
data = SQL "SELECT * FROM products"
' External APIs
data = GET "https://api.example.com/sales"
Schema Configuration
For best results, configure your data schema:
SET CONTEXT "data-schema" AS "
Available tables:
- products: id, name, category, price, stock
- orders: id, customer_id, total, status, created_at
- customers: id, name, email, region
- order_items: order_id, product_id, quantity, price
"
Customization Ideas
Add Scheduled Reports
PARAM reportType AS STRING
IF reportType = "daily summary" THEN
SET SCHEDULE "0 8 * * *" ' Run at 8 AM daily
results = CALL query-data("sales summary for yesterday")
SEND MAIL "team@company.com", "Daily Sales Summary", results
TALK "Daily report sent."
END IF
IF reportType = "weekly dashboard" THEN
SET SCHEDULE "0 9 * * 1" ' Run at 9 AM on Mondays
results = CALL query-data("weekly sales by region")
chart = CALL create-chart("weekly sales", "bar")
SEND MAIL "executives@company.com", "Weekly Dashboard", chart
END IF
Add Natural Language Filters
' Enhanced query understanding
PARAM question AS STRING
' Extract time filters
IF INSTR(LOWER(question), "yesterday") > 0 THEN
dateFilter = "date = '" + FORMAT(NOW() - 1, "YYYY-MM-DD") + "'"
ELSE IF INSTR(LOWER(question), "last week") > 0 THEN
dateFilter = "date >= '" + FORMAT(NOW() - 7, "YYYY-MM-DD") + "'"
ELSE IF INSTR(LOWER(question), "this month") > 0 THEN
dateFilter = "MONTH(date) = " + MONTH(NOW())
END IF
' Apply to query
sql = sql + " WHERE " + dateFilter
Add Comparative Analysis
PARAM metric AS STRING LIKE "revenue"
PARAM compare AS STRING LIKE "this month vs last month"
DESCRIPTION "Compare metrics across time periods"
' Parse comparison periods
IF INSTR(compare, "month") > 0 THEN
current = SQL "SELECT SUM(" + metric + ") FROM sales WHERE MONTH(date) = " + MONTH(NOW())
' Note: simplified example; MONTH(NOW()) - 1 is 0 in January, so add year handling for production use
previous = SQL "SELECT SUM(" + metric + ") FROM sales WHERE MONTH(date) = " + (MONTH(NOW()) - 1)
change = ((current - previous) / previous) * 100
TALK "📊 **" + metric + " Comparison**"
TALK "This month: $" + FORMAT(current, "#,##0")
TALK "Last month: $" + FORMAT(previous, "#,##0")
IF change > 0 THEN
TALK "📈 Change: +" + FORMAT(change, "#,##0.0") + "%"
ELSE
TALK "📉 Change: " + FORMAT(change, "#,##0.0") + "%"
END IF
END IF
Best Practices
- Define Your Schema: Provide clear table and column descriptions in context
- Use Examples: Include example queries in the welcome message
- Handle Edge Cases: Always check for empty results
- Provide Insights: Don’t just show data—interpret it
- Offer Next Steps: Suggest related queries or visualizations
Related Templates
- ai-search.bas - Search documents with AI
- analytics-dashboard.bas - System monitoring
- erp.bas - Enterprise resource planning
WhatsApp Template
The WhatsApp template provides specialized tools for WhatsApp Business API integration, including template message sending, task creation, and WhatsApp-specific features.
Topic: WhatsApp Business Integration
This template is perfect for:
- WhatsApp Business API integration
- Template message campaigns
- WhatsApp-based customer service
- Automated WhatsApp notifications
- Task management via WhatsApp
The Code: send.bas
PARAM phone AS PHONE LIKE "122233333333" DESCRIPTION "WhatsApp phone number with country code"
PARAM template AS STRING LIKE "newsletter-zap.txt" DESCRIPTION "Template file name to send"
PARAM variables AS OBJECT LIKE "{name: 'John'}" DESCRIPTION "Template variables for personalization" OPTIONAL
DESCRIPTION "Send a WhatsApp template message to a phone number"
SEND TEMPLATE TO phone, template, variables
WITH log
timestamp = NOW()
phoneNumber = phone
templateFile = template
status = "sent"
END WITH
SAVE "whatsapp_log.csv", log
TALK "WhatsApp message sent to " + phone
RETURN phone
Sample Dialogs
These conversations show how the WhatsApp template works in real-world scenarios.
Dialog 1: Sending a Template Message
Dialog 2: Creating a Task via WhatsApp
Dialog 3: Personalized Template with Variables
Keywords Used
| Keyword | Purpose |
|---|---|
| PARAM | Define input parameters for the tool |
| DESCRIPTION | Tool description for AI understanding |
| SEND TEMPLATE TO | Send WhatsApp template message |
| WITH/END WITH | Create structured log object |
| SAVE | Log message to CSV file |
| TALK | Confirm action to user |
| RETURN | Return result |
Template Structure
whatsapp.gbai/
├── whatsapp.gbdialog/
│ ├── send.bas # Send template messages
│ └── create-task.bas # Create tasks via WhatsApp
├── whatsapp.gbkb/
│ ├── articles/ # Knowledge base articles
│ │ └── newsletter-zap.txt
│ └── images/ # Media files
└── whatsapp.gbot/
└── config.csv # Bot configuration
Create Task Tool: create-task.bas
PARAM title AS STRING LIKE "Call client" DESCRIPTION "Task title"
PARAM due_date AS DATE LIKE "2025-01-20" DESCRIPTION "Due date" OPTIONAL
PARAM priority AS STRING LIKE "medium" DESCRIPTION "Priority: high, medium, low" OPTIONAL
DESCRIPTION "Create a task from WhatsApp conversation"
IF NOT due_date THEN
due_date = NOW()
END IF
IF NOT priority THEN
priority = "medium"
END IF
WITH task
id = "TASK-" + FORMAT(RANDOM(10000, 99999))
taskTitle = title
dueDate = due_date
taskPriority = priority
createdBy = FROM
createdAt = NOW()
status = "pending"
END WITH
SAVE "tasks.csv", task
CREATE TASK title, priority, FROM
TALK "✅ Task created: " + title
TALK "📅 Due: " + FORMAT(due_date, "MMM DD, YYYY")
TALK "⚡ Priority: " + priority
RETURN task.id
WhatsApp Template Messages
Understanding Template Messages
WhatsApp Business API requires pre-approved templates for initiating conversations. Templates can include:
- Text: Plain text with optional variables
- Media: Images, documents, videos
- Buttons: Quick reply or call-to-action buttons
- Headers: Text, image, document, or video headers
Template File Format
Create templates in the .gbkb/articles/ folder:
newsletter-zap.txt
---
Hello {{1}}!
Here's your weekly newsletter:
📰 Top Stories This Week
{{2}}
🎯 Don't miss our special offer!
{{3}}
Reply STOP to unsubscribe.
Variables in Templates
Variables are placeholders replaced with actual values:
| Variable | Description | Example |
|---|---|---|
| {{1}} | First parameter | Customer name |
| {{2}} | Second parameter | Content body |
| {{3}} | Third parameter | Offer details |
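Putting it together, a call to the send tool above might fill all three placeholders like this (the values are illustrative, and mapping numbered object keys to the {{n}} placeholders is an assumption based on the table above):
variables = {
"1": "Maria",
"2": "AI platform update, new templates released",
"3": "20% off annual plans this week"
}
SEND TEMPLATE TO "5511999999999", "newsletter-zap.txt", variables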
Customization Ideas
Add Bulk Messaging
PARAM template AS STRING DESCRIPTION "Template to send"
PARAM contacts_file AS STRING LIKE "contacts.csv" DESCRIPTION "CSV file with contacts"
DESCRIPTION "Send template to multiple contacts"
contacts = FIND contacts_file
sent = 0
failed = 0
FOR EACH contact IN contacts
variables = {
"name": contact.name,
"company": contact.company
}
result = SEND TEMPLATE TO contact.phone, template, variables
IF result THEN
sent = sent + 1
ELSE
failed = failed + 1
END IF
WAIT 2 ' Rate limiting
NEXT
TALK "📊 Bulk send complete!"
TALK "✅ Sent: " + sent
TALK "❌ Failed: " + failed
Add Message Status Tracking
' After sending
message_id = SEND TEMPLATE TO phone, template, variables
' Store for tracking
WITH messageRecord
id = message_id
phone = phone
template = template
status = "sent"
sentAt = NOW()
END WITH
SAVE "message_status.csv", messageRecord
' Webhook handler for status updates
ON WEBHOOK "whatsapp_status"
status = webhook_data.status
message_id = webhook_data.message_id
UPDATE "message_status.csv" SET status = status WHERE id = message_id
IF status = "delivered" THEN
TALK "✅ Message " + message_id + " delivered"
ELSE IF status = "read" THEN
TALK "👀 Message " + message_id + " read"
ELSE IF status = "failed" THEN
TALK "❌ Message " + message_id + " failed"
END IF
END ON
Add Interactive Buttons
PARAM phone AS PHONE DESCRIPTION "Recipient phone number"
DESCRIPTION "Send message with quick reply buttons"
template_with_buttons = {
"template": "order_confirmation",
"buttons": [
{"type": "quick_reply", "text": "Track Order"},
{"type": "quick_reply", "text": "Contact Support"},
{"type": "quick_reply", "text": "View Details"}
]
}
SEND TEMPLATE TO phone, template_with_buttons
TALK "Message with buttons sent to " + phone
Add Media Messages
PARAM phone AS PHONE DESCRIPTION "Recipient phone number"
PARAM image_url AS STRING DESCRIPTION "URL of image to send"
PARAM caption AS STRING DESCRIPTION "Image caption" OPTIONAL
DESCRIPTION "Send WhatsApp message with image"
' Send image with caption
SEND MEDIA TO phone, image_url, caption
WITH log
timestamp = NOW()
phone = phone
mediaType = "image"
mediaUrl = image_url
caption = caption
status = "sent"
END WITH
SAVE "whatsapp_media_log.csv", log
TALK "📷 Image sent to " + phone
WhatsApp Business API Best Practices
Message Timing
- Session Messages: Free-form messages within 24-hour window after user message
- Template Messages: Pre-approved templates for initiating conversations
- Rate Limits: Respect WhatsApp’s messaging limits
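Here is a minimal sketch of respecting the 24-hour window, assuming your inbound handler maintains a last_inbound_ memory key and that order-update.txt is an approved template:
last_inbound = GET BOT MEMORY("last_inbound_" + phone)
IF last_inbound AND DATEDIFF(NOW(), last_inbound, "hours") < 24 THEN
' Inside the 24-hour session window: free-form messages are allowed
TALK "Thanks for your message! Your order ships today."
ELSE
' Window expired: only pre-approved templates may initiate contact
SEND TEMPLATE TO phone, "order-update.txt", variables
END IF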
Template Approval
- Submit templates via WhatsApp Business Manager
- Wait for approval (usually 24-48 hours)
- Use approved templates only
- Follow content guidelines (no promotional content in utility templates)
Phone Number Format
Always use international format without + or spaces:
- ✅ 5511999999999 (Brazil)
- ✅ 14155551234 (USA)
- ❌ +55 11 99999-9999
- ❌ (11) 99999-9999
Compliance
- Opt-in Required: Only message users who have opted in
- Opt-out Handling: Honor STOP/unsubscribe requests immediately
- Business Verification: Complete WhatsApp business verification
- Quality Rating: Maintain high quality rating to avoid restrictions
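For opt-out handling in particular, a minimal sketch of honoring STOP immediately might look like this (the opted_out_ memory key is an assumption; the bulk-messaging loop above should then skip any contact whose flag is set):
HEAR message
IF UPPER(TRIM(message)) = "STOP" THEN
SET BOT MEMORY "opted_out_" + phone, TRUE
TALK "You have been unsubscribed and will not receive further messages."
RETURN NULL
END IF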
Logging Structure
The whatsapp_log.csv tracks all messages:
| Column | Description |
|---|---|
| timestamp | When message was sent |
| phoneNumber | Recipient phone number |
| templateFile | Template used |
| variables | Personalization variables |
| status | sent/delivered/read/failed |
| messageId | WhatsApp message ID |
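The send.bas log block earlier does not yet write the variables and messageId columns. A hedged sketch of the fuller record, assuming SEND TEMPLATE TO returns the WhatsApp message ID as in the status-tracking example above:
message_id = SEND TEMPLATE TO phone, template, variables
WITH log
timestamp = NOW()
phoneNumber = phone
templateFile = template
variables = TOJSON(variables)
status = "sent"
messageId = message_id
END WITH
SAVE "whatsapp_log.csv", log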
Error Handling
result = SEND TEMPLATE TO phone, template, variables
IF NOT result THEN
' Log the failure
WITH errorLog
timestamp = NOW()
phone = phone
template = template
error = "Send failed"
END WITH
SAVE "whatsapp_errors.csv", errorLog
TALK "❌ Failed to send message to " + phone
TALK "Please verify the phone number and try again."
RETURN NULL
END IF
Related Templates
- broadcast.bas - Mass messaging to contact lists
- store.bas - E-commerce with WhatsApp notifications
- bank.bas - Banking notifications via WhatsApp
Webhook Integrations and Jobs Examples
This document provides complete, production-ready examples demonstrating webhook endpoints and scheduled jobs. All examples use event-driven patterns—no conversation loops.
1. E-Commerce Order Management System
Complete order processing with inventory, payments, and notifications via webhook.
' order-system.bas
' E-commerce order management webhook
WEBHOOK "new-order"
' Extract order data
order_id = body.order_id
customer_email = body.customer.email
customer_name = body.customer.name
items = body.items
shipping_address = body.shipping
payment_method = body.payment.method
payment_token = body.payment.token
' Validate order
IF order_id = "" OR customer_email = "" THEN
result_status = 400
result_error = "Missing required fields"
EXIT
END IF
' Check inventory for all items
inventory_ok = TRUE
out_of_stock_items = ""
FOR EACH item IN items
product = FIND "products", "sku=" + item.sku
IF product.stock < item.quantity THEN
inventory_ok = FALSE
out_of_stock_items = out_of_stock_items + item.name + ", "
END IF
NEXT item
IF NOT inventory_ok THEN
SEND MAIL customer_email, "Order Issue - Items Out of Stock", "Unfortunately, the following items are out of stock: " + out_of_stock_items
result_status = 400
result_error = "Items out of stock"
EXIT
END IF
' Process payment
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY "stripe_key"
payment_result = POST "https://api.stripe.com/v1/charges", body.total, "USD", payment_token, "Order " + order_id
IF payment_result.status <> "succeeded" THEN
SEND MAIL customer_email, "Payment Failed", "Your payment could not be processed. Please try again."
result_status = 402
result_error = "Payment failed"
EXIT
END IF
' Update inventory
FOR EACH item IN items
current_stock = FIND "products", "sku=" + item.sku
new_stock = current_stock.stock - item.quantity
UPDATE "products", "sku=" + item.sku, new_stock, NOW()
IF new_stock < 10 THEN
SEND MAIL "inventory@company.com", "Low Stock Alert: " + item.sku, "Stock level: " + new_stock
END IF
NEXT item
' Save order record
SAVE "orders", order_id, customer_email, customer_name, items, body.total, shipping_address, payment_result.id, "confirmed", NOW()
' Generate invoice PDF
invoice_pdf = GENERATE PDF "templates/invoice.html", order_id, customer_name, customer_email, items, body.subtotal, body.tax, body.shipping_cost, body.total, FORMAT(NOW(), "MMMM DD, YYYY"), "invoices/" + order_id + ".pdf"
' Send confirmation email
email_body = "Thank you for your order, " + customer_name + "!\n\nOrder #: " + order_id + "\nTotal: $" + body.total + "\n\nYour invoice is attached."
SEND MAIL customer_email, "Order Confirmed - #" + order_id, email_body, invoice_pdf.url
' Notify warehouse
POST "https://warehouse.internal/api/orders", order_id, items, shipping_address, "normal"
result_status = "confirmed"
result_order_id = order_id
result_payment_id = payment_result.id
2. HR Onboarding Automation
Complete employee onboarding workflow triggered by webhook.
' onboarding.bas
' HR onboarding automation webhook
WEBHOOK "new-employee"
employee_name = body.name
employee_email = body.email
department = body.department
start_date = body.start_date
manager_email = body.manager_email
role = body.role
' Validate input
IF employee_email = "" OR employee_name = "" THEN
result_status = 400
result_error = "Missing employee name or email"
EXIT
END IF
' Create employee record
employee_id = "EMP-" + FORMAT(NOW(), "YYYYMMDD") + "-" + LEFT(GUID(), 4)
SAVE "employees", employee_id, employee_name, employee_email, department, role, manager_email, start_date, "onboarding", NOW()
' Create tasks for IT setup
CREATE TASK "Create email account for " + employee_name, "it@company.com", start_date
CREATE TASK "Setup laptop for " + employee_name, "it@company.com", start_date
CREATE TASK "Create " + department + " system access for " + employee_name, "it@company.com", start_date
' Create tasks for HR
CREATE TASK "Prepare employment documents for " + employee_name, "hr@company.com", start_date
CREATE TASK "Schedule orientation for " + employee_name, "hr@company.com", start_date
CREATE TASK "Add " + employee_name + " to benefits enrollment", "hr@company.com", start_date
' Send welcome email to new employee
USE KB "employee-handbook"
SET CONTEXT "You are a friendly HR assistant. Create a warm, professional welcome message."
welcome_content = LLM "Write a welcome email for " + employee_name + " joining as " + role + " in " + department + " department, starting on " + start_date
SEND MAIL employee_email, "Welcome to the Team, " + employee_name + "!", welcome_content
' Notify manager
manager_message = "New team member alert!\n\n" + employee_name + " will be joining your team as " + role + " on " + start_date + ".\n\nPlease prepare:\n- First week schedule\n- Team introduction meeting\n- Project assignments"
SEND MAIL manager_email, "New Team Member: " + employee_name, manager_message
' Post to Slack
slack_channel = "#" + LCASE(department)
POST "https://hooks.slack.com/services/xxx", slack_channel, "🎉 Please welcome " + employee_name + " who will be joining us as " + role + " on " + start_date + "!"
' Schedule 30-60-90 day check-ins
check_in_dates = [30, 60, 90]
FOR EACH days IN check_in_dates
check_in_date = DATEADD(start_date, days, "day")
CREATE TASK days + "-day check-in with " + employee_name, manager_email, check_in_date
NEXT days
result_status = "success"
result_employee_id = employee_id
result_tasks_created = 9
3. Daily Business Intelligence Report
Automated daily report job with AI-generated insights.
' daily-report.bas
' Automated daily business intelligence report
SET SCHEDULE "daily-bi-report", "0 7 * * 1-5"
today = FORMAT(NOW(), "YYYY-MM-DD")
yesterday = FORMAT(DATEADD(NOW(), -1, "day"), "YYYY-MM-DD")
' Gather sales data
sales_today = FIND "orders", "DATE(created_at)='" + today + "'"
sales_yesterday = FIND "orders", "DATE(created_at)='" + yesterday + "'"
total_revenue_today = AGGREGATE "SUM", sales_today, "total"
total_revenue_yesterday = AGGREGATE "SUM", sales_yesterday, "total"
order_count_today = AGGREGATE "COUNT", sales_today, "id"
' Guard against division by zero when there was no revenue yesterday
IF total_revenue_yesterday > 0 THEN
revenue_change = ((total_revenue_today - total_revenue_yesterday) / total_revenue_yesterday) * 100
ELSE
revenue_change = 0
END IF
' Gather support metrics
tickets_today = FIND "support_tickets", "DATE(created_at)='" + today + "'"
tickets_resolved = FILTER tickets_today, "status=resolved"
avg_resolution_time = AGGREGATE "AVG", tickets_resolved, "resolution_time_hours"
' Gather inventory alerts
low_stock = FIND "products", "stock < 10"
out_of_stock = FIND "products", "stock = 0"
' Compile data for AI analysis
report_data = "Date: " + today + ", Revenue: $" + total_revenue_today + ", Orders: " + order_count_today + ", Change: " + revenue_change + "%, Tickets: " + UBOUND(tickets_today) + " opened, " + UBOUND(tickets_resolved) + " resolved, Low stock: " + UBOUND(low_stock)
' Generate AI insights
SET CONTEXT "You are a business analyst. Analyze this data and provide actionable insights. Be concise and focus on key trends and recommendations."
ai_insights = LLM "Analyze this business data and provide 3-5 key insights:\n\n" + report_data
' Build report PDF
report_pdf = GENERATE PDF "templates/daily-report.html", "Daily Business Report - " + today, report_data, ai_insights, NOW(), "reports/daily-" + today + ".pdf"
' Send to executives
executives = ["ceo@company.com", "cfo@company.com", "coo@company.com"]
FOR EACH exec IN executives
SEND MAIL exec, "Daily Business Report - " + today, "Please find attached today's business intelligence report.\n\n" + ai_insights, report_pdf.url
NEXT exec
' Post summary to Slack
slack_summary = "📊 *Daily Report - " + today + "*\n\n💰 Revenue: $" + FORMAT(total_revenue_today, "#,##0.00") + " (" + FORMAT(revenue_change, "+0.0") + "%)\n📦 Orders: " + order_count_today + "\n🎫 Support Tickets: " + UBOUND(tickets_today) + " opened, " + UBOUND(tickets_resolved) + " resolved\n⚠️ Low Stock Items: " + UBOUND(low_stock)
POST "https://hooks.slack.com/services/xxx", "#executive-updates", slack_summary
' Store report in database
SAVE "daily_reports", today, report_data, ai_insights, report_pdf.url
PRINT "Daily report generated and distributed for " + today
4. Document Processing Pipeline
Automated document intake, processing, and classification via webhook.
' document-pipeline.bas
' Automated document processing and classification
WEBHOOK "document-upload"
document_url = body.document_url
document_name = body.filename
uploader_email = body.uploader_email
IF document_url = "" THEN
result_status = 400
result_error = "No document URL provided"
EXIT
END IF
' Download document
local_path = DOWNLOAD document_url, "incoming/" + document_name
' Determine file type (useful for format-specific handling)
file_extension = LCASE(RIGHT(document_name, 4))
' Read the document content
content = GET local_path
' Classify document using AI
SET CONTEXT "You are a document classifier. Classify this document into one of these categories: invoice, contract, report, correspondence, legal, hr, other. Respond with just the category name."
classification_prompt = "Classify this document:\n\n" + LEFT(content, 5000)
category = TRIM(LCASE(LLM classification_prompt))
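' Illustrative guard (not part of the original pipeline): LLM output is free
' text, so clamp unexpected categories to "other" before using one as a folder
IF INSTR("invoice contract report correspondence legal hr other", category) = 0 THEN
category = "other"
END IF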
' Move to appropriate folder
destination = category + "/" + document_name
MOVE local_path, destination
' Create searchable index entry
doc_id = INSERT "documents", document_name, document_url, destination, category, LEFT(content, 1000), content, uploader_email, NOW()
' Add to knowledge base for future queries
USE KB category + "-docs"
' Category-specific processing
IF category = "invoice" THEN
SET CONTEXT "Extract from this invoice: vendor name, invoice number, date, due date, total amount. Respond in JSON."
invoice_data = LLM content
INSERT "accounts_payable", doc_id, invoice_data, "pending_review", NOW()
SEND MAIL "accounting@company.com", "New Invoice for Review", "A new invoice has been uploaded.\n\nDocument: " + document_name
END IF
IF category = "contract" THEN
SET CONTEXT "Extract from this contract: parties involved, effective date, expiration date, key terms. Respond in JSON."
contract_data = LLM content
INSERT "contracts", doc_id, contract_data, "active", NOW()
SEND MAIL "legal@company.com", "New Contract Uploaded", "A new contract has been processed.\n\nDocument: " + document_name
END IF
IF category = "hr" THEN
SEND MAIL "hr@company.com", "New HR Document", "A new HR document has been uploaded: " + document_name
END IF
' Notify uploader
SEND MAIL uploader_email, "Document Processed: " + document_name, "Your document has been successfully processed.\n\nCategory: " + category + "\nDocument ID: " + doc_id
result_status = "processed"
result_doc_id = doc_id
result_category = category
5. Real-time Data Sync (CRM to ERP)
Bidirectional sync between systems via webhook.
' data-sync.bas
' Real-time data synchronization between CRM and ERP
WEBHOOK "crm-update"
event_type = body.event
record_type = body.record_type
record_id = body.record_id
data = body.data
timestamp = body.timestamp
' Log sync event
INSERT "sync_logs", "crm", event_type, record_type, record_id, timestamp, NOW()
' Check for sync conflicts
last_erp_update = FIND "erp_sync_status", "record_id=" + record_id
IF last_erp_update.updated_at > timestamp THEN
INSERT "sync_conflicts", record_id, timestamp, last_erp_update.updated_at, data, last_erp_update.data, "pending_resolution"
SEND MAIL "data-admin@company.com", "Sync Conflict Detected", "Record " + record_id + " has conflicting updates. Please resolve in the admin portal."
result_status = "conflict"
result_message = "Newer data exists in ERP"
EXIT
END IF
' Transform data for ERP format based on record type
IF record_type = "customer" THEN
erp_endpoint = "/api/customers/" + record_id
erp_customer_code = record_id
erp_company_name = data.company
erp_contact_name = data.contact_first_name + " " + data.contact_last_name
erp_email = data.email
erp_phone = data.phone
END IF
IF record_type = "order" THEN
erp_endpoint = "/api/orders/" + record_id
erp_order_number = record_id
erp_customer_code = data.customer_id
erp_order_date = data.created_at
erp_total = data.total
END IF
IF record_type = "product" THEN
erp_endpoint = "/api/products/" + record_id
erp_sku = record_id
erp_description = data.name
erp_unit_price = data.price
END IF
' Send to ERP
erp_api_key = GET BOT MEMORY "erp_api_key"
SET HEADER "Authorization", "Bearer " + erp_api_key
SET HEADER "Content-Type", "application/json"
IF event_type = "create" THEN
erp_result = POST "https://erp.company.com" + erp_endpoint, data
ELSE IF event_type = "update" THEN
erp_result = PUT "https://erp.company.com" + erp_endpoint, data
ELSE IF event_type = "delete" THEN
erp_result = DELETE "https://erp.company.com" + erp_endpoint
END IF
' Update sync status
SAVE "erp_sync_status", record_id, record_type, timestamp, NOW(), erp_result.status, data
result_status = "synced"
result_record_id = record_id
result_erp_status = erp_result.status
6. Scheduled Lead Nurturing Job
Automated lead follow-up and nurturing campaign.
' lead-nurturing.bas
' Scheduled lead nurturing campaign
SET SCHEDULE "lead-nurture", "0 9 * * *"
' Find leads needing follow-up
cold_leads_3_days = FIND "leads", "status='cold' AND DATEDIFF(NOW(), last_contact) >= 3"
cold_leads_7_days = FIND "leads", "status='cold' AND DATEDIFF(NOW(), last_contact) >= 7"
cold_leads_14_days = FIND "leads", "status='cold' AND DATEDIFF(NOW(), last_contact) >= 14"
cold_leads_30_days = FIND "leads", "status='cold' AND DATEDIFF(NOW(), last_contact) >= 30"
' 3-day follow-up: Tips email
FOR EACH lead IN cold_leads_3_days
IF lead.nurture_stage = 0 THEN
SEND MAIL lead.email, "5 Tips to Improve Your Business", "templates/nurture-tips.html"
UPDATE "leads", "id=" + lead.id, 1, NOW()
END IF
NEXT lead
' 7-day follow-up: Case study
FOR EACH lead IN cold_leads_7_days
IF lead.nurture_stage = 1 THEN
SEND MAIL lead.email, "Case Study: How We Helped Similar Companies", "templates/nurture-case-study.html"
UPDATE "leads", "id=" + lead.id, 2, NOW()
END IF
NEXT lead
' 14-day follow-up: Free consultation
FOR EACH lead IN cold_leads_14_days
IF lead.nurture_stage = 2 THEN
SEND MAIL lead.email, "Free Consultation Offer", "templates/nurture-consultation.html"
UPDATE "leads", "id=" + lead.id, 3, NOW()
END IF
NEXT lead
' 30-day follow-up: Special offer
FOR EACH lead IN cold_leads_30_days
IF lead.nurture_stage = 3 THEN
SEND MAIL lead.email, "Special Limited Time Offer", "templates/nurture-special-offer.html"
UPDATE "leads", "id=" + lead.id, 4, NOW()
END IF
NEXT lead
' Log nurturing stats
PRINT "Lead nurturing completed: " + UBOUND(cold_leads_3_days) + " at stage 1, " + UBOUND(cold_leads_7_days) + " at stage 2"
7. Payment Collection Reminder Job
Automated payment reminders and collection workflow.
' payment-collection.bas
' Scheduled payment collection reminders
SET SCHEDULE "payment-reminders", "0 8 * * 1-5"
' Find overdue invoices
due_today = FIND "invoices", "status='pending' AND due_date = CURDATE()"
overdue_3_days = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) = 3"
overdue_7_days = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) = 7"
overdue_14_days = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) = 14"
overdue_30_days = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) >= 30"
' Due today reminder
FOR EACH invoice IN due_today
customer = FIND "customers", "id=" + invoice.customer_id
SEND MAIL customer.email, "Payment Due Today - Invoice #" + invoice.id, "Your invoice #" + invoice.id + " for $" + invoice.amount + " is due today. Please make payment to avoid late fees."
NEXT invoice
' 3-day overdue: First reminder
FOR EACH invoice IN overdue_3_days
customer = FIND "customers", "id=" + invoice.customer_id
SEND MAIL customer.email, "Payment Overdue - Invoice #" + invoice.id, "Your invoice #" + invoice.id + " for $" + invoice.amount + " is now 3 days overdue. Please remit payment as soon as possible."
UPDATE "invoices", "id=" + invoice.id, "first_reminder_sent", NOW()
NEXT invoice
' 7-day overdue: Second reminder with late fee warning
FOR EACH invoice IN overdue_7_days
customer = FIND "customers", "id=" + invoice.customer_id
SEND MAIL customer.email, "URGENT: Payment Overdue - Invoice #" + invoice.id, "Your invoice #" + invoice.id + " is now 7 days overdue. A late fee may be applied if not paid within 7 days."
UPDATE "invoices", "id=" + invoice.id, "second_reminder_sent", NOW()
NEXT invoice
' 14-day overdue: Final notice
FOR EACH invoice IN overdue_14_days
customer = FIND "customers", "id=" + invoice.customer_id
late_fee = invoice.amount * 0.05
new_total = invoice.amount + late_fee
SEND MAIL customer.email, "FINAL NOTICE: Invoice #" + invoice.id, "Your invoice is now 14 days overdue. A 5% late fee ($" + late_fee + ") has been applied. New total: $" + new_total
UPDATE "invoices", "id=" + invoice.id, late_fee, new_total, "final_notice_sent", NOW()
' Notify accounts receivable
SEND MAIL "ar@company.com", "Invoice Escalation: #" + invoice.id, "Invoice #" + invoice.id + " for " + customer.name + " is 14 days overdue. Amount: $" + new_total
NEXT invoice
' 30+ day overdue: Send to collections
FOR EACH invoice IN overdue_30_days
IF invoice.status <> "collections" THEN
customer = FIND "customers", "id=" + invoice.customer_id
UPDATE "invoices", "id=" + invoice.id, "collections", NOW()
' Notify collections team
SEND MAIL "collections@company.com", "New Collections Account: " + customer.name, "Invoice #" + invoice.id + " - $" + invoice.total_with_fees + "\nCustomer: " + customer.name + "\nDays overdue: " + DATEDIFF(NOW(), invoice.due_date)
END IF
NEXT invoice
PRINT "Payment reminders sent: " + UBOUND(due_today) + " due today, " + UBOUND(overdue_3_days) + " 3-day, " + UBOUND(overdue_7_days) + " 7-day"
8. Appointment Scheduling Webhook
Handle appointment bookings from external calendar systems.
' appointment-webhook.bas
' Handle appointment scheduling from external systems
WEBHOOK "appointment-booked"
appointment_id = body.appointment_id
customer_email = body.customer.email
customer_name = body.customer.name
customer_phone = body.customer.phone
service_type = body.service
appointment_date = body.date
appointment_time = body.time
staff_id = body.staff_id
' Validate
IF appointment_id = "" OR customer_email = "" THEN
result_status = 400
result_error = "Missing required fields"
EXIT
END IF
' Check staff availability
existing = FIND "appointments", "staff_id='" + staff_id + "' AND date='" + appointment_date + "' AND time='" + appointment_time + "'"
IF UBOUND(existing) > 0 THEN
result_status = 409
result_error = "Time slot not available"
EXIT
END IF
' Save appointment
SAVE "appointments", appointment_id, customer_email, customer_name, customer_phone, service_type, appointment_date, appointment_time, staff_id, "confirmed", NOW()
' Get staff info
staff = FIND "staff", "id=" + staff_id
' Send confirmation to customer
confirmation_msg = "Your appointment has been confirmed!\n\n📅 " + appointment_date + " at " + appointment_time + "\n🏢 Service: " + service_type + "\n👤 With: " + staff.name + "\n\nPlease arrive 10 minutes early."
SEND MAIL customer_email, "Appointment Confirmed - " + service_type, confirmation_msg
' Schedule a reminder 24 hours before the appointment
SET SCHEDULE "reminder-" + appointment_id, DATEADD(appointment_date + " " + appointment_time, -24, "hour")
' Notify staff
SEND MAIL staff.email, "New Appointment: " + customer_name, "You have a new appointment:\n\n📅 " + appointment_date + " at " + appointment_time + "\n👤 Customer: " + customer_name + "\n📞 Phone: " + customer_phone + "\n🏢 Service: " + service_type
' Add to calendar
BOOK staff.email, "Appointment: " + customer_name + " - " + service_type, appointment_date, appointment_time, 60
result_status = "confirmed"
result_appointment_id = appointment_id
See Also
- Keywords Reference — Complete keyword documentation
- WEBHOOK — Creating API endpoints
- SET SCHEDULE — Scheduled automation
- Data Operations — Database keywords
- File Operations — File handling
- HTTP Operations — REST API calls
Data Sync Tools
This document provides a collection of specialized data synchronization tools. Instead of one monolithic sync system, these modular tools can be combined as needed.
Overview
Data synchronization is split into focused, reusable tools:
| File | Purpose | Data Synced |
|---|---|---|
| crm-sync.bas | CRM to/from internal database | Bidirectional customer data |
| erp-sync.bas | ERP system integration | Orders, inventory, accounting |
| inventory-sync.bas | Real-time inventory updates | Stock levels across systems |
| user-sync.bas | User/employee directory sync | HR systems, Active Directory |
| conflict-resolver.bas | Handle sync conflicts | Automated or manual resolution |
| sync-monitor.bas | Monitor sync health | Alerts and dashboards |
Tool 1: CRM Sync
Bidirectional synchronization with CRM systems (Salesforce, HubSpot, etc.).
' crm-sync.bas
' Bidirectional CRM synchronization tool
WEBHOOK "crm-inbound"
event = body.event
record_type = body.type
record_id = body.id
data = body.data
source_timestamp = body.timestamp
' Validate webhook signature
signature = headers.x_webhook_signature
secret = GET BOT MEMORY "crm_webhook_secret"
IF NOT VERIFY_SIGNATURE(body, signature, secret) THEN
WITH result = NEW OBJECT
.status = 401
.error = "Invalid signature"
END WITH
EXIT
END IF
' Log incoming sync event
WITH sync_event = NEW OBJECT
.direction = "inbound"
.source = "crm"
.event = event
.record_type = record_type
.record_id = record_id
.timestamp = source_timestamp
.received_at = NOW()
END WITH
INSERT "sync_events", sync_event
' Check for conflicts before processing
existing = FIND "local_data", "external_id=" + record_id
IF existing.updated_at > source_timestamp THEN
' Local data is newer - create conflict record
WITH conflict = NEW OBJECT
.record_id = record_id
.local_timestamp = existing.updated_at
.remote_timestamp = source_timestamp
.local_data = existing
.remote_data = data
.status = "pending"
.created_at = NOW()
END WITH
INSERT "sync_conflicts", conflict
WITH result = NEW OBJECT
.status = "conflict"
.conflict_id = conflict.id
END WITH
EXIT
END IF
' Transform CRM data to local format
SELECT CASE record_type
CASE "contact"
WITH local_record = NEW OBJECT
.external_id = record_id
.external_source = "crm"
.first_name = data.firstName
.last_name = data.lastName
.email = data.email
.phone = data.phone
.company = data.account.name
.title = data.title
.source = data.leadSource
.status = data.status
.updated_at = NOW()
.synced_at = NOW()
END WITH
table_name = "contacts"
CASE "account"
WITH local_record = NEW OBJECT
.external_id = record_id
.external_source = "crm"
.company_name = data.name
.industry = data.industry
.website = data.website
.annual_revenue = data.annualRevenue
.employee_count = data.numberOfEmployees
.billing_address = data.billingAddress
.updated_at = NOW()
.synced_at = NOW()
END WITH
table_name = "accounts"
CASE "opportunity"
WITH local_record = NEW OBJECT
.external_id = record_id
.external_source = "crm"
.name = data.name
.account_id = data.accountId
.amount = data.amount
.stage = data.stageName
.probability = data.probability
.close_date = data.closeDate
.updated_at = NOW()
.synced_at = NOW()
END WITH
table_name = "opportunities"
CASE ELSE
WITH result = NEW OBJECT
.status = 400
.error = "Unknown record type: " + record_type
END WITH
EXIT
END SELECT
' Apply changes based on event type
SELECT CASE event
CASE "created"
INSERT table_name, local_record
CASE "updated"
UPDATE table_name, "external_id=" + record_id, local_record
CASE "deleted"
WITH soft_delete = NEW OBJECT
.deleted_at = NOW()
.deleted_from = "crm"
END WITH
UPDATE table_name, "external_id=" + record_id, soft_delete
END SELECT
' Update sync status
WITH sync_status = NEW OBJECT
.record_id = record_id
.record_type = record_type
.last_sync = NOW()
.sync_direction = "inbound"
.status = "success"
END WITH
SAVE "sync_status", record_type + "_" + record_id, sync_status
WITH result = NEW OBJECT
.status = "synced"
.record_id = record_id
.direction = "inbound"
END WITH
CRM Outbound Sync
' crm-outbound.bas
' Push local changes to CRM
ON "contacts", "INSERT,UPDATE"
record = trigger.new_data
old_record = trigger.old_data
' Skip if this update came from CRM (prevent loops)
IF record.external_source = "crm" AND record.synced_at = record.updated_at THEN
EXIT
END IF
' Check if record exists in CRM
IF record.external_id = "" THEN
' New record - create in CRM
operation = "create"
endpoint = "/api/contacts"
ELSE
' Existing record - update in CRM
operation = "update"
endpoint = "/api/contacts/" + record.external_id
END IF
' Transform to CRM format
WITH crm_data = NEW OBJECT
.firstName = record.first_name
.lastName = record.last_name
.email = record.email
.phone = record.phone
.title = record.title
.leadSource = record.source
END WITH
' Send to CRM
crm_api_key = GET BOT MEMORY "crm_api_key"
SET HEADER "Authorization", "Bearer " + crm_api_key
SET HEADER "Content-Type", "application/json"
IF operation = "create" THEN
response = POST "https://api.crm.com" + endpoint, crm_data
' Store external ID
WITH id_update = NEW OBJECT
.external_id = response.id
.external_source = "crm"
.synced_at = NOW()
END WITH
UPDATE "contacts", "id=" + record.id, id_update
ELSE
response = PUT "https://api.crm.com" + endpoint, crm_data
WITH sync_update = NEW OBJECT
.synced_at = NOW()
END WITH
UPDATE "contacts", "id=" + record.id, sync_update
END IF
' Log outbound sync
WITH sync_event = NEW OBJECT
.direction = "outbound"
.destination = "crm"
.event = operation
.record_type = "contact"
.record_id = record.id
.external_id = record.external_id
.timestamp = NOW()
.response_status = response.status
END WITH
INSERT "sync_events", sync_event
Tool 2: ERP Sync
Integration with ERP systems for orders, inventory, and accounting.
' erp-sync.bas
' ERP system synchronization tool
WEBHOOK "erp-webhook"
event_type = body.eventType
entity = body.entity
entity_id = body.entityId
payload = body.payload
' Authenticate request
api_key = headers.x_api_key
expected_key = GET BOT MEMORY "erp_webhook_key"
IF api_key <> expected_key THEN
WITH result = NEW OBJECT
.status = 401
.error = "Unauthorized"
END WITH
EXIT
END IF
' Route to appropriate handler
SELECT CASE entity
CASE "salesOrder"
CALL process_sales_order(event_type, entity_id, payload)
CASE "purchaseOrder"
CALL process_purchase_order(event_type, entity_id, payload)
CASE "invoice"
CALL process_invoice(event_type, entity_id, payload)
CASE "inventory"
CALL process_inventory_update(entity_id, payload)
CASE "shipment"
CALL process_shipment(event_type, entity_id, payload)
END SELECT
WITH result = NEW OBJECT
.status = "processed"
.entity = entity
.entity_id = entity_id
END WITH
' --- Sub-procedures ---
SUB process_sales_order(event_type, order_id, data)
WITH order = NEW OBJECT
.erp_order_id = order_id
.order_number = data.orderNumber
.customer_id = data.customerId
.order_date = data.orderDate
.ship_date = data.requestedShipDate
.status = data.status
.subtotal = data.subtotal
.tax = data.taxAmount
.shipping = data.shippingAmount
.total = data.total
.currency = data.currency
.updated_at = NOW()
END WITH
IF event_type = "created" THEN
INSERT "orders", order
' Create line items
FOR EACH item IN data.lineItems
WITH line = NEW OBJECT
.order_id = order_id
.sku = item.sku
.description = item.description
.quantity = item.quantity
.unit_price = item.unitPrice
.discount = item.discount
.total = item.lineTotal
END WITH
INSERT "order_lines", line
NEXT item
' Notify sales team
SEND MAIL "sales@company.com", "New Order: " + data.orderNumber, "Order total: $" + data.total
ELSE IF event_type = "updated" THEN
UPDATE "orders", "erp_order_id=" + order_id, order
' Check for status changes
old_order = FIND "orders", "erp_order_id=" + order_id
IF old_order.status <> data.status THEN
' Notify customer of status change
customer = FIND "customers", "id=" + data.customerId
SEND MAIL customer.email, "Order Update: " + data.orderNumber, "Your order status is now: " + data.status
END IF
END IF
END SUB
SUB process_inventory_update(sku, data)
WITH inventory = NEW OBJECT
.sku = sku
.quantity_on_hand = data.qtyOnHand
.quantity_available = data.qtyAvailable
.quantity_reserved = data.qtyReserved
.quantity_on_order = data.qtyOnOrder
.warehouse = data.warehouse
.bin_location = data.binLocation
.last_count_date = data.lastCountDate
.updated_at = NOW()
END WITH
SAVE "inventory", sku, inventory
' Check for low stock alert
product = FIND "products", "sku=" + sku
IF data.qtyAvailable < product.reorder_point THEN
WITH alert = NEW OBJECT
.sku = sku
.product_name = product.name
.current_qty = data.qtyAvailable
.reorder_point = product.reorder_point
.reorder_qty = product.reorder_quantity
.created_at = NOW()
END WITH
INSERT "stock_alerts", alert
SEND MAIL "purchasing@company.com", "Low Stock Alert: " + sku, "Product " + product.name + " is below reorder point. Current: " + data.qtyAvailable + ", Reorder at: " + product.reorder_point
END IF
END SUB
SUB process_shipment(event_type, shipment_id, data)
WITH shipment = NEW OBJECT
.erp_shipment_id = shipment_id
.order_id = data.orderId
.carrier = data.carrier
.tracking_number = data.trackingNumber
.ship_date = data.shipDate
.estimated_delivery = data.estimatedDelivery
.status = data.status
.updated_at = NOW()
END WITH
IF event_type = "created" THEN
INSERT "shipments", shipment
' Notify customer
order = FIND "orders", "erp_order_id=" + data.orderId
customer = FIND "customers", "id=" + order.customer_id
tracking_url = "https://track.carrier.com/" + data.trackingNumber
SEND MAIL customer.email, "Your Order Has Shipped!", "Good news! Your order " + order.order_number + " has shipped.\n\nTracking: " + data.trackingNumber + "\nCarrier: " + data.carrier + "\nEstimated Delivery: " + data.estimatedDelivery + "\n\nTrack your package: " + tracking_url
ELSE IF event_type = "updated" THEN
UPDATE "shipments", "erp_shipment_id=" + shipment_id, shipment
IF data.status = "delivered" THEN
' Update order status
WITH order_update = NEW OBJECT
.status = "delivered"
.delivered_at = NOW()
END WITH
UPDATE "orders", "erp_order_id=" + data.orderId, order_update
END IF
END IF
END SUB
Tool 3: Inventory Sync
Real-time inventory synchronization across multiple systems.
' inventory-sync.bas
' Real-time inventory synchronization
WEBHOOK "inventory-update"
source = body.source
sku = body.sku
warehouse = body.warehouse
adjustment_type = body.type
quantity = body.quantity
reason = body.reason
reference = body.reference
' Get current inventory
current = FIND "inventory", "sku=" + sku + " AND warehouse=" + warehouse
' Calculate new quantity based on adjustment type
SELECT CASE adjustment_type
CASE "receipt"
new_qty = current.quantity_on_hand + quantity
CASE "shipment"
new_qty = current.quantity_on_hand - quantity
CASE "adjustment"
new_qty = quantity
CASE "transfer_out"
new_qty = current.quantity_on_hand - quantity
CASE "transfer_in"
new_qty = current.quantity_on_hand + quantity
CASE "count"
new_qty = quantity
END SELECT
' Validate quantity
IF new_qty < 0 THEN
WITH result = NEW OBJECT
.status = 400
.error = "Inventory cannot be negative"
.current_qty = current.quantity_on_hand
.attempted_qty = new_qty
END WITH
EXIT
END IF
' Update local inventory
WITH inv_update = NEW OBJECT
.quantity_on_hand = new_qty
.updated_at = NOW()
.last_adjustment_type = adjustment_type
.last_adjustment_source = source
END WITH
UPDATE "inventory", "sku=" + sku + " AND warehouse=" + warehouse, inv_update
' Log the transaction
WITH transaction = NEW OBJECT
.sku = sku
.warehouse = warehouse
.adjustment_type = adjustment_type
.quantity_before = current.quantity_on_hand
.quantity_change = quantity
.quantity_after = new_qty
.reason = reason
.reference = reference
.source = source
.created_at = NOW()
END WITH
INSERT "inventory_transactions", transaction
' Sync to other systems based on source
systems_to_sync = ["erp", "ecommerce", "pos", "wms"]
FOR EACH system IN systems_to_sync
IF system <> source THEN
CALL sync_inventory_to_system(system, sku, warehouse, new_qty)
END IF
NEXT system
' Check for alerts
product = FIND "products", "sku=" + sku
IF new_qty <= product.reorder_point AND current.quantity_on_hand > product.reorder_point THEN
' Just crossed below reorder point
WITH alert_msg = NEW OBJECT
.text = "⚠️ *Low Stock Alert*\n\nSKU: " + sku + "\nProduct: " + product.name + "\nWarehouse: " + warehouse + "\nCurrent Qty: " + new_qty + "\nReorder Point: " + product.reorder_point
END WITH
POST "https://hooks.slack.com/services/xxx", alert_msg
END IF
IF new_qty = 0 THEN
' Out of stock
WITH alert_msg = NEW OBJECT
.text = "🚨 *Out of Stock*\n\nSKU: " + sku + "\nProduct: " + product.name + "\nWarehouse: " + warehouse
END WITH
POST "https://hooks.slack.com/services/xxx", alert_msg
' Disable on e-commerce
CALL disable_product_ecommerce(sku)
END IF
WITH result = NEW OBJECT
.status = "synced"
.sku = sku
.warehouse = warehouse
.new_quantity = new_qty
END WITH
' --- Helper procedures ---
SUB sync_inventory_to_system(system, sku, warehouse, qty)
SELECT CASE system
CASE "erp"
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY "erp_api_key"
WITH erp_payload = NEW OBJECT
.sku = sku
.warehouseCode = warehouse
.qtyOnHand = qty
END WITH
PUT "https://erp.company.com/api/inventory/" + sku, erp_payload
CASE "ecommerce"
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY "ecom_api_key"
WITH ecom_payload = NEW OBJECT
.inventory_quantity = qty
END WITH
PUT "https://api.shopify.com/products/" + sku + "/inventory", ecom_payload
CASE "pos"
SET HEADER "X-API-Key", GET BOT MEMORY "pos_api_key"
WITH pos_payload = NEW OBJECT
.item_id = sku
.quantity = qty
.location_id = warehouse
END WITH
POST "https://api.pos.com/inventory/update", pos_payload
CASE "wms"
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY "wms_api_key"
WITH wms_payload = NEW OBJECT
.sku = sku
.location = warehouse
.qty = qty
END WITH
PUT "https://wms.company.com/api/inventory", wms_payload
END SELECT
' Log sync
WITH sync_log = NEW OBJECT
.system = system
.sku = sku
.warehouse = warehouse
.quantity = qty
.synced_at = NOW()
END WITH
INSERT "inventory_sync_log", sync_log
END SUB
SUB disable_product_ecommerce(sku)
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY "ecom_api_key"
WITH update_payload = NEW OBJECT
.available = FALSE
.inventory_policy = "deny"
END WITH
PUT "https://api.shopify.com/products/" + sku, update_payload
END SUB
Tool 4: Conflict Resolver
Handle and resolve synchronization conflicts.
' conflict-resolver.bas
' Automated and manual sync conflict resolution
' Scheduled job to process conflicts
SET SCHEDULE "resolve-conflicts", "*/15 * * * *"
' Get pending conflicts
conflicts = FIND "sync_conflicts", "status=pending ORDER BY created_at ASC LIMIT 50"
FOR EACH conflict IN conflicts
resolution = CALL attempt_auto_resolve(conflict)
IF resolution.resolved THEN
' Apply the resolution
CALL apply_resolution(conflict, resolution)
' Update conflict status
WITH status_update = NEW OBJECT
.status = "resolved"
.resolution_type = "automatic"
.resolution_details = resolution.details
.resolved_at = NOW()
END WITH
UPDATE "sync_conflicts", "id=" + conflict.id, status_update
ELSE
' Escalate for manual review
IF conflict.escalated_at = "" THEN
CALL escalate_conflict(conflict)
WITH escalate_update = NEW OBJECT
.status = "escalated"
.escalated_at = NOW()
END WITH
UPDATE "sync_conflicts", "id=" + conflict.id, escalate_update
END IF
END IF
NEXT conflict
' --- Functions ---
FUNCTION attempt_auto_resolve(conflict)
WITH result = NEW OBJECT
.resolved = FALSE
.winner = ""
.details = ""
END WITH
' Rule 1: Timestamp-based (most recent wins)
time_diff = DATEDIFF(conflict.local_timestamp, conflict.remote_timestamp, "second")
IF ABS(time_diff) > 60 THEN
' Clear winner by timestamp
IF conflict.local_timestamp > conflict.remote_timestamp THEN
result.resolved = TRUE
result.winner = "local"
result.details = "Local data is " + ABS(time_diff) + " seconds newer"
ELSE
result.resolved = TRUE
result.winner = "remote"
result.details = "Remote data is " + ABS(time_diff) + " seconds newer"
END IF
RETURN result
END IF
' Rule 2: Field-level merge (non-conflicting changes)
local_changes = CALL get_changed_fields(conflict.original_data, conflict.local_data)
remote_changes = CALL get_changed_fields(conflict.original_data, conflict.remote_data)
' Check if changes affect different fields
overlap = FALSE
FOR EACH field IN local_changes
IF INSTR(remote_changes, field) > 0 THEN
overlap = TRUE
EXIT FOR
END IF
NEXT field
IF NOT overlap THEN
' Can merge without conflict
result.resolved = TRUE
result.winner = "merge"
result.details = "Field-level merge: local changed [" + local_changes + "], remote changed [" + remote_changes + "]"
RETURN result
END IF
' Rule 3: Source priority
priority_source = GET BOT MEMORY "sync_priority_source"
IF priority_source <> "" THEN
IF conflict.source = priority_source THEN
result.resolved = TRUE
result.winner = "remote"
result.details = "Priority source rule: " + priority_source + " wins"
ELSE
result.resolved = TRUE
result.winner = "local"
result.details = "Non-priority source: local wins"
END IF
RETURN result
END IF
' Cannot auto-resolve
result.details = "Manual resolution required: same fields modified within 60 seconds"
RETURN result
END FUNCTION
SUB apply_resolution(conflict, resolution)
SELECT CASE resolution.winner
CASE "local"
' Push local data to remote
CALL sync_to_remote(conflict.record_type, conflict.record_id, conflict.local_data)
CASE "remote"
' Apply remote data locally
UPDATE conflict.record_type, "id=" + conflict.record_id, conflict.remote_data
CASE "merge"
' Merge both changes
merged_data = CALL merge_records(conflict.original_data, conflict.local_data, conflict.remote_data)
UPDATE conflict.record_type, "id=" + conflict.record_id, merged_data
CALL sync_to_remote(conflict.record_type, conflict.record_id, merged_data)
END SELECT
END SUB
SUB escalate_conflict(conflict)
' Send notification to data admin
WITH notification = NEW OBJECT
.conflict_id = conflict.id
.record_type = conflict.record_type
.record_id = conflict.record_id
.local_timestamp = conflict.local_timestamp
.remote_timestamp = conflict.remote_timestamp
.local_summary = CALL summarize_data(conflict.local_data)
.remote_summary = CALL summarize_data(conflict.remote_data)
END WITH
email_body = "A sync conflict requires manual resolution.\n\n"
email_body = email_body + "Record: " + conflict.record_type + " #" + conflict.record_id + "\n"
email_body = email_body + "Local changes: " + notification.local_summary + "\n"
email_body = email_body + "Remote changes: " + notification.remote_summary + "\n\n"
email_body = email_body + "Please review at: https://admin.company.com/conflicts/" + conflict.id
SEND MAIL "data-admin@company.com", "Sync Conflict: " + conflict.record_type + " #" + conflict.record_id, email_body
' Also post to Slack
WITH slack_msg = NEW OBJECT
.text = "⚠️ *Sync Conflict Requires Review*\n\nRecord: " + conflict.record_type + " #" + conflict.record_id + "\n<https://admin.company.com/conflicts/" + conflict.id + "|Review Now>"
END WITH
POST "https://hooks.slack.com/services/xxx", slack_msg
END SUB
Tool 5: Sync Monitor
Monitor sync health and generate alerts.
' sync-monitor.bas
' Data sync health monitoring
SET SCHEDULE "sync-health-check", "*/5 * * * *"
' Check sync lag for each integration
integrations = ["crm", "erp", "ecommerce", "wms"]
WITH health_report = NEW OBJECT
.timestamp = NOW()
.status = "healthy"
.issues = []
END WITH
integration_statuses = []
FOR EACH integration IN integrations
' Get latest sync event
latest = FIND "sync_events", "source=" + integration + " OR destination=" + integration + " ORDER BY timestamp DESC LIMIT 1"
lag_minutes = DATEDIFF(latest.timestamp, NOW(), "minute")
WITH integration_status = NEW OBJECT
.name = integration
.last_sync = latest.timestamp
.lag_minutes = lag_minutes
.status = "ok"
END WITH
' Check for concerning lag
max_lag = GET BOT MEMORY "max_sync_lag_" + integration
IF max_lag = "" THEN max_lag = 30 END IF
IF lag_minutes > max_lag THEN
integration_status.status = "warning"
health_report.status = "degraded"
WITH issue = NEW OBJECT
.integration = integration
.type = "sync_lag"
.message = integration + " sync lag: " + lag_minutes + " minutes (max: " + max_lag + ")"
END WITH
health_report.issues.ADD(issue)
END IF
' Check for recent errors
recent_errors = FIND "sync_events", "source=" + integration + " AND status='error' AND timestamp > DATEADD(NOW(), -1, 'hour')"
error_count = UBOUND(recent_errors)
IF error_count > 5 THEN
integration_status.status = "error"
health_report.status = "unhealthy"
WITH issue = NEW OBJECT
.integration = integration
.type = "high_error_rate"
.message = integration + " has " + error_count + " errors in the last hour"
END WITH
health_report.issues.ADD(issue)
END IF
integration_status.error_count_1h = error_count
integration_statuses.ADD(integration_status)
NEXT integration
' Check pending conflicts
pending_conflicts = AGGREGATE "COUNT", "sync_conflicts", "status='pending'"
escalated_conflicts = AGGREGATE "COUNT", "sync_conflicts", "status='escalated'"
IF pending_conflicts > 100 THEN
health_report.status = "degraded"
WITH issue = NEW OBJECT
.type = "pending_conflicts"
.message = pending_conflicts + " sync conflicts pending resolution"
END WITH
health_report.issues.ADD(issue)
END IF
' Check queue depth
queue_depth = AGGREGATE "COUNT", "sync_queue", "status='pending'"
IF queue_depth > 1000 THEN
health_report.status = "degraded"
WITH issue = NEW OBJECT
.type = "queue_backlog"
.message = "Sync queue backlog: " + queue_depth + " items"
END WITH
health_report.issues.ADD(issue)
END IF
' Store health report
INSERT "sync_health_reports", health_report
' Alert if unhealthy
IF health_report.status = "unhealthy" THEN
alert_message = "🚨 *Data Sync Unhealthy*\n\n"
FOR EACH issue IN health_report.issues
alert_message = alert_message + "• " + issue.message + "\n"
NEXT issue
' Slack alert
WITH slack_alert = NEW OBJECT
.text = alert_message
.channel = "#ops-alerts"
END WITH
POST "https://hooks.slack.com/services/xxx", slack_alert
' PagerDuty for critical
WITH pagerduty = NEW OBJECT
.routing_key = GET BOT MEMORY "pagerduty_key"
.event_action = "trigger"
.payload.summary = "Data sync system unhealthy"
.payload.severity = "critical"
.payload.source = "sync-monitor"
END WITH
POST "https://events.pagerduty.com/v2/enqueue", pagerduty
ELSE IF health_report.status = "degraded" THEN
alert_message = "⚠️ *Data Sync Degraded*\n\n"
FOR EACH issue IN health_report.issues
alert_message = alert_message + "• " + issue.message + "\n"
NEXT issue
WITH slack_alert = NEW OBJECT
.text = alert_message
.channel = "#ops-alerts"
END WITH
POST "https://hooks.slack.com/services/xxx", slack_alert
END IF
' Generate dashboard data
WITH dashboard = NEW OBJECT
.timestamp = NOW()
.overall_status = health_report.status
.integrations = integration_statuses
.pending_conflicts = pending_conflicts
.escalated_conflicts = escalated_conflicts
.queue_depth = queue_depth
.events_last_hour = AGGREGATE "COUNT", "sync_events", "timestamp > DATEADD(NOW(), -1, 'hour')"
.errors_last_hour = AGGREGATE "COUNT", "sync_events", "status='error' AND timestamp > DATEADD(NOW(), -1, 'hour')"
END WITH
SAVE "sync_dashboard", "current", dashboard
Tool 6: Bulk Sync
Initial data load and bulk synchronization.
' bulk-sync.bas
' Bulk data synchronization tool
WEBHOOK "bulk-sync"
source_system = body.source
target_system = body.target
entity_type = body.entity
batch_size = body.batch_size
offset = body.offset
IF batch_size = "" THEN batch_size = 100 END IF
IF offset = "" THEN offset = 0 END IF
' Create sync job
job_id = "SYNC-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
WITH job = NEW OBJECT
.id = job_id
.source = source_system
.target = target_system
.entity_type = entity_type
.status = "running"
.total_records = 0
.processed_records = 0
.error_count = 0
.started_at = NOW()
END WITH
INSERT "sync_jobs", job
' Fetch data from source
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY source_system + "_api_key"
has_more = TRUE
total_processed = 0
total_errors = 0
WHILE has_more
source_url = CALL build_source_url(source_system, entity_type, batch_size, offset)
response = GET source_url
records = response.data
has_more = response.has_more
IF UBOUND(records) = 0 THEN
has_more = FALSE
ELSE
FOR EACH record IN records
' Transform record
transformed = CALL transform_record(record, source_system, target_system, entity_type)
' Send to target
success = CALL send_to_target(target_system, entity_type, transformed)
IF success THEN
total_processed = total_processed + 1
ELSE
total_errors = total_errors + 1
' Log error
WITH error_log = NEW OBJECT
.job_id = job_id
.record_id = record.id
.error = "Failed to sync to " + target_system
.created_at = NOW()
END WITH
INSERT "sync_errors", error_log
END IF
' Update progress every 100 records
IF (total_processed + total_errors) MOD 100 = 0 THEN
WITH progress = NEW OBJECT
.processed_records = total_processed
.error_count = total_errors
.updated_at = NOW()
END WITH
UPDATE "sync_jobs", "id=" + job_id, progress
END IF
NEXT record
offset = offset + batch_size
END IF
WEND
' Finalize job
WITH final_update = NEW OBJECT
.status = "completed"
.total_records = total_processed + total_errors
.processed_records = total_processed
.error_count = total_errors
.completed_at = NOW()
END WITH
UPDATE "sync_jobs", "id=" + job_id, final_update
' Send completion notification
completion_msg = "Bulk sync completed\n\n"
completion_msg = completion_msg + "Job ID: " + job_id + "\n"
completion_msg = completion_msg + "Source: " + source_system + "\n"
completion_msg = completion_msg + "Target: " + target_system + "\n"
completion_msg = completion_msg + "Entity: " + entity_type + "\n"
completion_msg = completion_msg + "Processed: " + total_processed + "\n"
completion_msg = completion_msg + "Errors: " + total_errors
SEND MAIL "data-admin@company.com", "Bulk Sync Complete: " + job_id, completion_msg
WITH result = NEW OBJECT
.status = "completed"
.job_id = job_id
.processed = total_processed
.errors = total_errors
END WITH
Configuration
Store sync configuration in bot memory:
' Configure sync settings
SET BOT MEMORY "crm_api_key", "your-crm-api-key"
SET BOT MEMORY "erp_api_key", "your-erp-api-key"
SET BOT MEMORY "ecom_api_key", "your-ecommerce-api-key"
SET BOT MEMORY "max_sync_lag_crm", "30"
SET BOT MEMORY "max_sync_lag_erp", "15"
SET BOT MEMORY "sync_priority_source", "erp"
See Also
- WEBHOOK - Creating webhook endpoints
- ON - Database trigger events
- SET SCHEDULE - Scheduled tasks
- Data Operations - Database keywords
- Consolidated Examples - More complete examples
Keyword Reference
This section lists every BASIC keyword implemented in the GeneralBots engine. Each keyword page includes:
- Syntax – Exact command format
- Parameters – Expected arguments
- Description – What the keyword does
- Example – A short snippet showing usage
The source code for each keyword lives in src/basic/keywords/. Only the keywords listed here exist in the system.
Important: Case Insensitivity
All variables in General Bots BASIC are case-insensitive. The preprocessor normalizes variable names to lowercase automatically.
' These all refer to the same variable
host = "https://api.example.com"
result = GET Host + "/endpoint"
TALK HOST
Keywords are also case-insensitive but conventionally written in UPPERCASE:
' Both work identically
TALK "Hello"
talk "Hello"
Configuration Variables (param-*)
Variables defined with param- prefix in config.csv are automatically available in scripts without the prefix:
name,value
param-host,https://api.example.com
param-limit,100
param-pages,50
' Access directly (lowercase, no param- prefix)
result = GET host + "/items?limit=" + limit
See Script Execution Flow for complete details.
Complete Keyword List (Flat Reference)
| Keyword | Category | Description |
|---|---|---|
ADD BOT | Multi-Agent | Add a bot to the current session with triggers |
ADD MEMBER | Communication | Add member to a group |
ADD SUGGESTION | UI | Add clickable suggestion button |
ADD TOOL | Tools | Register a tool for the session |
AGGREGATE | Data | Perform SUM, AVG, COUNT, MIN, MAX operations |
BOOK | Special | Book an appointment |
BOT REFLECTION | Multi-Agent | Enable agent self-analysis and improvement |
BROADCAST TO BOTS | Multi-Agent | Send message to all bots in session |
CLEAR HEADERS | HTTP | Clear all HTTP headers |
CLEAR KB | Knowledge | Unload knowledge base from session |
CLEAR SUGGESTIONS | UI | Remove all suggestion buttons |
CLEAR TOOLS | Tools | Remove all registered tools |
COMPRESS | Files | Create ZIP archive |
COPY | Files | Copy a file |
CREATE DRAFT | Communication | Create email draft |
CREATE SITE | Tools | Generate a website |
CREATE TASK | Tools | Create a task |
DELEGATE TO BOT | Multi-Agent | Send task to another bot |
DELETE | Data | Delete records from table |
DELETE FILE | Files | Delete a file |
DELETE HTTP | HTTP | Send HTTP DELETE request |
DOWNLOAD | Files | Download file from URL |
EXIT FOR | Control | Exit loop early |
EXTRACT | Files | Extract ZIP archive |
FILL | Data | Fill template with data |
FILTER | Data | Filter records by condition |
FIND | Data | Search in files or KB |
FIRST | Data | Get first element |
FOR EACH ... NEXT | Control | Loop through items |
FORMAT | Data | Format strings and dates |
GENERATE PDF | Files | Generate PDF from template |
GET | Variables | Get variable or API data |
GET BOT MEMORY | Memory | Retrieve bot-level persisted data |
GET USER MEMORY | Memory | Retrieve user-level persisted data (cross-bot) |
GRAPHQL | HTTP | Execute GraphQL query |
GROUP BY | Data | Group data by field |
HEAR | Dialog | Get input from user |
IF ... THEN ... ELSE ... END IF | Control | Conditional logic |
INSERT | Data | Insert new record |
INSTR | String | Find position of substring |
IS NUMERIC | String | Check if value is numeric |
JOIN | Data | Join two datasets |
LAST | Data | Get last element |
LIST | Files | List directory contents |
LLM | AI | Query language model |
MAP | Data | Map field names |
MERGE | Data | Merge data into table |
MERGE PDF | Files | Merge multiple PDFs |
MOVE | Files | Move or rename file |
ON | Events | Event handler |
PATCH | HTTP | Send HTTP PATCH request |
PIVOT | Data | Create pivot table |
POST | HTTP | Send HTTP POST request |
PRINT | Debug | Debug output to console |
PUT | HTTP | Send HTTP PUT request |
READ | Files | Read file content |
REMEMBER | Memory | Store user-specific memory |
RUN BASH | Code Execution | Execute Bash script in sandbox |
RUN JAVASCRIPT | Code Execution | Execute JavaScript in sandbox |
RUN PYTHON | Code Execution | Execute Python code in sandbox |
SAVE | Data | Save data to table (upsert) |
SAVE FROM UNSTRUCTURED | Data | Extract structured data from text |
SEND MAIL | Communication | Send email |
SET | Variables | Set variable value |
SET BOT MEMORY | Memory | Persist data at bot level |
SET CONTEXT | AI | Add context for LLM |
SET HEADER | HTTP | Set HTTP header |
SET SCHEDULE | Events | Schedule script execution |
SET USER | Session | Set user context |
SET USER FACT | Memory | Store a fact about the user |
SET USER MEMORY | Memory | Persist data at user level (cross-bot) |
SOAP | HTTP | Execute SOAP API call |
SWITCH ... CASE ... END SWITCH | Control | Switch statement |
SYNCHRONIZE | Data | Sync API data to table (planned) |
TALK | Dialog | Send message to user |
TRANSFER CONVERSATION | Multi-Agent | Hand off conversation to another bot |
UPDATE | Data | Update existing records |
USE MODEL | AI | Switch LLM model for subsequent operations |
USER FACTS | Memory | Get all stored user facts |
UPLOAD | Files | Upload file to storage |
USE KB | Knowledge | Load knowledge base |
USE TOOL | Tools | Register tool definition |
USE WEBSITE | Knowledge | Associate website with session |
WAIT | Control | Pause execution |
WEATHER | Special | Get weather information |
WEBHOOK | Events | Create webhook endpoint |
WHILE ... WEND | Control | While loop |
WRITE | Files | Write content to file |
Keywords by Category
Core Dialog Keywords
| Keyword | Syntax | Description |
|---|---|---|
| TALK | TALK "message" | Send message to user |
| HEAR | HEAR variable or HEAR variable AS TYPE | Get input from user |
| WAIT | WAIT seconds | Pause execution |
PRINT "debug message" | Debug output to console |
Variable & Memory
| Keyword | Syntax | Description |
|---|---|---|
| SET | SET variable = value or let variable = value | Set variable value |
| GET | result = GET "path" | Get variable or fetch data |
| SET BOT MEMORY | SET BOT MEMORY "key", value | Persist data at bot level |
| GET BOT MEMORY | value = GET BOT MEMORY("key") | Retrieve persisted data |
| SET USER MEMORY | SET USER MEMORY "key", value | Persist data at user level (cross-bot) |
| GET USER MEMORY | value = GET USER MEMORY("key") | Retrieve user-level data |
| SET USER FACT | SET USER FACT "key", value | Store fact about user |
| USER FACTS | facts = USER FACTS() | Get all user facts |
| REMEMBER | REMEMBER "key", value | Store user-specific memory |
AI & Context
| Keyword | Syntax | Description |
|---|---|---|
| LLM | result = LLM "prompt" | Query language model |
| SET CONTEXT | SET CONTEXT "name" AS "value" | Add context for LLM |
| SET USER | SET USER userid | Set user context |
| USE MODEL | USE MODEL "modelname" | Switch LLM model (fast/quality/code/auto) |
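For example, a script might classify with a fast model and draft with a higher-quality one. A minimal sketch (ticket_text is an assumed variable):
USE MODEL "fast"
category = LLM "Classify this ticket as billing, technical, or other: " + ticket_text
USE MODEL "quality"
reply = LLM "Draft a polite reply for a " + category + " ticket: " + ticket_text
TALK reply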
Multi-Agent Orchestration
| Keyword | Syntax | Description |
|---|---|---|
| ADD BOT | ADD BOT "name" TRIGGER ON "keywords" | Add bot with triggers |
| DELEGATE TO BOT | result = DELEGATE "message" TO BOT "name" | Send task to bot |
| BROADCAST TO BOTS | BROADCAST "message" TO BOTS | Message all bots |
| TRANSFER CONVERSATION | TRANSFER CONVERSATION TO "botname" | Hand off to bot |
| BOT REFLECTION | BOT REFLECTION true | Enable self-analysis |
| BOT REFLECTION INSIGHTS | insights = BOT REFLECTION INSIGHTS() | Get analysis results |
Code Execution (Sandboxed)
| Keyword | Syntax | Description |
|---|---|---|
| RUN PYTHON | result = RUN PYTHON "code" | Execute Python in sandbox |
| RUN JAVASCRIPT | result = RUN JAVASCRIPT "code" | Execute JS in sandbox |
| RUN BASH | result = RUN BASH "code" | Execute Bash in sandbox |
| RUN … WITH FILE | result = RUN PYTHON WITH FILE "script.py" | Run script file |
Knowledge Base
| Keyword | Syntax | Description |
|---|---|---|
| USE KB | USE KB "kbname" | Load knowledge base |
| CLEAR KB | CLEAR KB or CLEAR KB "kbname" | Unload knowledge base |
| USE WEBSITE | USE WEBSITE "url" | Associate website with session |
| FIND | result = FIND "file", "filter" | Search in files or KB |
Tools & Automation
| Keyword | Syntax | Description |
|---|---|---|
| ADD TOOL | ADD TOOL "toolname" | Register tool for session |
| USE TOOL | USE TOOL "toolname" | Load tool definition |
| CLEAR TOOLS | CLEAR TOOLS | Remove all registered tools |
| CREATE TASK | CREATE TASK "title", "description" | Create a task |
| CREATE SITE | CREATE SITE "alias", "template", "prompt" | Generate a website |
| CREATE DRAFT | CREATE DRAFT "to", "subject", "body" | Create email draft |
UI & Interaction
| Keyword | Syntax | Description |
|---|---|---|
| ADD SUGGESTION | ADD SUGGESTION "key" AS "display text" | Add clickable button |
| CLEAR SUGGESTIONS | CLEAR SUGGESTIONS | Remove all buttons |
Data Operations
| Keyword | Syntax | Description |
|---|---|---|
| SAVE | SAVE "table", var1, var2, var3 | Save data (upsert) |
| INSERT | result = INSERT "table", data | Insert new record |
| UPDATE | rows = UPDATE "table", "filter", data | Update records |
| DELETE | rows = DELETE "table", "filter" | Delete records |
| MERGE | result = MERGE "table", data, "key" | Merge data into table |
| FILTER | result = FILTER data, "condition" | Filter records |
| AGGREGATE | result = AGGREGATE "SUM", data, "field" | Aggregate operations |
| JOIN | result = JOIN left, right, "key" | Join datasets |
| PIVOT | result = PIVOT data, "row", "value" | Create pivot table |
| GROUP BY | result = GROUP BY data, "field" | Group data |
| SYNCHRONIZE | SYNCHRONIZE endpoint, table, key, pageVar, limitVar | Sync API to table |
| MAP | result = MAP data, "old->new" | Map field names |
| FILL | result = FILL data, template | Fill template |
| FIRST | result = FIRST collection | Get first element |
| LAST | result = LAST collection | Get last element |
| FORMAT | result = FORMAT value AS "pattern" | Format strings/dates |
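These keywords compose into small pipelines. A sketch, assuming a hypothetical orders.csv with status and total columns:
orders = FIND "orders.csv", "status=paid"
big_orders = FILTER orders, "total > 100"
big_total = AGGREGATE "SUM", big_orders, "total"
TALK "High-value paid orders sum to $" + FORMAT(big_total, "#,##0.00")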
File Operations
| Keyword | Syntax | Description |
|---|---|---|
| READ | content = READ "path" | Read file content |
| WRITE | WRITE "path", content | Write to file |
| DELETE FILE | DELETE FILE "path" | Delete a file |
| COPY | COPY "source", "destination" | Copy a file |
| MOVE | MOVE "source", "destination" | Move/rename file |
| LIST | files = LIST "path/" | List directory |
| UPLOAD | url = UPLOAD file, "path" | Upload file |
| DOWNLOAD | path = DOWNLOAD "url", "local" | Download file |
| COMPRESS | archive = COMPRESS files, "name.zip" | Create ZIP |
| EXTRACT | files = EXTRACT "archive.zip", "dest/" | Extract ZIP |
| GENERATE PDF | result = GENERATE PDF "template", data, "output.pdf" | Generate PDF |
| MERGE PDF | result = MERGE PDF files, "merged.pdf" | Merge PDFs |
HTTP & API Operations
| Keyword | Syntax | Description |
|---|---|---|
| POST | result = POST "url", data | HTTP POST request |
| PUT | result = PUT "url", data | HTTP PUT request |
| PATCH | result = PATCH "url", data | HTTP PATCH request |
| DELETE HTTP | result = DELETE HTTP "url" | HTTP DELETE request |
| SET HEADER | SET HEADER "name", "value" | Set HTTP header |
| CLEAR HEADERS | CLEAR HEADERS | Clear all headers |
| GRAPHQL | result = GRAPHQL "url", "query", vars | GraphQL query |
| SOAP | result = SOAP "wsdl", "operation", params | SOAP call |
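A typical authenticated JSON call sets headers once, sends the request, then clears them. The endpoint and memory key below are illustrative:
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY("api_key")
SET HEADER "Content-Type", "application/json"
result = POST "https://api.example.com/items", #{ name: "Widget", qty: 3 }
CLEAR HEADERS
TALK "Created item " + result.id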
Flow Control
| Keyword | Syntax | Description |
|---|---|---|
| IF…THEN…ELSE | IF condition THEN ... ELSE ... END IF | Conditional |
| FOR EACH…NEXT | FOR EACH item IN collection ... NEXT item | Loop |
| EXIT FOR | EXIT FOR | Exit loop early |
| WHILE…WEND | WHILE condition ... WEND | While loop |
| SWITCH…CASE | SWITCH value CASE x ... END SWITCH | Switch statement |
| REPORT | SEND EMAIL admin, REPORT | Access sync statistics |
| RESET REPORT | RESET REPORT | Clear sync statistics |
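A short sketch combining these constructs (the list contents are illustrative):
' WHILE…WEND countdown, then EXIT FOR when a sentinel value appears
n = 3
WHILE n > 0
TALK "Countdown: " + n
n = n - 1
WEND
items = ["go", "go", "stop", "go"]
FOR EACH item IN items
IF item = "stop" THEN
EXIT FOR
END IF
NEXT item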
Events & Scheduling
| Keyword | Syntax | Description |
|---|---|---|
| ON | ON "event" CALL handler | Event handler |
| SET SCHEDULE | SET SCHEDULE "cron" | Schedule execution |
| WEBHOOK | WEBHOOK "endpoint" | Create webhook |
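For instance (the cron expression, event name, and handler are illustrative; whether a handler receives arguments depends on the event):
SUB SendWelcome()
TALK "A new user just registered!"
END SUB
' Run nightly at 02:00 and react to a custom event
ON "user_registered" CALL SendWelcome
SET SCHEDULE "0 2 * * *"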
Communication
| Keyword | Syntax | Description |
|---|---|---|
| SEND MAIL | SEND MAIL "to", "subject", "body" | Send email |
| ADD MEMBER | ADD MEMBER "email", "group" | Add to group |
Special Functions
| Keyword | Syntax | Description |
|---|---|---|
| BOOK | BOOK "appointment" | Book appointment |
| WEATHER | weather = WEATHER "location" | Get weather |
| INSTR | pos = INSTR(string, search) | Find substring |
| IS NUMERIC | result = IS NUMERIC(value) | Check if numeric |
| SAVE FROM UNSTRUCTURED | data = SAVE FROM UNSTRUCTURED text, schema | Extract structured data |
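INSTR and IS NUMERIC are handy for quick inline checks (a sketch; positions are assumed to be 1-based, as in classic BASIC):
pos = INSTR("hello world", "world") ' pos = 7 if positions are 1-based
ok = IS NUMERIC("42") ' ok = true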
Syntax Rules
DO ✅
' Variable names (no underscores in names)
let ticketnumber = "TKT001"
let useremail = "user@example.com"
' SAVE with field names = variable names
SAVE "table.csv", ticketnumber, useremail, status
' Keywords with spaces
SET BOT MEMORY "last_ticket", ticketnumber
SET CONTEXT "name" AS "description"
ADD SUGGESTION "key" AS "Display text"
CLEAR SUGGESTIONS
USE KB "myknowledge"
USE TOOL "mytool"
' GET BOT MEMORY as function
let lastticket = GET BOT MEMORY("last_ticket")
DON’T ❌
' NO: Complex object operations
SET object.field = value ' WRONG
SAVE "table", object.id, object ' WRONG
' NO: IF for input validation (use HEAR AS TYPE instead)
IF value = "" THEN ' OK for logic, but for input use:
HEAR value AS STRING ' Better - validates input type
Prompt Blocks
Special multi-line blocks for AI configuration and formatted output:
| Block | Purpose | Documentation |
|---|---|---|
| BEGIN SYSTEM PROMPT ... END SYSTEM PROMPT | Define AI persona, rules, capabilities | Prompt Blocks |
| BEGIN TALK ... END TALK | Formatted multi-line messages with Markdown | Prompt Blocks |
BEGIN SYSTEM PROMPT
You are a helpful assistant for AcmeStore.
Rules:
1. Always be polite
2. Never discuss competitors
END SYSTEM PROMPT
BEGIN TALK
**Welcome!** 🎉
I can help you with:
• Orders
• Tracking
• Returns
END TALK
Script Structure
No MAIN Function
Scripts execute from line 1 - no MAIN or entry point needed:
' ✅ CORRECT - Start directly
TALK "Hello!"
ADD TOOL "my-tool"
' ❌ WRONG - Don't use MAIN
SUB MAIN()
TALK "Hello"
END SUB
SUB and FUNCTION for Reuse
Use for helper code within tools, not as entry points:
FUNCTION CalculateTotal(price, quantity)
RETURN price * quantity
END FUNCTION
SUB NotifyAdmin(message)
SEND EMAIL admin1, message
END SUB
' Execution starts here
total = CalculateTotal(19.99, 3)
CALL NotifyAdmin("Order processed")
See Script Execution Flow for entry points and lifecycle.
Notes
- Keywords are case-insensitive (TALK = talk = Talk)
- Variables are case-insensitive (host = HOST = Host)
- String parameters can use double quotes or single quotes
- Comments start with REM or ' (apostrophe)
- Line continuation uses underscore (_)
- Objects are created with #{ key: value } syntax
- Arrays use [item1, item2, ...] syntax
- param-* config values become global variables
See Also
- Script Execution Flow - Entry points and lifecycle
- Prompt Blocks - BEGIN SYSTEM PROMPT & BEGIN TALK
- Basics - Core concepts
- Examples - Real-world patterns
TALK
Sends a message to the current conversation or to a specific recipient on any supported channel.
Syntax
TALK message
TALK TO recipient, message
Parameters
| Parameter | Type | Description |
|---|---|---|
| message | String | The message to send |
| recipient | String | Channel and address in format channel:address |
Description
TALK is the primary keyword for sending messages in General Bots BASIC.
- TALK message - Sends to the current conversation (web chat, WhatsApp, etc.)
- TALK TO recipient, message - Sends to a specific recipient on any channel
TALK - Current Conversation
Send a message to whoever is currently chatting with the bot:
TALK "Hello! How can I help you today?"
name = "João"
TALK "Welcome, " + name + "!"
total = 299.90
TALK "Your total is $" + total
TALK TO - Specific Recipients
Send messages to specific people on specific channels using the format channel:address.
Supported Channels
| Channel | Format | Example |
|---|---|---|
| WhatsApp | whatsapp:+phone | whatsapp:+5511999887766 |
| Teams | teams:user@domain | teams:john@company.com |
| Email | email:address | email:customer@example.com |
| Web Session | web:session_id | web:abc123 |
| Instagram | instagram:username | instagram:@myuser |
WhatsApp Examples
' Send simple message
TALK TO "whatsapp:+5511999887766", "Your order has been shipped!"
' Send with variables
phone = "+5511999887766"
customer_name = "Maria"
TALK TO "whatsapp:" + phone, "Hello " + customer_name + "! Your order is ready."
' Send formatted message (WhatsApp supports markdown-like formatting)
message = "*Order Confirmed* ✅\n\n"
message = message + "Order: #12345\n"
message = message + "Total: R$ 299,90\n\n"
message = message + "_Thank you for your purchase!_"
TALK TO "whatsapp:" + customer_phone, message
WhatsApp Message Formatting
WhatsApp supports rich text formatting:
| Format | Syntax | Result |
|---|---|---|
| Bold | *text* | Bold text |
| Italic | _text_ | Italic text |
| Strikethrough | ~text~ | Struck-through text |
| Monospace | `text` | Monospace text |
| Line break | \n | New line |
' Example with all formatting
msg = "🎉 *PROMOTION!*\n\n"
msg = msg + "~R$ 199,90~ *R$ 149,90*\n"
msg = msg + "_Limited time offer!_\n\n"
msg = msg + "Use code: `PROMO2024`"
TALK TO "whatsapp:" + phone, msg
Microsoft Teams Examples
' Send to Teams user
TALK TO "teams:john.smith@company.com", "Meeting reminder: 3pm today"
' Send with formatting (Teams supports markdown)
msg = "**Project Update**\n\n"
msg = msg + "- Task 1: ✅ Complete\n"
msg = msg + "- Task 2: 🔄 In Progress\n"
msg = msg + "- Task 3: ⏳ Pending"
TALK TO "teams:" + manager_email, msg
Email Examples
' Simple email (uses SEND MAIL internally for full email)
TALK TO "email:customer@example.com", "Your password has been reset."
' For full email with subject, use SEND MAIL instead
SEND MAIL "customer@example.com", "Password Reset", "Your password has been reset successfully."
Complete Examples
Order Notification System
WEBHOOK "order-status"
order_id = body.order_id
customer_phone = body.phone
status = body.status
SELECT CASE status
CASE "confirmed"
msg = "✅ *Order Confirmed*\n\n"
msg = msg + "Order #" + order_id + "\n"
msg = msg + "We're preparing your order!"
CASE "shipped"
tracking = body.tracking_number
msg = "📦 *Order Shipped*\n\n"
msg = msg + "Order #" + order_id + "\n"
msg = msg + "Tracking: " + tracking + "\n"
msg = msg + "Track at: https://track.example.com/" + tracking
CASE "delivered"
msg = "🎉 *Order Delivered*\n\n"
msg = msg + "Order #" + order_id + "\n"
msg = msg + "Enjoy your purchase!\n\n"
msg = msg + "_Rate your experience: reply 1-5_"
CASE ELSE
msg = "Order #" + order_id + " status: " + status
END SELECT
TALK TO "whatsapp:" + customer_phone, msg
result_status = "ok"
Support Ticket Notifications
SUB NotifyCustomer(phone, ticket_id, message)
full_msg = "🎫 *Ticket #" + ticket_id + "*\n\n"
full_msg = full_msg + message
TALK TO "whatsapp:" + phone, full_msg
END SUB
SUB NotifyAgent(agent_email, ticket_id, customer_name, issue)
msg = "New ticket assigned:\n\n"
msg = msg + "Ticket: #" + ticket_id + "\n"
msg = msg + "Customer: " + customer_name + "\n"
msg = msg + "Issue: " + issue
TALK TO "teams:" + agent_email, msg
END SUB
' Usage
CALL NotifyCustomer("+5511999887766", "TKT-001", "Your ticket has been created. We'll respond within 24 hours.")
CALL NotifyAgent("support@company.com", "TKT-001", "João Silva", "Payment issue")
Multi-Channel Broadcast
SUB Broadcast(message, channels)
FOR EACH channel IN channels
TALK TO channel, message
WAIT 1 ' Rate limiting
NEXT channel
END SUB
' Send to multiple recipients
promo = "🎉 *Flash Sale!* 50% off everything today only!"
recipients = [
"whatsapp:+5511999887766",
"whatsapp:+5511888776655",
"teams:marketing@company.com"
]
CALL Broadcast(promo, recipients)
Appointment Reminders
WEBHOOK "send-reminder"
appointment_id = body.id
appointment = FIND "appointments", "id=" + appointment_id
phone = appointment.customer_phone
name = appointment.customer_name
service = appointment.service
date_time = FORMAT(appointment.datetime, "DD/MM/YYYY HH:mm")
reminder = "📅 *Appointment Reminder*\n\n"
reminder = reminder + "Hi " + name + "!\n\n"
reminder = reminder + "You have an appointment scheduled:\n\n"
reminder = reminder + "📋 " + service + "\n"
reminder = reminder + "🗓️ " + date_time + "\n\n"
reminder = reminder + "Reply *CONFIRM* to confirm or *CANCEL* to cancel."
TALK TO "whatsapp:" + phone, reminder
result_status = "ok"
Notes
- TALK sends to the current active conversation
- TALK TO can send to any supported channel
- WhatsApp requires phone numbers in international format with country code
- Teams requires valid email addresses from your organization
- Message formatting varies by channel (WhatsApp uses different syntax than Teams)
- Rate limiting may apply - use WAIT between bulk messages
Related Keywords
- SEND FILE TO - Send files to specific recipients
- SEND MAIL - Send emails with subject and attachments
- HEAR - Receive input from users
- PRINT - Alias for TALK (debug output)
See Also
- Universal Messaging - Multi-channel messaging overview
- WEBHOOK - Create API endpoints
HEAR Keyword
The HEAR keyword pauses script execution and waits for user input. With optional type validation, it automatically verifies and normalizes input, retrying with helpful error messages when validation fails.
Basic Syntax
HEAR variable_name
HEAR variable_name AS TYPE
HEAR variable_name AS "Option1", "Option2", "Option3"
The simplest form accepts any input. Adding AS TYPE enables automatic validation with user-friendly retry prompts.
Simple HEAR
TALK "What would you like to know?"
HEAR question
TALK "You asked: " + question
The script waits for any user message and stores it in the variable.
Validated Input Types
When using HEAR AS <TYPE>, the system validates input automatically, retries up to 3 times with helpful messages, and returns normalized values.
Text Types
EMAIL validates email format and normalizes to lowercase:
TALK "What's your email address?"
HEAR email AS EMAIL
TALK "We'll send confirmation to: " + email
Accepts: User@Example.COM → Returns: user@example.com
NAME validates name format with proper capitalization:
TALK "What's your full name?"
HEAR name AS NAME
Accepts letters, spaces, hyphens, apostrophes. Auto-capitalizes: john doe → John Doe
URL validates and normalizes URLs:
TALK "Enter your website:"
HEAR website AS URL
Auto-adds https:// if protocol missing.
PASSWORD validates minimum strength:
TALK "Create a password (minimum 8 characters):"
HEAR password AS PASSWORD
Requires 8+ characters. Never echoes the actual password back.
COLOR accepts color names or hex values:
HEAR color AS COLOR
Accepts: red, #FF0000, rgb(255, 0, 0) → Returns: #FF0000
Numeric Types
INTEGER validates whole numbers:
TALK "How many items?"
HEAR quantity AS INTEGER
Removes formatting (commas, spaces). Returns numeric value.
FLOAT / DECIMAL validates decimal numbers:
TALK "Enter the temperature:"
HEAR temperature AS FLOAT
Handles both . and , as decimal separators.
MONEY / CURRENCY / AMOUNT validates monetary values:
TALK "How much to transfer?"
HEAR amount AS MONEY
Accepts: 100, 1,234.56, R$ 100,00, $100.00 → Returns: 1234.56
CREDITCARD / CARD validates card numbers with Luhn algorithm:
TALK "Enter your card number:"
HEAR card AS CREDITCARD
Returns masked format: 4111 **** **** 1111
Date and Time Types
DATE validates and parses dates:
TALK "When is your birthday?"
HEAR birthday AS DATE
Accepts: 25/12/2024, 12/25/2024, 2024-12-25, December 25, 2024, today, tomorrow, hoje, amanhã
Returns: ISO format YYYY-MM-DD
HOUR / TIME validates time input:
TALK "What time for the meeting?"
HEAR meeting_time AS HOUR
Accepts: 14:30, 2:30 PM → Returns: 14:30
Brazilian Document Types
CPF validates Brazilian individual taxpayer ID:
TALK "Enter your CPF:"
HEAR cpf AS CPF
Validates 11 digits with mod 11 check. Returns: 123.456.789-09
CNPJ validates Brazilian company taxpayer ID:
TALK "Enter your company's CNPJ:"
HEAR cnpj AS CNPJ
Validates 14 digits. Returns: 12.345.678/0001-95
Contact Types
MOBILE / PHONE validates phone numbers:
TALK "What's your phone number?"
HEAR phone AS MOBILE
Accepts 10-15 digits, auto-formats based on detected country.
ZIPCODE / CEP / POSTALCODE validates postal codes:
HEAR cep AS ZIPCODE
Supports Brazilian CEP, US ZIP, UK postcode formats.
Menu Selection
Provide options directly in the HEAR statement:
TALK "Choose your fruit:"
HEAR fruit AS "Apple", "Banana", "Orange", "Mango"
Accepts exact match, case-insensitive match, numeric selection (1, 2, 3), or partial match if unique.
BOOLEAN validates yes/no responses:
TALK "Do you agree to the terms?"
HEAR agreed AS BOOLEAN
IF agreed THEN
TALK "Thank you!"
END IF
True: yes, y, sim, ok, sure, confirm
False: no, n, não, cancel, deny
LANGUAGE validates language codes:
HEAR language AS LANGUAGE
Accepts: en, pt, English, Português → Returns: ISO 639-1 code
Media Types
IMAGE / PHOTO waits for image upload:
TALK "Send a photo of your document:"
HEAR document_photo AS IMAGE
Returns URL to uploaded image.
QRCODE waits for image and decodes QR:
TALK "Send me the QR code:"
HEAR qr_data AS QRCODE
Uses vision API to decode. Returns decoded data.
AUDIO / VOICE transcribes audio input:
TALK "Send a voice message:"
HEAR transcription AS AUDIO
Uses Whisper for transcription. Returns text.
VIDEO analyzes video content:
TALK "Send a video of the issue:"
HEAR video_description AS VIDEO
Uses vision API to describe. Returns description.
FILE / DOCUMENT waits for file upload:
TALK "Upload your contract:"
HEAR contract AS DOCUMENT
Accepts PDF, DOC, DOCX, XLS, XLSX, PPT, PPTX, TXT, CSV. Returns URL.
Authentication
LOGIN waits for OAuth completion:
TALK "Click the link to authenticate:"
HEAR user AS LOGIN
Returns user object with tokens after OAuth callback.
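Fields on the returned object can then be used directly (the name field here is an assumption; the actual shape depends on your OAuth provider):
' After HEAR user AS LOGIN completes
TALK "Welcome, " + user.name + "!"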
Complete Examples
Registration Flow
TALK "Let's create your account!"
TALK "What's your full name?"
HEAR name AS NAME
TALK "Enter your email address:"
HEAR email AS EMAIL
TALK "Enter your CPF:"
HEAR cpf AS CPF
TALK "What's your phone number?"
HEAR phone AS MOBILE
TALK "Choose a password:"
HEAR password AS PASSWORD
TALK "What's your birth date?"
HEAR birthdate AS DATE
TALK "Select your gender:"
HEAR gender AS "Male", "Female", "Other", "Prefer not to say"
SAVE "users.csv", name, email, cpf, phone, birthdate, gender, NOW()
TALK "Account created for " + name + "!"
Payment Flow
TALK "Enter the amount:"
HEAR amount AS MONEY
IF amount < 1 THEN
TALK "Minimum payment is R$ 1.00"
RETURN
END IF
TALK "How would you like to pay?"
HEAR method AS "Credit Card", "Debit Card", "PIX", "Boleto"
TALK "Confirm payment of R$ " + FORMAT(amount, "#,##0.00") + "?"
HEAR confirm AS BOOLEAN
IF confirm THEN
TALK "Processing payment..."
ELSE
TALK "Payment cancelled."
END IF
Validation Behavior
When validation fails, the system automatically prompts for correction:
User: my email
Bot: Please enter a valid email address (e.g., user@example.com)
User: test@example.com
Bot: Email confirmed!
After 3 failed attempts, execution continues with an empty value. Check for this:
HEAR email AS EMAIL
IF email = "" THEN
TALK "Unable to validate email. Please contact support."
RETURN
END IF
Best Practices
Always use appropriate types — automatic validation is safer than manual checking:
' Good
HEAR email AS EMAIL
' Avoid
HEAR email
IF NOT email CONTAINS "@" THEN ...
Provide context before HEAR — users should know what to enter:
TALK "Enter the transfer amount (minimum R$ 1.00):"
HEAR amount AS MONEY
Use menus for limited options:
HEAR method AS "Credit Card", "Debit Card", "PIX"
Combine with SET CONTEXT for AI-enhanced input handling:
SET CONTEXT "You are a banking assistant. Confirm amounts before processing."
HEAR amount AS MONEY
Validation Summary
| Type | Example Input | Normalized Output |
|---|---|---|
| EMAIL | User@Example.COM | user@example.com |
| NAME | john DOE | John Doe |
| INTEGER | 1,234 | 1234 |
| MONEY | R$ 1.234,56 | 1234.56 |
| DATE | 25/12/2024 | 2024-12-25 |
| HOUR | 2:30 PM | 14:30 |
| BOOLEAN | yes / sim | true |
| CPF | 12345678909 | 123.456.789-09 |
| MOBILE | 11999998888 | (11) 99999-8888 |
| CREDITCARD | 4111111111111111 | 4111 **** **** 1111 |
| QRCODE | [image] | decoded data |
| AUDIO | [audio file] | transcribed text |
See Also
- TALK Keyword - Output messages
- Dialog Basics - Conversation patterns
- Template Variables - Variable substitution
SET_CONTEXT Keyword
The SET_CONTEXT keyword defines the operational context for the bot’s current session.
It allows scripts to switch between different logical modes or workflows, influencing how subsequent commands are interpreted.
Syntax
SET_CONTEXT "context-name"
Parameters
"context-name"— A string representing the new context.
Common examples include"sales_mode","support_mode", or"training_mode".
Description
SET_CONTEXT updates the bot’s internal state to reflect a specific operational context.
Contexts are used to modify behavior dynamically — for example, changing which tools are active, which memory entries are prioritized, or which prompts are used for LLM responses.
When a context is set, the bot automatically adjusts its logic and available commands to match that mode.
This enables modular dialog design and flexible automation workflows.
If the context name does not exist, the system creates a new one automatically and stores it in the session cache.
Example
' Switch to sales mode
SET_CONTEXT "sales_mode"
' Perform a context-specific action
TALK "Welcome to the sales assistant. How can I help you today?"
' Later, switch to support mode
SET_CONTEXT "support_mode"
TALK "Support mode activated. Please describe your issue."
Implementation Notes
- Implemented in Rust under src/context/mod.rs and src/context/langcache.rs.
- The keyword interacts with the session manager and context cache to update the active context.
- Contexts are stored in memory and optionally persisted in the cache component or a local cache file.
- Changing context may trigger automatic loading of associated tools or memory entries.
Related Keywords
- SET BOT MEMORY — Stores persistent data for the bot or user.
- GET BOT MEMORY — Retrieves stored memory entries.
- SET SCHEDULE — Defines scheduled tasks that may depend on context.
Summary
SET_CONTEXT is a key command for managing dynamic behavior in GeneralBots.
It enables flexible, modular workflows by allowing scripts to switch between operational modes seamlessly.
GET BOT MEMORY
Retrieve persistent key-value data stored at the bot level.
Syntax
GET BOT MEMORY key
Parameters
| Parameter | Type | Description |
|---|---|---|
| key | String | The identifier of the memory item to retrieve |
Description
The GET BOT MEMORY keyword retrieves values previously stored with SET BOT MEMORY. These values are:
- Persistent across all user sessions
- Shared between all users of the same bot
- Stored in the database permanently
- Available until explicitly updated or cleared
If the key doesn’t exist, returns an empty string.
Examples
Retrieve Simple Values
welcome = GET BOT MEMORY "welcome_message"
IF welcome = "" THEN
welcome = "Welcome to our bot!"
END IF
TALK welcome
Read Configuration
max_retries = GET BOT MEMORY "max_retries"
IF max_retries = "" THEN
max_retries = "3"
END IF
timeout = GET BOT MEMORY "timeout_seconds"
IF timeout = "" THEN
timeout = "30"
END IF
Retrieve and Parse JSON
user_data = GET BOT MEMORY "user_profile"
IF user_data <> "" THEN
' Parse JSON data
name = JSON_GET(user_data, "name")
level = JSON_GET(user_data, "level")
TALK "Welcome back, " + name + "! You are level " + level
END IF
Counter Management
' Get current visitor count
count = GET BOT MEMORY "visitor_count"
IF count = "" THEN
count = "0"
END IF
count = VAL(count) + 1
SET BOT MEMORY "visitor_count", STR(count)
TALK "Visitor #" + STR(count)
Dynamic Keys
today = FORMAT(NOW(), "YYYY-MM-DD")
daily_stats = GET BOT MEMORY "stats_" + today
IF daily_stats = "" THEN
TALK "No statistics for today yet"
ELSE
TALK "Today's count: " + daily_stats
END IF
Configuration with Defaults
' Function to get config with default
FUNCTION GetConfig(key, default_value)
value = GET BOT MEMORY key
IF value = "" THEN
value = default_value
SET BOT MEMORY key, default_value
END IF
RETURN value
END FUNCTION
' Use the function
email_server = GetConfig("email_server", "mail.example.com")
email_port = GetConfig("email_port", "587")
Return Value
Returns a string containing:
- The stored value if the key exists
- Empty string ("") if the key doesn't exist
- Empty string if database error occurs
Performance
- Direct database lookup with indexed keys
- Single query execution
- Synchronous operation (blocks until complete)
- Cached at database level for repeated access
Best Practices
- Always Check for Empty Values

  value = GET BOT MEMORY "some_key"
  IF value = "" THEN
      ' Handle missing value
      value = "default"
  END IF

- Use Consistent Key Naming

  ' Good: hierarchical keys
  server = GET BOT MEMORY "config:email:server"
  port = GET BOT MEMORY "config:email:port"
  ' Bad: inconsistent naming
  ' srv = GET BOT MEMORY "emailSrv"
  ' p = GET BOT MEMORY "mail_port"

- Cache Frequently Used Values

  ' At start of conversation
  config_timeout = GET BOT MEMORY "timeout"
  config_retries = GET BOT MEMORY "retries"
  ' Use cached values throughout
  IF elapsed > VAL(config_timeout) THEN
      TALK "Request timed out"
  END IF

- Validate Retrieved Data

  max_items = GET BOT MEMORY "max_items"
  IF max_items = "" OR NOT IS_NUMERIC(max_items) THEN
      max_items = "10"
  END IF
Error Handling
- Database connection failures return empty string
- Invalid bot IDs return empty string
- Non-existent keys return empty string
- All errors are logged for debugging
Use Cases
Global Configuration
api_key = GET BOT MEMORY "api_key"
api_url = GET BOT MEMORY "api_url"
Feature Flags
feature_enabled = GET BOT MEMORY "feature:new_ui"
IF feature_enabled = "true" THEN
' Show new interface
ELSE
' Show old interface
END IF
Shared Counters
total_processed = GET BOT MEMORY "total_processed"
daily_limit = GET BOT MEMORY "daily_limit"
IF VAL(total_processed) >= VAL(daily_limit) THEN
TALK "Daily limit reached"
END IF
Bot State
maintenance_mode = GET BOT MEMORY "maintenance_mode"
IF maintenance_mode = "true" THEN
TALK "System is under maintenance. Please try again later."
EXIT
END IF
Related Keywords
- SET BOT MEMORY - Store bot-level memory
- SET - Set user-scoped variables
- GET - Get user variables
- REMEMBER - Store user-specific memories
Implementation
Located in src/basic/keywords/bot_memory.rs
The implementation:
- Performs synchronous database query
- Uses connection pooling for efficiency
- Returns empty string on any error
- Validates bot ID before querying
SET BOT MEMORY
Store persistent key-value data at the bot level that persists across all conversations.
Syntax
SET BOT MEMORY key, value
Parameters
| Parameter | Type | Description |
|---|---|---|
| key | String | Unique identifier for the memory item |
| value | String | Value to store (can be any string data) |
Description
The SET BOT MEMORY keyword stores data that is:
- Persistent across all user sessions
- Shared between all users of the same bot
- Stored in the database permanently
- Available until explicitly updated or cleared
Bot memory is useful for:
- Configuration settings
- Global counters and statistics
- Shared state between users
- Bot-wide preferences
- Cached data that applies to all conversations
Examples
Store Simple Values
SET BOT MEMORY "welcome_message", "Hello! Welcome to our service."
SET BOT MEMORY "support_email", "support@example.com"
SET BOT MEMORY "business_hours", "9 AM - 5 PM EST"
Store Counters
current_count = GET BOT MEMORY "visitor_count"
IF current_count = "" THEN
current_count = "0"
END IF
new_count = VAL(current_count) + 1
SET BOT MEMORY "visitor_count", STR(new_count)
TALK "You are visitor number " + STR(new_count)
Store JSON Data
user_data = '{"name": "John", "level": 5, "points": 1200}'
SET BOT MEMORY "user_profile", user_data
Dynamic Keys
today = FORMAT(NOW(), "YYYY-MM-DD")
daily_key = "stats_" + today
SET BOT MEMORY daily_key, "25"
Configuration Management
' Store bot configuration
SET BOT MEMORY "max_retries", "3"
SET BOT MEMORY "timeout_seconds", "30"
SET BOT MEMORY "api_version", "v2"
' Later, read configuration
max_retries = GET BOT MEMORY "max_retries"
timeout = GET BOT MEMORY "timeout_seconds"
Database Storage
Bot memories are stored in the bot_memories table:
- id: UUID primary key
- bot_id: Reference to the bot
- key: Memory key (indexed for fast lookup)
- value: Memory value (text)
- created_at: Timestamp of creation
- updated_at: Timestamp of last update
Performance Considerations
- Keys are indexed for fast retrieval
- Values are stored as text (no size limit in PostgreSQL)
- Updates are asynchronous to avoid blocking
- Consider using structured keys for organization
Best Practices
- Use Descriptive Keys: Make keys self-documenting

  SET BOT MEMORY "config:email:smtp_server", "mail.example.com"
  SET BOT MEMORY "stats:daily:2024-01-15", "150"

- Handle Missing Values: Always check if memory exists

  value = GET BOT MEMORY "some_key"
  IF value = "" THEN
      ' Initialize with default
      SET BOT MEMORY "some_key", "default_value"
      value = "default_value"
  END IF

- Avoid Sensitive Data: Don't store passwords or tokens

  ' BAD: Don't do this
  ' SET BOT MEMORY "admin_password", "secret123"
  ' GOOD: Store non-sensitive config
  SET BOT MEMORY "admin_email", "admin@example.com"

- Structure Complex Data: Use JSON for complex structures

  settings = '{"theme": "dark", "language": "en", "notifications": true}'
  SET BOT MEMORY "user_preferences", settings

- Clean Up Old Data: Remove unused memories periodically

  ' Remove old daily stats
  old_date = FORMAT(DATE_ADD(NOW(), -30, "days"), "YYYY-MM-DD")
  SET BOT MEMORY "stats_" + old_date, ""
Differences from User Memory
| Aspect | Bot Memory | User Memory |
|---|---|---|
| Scope | All users of the bot | Single user |
| Persistence | Permanent | Session or permanent |
| Use Case | Global settings | Personal data |
| Access | Any conversation | User's conversations only |
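In practice the two are often used side by side (keys and values are illustrative):
SET BOT MEMORY "support_email", "help@example.com" ' one value shared by every user
SET USER MEMORY "language", "pt-BR" ' one value per user, across bots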
Error Handling
- If database connection fails, operation is logged but doesn’t crash
- Invalid bot IDs are logged as errors
- Duplicate keys update existing values
- Empty keys are not allowed
Related Keywords
- GET BOT MEMORY - Retrieve stored bot memory
- SET - Set user-scoped variables
- REMEMBER - Store user-specific memories
Implementation
Located in src/basic/keywords/bot_memory.rs
The implementation:
- Uses async database operations
- Handles updates atomically with transactions
- Validates bot ID format
- Logs all operations for debugging
GET USER MEMORY
Retrieves data stored at the user level, accessible across sessions and bots. This is the companion to SET USER MEMORY for reading persistent user data.
Syntax
value = GET USER MEMORY("key")
Parameters
| Parameter | Type | Description |
|---|---|---|
| key | String | The identifier for the stored value |
Returns
The stored value, or empty string ("") if the key doesn’t exist.
Description
GET USER MEMORY retrieves persistent data associated with a specific user. This data:
- Persists across sessions - Available when user returns days/weeks later
- Persists across bots - Accessible from any bot the user interacts with
- Returns original type - Objects, arrays, strings, numbers preserved
- Returns empty on miss - No error if key doesn’t exist
Examples
Basic Usage
' Retrieve user preferences
language = GET USER MEMORY("language")
timezone = GET USER MEMORY("timezone")
theme = GET USER MEMORY("theme")
TALK "Your settings: " + language + ", " + timezone + ", " + theme
Check If User Is Returning
' Personalized greeting based on stored name
name = GET USER MEMORY("name")
IF name = "" THEN
TALK "Hello! I don't think we've met. What's your name?"
HEAR name
SET USER MEMORY "name", name
ELSE
TALK "Welcome back, " + name + "! How can I help you today?"
END IF
Retrieve Complex Objects
' Get stored user profile
profile = GET USER MEMORY("profile")
IF profile <> "" THEN
TALK "Hello " + profile.name + "!"
TALK "Your plan: " + profile.plan
TALK "Member since: " + profile.signupDate
ELSE
TALK "Please complete your profile first."
END IF
Cross-Bot Data Access
' Support bot accessing sales data
lastPurchase = GET USER MEMORY("lastPurchase")
IF lastPurchase <> "" THEN
TALK "I can see your recent order #" + lastPurchase.orderId
TALK "Purchased on: " + lastPurchase.date
TALK "Amount: $" + lastPurchase.amount
TALK "How can I help with this order?"
ELSE
TALK "I don't see any recent purchases. How can I help?"
END IF
Retrieve User Facts for AI Context
' Load user facts into context for personalization
occupation = GET USER MEMORY("fact_occupation")
interests = GET USER MEMORY("fact_interests")
company = GET USER MEMORY("fact_company")
IF occupation <> "" THEN
SET CONTEXT "user_occupation" AS occupation
END IF
IF interests <> "" THEN
SET CONTEXT "user_interests" AS interests
END IF
' Now AI responses will be personalized based on these facts
Default Values Pattern
' Get with fallback to default
language = GET USER MEMORY("language")
IF language = "" THEN
language = "en-US"
END IF
' Or use inline default
theme = GET USER MEMORY("theme")
IF theme = "" THEN theme = "light"
TALK "Using language: " + language + ", theme: " + theme
Session Continuity
' Resume conversation from previous session
lastTopic = GET USER MEMORY("lastTopic")
lastQuestion = GET USER MEMORY("lastQuestion")
IF lastTopic <> "" THEN
TALK "Last time we were discussing " + lastTopic
TALK "You asked: " + lastQuestion
TALK "Would you like to continue from there?"
HEAR continueChoice AS BOOLEAN
IF continueChoice THEN
' Resume previous conversation
SET CONTEXT "topic" AS lastTopic
END IF
END IF
Related Keywords
| Keyword | Description |
|---|---|
| SET USER MEMORY | Store user-level persistent data |
| GET BOT MEMORY | Retrieve bot-level data |
| SET BOT MEMORY | Store data at bot level |
| USER FACTS | Get all stored user facts |
Comparison: User Memory vs Bot Memory
| Aspect | User Memory | Bot Memory |
|---|---|---|
| Scope | Per user, across all bots | Per bot, across all users |
| Use case | User preferences, profile | Bot state, counters |
| Access | Any bot can read/write | Only owning bot |
| Example | language, name, timezone | totalOrders, lastDeployed |
Error Handling
' GET USER MEMORY never throws - returns empty on missing key
value = GET USER MEMORY("nonexistent_key")
' value = ""
' Always check for empty before using
data = GET USER MEMORY("important_data")
IF data = "" THEN
TALK "Data not found. Please provide it."
' Handle missing data case
ELSE
' Use the data
END IF
Best Practices
- Always check for empty - Keys may not exist for new users
- Use consistent key naming - pick one convention (user_name vs userName vs name) and stick to it
- Document your keys - Keep track of what data you're storing
- Handle missing gracefully - New users won’t have stored data
- Don’t assume structure - Stored objects might have missing fields
See Also
- Memory Management - Complete memory architecture
- Multi-Agent Orchestration - Cross-bot data sharing
- User Context - User vs system context
SET USER MEMORY
Persists data at the user level, accessible across sessions and bots. Unlike SET BOT MEMORY which stores data per-bot, user memory follows the user wherever they go.
Syntax
SET USER MEMORY "key", value
Parameters
| Parameter | Type | Description |
|---|---|---|
| key | String | Unique identifier for the stored value |
| value | Any | The value to store (string, number, object) |
Description
SET USER MEMORY stores persistent data associated with a specific user. This data:
- Persists across sessions - Available when user returns days/weeks later
- Persists across bots - Accessible from any bot the user interacts with
- Survives restarts - Stored in the database, not just memory
- Supports TTL - Optional time-to-live for automatic expiration
This is ideal for user preferences, profile data, and cross-bot personalization.
Examples
Basic Usage
' Store user preferences
SET USER MEMORY "language", "pt-BR"
SET USER MEMORY "timezone", "America/Sao_Paulo"
SET USER MEMORY "theme", "dark"
TALK "Preferences saved!"
Store Complex Objects
' Store user profile
profile = #{
name: username,
email: useremail,
plan: "premium",
signupDate: NOW()
}
SET USER MEMORY "profile", profile
TALK "Profile updated successfully!"
Cross-Bot Data Sharing
' In sales-bot: Store purchase history
purchase = #{
orderId: orderid,
amount: total,
date: NOW()
}
SET USER MEMORY "lastPurchase", purchase
' In support-bot: Access the same data
lastPurchase = GET USER MEMORY("lastPurchase")
TALK "I see your last order was #" + lastPurchase.orderId
User Preferences for Personalization
' Check if returning user
name = GET USER MEMORY("name")
IF name = "" THEN
TALK "Welcome! What's your name?"
HEAR name
SET USER MEMORY "name", name
TALK "Nice to meet you, " + name + "!"
ELSE
TALK "Welcome back, " + name + "!"
END IF
Store User Facts
' Store facts about the user for AI context
SET USER MEMORY "fact_occupation", "software engineer"
SET USER MEMORY "fact_interests", "AI, automation, productivity"
SET USER MEMORY "fact_company", "Acme Corp"
' These can be used to personalize AI responses
Related Keywords
| Keyword | Description |
|---|---|
| GET USER MEMORY | Retrieve user-level persisted data |
| SET BOT MEMORY | Store data at bot level |
| GET BOT MEMORY | Retrieve bot-level data |
| USER FACTS | Get all stored user facts |
Database Storage
User memory is stored in the user_memory table with the following structure:
| Column | Description |
|---|---|
| user_id | The user's unique identifier |
| key | The memory key |
| value | JSON-encoded value |
| memory_type | Type classification (preference, fact, context) |
| ttl | Optional expiration timestamp |
| created_at | When the memory was created |
| updated_at | Last modification time |
Config.csv Options
name,value
user-memory-enabled,true
user-memory-max-keys,1000
user-memory-default-ttl,0
| Option | Default | Description |
|---|---|---|
| user-memory-enabled | true | Enable/disable user memory |
| user-memory-max-keys | 1000 | Maximum keys per user |
| user-memory-default-ttl | 0 | Default TTL in seconds (0 = no expiry) |
Best Practices
- Use descriptive keys - user_language, not lang
- Prefix related keys - pref_theme, pref_language, fact_name
- Don't store sensitive data - No passwords or tokens
- Consider TTL for temporary data - Session-specific data should expire
- Keep values reasonable size - Don’t store large files or blobs
See Also
- Memory Management - Complete memory architecture
- Multi-Agent Orchestration - Cross-bot data sharing
- User Context - User vs system context
REMEMBER / RECALL Keywords
The REMEMBER and RECALL keywords provide a powerful time-based memory system for storing and retrieving data associated with users. Unlike standard memory operations, REMEMBER supports automatic expiration of stored values.
Syntax
REMEMBER
REMEMBER key, value, duration
RECALL
result = RECALL key
Parameters
REMEMBER Parameters
| Parameter | Type | Description |
|---|---|---|
| key | String | Unique identifier for the memory entry |
| value | Any | Data to store (string, number, boolean, array, or object) |
| duration | String | How long to remember the value |
Duration Formats
| Format | Example | Description |
|---|---|---|
| N seconds | "30 seconds" | Expires after N seconds |
| N minutes | "5 minutes" | Expires after N minutes |
| N hours | "2 hours" | Expires after N hours |
| N days | "7 days" | Expires after N days |
| N weeks | "2 weeks" | Expires after N weeks |
| N months | "3 months" | Expires after ~N×30 days |
| N years | "1 year" | Expires after ~N×365 days |
| forever | "forever" | Never expires |
| permanent | "permanent" | Never expires (alias) |
| Plain number | "30" | Interpreted as days |
Examples
Basic Usage
' Remember user's preferred language for 30 days
REMEMBER "preferred_language", "Spanish", "30 days"
' Later, recall the preference
language = RECALL "preferred_language"
TALK "Your language preference is: " + language
Session-Based Memory
' Remember a temporary verification code for 5 minutes
code = RANDOM(100000, 999999)
REMEMBER "verification_code", code, "5 minutes"
TALK "Your verification code is: " + code
' Verify the code later
HEAR user_code
stored_code = RECALL "verification_code"
IF user_code = stored_code THEN
TALK "Code verified successfully!"
ELSE
TALK "Invalid or expired code."
END IF
Storing Complex Data
' Store user preferences as an array
preferences = ["dark_mode", "notifications_on", "english"]
REMEMBER "user_preferences", preferences, "1 year"
' Store a shopping cart temporarily
cart = ["item1", "item2", "item3"]
REMEMBER "shopping_cart", cart, "2 hours"
Permanent Storage
' Store important user information permanently
REMEMBER "account_created", NOW(), "forever"
REMEMBER "user_tier", "premium", "permanent"
Promotional Campaigns
' Track if user has seen a promotional message
has_seen = RECALL "promo_summer_2024"
IF has_seen = null THEN
TALK "🎉 Special summer offer: 20% off all products!"
REMEMBER "promo_summer_2024", true, "30 days"
END IF
Rate Limiting
' Simple rate limiting for API calls
call_count = RECALL "api_calls_today"
IF call_count = null THEN
call_count = 0
END IF
IF call_count >= 100 THEN
TALK "You've reached your daily API limit. Please try again tomorrow."
ELSE
call_count = call_count + 1
REMEMBER "api_calls_today", call_count, "24 hours"
' Process the API call
END IF
How It Works
- Storage: Data is stored in the bot_memories database table with:
  - User ID and Bot ID association
  - JSON-serialized value
  - Creation timestamp
  - Optional expiration timestamp
- Retrieval: When RECALL is called:
  - System checks if the key exists for the user/bot combination
  - Verifies the entry hasn't expired
  - Returns the value or null if not found/expired
- Automatic Cleanup: Expired entries are not returned and can be periodically cleaned up by maintenance tasks.
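The expiry rule can be observed directly (a sketch; WAIT is assumed to pause execution for the given number of seconds):
REMEMBER "otp", "123456", "30 seconds"
WAIT 31
code = RECALL "otp"
' code = null here - the entry expired after 30 seconds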
Database Schema
The REMEMBER keyword uses the following database structure:
CREATE TABLE bot_memories (
id TEXT PRIMARY KEY,
user_id TEXT NOT NULL,
bot_id TEXT NOT NULL,
session_id TEXT,
key TEXT NOT NULL,
value JSONB NOT NULL,
created_at TEXT NOT NULL,
expires_at TEXT,
UNIQUE(user_id, bot_id, key)
);
Comparison with Other Memory Keywords
| Keyword | Scope | Persistence | Expiration |
|---|---|---|---|
| SET USER MEMORY | User | Permanent | No |
| SET BOT MEMORY | Bot (all users) | Permanent | No |
| REMEMBER | User | Configurable | Yes |
| REMEMBER USER FACT | User | Permanent | No |
Best Practices
- Use descriptive keys: Choose meaningful key names like "last_login" instead of "ll".
- Set appropriate durations: Match the duration to your use case:
  - Session data: minutes to hours
  - Preferences: weeks to months
  - Important data: forever
- Handle null values: Always check if RECALL returns null:

  value = RECALL "some_key"
  IF value = null THEN
      ' Handle missing/expired data
  END IF

- Avoid storing sensitive data: Don't store passwords, API keys, or other secrets.
Error Handling
' REMEMBER returns a confirmation message on success
result = REMEMBER "key", "value", "1 day"
' result = "Remembered 'key' for 1 day"
' RECALL returns null if key doesn't exist or has expired
value = RECALL "nonexistent_key"
' value = null
Related Keywords
- SET USER MEMORY - Permanent user-scoped storage
- GET USER MEMORY - Retrieve permanent user data
- SET BOT MEMORY - Bot-wide storage
- GET BOT MEMORY - Retrieve bot-wide data
BOOK / BOOK_MEETING / CHECK_AVAILABILITY Keywords
The BOOK family of keywords provides calendar and scheduling functionality, allowing bots to create appointments, schedule meetings with attendees, and check availability.
Keywords Overview
| Keyword | Purpose |
|---|---|
BOOK | Create a simple calendar appointment |
BOOK_MEETING | Schedule a meeting with multiple attendees |
CHECK_AVAILABILITY | Find available time slots |
BOOK
Creates a calendar appointment for the current user.
Syntax
result = BOOK title, description, start_time, duration_minutes, location
Parameters
| Parameter | Type | Description |
|---|---|---|
title | String | Title/subject of the appointment |
description | String | Detailed description of the appointment |
start_time | String | When the appointment starts (see Time Formats) |
duration_minutes | Integer | Duration in minutes (default: 30) |
location | String | Location or meeting room |
Example
' Book a dentist appointment
result = BOOK "Dentist Appointment", "Annual checkup", "2024-03-15 14:00", 60, "123 Medical Center"
TALK "Your appointment has been booked: " + result
' Book a quick meeting
result = BOOK "Team Sync", "Weekly standup", "tomorrow 10:00", 30, "Conference Room A"
BOOK_MEETING
Schedules a meeting with multiple attendees, sending calendar invites.
Syntax
result = BOOK_MEETING meeting_details, attendees
Parameters
| Parameter | Type | Description |
|---|---|---|
meeting_details | JSON String | Meeting configuration object |
attendees | Array | List of attendee email addresses |
Meeting Details Object
{
"title": "Meeting Title",
"description": "Meeting description",
"start_time": "2024-03-15 14:00",
"duration": 60,
"location": "Conference Room B",
"reminder_minutes": 15,
"recurrence": "weekly"
}
Example
' Schedule a team meeting
meeting = '{
"title": "Sprint Planning",
"description": "Plan next sprint tasks and priorities",
"start_time": "Monday 09:00",
"duration": 90,
"location": "Main Conference Room",
"reminder_minutes": 30
}'
attendees = ["alice@company.com", "bob@company.com", "carol@company.com"]
result = BOOK_MEETING meeting, attendees
TALK "Meeting scheduled with " + LEN(attendees) + " attendees"
CHECK_AVAILABILITY
Finds available time slots for a given date and duration.
Syntax
available_slots = CHECK_AVAILABILITY date, duration_minutes
Parameters
| Parameter | Type | Description |
|---|---|---|
date | String | The date to check availability |
duration_minutes | Integer | Required duration for the meeting |
Example
' Check availability for a 1-hour meeting tomorrow
slots = CHECK_AVAILABILITY "tomorrow", 60
TALK "Available time slots:"
FOR EACH slot IN slots
TALK " - " + slot
NEXT
Time Formats
The BOOK keywords support flexible time formats:
Absolute Formats
| Format | Example |
|---|---|
| ISO 8601 | "2024-03-15T14:00:00" |
| Date + Time | "2024-03-15 14:00" |
| Date + Time (12h) | "2024-03-15 2:00 PM" |
Relative Formats
| Format | Example |
|---|---|
| Day name | "Monday 10:00" |
| Relative day | "tomorrow 14:00" |
| Next week | "next Tuesday 09:00" |
Complete Example: Appointment Scheduling Bot
' appointment-bot.bas
' A complete appointment scheduling workflow
TALK "Welcome to our scheduling assistant!"
TALK "What type of appointment would you like to book?"
HEAR appointment_type
SWITCH appointment_type
CASE "consultation"
duration = 60
description = "Initial consultation meeting"
CASE "follow-up"
duration = 30
description = "Follow-up discussion"
CASE "review"
duration = 45
description = "Project review session"
DEFAULT
duration = 30
description = appointment_type
END SWITCH
TALK "When would you like to schedule this?"
HEAR preferred_date
' Check available slots
slots = CHECK_AVAILABILITY preferred_date, duration
IF LEN(slots) = 0 THEN
TALK "Sorry, no availability on that date. Please try another day."
ELSE
TALK "Available times:"
index = 1
FOR EACH slot IN slots
TALK index + ". " + slot
index = index + 1
NEXT
TALK "Which time slot would you prefer? (enter number)"
HEAR choice
selected_time = slots[choice - 1]
TALK "Where would you like the meeting to take place?"
HEAR location
' Book the appointment
result = BOOK appointment_type, description, selected_time, duration, location
TALK "✅ Your appointment has been booked!"
TALK "Details: " + result
END IF
Meeting with Recurrence
' Schedule a recurring weekly meeting
meeting = '{
"title": "Weekly Team Standup",
"description": "Daily sync on project progress",
"start_time": "Monday 09:00",
"duration": 15,
"location": "Virtual - Teams",
"reminder_minutes": 5,
"recurrence": {
"frequency": "weekly",
"interval": 1,
"count": 12,
"by_day": ["MO", "WE", "FR"]
}
}'
attendees = ["team@company.com"]
result = BOOK_MEETING meeting, attendees
Event Status
Calendar events can have the following statuses:
| Status | Description |
|---|---|
| Confirmed | Event is confirmed and scheduled |
| Tentative | Event is tentatively scheduled |
| Cancelled | Event has been cancelled |
Calendar Event Structure
When an event is created, it contains:
{
"id": "uuid",
"title": "Meeting Title",
"description": "Description",
"start_time": "2024-03-15T14:00:00Z",
"end_time": "2024-03-15T15:00:00Z",
"location": "Conference Room",
"organizer": "user@example.com",
"attendees": ["attendee1@example.com"],
"reminder_minutes": 15,
"recurrence_rule": null,
"status": "Confirmed",
"created_at": "2024-03-10T10:00:00Z",
"updated_at": "2024-03-10T10:00:00Z"
}
Configuration
To enable calendar functionality, configure the following in config.csv:
| Key | Description |
|---|---|
| calendar-provider | Calendar service (google, outlook, caldav) |
| calendar-client-id | OAuth client ID |
| calendar-client-secret | OAuth client secret |
| calendar-default-reminder | Default reminder time in minutes |
Error Handling
' Handle booking errors gracefully
ON ERROR GOTO handle_error
result = BOOK "Meeting", "Description", "invalid-date", 30, "Location"
TALK "Booked: " + result
END
handle_error:
TALK "Sorry, I couldn't book that appointment. Please check the date and time format."
TALK "Error: " + ERROR_MESSAGE
END
Best Practices
- Always check availability first: Before booking, use CHECK_AVAILABILITY to ensure the time slot is free.
- Use descriptive titles: Make appointment titles clear and searchable.
- Set appropriate reminders: Configure reminder times based on appointment importance.
- Handle time zones: Be explicit about time zones when scheduling across regions.
- Validate inputs: Check user-provided dates and times before attempting to book.
Related Keywords
- SET SCHEDULE - Schedule recurring bot tasks
- WAIT - Pause execution for a duration
- SEND MAIL - Send meeting confirmations via email
WEATHER / FORECAST Keywords
Get weather information for any location using OpenWeatherMap API.
WEATHER
result = WEATHER "London"
TALK result
Returns current conditions: temperature, humidity, wind, visibility.
FORECAST
result = FORECAST "Paris", 5
TALK result
Returns multi-day forecast with high/low temps and rain chance.
Configuration
Add to config.csv:
weather-api-key,your-openweathermap-api-key
Get a free API key at openweathermap.org.
See Also
- Weather API Integration - Full documentation
ADD BOT Keywords
Dynamically add bots to a session with specific triggers, tools, or schedules.
Keywords
| Keyword | Purpose |
|---|---|
ADD BOT ... WITH TRIGGER | Add bot activated by keyword |
ADD BOT ... WITH TOOLS | Add bot with specific tools |
ADD BOT ... WITH SCHEDULE | Add bot on a schedule |
REMOVE BOT | Remove bot from session |
ADD BOT WITH TRIGGER
ADD BOT "sales-bot" WITH TRIGGER "pricing"
When user mentions “pricing”, sales-bot activates.
ADD BOT WITH TOOLS
ADD BOT "data-bot" WITH TOOLS "database,spreadsheet,charts"
ADD BOT WITH SCHEDULE
ADD BOT "report-bot" WITH SCHEDULE "0 9 * * MON"
Adds bot that runs every Monday at 9 AM (cron format).
REMOVE BOT
REMOVE BOT "sales-bot"
Example: Multi-Bot Setup
' Set up specialized bots for different topics
ADD BOT "orders-bot" WITH TRIGGER "order status, shipping, delivery"
ADD BOT "support-bot" WITH TRIGGER "help, problem, issue, broken"
ADD BOT "sales-bot" WITH TRIGGER "pricing, quote, purchase"
TALK "I've set up our specialist team. Just ask about orders, support, or sales!"
See Also
- DELEGATE TO BOT - Includes A2A Protocol details
ADD MEMBER Keywords
Manage team and group membership within bots.
Keywords
| Keyword | Purpose |
|---|---|
ADD_MEMBER | Add user to a group with role |
REMOVE_MEMBER | Remove user from group |
CREATE_TEAM | Create a new team |
LIST_MEMBERS | List group members |
ADD_MEMBER
result = ADD_MEMBER group_id, user_email, role
Parameters
| Parameter | Type | Description |
|---|---|---|
| group_id | String | Team or group identifier |
| user_email | String | Email of user to add |
| role | String | Role: "admin", "member", "viewer" |
Example
result = ADD_MEMBER "team-sales", "john@company.com", "member"
TALK "Added user: " + result
REMOVE_MEMBER
result = REMOVE_MEMBER "team-sales", "john@company.com"
CREATE_TEAM
members = ["alice@company.com", "bob@company.com"]
result = CREATE_TEAM "Project Alpha", "Development team", members
LIST_MEMBERS
members = LIST_MEMBERS "team-sales"
FOR EACH member IN members
TALK member.email + " - " + member.role
NEXT
Roles
| Role | Permissions |
|---|---|
| admin | Full control, manage members |
| member | Standard access |
| viewer | Read-only access |
ADD SUGGESTION / CLEAR SUGGESTIONS Keywords
Display quick-reply suggestion buttons to users during conversations.
Keywords
| Keyword | Purpose |
|---|---|
ADD SUGGESTION | Add a suggestion button |
CLEAR SUGGESTIONS | Remove all suggestions |
ADD SUGGESTION
ADD SUGGESTION "Yes"
ADD SUGGESTION "No"
ADD SUGGESTION "Maybe later"
With action data:
ADD SUGGESTION "View Order", "action:view_order"
ADD SUGGESTION "Track Package", "action:track"
CLEAR SUGGESTIONS
CLEAR SUGGESTIONS
Example: Product Selection
TALK "What type of product are you interested in?"
ADD SUGGESTION "Electronics"
ADD SUGGESTION "Clothing"
ADD SUGGESTION "Home & Garden"
ADD SUGGESTION "Books"
HEAR choice
CLEAR SUGGESTIONS
TALK "Great! Let me show you our " + choice + " collection."
Example: Confirmation Flow
TALK "Your order total is $99.00. Would you like to proceed?"
ADD SUGGESTION "Confirm Order"
ADD SUGGESTION "Modify Cart"
ADD SUGGESTION "Cancel"
HEAR response
CLEAR SUGGESTIONS
Behavior
- Suggestions appear as clickable buttons in supported channels
- Clicking a suggestion sends its text as user input
- Suggestions persist until cleared or new ones are added
- Maximum suggestions varies by channel (typically 3-10)
Channel Support
| Channel | Supported | Max Buttons |
|---|---|---|
| WhatsApp | ✅ | 3 |
| Telegram | ✅ | 8 |
| Web Chat | ✅ | 10 |
| SMS | ❌ | N/A |
MODEL ROUTE Keywords
Route LLM requests to different models based on task type, cost, or capability requirements.
Keywords
| Keyword | Purpose |
|---|---|
USE MODEL | Select a specific model for next request |
SET MODEL ROUTING | Configure routing strategy |
GET CURRENT MODEL | Get active model name |
LIST MODELS | List available models |
USE MODEL
USE MODEL "fast"
response = ASK "Quick question about the weather"
USE MODEL "quality"
response = ASK "Analyze this complex legal document"
SET MODEL ROUTING
SET MODEL ROUTING "auto"
SET MODEL ROUTING "cost"
SET MODEL ROUTING "manual"
Routing Strategies
| Strategy | Description |
|---|---|
| manual | Explicitly specify model per request |
| auto | Auto-select based on task complexity |
| cost | Prefer cheaper models when possible |
| quality | Always use highest quality model |
GET CURRENT MODEL
model = GET CURRENT MODEL
TALK "Currently using: " + model
LIST MODELS
models = LIST MODELS
FOR EACH m IN models
TALK m.name + " - " + m.description
NEXT
Configuration
Add to config.csv:
llm-models,default;fast;quality;code
model-routing-strategy,auto
model-default,claude-sonnet-4.5
model-fast,gemini-flash
model-quality,claude-opus-4.5
model-code,claude-sonnet-4.5
Example: Task-Based Routing
USE MODEL "code"
code_review = ASK "Review this function for bugs: " + code
USE MODEL "fast"
TALK "Here's what I found:"
TALK code_review
SEND TEMPLATE Keywords
Send templated messages across multiple channels (email, WhatsApp, SMS, Telegram, push notifications).
Keywords
| Keyword | Purpose |
|---|---|
| SEND_TEMPLATE | Send template to single recipient |
| SEND_TEMPLATE_TO | Send template to multiple recipients |
| CREATE_TEMPLATE | Create a new message template |
| GET_TEMPLATE | Retrieve template by name |
SEND_TEMPLATE
result = SEND_TEMPLATE "welcome", "user@example.com", "email"
With variables:
vars = {"name": "John", "order_id": "12345"}
result = SEND_TEMPLATE "order_confirmation", "+1234567890", "whatsapp", vars
SEND_TEMPLATE_TO
Send to multiple recipients:
recipients = ["user1@example.com", "user2@example.com", "user3@example.com"]
result = SEND_TEMPLATE_TO "newsletter", recipients, "email"
TALK "Sent: " + result.sent + ", Failed: " + result.failed
Supported Channels
| Channel | Recipient Format |
|---|---|
| email | Email address |
| whatsapp | Phone number with country code |
| sms | Phone number with country code |
| telegram | Telegram user ID or username |
| push | Device token or user ID |
CREATE_TEMPLATE
template_body = "Hello {{name}}, your order {{order_id}} has shipped!"
result = CREATE_TEMPLATE "shipping_notification", template_body, "transactional"
Template Variables
Use {{variable_name}} syntax in templates:
vars = {
"customer_name": "Alice",
"amount": "$99.00",
"date": "March 15, 2024"
}
result = SEND_TEMPLATE "receipt", "alice@example.com", "email", vars
Example: Order Notification
' Send order confirmation across multiple channels
order_vars = {
"order_id": order.id,
"total": order.total,
"items": order.item_count
}
SEND_TEMPLATE "order_placed", customer.email, "email", order_vars
SEND_TEMPLATE "order_placed", customer.phone, "whatsapp", order_vars
Response Object
{
"success": true,
"message_id": "msg_123abc",
"channel": "email",
"recipient": "user@example.com"
}
For batch sends:
{
"total": 100,
"sent": 98,
"failed": 2,
"errors": [...]
}
SET USER Keyword
Switch the current user context within a script execution.
Syntax
SET USER user_id
Parameters
| Parameter | Type | Description |
|---|---|---|
| user_id | String (UUID) | The UUID of the user to switch to |
Description
The SET USER keyword changes the active user context for subsequent operations in the script. This is useful for administrative scripts that need to perform actions on behalf of different users.
Example
' Admin script to update user preferences
SET USER "550e8400-e29b-41d4-a716-446655440000"
SET USER MEMORY "theme", "dark"
SET USER MEMORY "language", "pt-BR"
TALK "User preferences updated."
Example: Batch User Operations
' Process multiple users
users = GET "SELECT id FROM users WHERE needs_update = true"
FOR EACH user IN users
SET USER user.id
SET USER MEMORY "migrated", "true"
SEND MAIL user.email, "Account Updated", "Your account has been migrated."
NEXT
Use Cases
- Administrative batch operations
- Multi-tenant management scripts
- User impersonation for support
- Scheduled maintenance tasks
Security
- Requires admin privileges to execute
- All actions are logged with original admin identity
- Cannot escalate privileges beyond script permissions
USE MODEL
Dynamically switches the LLM model used for AI operations within a script. Enables model routing based on task requirements, cost optimization, or performance needs.
Syntax
USE MODEL "modelname"
USE MODEL "auto"
Parameters
| Parameter | Type | Description |
|---|---|---|
| modelname | String | Name of the model to use, or "auto" for automatic routing |
Description
USE MODEL allows scripts to dynamically select which language model to use for subsequent AI operations. This is essential for:
- Cost optimization - Use smaller/cheaper models for simple tasks
- Quality control - Use powerful models for complex reasoning
- Speed optimization - Use fast models for real-time responses
- Specialized tasks - Use code-specific models for programming
When set to "auto", the system automatically routes queries to the most appropriate model based on task complexity, latency requirements, and cost considerations.
Examples
Basic Model Selection
' Use a fast model for simple queries
USE MODEL "fast"
response = LLM "What time is it in New York?"
TALK response
' Switch to quality model for complex analysis
USE MODEL "quality"
analysis = LLM "Analyze the market trends for Q4 and provide recommendations"
TALK analysis
Automatic Model Routing
' Let the system choose the best model
USE MODEL "auto"
' Simple query -> routes to fast model
greeting = LLM "Say hello"
' Complex query -> routes to quality model
report = LLM "Generate a detailed financial analysis with projections"
Code Generation
' Use code-specialized model
USE MODEL "code"
code = LLM "Write a Python function to calculate fibonacci numbers"
TALK code
Cost-Aware Processing
' Process bulk items with cheap model
USE MODEL "fast"
FOR EACH item IN items
summary = LLM "Summarize in one sentence: " + item.text
item.summary = summary
NEXT item
' Final review with quality model
USE MODEL "quality"
review = LLM "Review these summaries for accuracy: " + summaries
Model Fallback Pattern
' Try preferred model first
USE MODEL "claude-sonnet-4.5"
ON ERROR GOTO fallback
response = LLM prompt
GOTO done
fallback:
' Fall back to local model if API fails
USE MODEL "local"
response = LLM prompt
done:
TALK response
Model Routing Strategies
The system supports several routing strategies configured in config.csv:
| Strategy | Description |
|---|---|
manual | Explicit model selection only |
auto | Automatic routing based on query analysis |
load-balanced | Distribute across models for throughput |
fallback | Try models in order until one succeeds |
Built-in Model Aliases
| Alias | Description | Use Case |
|---|---|---|
fast | Optimized for speed | Simple queries, real-time chat |
quality | Optimized for accuracy | Complex reasoning, analysis |
code | Code-specialized model | Programming tasks |
local | Local GGUF model | Offline/private operation |
auto | System-selected | Let routing decide |
Config.csv Options
name,value
model-routing-strategy,auto
model-default,fast
model-fast,DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_M.gguf
model-quality,claude-sonnet-4.5
model-code,codellama-7b.gguf
model-fallback-enabled,true
model-fallback-order,quality,fast,local
| Option | Default | Description |
|---|---|---|
model-routing-strategy | auto | Routing strategy to use |
model-default | fast | Default model when not specified |
model-fast | (configured) | Model for fast/simple tasks |
model-quality | (configured) | Model for quality/complex tasks |
model-code | (configured) | Model for code generation |
model-fallback-enabled | true | Enable automatic fallback |
model-fallback-order | quality,fast,local | Order to try on failure |
Auto-Routing Criteria
When USE MODEL "auto" is active, the system considers:
- Query complexity - Token count, reasoning required
- Task type - Code, analysis, chat, translation
- Latency requirements - Real-time vs batch
- Cost budget - Per-query and daily limits
- Model availability - Health checks, rate limits
Related Keywords
| Keyword | Description |
|---|---|
LLM | Query the language model |
SET CONTEXT | Add context for LLM |
BEGIN SYSTEM PROMPT | Define AI persona |
Performance Considerations
- Model switching has minimal overhead
- Auto-routing adds ~10ms for classification
- Consider batching similar queries under one model
- Local models avoid network latency
Best Practices
- Start with auto - Let the system optimize, then tune
- Batch by model - Group similar tasks to reduce switching
- Monitor costs - Track per-model usage in analytics
- Test fallbacks - Ensure graceful degradation
- Profile your queries - Understand which need quality vs speed
See Also
- LLM Configuration - Model setup
- Multi-Agent Orchestration - Model routing in multi-agent systems
- Cost Tracking - Monitor model costs
DELEGATE TO BOT
Delegates a task or message to another bot in a multi-agent system. This enables agent-to-agent communication using the A2A (Agent-to-Agent) protocol.
Syntax
DELEGATE "message" TO BOT "botname"
DELEGATE "message" TO BOT "botname" TIMEOUT seconds
result = DELEGATE "message" TO BOT "botname"
Parameters
| Parameter | Type | Description |
|---|---|---|
message | String | The task or message to send to the target bot |
botname | String | Name of the target bot to delegate to |
seconds | Number | Optional timeout in seconds (default: 30) |
Description
DELEGATE TO BOT sends a message or task to another bot and optionally waits for a response. This is the core keyword for multi-agent orchestration, enabling:
- Task specialization - Route tasks to specialized bots
- Agent collaboration - Multiple bots working together
- Workload distribution - Spread tasks across agents
- Expert consultation - Query domain-specific bots
The delegation uses the A2A (Agent-to-Agent) protocol which handles:
- Message routing between agents
- Correlation IDs for request/response matching
- Timeout handling
- Error propagation
Examples
Basic Delegation
' Delegate a translation task to a specialized bot
DELEGATE "Translate 'Hello World' to Portuguese" TO BOT "translator-bot"
TALK "Translation request sent!"
Get Response from Delegated Bot
' Ask the finance bot for a calculation
result = DELEGATE "Calculate ROI for investment of $10000 with 12% annual return over 5 years" TO BOT "finance-bot"
TALK "The finance expert says: " + result
Delegation with Timeout
' Long-running task with extended timeout
result = DELEGATE "Analyze this quarterly report and provide insights" TO BOT "analyst-bot" TIMEOUT 120
TALK result
Multi-Bot Workflow
' Customer support escalation workflow
issue = "Customer reports billing discrepancy"
' First, check with billing bot
billingInfo = DELEGATE "Check account status for customer " + customerid TO BOT "billing-bot" TIMEOUT 30
IF INSTR(billingInfo, "discrepancy") > 0 THEN
' Escalate to senior support
resolution = DELEGATE "Priority: " + issue + " Details: " + billingInfo TO BOT "senior-support-bot" TIMEOUT 60
TALK "A senior agent is handling your case: " + resolution
ELSE
TALK "Your account looks fine: " + billingInfo
END IF
Parallel Expert Consultation
' Get opinions from multiple specialized bots
question = "What's the best approach for this investment portfolio?"
' Delegate to multiple experts
stockAnalysis = DELEGATE question TO BOT "stock-analyst"
bondAnalysis = DELEGATE question TO BOT "bond-analyst"
riskAssessment = DELEGATE question TO BOT "risk-assessor"
' Combine insights
BEGIN TALK
**Investment Analysis Summary**
📈 **Stock Analysis:** {stockAnalysis}
📊 **Bond Analysis:** {bondAnalysis}
⚠️ **Risk Assessment:** {riskAssessment}
END TALK
Conditional Routing
' Route to appropriate specialist based on query type
HEAR userquery
' Use LLM to classify the query
category = LLM "Classify this query into one of: billing, technical, sales, general. Query: " + userquery
SWITCH category
CASE "billing"
response = DELEGATE userquery TO BOT "billing-bot"
CASE "technical"
response = DELEGATE userquery TO BOT "tech-support-bot"
CASE "sales"
response = DELEGATE userquery TO BOT "sales-bot"
CASE ELSE
response = DELEGATE userquery TO BOT "general-assistant"
END SWITCH
TALK response
Chain of Delegation
' Research assistant that coordinates multiple bots
topic = "renewable energy trends 2025"
' Step 1: Gather data
rawData = DELEGATE "Search for recent data on " + topic TO BOT "research-bot" TIMEOUT 60
' Step 2: Analyze data
analysis = DELEGATE "Analyze this data and identify key trends: " + rawData TO BOT "analyst-bot" TIMEOUT 45
' Step 3: Generate report
report = DELEGATE "Create an executive summary from this analysis: " + analysis TO BOT "writer-bot" TIMEOUT 30
TALK report
A2A Protocol Details
When you use DELEGATE TO BOT, the system creates an A2A message with:
| Field | Description |
|---|---|
from_agent | The current bot’s identifier |
to_agent | The target bot name |
message_type | Delegate for task delegation |
payload | The message content |
correlation_id | Unique ID to match response |
timestamp | When the message was sent |
Error Handling
' Handle delegation failures gracefully
ON ERROR RESUME NEXT
result = DELEGATE "Process payment" TO BOT "payment-bot" TIMEOUT 30
IF ERROR THEN
TALK "I'm having trouble reaching our payment system. Please try again in a moment."
' Log the error
PRINT "Delegation failed: " + ERROR_MESSAGE
ELSE
TALK result
END IF
Related Keywords
| Keyword | Description |
|---|---|
ADD BOT | Add a bot to the current session |
BROADCAST TO BOTS | Send message to all bots |
TRANSFER CONVERSATION | Hand off conversation to another bot |
Config.csv Options
name,value
a2a-enabled,true
a2a-timeout,30
a2a-max-hops,5
a2a-retry-count,3
| Option | Default | Description |
|---|---|---|
a2a-enabled | true | Enable agent-to-agent communication |
a2a-timeout | 30 | Default timeout in seconds |
a2a-max-hops | 5 | Maximum delegation chain depth |
a2a-retry-count | 3 | Number of retry attempts on failure |
Best Practices
- Set appropriate timeouts - Long tasks need longer timeouts
- Handle failures gracefully - Always have a fallback
- Avoid circular delegation - Bot A → Bot B → Bot A
- Keep delegation chains short - Max 3-4 hops recommended
- Log delegations - Helps with debugging multi-agent flows
- Use descriptive bot names - billing-bot, not bot2
Limitations
- Maximum message size: 1MB
- Maximum timeout: 300 seconds (5 minutes)
- Maximum concurrent delegations: 10 per session
- Target bot must be registered and active
See Also
- Multi-Agent Orchestration - Complete multi-agent guide
- A2A Protocol - Technical protocol details
- Bot Configuration - Bot setup
BOT REFLECTION
Enables agent self-analysis and improvement by using LLM to evaluate conversation quality, identify issues, and suggest improvements. This is a key feature for continuous agent optimization.
Syntax
BOT REFLECTION enabled
BOT REFLECTION ON "metric"
insights = BOT REFLECTION INSIGHTS()
Parameters
| Parameter | Type | Description |
|---|---|---|
enabled | Boolean | true to enable, false to disable reflection |
metric | String | Specific metric to analyze (e.g., “conversation_quality”, “response_accuracy”) |
Description
BOT REFLECTION activates the agent self-improvement system, which periodically analyzes conversations and provides actionable insights. When enabled, the system:
- Analyzes conversation quality - Tone, clarity, helpfulness
- Identifies issues - Misunderstandings, incomplete answers, user frustration
- Suggests improvements - Better responses, missing information, tone adjustments
- Tracks metrics over time - Quality scores, resolution rates
This creates a continuous improvement loop where agents learn from their interactions.
Examples
Enable Basic Reflection
' Enable reflection for this bot session
BOT REFLECTION true
' Normal conversation proceeds
TALK "Hello! How can I help you today?"
HEAR userquery
response = LLM userquery
TALK response
' Reflection runs automatically in background
Monitor Specific Metrics
' Enable reflection on conversation quality
BOT REFLECTION ON "conversation_quality"
' Enable reflection on response accuracy
BOT REFLECTION ON "response_accuracy"
' Enable reflection on user satisfaction
BOT REFLECTION ON "user_satisfaction"
Retrieve Reflection Insights
' Get insights from reflection analysis
insights = BOT REFLECTION INSIGHTS()
IF insights <> "" THEN
PRINT "Reflection Insights:"
PRINT insights.summary
PRINT "Quality Score: " + insights.qualityScore
PRINT "Issues Found: " + insights.issuesCount
FOR EACH suggestion IN insights.suggestions
PRINT "Suggestion: " + suggestion
NEXT suggestion
END IF
Use Insights for Self-Improvement
' Periodic reflection check
BOT REFLECTION true
' After conversation ends, check insights
insights = BOT REFLECTION INSIGHTS()
IF insights.qualityScore < 0.7 THEN
' Log for review
PRINT "Low quality conversation detected"
PRINT "Issues: " + insights.issues
' Store for analysis
SET BOT MEMORY "reflection_" + conversationid, insights
END IF
Admin Dashboard Integration
' Script for admin to review bot performance
insights = BOT REFLECTION INSIGHTS()
BEGIN TALK
**Bot Performance Report**
📊 **Quality Score:** {insights.qualityScore}/1.0
📈 **Metrics:**
- Response Accuracy: {insights.responseAccuracy}%
- User Satisfaction: {insights.userSatisfaction}%
- Resolution Rate: {insights.resolutionRate}%
⚠️ **Issues Identified:**
{insights.issues}
💡 **Improvement Suggestions:**
{insights.suggestions}
END TALK
Conditional Reflection
' Only reflect on complex conversations
messageCount = GET BOT MEMORY("messageCount")
IF messageCount > 5 THEN
' Enable reflection for longer conversations
BOT REFLECTION true
BOT REFLECTION ON "conversation_quality"
END IF
Reflection with Alerts
' Enable reflection with alerting
BOT REFLECTION true
' Check for critical issues periodically
insights = BOT REFLECTION INSIGHTS()
IF insights.criticalIssues > 0 THEN
' Alert admin
SEND MAIL admin, "Bot Alert: Critical Issues Detected", insights.summary
END IF
Reflection Metrics
| Metric | Description | Score Range |
|---|---|---|
conversation_quality | Overall conversation effectiveness | 0.0 - 1.0 |
response_accuracy | How accurate/correct responses are | 0.0 - 1.0 |
user_satisfaction | Estimated user satisfaction | 0.0 - 1.0 |
tone_appropriateness | Whether tone matches context | 0.0 - 1.0 |
resolution_rate | Whether user issues were resolved | 0.0 - 1.0 |
response_time | Average response latency | milliseconds |
Insights Object Structure
insights = BOT REFLECTION INSIGHTS()
' Available properties:
insights.qualityScore ' Overall quality (0-1)
insights.summary ' Text summary of analysis
insights.issues ' Array of identified issues
insights.issuesCount ' Number of issues found
insights.suggestions ' Array of improvement suggestions
insights.metrics ' Object with detailed metrics
insights.criticalIssues ' Count of critical problems
insights.conversationId ' ID of analyzed conversation
insights.timestamp ' When analysis was performed
Config.csv Options
name,value
reflection-enabled,true
reflection-interval,10
reflection-min-messages,3
reflection-model,quality
reflection-store-insights,true
| Option | Default | Description |
|---|---|---|
reflection-enabled | true | Enable/disable reflection globally |
reflection-interval | 10 | Messages between reflection runs |
reflection-min-messages | 3 | Minimum messages before reflecting |
reflection-model | quality | LLM model for reflection analysis |
reflection-store-insights | true | Store insights in database |
How Reflection Works
- Collection - Conversation history is collected
- Analysis - LLM analyzes the conversation against metrics
- Scoring - Quality scores are calculated
- Identification - Issues and patterns are identified
- Suggestion - Improvement suggestions are generated
- Storage - Results stored for dashboards and trends
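Those steps run in the background; on top of them, a periodic review can be scripted with keywords documented elsewhere on this page. A sketch, with an illustrative alert address and memory key:
' Daily reflection review (sketch)
SET SCHEDULE "every day at 7am"
insights = BOT REFLECTION INSIGHTS()
IF insights.qualityScore < 0.7 THEN
SEND MAIL "admin@example.com", "Bot Quality Alert", insights.summary
END IF
SET BOT MEMORY "reflection_" + FORMAT(NOW(), "yyyy-MM-dd"), insights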
Related Keywords
| Keyword | Description |
|---|---|
LLM | Query the language model |
SET BOT MEMORY | Store bot-level data |
PRINT | Debug output |
Performance Considerations
- Reflection uses LLM calls (affects cost/latency)
- Run reflection periodically, not on every message
- Use smaller models for reflection when possible
- Consider async reflection for production
Best Practices
- Enable for complex bots - Most valuable for customer-facing agents
- Review insights regularly - Use dashboards to spot trends
- Act on suggestions - Update prompts and tools based on insights
- Set appropriate intervals - Balance insight freshness vs cost
- Store for analysis - Track improvements over time
Limitations
- Reflection adds LLM cost per analysis
- Analysis quality depends on model capability
- Cannot analyze real-time user emotions
- Historical only (not predictive)
See Also
- Multi-Agent Orchestration - Multi-agent systems
- Observability - Monitoring and metrics
- LLM Configuration - Model setup
RUN PYTHON / RUN JAVASCRIPT / RUN BASH
Executes code in a sandboxed environment. Enables safe execution of dynamic code for data processing, calculations, and automation tasks.
Syntax
result = RUN PYTHON "code"
result = RUN JAVASCRIPT "code"
result = RUN BASH "code"
result = RUN PYTHON WITH FILE "script.py"
result = RUN JAVASCRIPT WITH FILE "script.js"
result = RUN BASH WITH FILE "script.sh"
Parameters
| Parameter | Type | Description |
|---|---|---|
code | String | Inline code to execute |
filepath | String | Path to script file (with WITH FILE variant) |
Returns
The output (stdout) from the executed code as a string.
Description
The RUN keywords execute code in isolated, sandboxed environments. This provides:
- Security - Code runs in isolated containers (LXC, Docker, or Firecracker)
- Flexibility - Use the right language for the task
- Safety - Resource limits prevent runaway processes
- Integration - Pass data between BASIC and other languages
The sandbox prevents:
- File system access outside designated areas
- Network access (unless explicitly enabled)
- System calls and privilege escalation
- Excessive CPU or memory usage
Examples
Basic Python Execution
' Simple calculation
result = RUN PYTHON "print(2 + 2)"
TALK "2 + 2 = " + result
' Data processing
code = "
import json
data = [1, 2, 3, 4, 5]
print(json.dumps({'sum': sum(data), 'avg': sum(data)/len(data)}))
"
stats = RUN PYTHON code
TALK "Statistics: " + stats
JavaScript for JSON Processing
' Parse and transform JSON
jsonData = GET "https://api.example.com/data"
code = "
const data = JSON.parse('" + jsonData + "');
const transformed = data.items.map(i => ({
id: i.id,
name: i.name.toUpperCase()
}));
console.log(JSON.stringify(transformed));
"
result = RUN JAVASCRIPT code
TALK result
Bash for System Tasks
' List files and get disk usage
result = RUN BASH "ls -la /data && df -h"
TALK "System info:\n" + result
Run Script from File
' Execute a Python script from .gbdrive
result = RUN PYTHON WITH FILE "scripts/analyze_data.py"
TALK "Analysis complete: " + result
' Run a bash script
output = RUN BASH WITH FILE "scripts/backup.sh"
PRINT "Backup output: " + output
Data Pipeline
' Fetch data, process with Python, store result
rawData = GET "https://api.example.com/sales"
pythonCode = "
import json
import statistics
data = json.loads('''" + rawData + "''')
sales = [item['amount'] for item in data]
result = {
'total': sum(sales),
'average': statistics.mean(sales),
'median': statistics.median(sales),
'std_dev': statistics.stdev(sales) if len(sales) > 1 else 0
}
print(json.dumps(result))
"
analysis = RUN PYTHON pythonCode
SAVE "sales_analysis.csv", analysis
TALK "Sales analysis saved!"
Machine Learning Inference
' Run ML model for prediction
inputData = #{ features: [1.5, 2.3, 4.1, 0.8] }
code = "
import json
import pickle
# Load pre-trained model (stored in sandbox)
with open('/data/model.pkl', 'rb') as f:
model = pickle.load(f)
input_data = " + JSON(inputData) + "
prediction = model.predict([input_data['features']])[0]
print(json.dumps({'prediction': float(prediction)}))
"
result = RUN PYTHON code
prediction = JSON_PARSE(result)
TALK "Predicted value: " + prediction.prediction
Image Processing
' Process an uploaded image
imagePath = UPLOAD userImage, "uploads/"
code = "
from PIL import Image
import json
img = Image.open('/data/" + imagePath + "')
width, height = img.size
format = img.format
# Resize if too large
if width > 1920:
ratio = 1920 / width
new_size = (1920, int(height * ratio))
img = img.resize(new_size)
img.save('/data/resized_" + imagePath + "')
print(json.dumps({
'original_size': [width, height],
'format': format,
'resized': width > 1920
}))
"
result = RUN PYTHON code
TALK "Image processed: " + result
Multi-Language Pipeline
' Use different languages for different strengths
data = GET "https://api.example.com/raw-data"
' Step 1: Clean data with Python (pandas)
cleanCode = "
import pandas as pd
import json
df = pd.read_json('''" + data + "''')
df = df.dropna()
df = df[df['value'] > 0]
print(df.to_json(orient='records'))
"
cleanedData = RUN PYTHON cleanCode
' Step 2: Transform with JavaScript (fast JSON manipulation)
transformCode = "
const data = JSON.parse('" + cleanedData + "');
const result = data.reduce((acc, item) => {
acc[item.category] = (acc[item.category] || 0) + item.value;
return acc;
}, {});
console.log(JSON.stringify(result));
"
aggregated = RUN JAVASCRIPT transformCode
TALK "Results: " + aggregated
Sandbox Configuration
Runtime Options
The sandbox supports multiple isolation backends:
| Runtime | Security | Performance | Requirements |
|---|---|---|---|
LXC | High | Excellent | LXC installed |
Docker | High | Good | Docker daemon |
Firecracker | Highest | Good | Firecracker binary |
Process | Low | Best | None (fallback) |
Config.csv Options
name,value
sandbox-runtime,lxc
sandbox-timeout,30
sandbox-memory-mb,512
sandbox-cpu-percent,50
sandbox-network,false
sandbox-python-packages,numpy,pandas,pillow
sandbox-allowed-paths,/data,/tmp
| Option | Default | Description |
|---|---|---|
sandbox-runtime | lxc | Isolation backend to use |
sandbox-timeout | 30 | Maximum execution time (seconds) |
sandbox-memory-mb | 512 | Memory limit in MB |
sandbox-cpu-percent | 50 | CPU usage limit |
sandbox-network | false | Allow network access |
sandbox-python-packages | (none) | Pre-installed Python packages |
sandbox-allowed-paths | /data,/tmp | Accessible filesystem paths |
Security Considerations
What’s Blocked
- Direct file system access outside sandbox
- Network connections (unless sandbox-network=true)
- System calls (fork, exec, etc.)
- Environment variable access
- Process spawning
What’s Allowed
- Standard library operations
- File I/O within /data and /tmp
- Computation up to resource limits
- Pre-approved packages
Input Sanitization
' IMPORTANT: Always sanitize user input before embedding in code
userInput = HEAR "Enter some text:"
' Remove potential code injection
safeInput = REPLACE(userInput, "'", "\'")
safeInput = REPLACE(safeInput, '"', '\"')
code = "print('User said: " + safeInput + "')"
result = RUN PYTHON code
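Escaping quotes by hand is brittle. An alternative sketch reuses the JSON helper seen in the ML inference example above to pass the input in structured form; it still assumes the serialized text contains no triple quotes:
' Sketch: serialize the input, then parse it inside the sandbox
userInput = HEAR "Say something:"
code = "
import json
data = json.loads('''" + JSON(userInput) + "''')
print('User said: ' + str(data))
"
result = RUN PYTHON code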
Error Handling
' Handle execution errors
ON ERROR RESUME NEXT
result = RUN PYTHON "
import nonexistent_module
print('hello')
"
IF ERROR THEN
TALK "Code execution failed: " + ERROR_MESSAGE
' Fall back to alternative approach
ELSE
TALK result
END IF
Resource Limits
| Resource | Default | Maximum |
|---|---|---|
| Execution time | 30s | 300s |
| Memory | 512 MB | 4096 MB |
| CPU | 50% | 100% |
| Output size | 1 MB | 10 MB |
| File writes | 10 MB | 100 MB |
Related Keywords
| Keyword | Description |
|---|---|
LLM | AI-generated code execution |
GET | Fetch data for processing |
SAVE | Store processed results |
Best Practices
- Keep code snippets small - Large scripts should use WITH FILE
- Sanitize all inputs - Never trust user data in code strings
- Set appropriate timeouts - Match timeout to expected execution time
- Use the right language - Python for data, JS for JSON, Bash for files
- Handle errors gracefully - Code can fail for many reasons
- Pre-install packages - Don’t pip install in every execution
- Log execution times - Monitor for performance issues
Limitations
- No persistent state between executions
- No GPU access (use dedicated ML endpoints instead)
- No interactive input (stdin)
- No graphical output (use file output instead)
- Package installation not allowed at runtime
See Also
- Code Sandbox Architecture - Technical details
- Security Features - Sandbox security model
- Data Operations - Alternative data processing keywords
USE KB
Activate a knowledge base collection for semantic search.
Syntax
USE KB "collection_name"
USE KB collection_variable
Parameters
| Parameter | Type | Description |
|---|---|---|
collection_name | String | Name of folder inside .gbkb/ |
Description
Loads a knowledge base collection, enabling automatic semantic search for that content. Once active, the LLM searches this collection when answering questions - no explicit search code needed.
Examples
Basic Usage
USE KB "policies"
' Bot now answers questions using policy documents
Multiple Collections
USE KB "products"
USE KB "pricing"
USE KB "support"
' All three collections searchable
Conditional Loading
dept = GET user_department
IF dept = "HR" THEN
USE KB "hr_policies"
ELSE IF dept = "IT" THEN
USE KB "it_docs"
END IF
Dynamic Collection
topic = HEAR "What topic?"
USE KB topic
How It Works
- User asks question
- System searches active collections
- Top matching chunks added to LLM context
- LLM generates informed response
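A minimal sketch of that flow, end to end:
USE KB "policies"
question = HEAR "What would you like to know?"
answer = LLM question
TALK answer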
Collection Structure
bot.gbkb/
├── policies/ → USE KB "policies"
├── products/ → USE KB "products"
└── support/ → USE KB "support"
Supported File Types
PDF, DOCX, TXT, MD, HTML, CSV, JSON
Performance
- Each collection uses ~50MB RAM when active
- First search: 100-200ms
- Subsequent: 20-50ms (cached)
Tip: Load only what’s needed, clear when done.
Common Patterns
Role-Based
SWITCH GET user_role
CASE "manager"
USE KB "management"
CASE "developer"
USE KB "documentation"
CASE "customer"
USE KB "products"
END SWITCH
With Context
USE KB "technical_docs"
SET CONTEXT "You are a technical expert" AS prompt
With Website
USE WEBSITE "https://docs.example.com"
USE KB "documentation"
' Fresh web content now searchable
Error Handling
TRY
USE KB user_requested_kb
CATCH
TALK "That knowledge base doesn't exist"
END TRY
See Also
- CLEAR KB - Deactivate collections
- Knowledge Base System - Technical details
- Semantic Search - How search works
CLEAR KB
Remove knowledge bases from the current session.
Syntax
CLEAR KB "collection_name" ' Remove specific collection
CLEAR KB ALL ' Remove all collections
Parameters
| Parameter | Type | Description |
|---|---|---|
collection_name | String | Name of KB to remove (optional) |
ALL | Keyword | Removes all active KBs |
Description
CLEAR KB removes previously loaded knowledge bases from the session’s context. This frees memory and ensures subsequent queries don’t search unwanted collections.
Examples
Clear Specific KB
USE KB "policies"
USE KB "products"
' Later, remove just policies
CLEAR KB "policies"
' Only products remains active
Clear All KBs
USE KB "hr-docs"
USE KB "it-docs"
USE KB "finance"
CLEAR KB ALL
' All collections removed
Context Switching
' Support flow
USE KB "troubleshooting"
USE KB "known-issues"
' ... handle support ...
' Switch to sales
CLEAR KB ALL
USE KB "products"
USE KB "pricing"
Return Value
Returns true if cleared successfully, false if KB wasn’t loaded.
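A small sketch using that value (assuming the result can be captured with an assignment, which this page does not show explicitly):
cleared = CLEAR KB "policies"
IF cleared = false THEN
PRINT "policies was not active in this session"
END IF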
Best Practices
| Do | Don’t |
|---|---|
| Clear when switching topics | Leave large KBs active unnecessarily |
| Clear before loading new context | Assume collections auto-clear |
| Use ALL for clean slate | Clear one-by-one when ALL works |
Session Scope
- Only affects current session
- Other sessions keep their KBs
- KBs remain in database for future use
- Can reload cleared KBs anytime
See Also
- USE KB - Load knowledge bases
- Knowledge Base System - Technical details
USE WEBSITE Keyword
Syntax
USE WEBSITE "https://example.com"
Parameters
"url"– A valid HTTP or HTTPS URL pointing to a website that should be made available in the conversation context.
Description
USE WEBSITE operates in two distinct modes:
- Preprocessing Mode (Script Compilation): When found in a BASIC script during compilation, it registers the website for background crawling. The crawler service will fetch, extract, and index the website’s content into a vector database collection. This ensures the website content is ready before any conversation starts.
- Runtime Mode (Conversation Execution): During a conversation, USE WEBSITE associates an already-crawled website collection with the current session, making it available for queries via FIND or LLM calls. This behaves similarly to USE KB - it’s a session-scoped association.
If a website hasn’t been registered during preprocessing, the runtime execution will fail with an appropriate error message.
Example
' In script preprocessing, this registers the website for crawling
USE WEBSITE "https://docs.example.com"
' During conversation, this makes the crawled content available
USE WEBSITE "https://docs.example.com"
FIND "deployment procedures"
TALK "I found information about deployment procedures in the documentation."
Preprocessing Behavior
When the script is compiled:
- The URL is validated
- The website is registered in the website_crawls table
- The crawler service picks it up and indexes the content
- Status can be: pending (0), crawled (1), or failed (2)
Runtime Behavior
When executed in a conversation:
- Checks if the website has been crawled
- Associates the website collection with the current session
- Makes the content searchable via FIND and available to LLM
With LLM Integration
USE WEBSITE "https://company.com/policies"
question = HEAR "What would you like to know about our policies?"
FIND question
answer = LLM "Based on the search results, provide a clear answer"
TALK answer
Related Keywords
- CLEAR WEBSITES - Remove all website associations from session
- USE KB - Similar functionality for knowledge base files
- FIND - Search within loaded websites and KBs
- LLM - Process search results with AI
USE TOOL
Syntax
USE TOOL tool-name
Parameters
| Parameter | Type | Description |
|---|---|---|
| tool-name | String | Name of the tool to load (without .bas extension) |
Description
Loads a tool definition and makes it available to the LLM for the current session. Tools extend the bot’s capabilities with specific functions like calculations, API calls, or data processing.
Examples
Basic Usage
' Load weather tool
USE TOOL "weather"
' Now system AI can use weather functions during conversations
TALK "What weather information would you like?"
' System AI automatically uses the tool when needed
Multiple Tools
' Load several tools
USE TOOL "calculator"
USE TOOL "translator"
USE TOOL "date-time"
' System AI has access to all loaded tools during conversations
TALK "I can help you with calculations, translations, and date/time information."
' System AI automatically uses the appropriate tools when needed
Tool Definition Format
Tools are defined as BASIC scripts with PARAM declarations:
' weather.bas
PARAM location AS string LIKE "Tokyo" DESCRIPTION "City name"
DESCRIPTION "Get current weather for a location"
' Tool logic here
temp = GET_TEMPERATURE(location)
conditions = GET_CONDITIONS(location)
result = location + ": " + temp + "°, " + conditions
RETURN result
Notes
- Tools remain active for the entire session
- Use CLEAR TOOLS to remove all loaded tools
- Tool names should be descriptive
- Tools are loaded from the .gbdialog/tools/ directory
CLEAR TOOLS Keyword
Syntax
CLEAR TOOLS
Parameters
None – This keyword takes no arguments.
Description
CLEAR TOOLS removes every tool that has been added to the current conversation session. It clears the list of active tools stored in the session‑tool association table, effectively resetting the tool environment for the dialog. After execution, no previously added tools (via USE TOOL) remain available.
Example
USE TOOL "enrollment"
TALK "Enrollment tool added."
CLEAR TOOLS
TALK "All tools have been cleared from this conversation."
After CLEAR TOOLS runs, the enrollment.bas tool is no longer accessible in the same session.
GET Keyword
The GET keyword retrieves content from a specified source — either a remote URL or a local file stored in the bot’s configured storage system.
It is used to fetch data dynamically during script execution.
Syntax
variable = GET "source"
Parameters
"source" — The location of the content to retrieve. This can be:
- An HTTP/HTTPS URL (e.g., "https://api.example.com/data")
- A relative path to a file stored in the bot’s drive bucket or local storage.
variable — The variable that will receive the fetched content.
Description
GET performs a read operation from the specified source.
If the source is a URL, the bot sends an HTTP GET request and retrieves the response body.
If the source is a file path, the bot reads the file content directly from its configured storage (e.g., drive component or local filesystem).
The command automatically handles text extraction from PDF and DOCX files, converting them to plain UTF‑8 text.
If the request fails or the file cannot be found, an error message is returned.
This keyword is essential for integrating external APIs, reading stored documents, and dynamically loading data into scripts.
Example
' Fetch data from a remote API
RESPONSE = GET "https://api.example.com/users"
PRINT RESPONSE
' Read a local file from the bot’s storage
CONTENT = GET "reports/summary.txt"
TALK CONTENT
Implementation Notes
- Implemented in Rust under src/file/mod.rs and src/web_automation/crawler.rs.
- Uses the reqwest library for HTTP requests with timeout and error handling.
- Validates paths to prevent directory traversal or unsafe access.
- Runs in a separate thread to avoid blocking the main engine.
Related Keywords
- FIND — Searches for data within the current context.
- FORMAT — Formats retrieved data for display.
- PRINT — Outputs data to the console or chat.
Summary
GET is a versatile keyword for retrieving external or stored content.
It enables bots to access APIs, read documents, and integrate dynamic data sources seamlessly within BASIC scripts.
SET
Assign values to variables in BASIC dialogs.
Syntax
SET variable = value
or simply:
variable = value
Parameters
| Parameter | Type | Description |
|---|---|---|
variable | Identifier | Variable name to assign to |
value | Any | Value to assign (string, number, boolean, array, object) |
Description
The SET keyword assigns values to variables within BASIC dialog scripts. Variables are dynamically typed and can hold any type of value. The SET keyword is optional - you can use direct assignment with =.
Variables are scoped to the current dialog execution and persist throughout the conversation session until explicitly changed or the session ends.
Examples
Basic Assignment
SET name = "John Doe"
SET age = 25
SET is_premium = true
SET score = 98.5
Direct Assignment (without SET)
name = "Jane Smith"
count = 0
message = "Welcome!"
Array Assignment
SET colors = ["red", "green", "blue"]
SET numbers = [1, 2, 3, 4, 5]
SET mixed = ["text", 123, true]
Object/Map Assignment
SET user = {
"name": "Alice",
"email": "alice@example.com",
"age": 30,
"active": true
}
Dynamic Values
SET current_time = NOW()
SET user_input = HEAR "What's your name?"
SET calculated = price * quantity * tax_rate
SET formatted = FORMAT("Hello, {0}!", username)
Variable Types
BASIC supports these variable types:
- String: Text values
- Number: Integers and decimals
- Boolean: true/false
- Array: Ordered lists
- Object: Key-value maps
- Null: Empty/undefined
Variable Naming
Valid variable names:
- Start with letter or underscore
- Contain letters, numbers, underscores
- Case-sensitive
- No reserved keywords
Examples:
SET userName = "John"
SET user_name = "John"
SET _private = true
SET value123 = 456
SET firstName = "Jane"
Invalid names:
' These will cause errors
SET 123name = "error" ' Starts with number
SET user-name = "error" ' Contains hyphen
SET if = "error" ' Reserved keyword
Variable Scope
Session Variables
Regular variables exist for the session:
SET session_data = "persists during conversation"
Global Variables
Use special prefixes for broader scope:
SET $global_var = "accessible across dialogs"
SET @bot_var = "bot-level variable"
Temporary Variables
SET _temp = "temporary use"
' Prefix with underscore for temporary/internal use
Type Conversion
Variables automatically convert types when needed:
SET text = "123"
SET number = text + 0 ' Converts to number: 123
SET back_to_text = number + "" ' Converts to string: "123"
SET boolean = number > 100 ' Converts to boolean: true
Operations on Variables
String Operations
SET full_name = first_name + " " + last_name
SET uppercase = UPPER(name)
SET length = LEN(message)
SET substring = MID(text, 1, 5)
Numeric Operations
SET sum = a + b
SET difference = a - b
SET product = a * b
SET quotient = a / b
SET remainder = a MOD b
SET power = a ^ b
Array Operations
SET first = colors[0]
SET last = colors[LEN(colors) - 1]
colors[1] = "yellow" ' Modify array element
SET combined = array1 + array2 ' Concatenate
Object/Map Operations
SET email = user["email"]
SET age = user.age
user["status"] = "active"
user.last_login = NOW()
Conditional Assignment
SET status = IF(score >= 70, "pass", "fail")
SET discount = IF(is_member, 0.2, 0.1)
SET greeting = IF(hour < 12, "Good morning", "Good afternoon")
Common Patterns
Counter Variables
SET counter = 0
FOR i = 1 TO 10
counter = counter + 1
NEXT
Flag Variables
SET is_complete = false
' ... process ...
SET is_complete = true
Accumulator Variables
SET total = 0
FOR EACH item IN cart
total = total + item.price
NEXT
State Variables
SET state = "initial"
' ... logic ...
SET state = "processing"
' ... more logic ...
SET state = "complete"
Best Practices
- Use descriptive names: customer_email instead of e
- Initialize variables: Set initial values before use
- Use consistent naming: camelCase or snake_case
- Avoid global pollution: Use local variables when possible
- Clean up large variables: Set to null when done
- Document complex variables: Add comments
- Validate before use: Check if variable exists
Error Handling
' Check if variable exists
IF EXISTS(user_data) THEN
SET name = user_data.name
ELSE
SET name = "Guest"
END IF
' Safe assignment with default
SET value = GET_VALUE_OR_DEFAULT(config.setting, "default")
Memory Management
' Clear large variables when done
SET big_data = LOAD_FILE("large.json")
' ... use big_data ...
SET big_data = null ' Free memory
Related Keywords
- GET - Retrieve data from external sources
- HEAR - Get user input into variable
- FORMAT - Format values for assignment
- SET BOT MEMORY - Persistent storage
Implementation Notes
Variables are stored in the BASIC engine’s scope map and persist for the duration of the dialog execution. The SET keyword is syntactic sugar - the parser treats both SET x = y and x = y identically.
ON Keyword
Syntax
ON trigger-type OF "table-name"
Parameters
trigger-type – The type of database trigger to listen for. Valid values are: INSERT, UPDATE, DELETE.
"table-name" – The name of the database table to monitor.
Description
ON registers a database trigger for the current session. When the specified event occurs on the given table, the engine records the trigger in the system_automations table, linking it to the session. This enables scripts to react to data changes by executing associated actions (e.g., sending a notification, updating a variable).
The keyword performs the following steps:
- Validates the trigger-type and converts it to the internal TriggerKind enum.
- Constructs a parameter name in the form <table>_<trigger>.rhai (e.g., orders_insert.rhai).
- Inserts a row into system_automations with the trigger kind, target table, and parameter name.
- Returns the number of rows affected (normally 1 on success).
If the trigger type is invalid, the keyword raises a runtime error.
Example
ON INSERT OF "orders"
TALK "A new order was added. Processing..."
After execution, any new row inserted into the orders table will cause the session to be notified, allowing the script to handle the event.
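The other trigger types follow the same pattern; a brief sketch with an illustrative table name (per the naming rule above, this registers products_update.rhai):
ON UPDATE OF "products"
TALK "A product record changed. Refreshing the catalog..."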
Implementation Notes
- The keyword runs synchronously but performs the database insertion on a separate thread to avoid blocking.
- Errors during insertion are logged and returned as runtime errors.
SET SCHEDULE
Schedule a script or task to run at specified times using natural language or cron expressions.
Syntax
SET SCHEDULE expression
Parameters
| Parameter | Type | Description |
|---|---|---|
expression | String | Natural language schedule or cron expression |
Description
The SET SCHEDULE keyword schedules the current script to run automatically at specified intervals. It supports natural language expressions that are automatically converted to cron format, making scheduling intuitive and readable.
Natural Language Patterns
Time Intervals
SET SCHEDULE "every minute"
SET SCHEDULE "every 5 minutes"
SET SCHEDULE "every 15 minutes"
SET SCHEDULE "every 30 minutes"
SET SCHEDULE "every hour"
SET SCHEDULE "every 2 hours"
SET SCHEDULE "every 6 hours"
SET SCHEDULE "every day"
SET SCHEDULE "every week"
SET SCHEDULE "every month"
SET SCHEDULE "every year"
Aliases
SET SCHEDULE "hourly" ' Same as "every hour"
SET SCHEDULE "daily" ' Same as "every day"
SET SCHEDULE "weekly" ' Same as "every week"
SET SCHEDULE "monthly" ' Same as "every month"
SET SCHEDULE "yearly" ' Same as "every year"
Specific Times
SET SCHEDULE "at 9am"
SET SCHEDULE "at 9:30am"
SET SCHEDULE "at 2pm"
SET SCHEDULE "at 14:00"
SET SCHEDULE "at midnight"
SET SCHEDULE "at noon"
Day-Specific
SET SCHEDULE "every monday"
SET SCHEDULE "every friday"
SET SCHEDULE "every sunday"
SET SCHEDULE "every monday at 9am"
SET SCHEDULE "every friday at 5pm"
Weekdays & Weekends
SET SCHEDULE "weekdays" ' Monday-Friday at midnight
SET SCHEDULE "every weekday" ' Same as above
SET SCHEDULE "weekdays at 8am" ' Monday-Friday at 8 AM
SET SCHEDULE "weekends" ' Saturday & Sunday at midnight
SET SCHEDULE "weekends at 10am" ' Saturday & Sunday at 10 AM
Combined Patterns
SET SCHEDULE "every day at 9am"
SET SCHEDULE "every day at 6:30pm"
SET SCHEDULE "every hour from 9 to 17"
Business Hours
SET SCHEDULE "business hours" ' Every hour 9-17, Mon-Fri
SET SCHEDULE "every hour during business hours" ' Same as above
SET SCHEDULE "every 30 minutes during business hours" ' Every 30 min, 9-17, Mon-Fri
SET SCHEDULE "every 15 minutes during business hours"
Raw Cron (Advanced)
You can still use standard cron expressions for maximum flexibility:
SET SCHEDULE "0 * * * *" ' Every hour at minute 0
SET SCHEDULE "*/5 * * * *" ' Every 5 minutes
SET SCHEDULE "0 9-17 * * 1-5" ' Hourly 9AM-5PM on weekdays
SET SCHEDULE "0 0 1 * *" ' First day of each month
Cron Expression Format (Reference)
┌───────────── minute (0-59)
│ ┌───────────── hour (0-23)
│ │ ┌───────────── day of month (1-31)
│ │ │ ┌───────────── month (1-12)
│ │ │ │ ┌───────────── day of week (0-6, Sunday=0)
│ │ │ │ │
* * * * *
Quick Reference Table
| Natural Language | Cron Equivalent | Description |
|---|---|---|
every minute | * * * * * | Runs every minute |
every 5 minutes | */5 * * * * | Every 5 minutes |
every hour | 0 * * * * | Start of every hour |
hourly | 0 * * * * | Same as every hour |
every day | 0 0 * * * | Daily at midnight |
daily | 0 0 * * * | Same as every day |
at 9am | 0 9 * * * | Daily at 9 AM |
at 9:30am | 30 9 * * * | Daily at 9:30 AM |
at noon | 0 12 * * * | Daily at noon |
at midnight | 0 0 * * * | Daily at midnight |
every monday | 0 0 * * 1 | Monday at midnight |
every monday at 9am | 0 9 * * 1 | Monday at 9 AM |
weekdays | 0 0 * * 1-5 | Mon-Fri at midnight |
weekdays at 8am | 0 8 * * 1-5 | Mon-Fri at 8 AM |
weekends | 0 0 * * 0,6 | Sat-Sun at midnight |
every week | 0 0 * * 0 | Sunday at midnight |
weekly | 0 0 * * 0 | Same as every week |
every month | 0 0 1 * * | 1st of month |
monthly | 0 0 1 * * | Same as every month |
business hours | 0 9-17 * * 1-5 | Hourly 9-5 weekdays |
every hour from 9 to 17 | 0 9-17 * * * | Hourly 9 AM - 5 PM |
Examples
Daily Report at 9 AM
SET SCHEDULE "every day at 9am"
data = GET "reports/daily.json"
summary = LLM "Summarize key metrics: " + data
SEND MAIL "team@company.com", "Daily Report", summary
Hourly Data Sync
SET SCHEDULE "every hour"
fresh_data = GET "https://api.example.com/data"
SET BOT MEMORY "cached_data", fresh_data
PRINT "Data refreshed at " + NOW()
Every 15 Minutes Monitoring
SET SCHEDULE "every 15 minutes"
status = GET "https://api.example.com/health"
IF status.healthy = false THEN
SEND MAIL "ops@company.com", "Alert: Service Down", status.message
END IF
Weekly Newsletter (Monday 10 AM)
SET SCHEDULE "every monday at 10am"
subscribers = FIND "subscribers", "active=true"
content = LLM "Generate weekly newsletter with latest updates"
FOR EACH email IN subscribers
SEND MAIL email.address, "Weekly Update", content
NEXT
Business Hours Support Check
SET SCHEDULE "every 30 minutes during business hours"
tickets = FIND "support_tickets", "status=open AND priority=high"
IF LEN(tickets) > 5 THEN
TALK TO "support-manager", "High priority ticket queue: " + LEN(tickets) + " tickets waiting"
END IF
Weekend Backup
SET SCHEDULE "weekends at 3am"
PRINT "Starting weekend backup..."
result = POST "https://backup.service/run", { "type": "full" }
SET BOT MEMORY "last_backup", NOW()
SEND MAIL "admin@company.com", "Backup Complete", result
End of Month Report
SET SCHEDULE "monthly"
' Runs on 1st of each month at midnight
month_data = AGGREGATE "sales", "SUM(amount)", "month=" + MONTH(DATEADD("month", -1, NOW()))
report = LLM "Generate monthly sales report for: " + month_data
SEND MAIL "finance@company.com", "Monthly Sales Report", report
Best Practices
- Use Natural Language: Prefer readable expressions like "every day at 9am" over cron syntax
- Stagger Tasks: Avoid scheduling all tasks at the same time
' Good: Different times
SET SCHEDULE "every day at 2am" ' Cleanup
SET SCHEDULE "every day at 3am" ' Backup
SET SCHEDULE "every day at 4am" ' Reports
- Consider Time Zones: Schedule times are in the server’s local time
- Error Handling: Always include error recovery
SET SCHEDULE "every hour"
TRY
PROCESS_DATA()
CATCH
PRINT "Schedule failed: " + ERROR_MESSAGE
SEND MAIL "admin@example.com", "Schedule Error", ERROR_DETAILS
END TRY
- Idempotency: Make scheduled tasks safe to re-run
last_run = GET BOT MEMORY "last_process_time"
IF DATEDIFF("minute", last_run, NOW()) > 55 THEN
PROCESS()
SET BOT MEMORY "last_process_time", NOW()
END IF
Cancel Schedule
Schedules are automatically canceled when SET SCHEDULE is removed from the .bas file. Simply delete or comment out the line:
' SET SCHEDULE "every hour" ' Commented out = disabled
Limitations
- Maximum 100 scheduled tasks per bot
- Minimum interval: 1 minute
- Scripts timeout after 5 minutes by default
- Time zone is server’s local time
Monitoring
Scheduled tasks are logged automatically:
- Execution start/end times
- Success/failure status
- Error messages if any
- Performance metrics
Related Keywords
- GET BOT MEMORY - Store schedule state
- SET BOT MEMORY - Persist data between runs
- LLM - Process data in scheduled tasks
- SEND MAIL - Send scheduled reports
- GET - Fetch data for processing
Implementation
Located in src/basic/keywords/set_schedule.rs
The implementation:
- Uses a fast rule-based natural language parser (no LLM required)
- Falls back to raw cron if input is already in cron format
- Validates expressions before saving
- Integrates with system scheduler
- Persists schedules in database
- Handles concurrent execution
- Provides retry logic for failures
CREATE SITE Keyword
Syntax
CREATE SITE "alias", "template-dir", "prompt"
Parameters
"alias"– Name of the new site (used as a folder name under the configured site path)."template-dir"– Relative path to a directory containing HTML template files that will be combined."prompt"– Text prompt sent to the LLM to generate the final site content.
Description
CREATE SITE generates a new static website based on existing HTML templates and an LLM‑generated prompt. The keyword performs the following steps:
- Creates a directory for the new site at
<site_path>/<alias>. - Reads all
.htmlfiles from<site_path>/<template-dir>and concatenates their contents, separating each with a clear delimiter. - Constructs a prompt that includes the combined template content and the user‑provided
prompt. - Sends the prompt to the configured LLM provider (
utils::call_llm) and receives generated HTML. - Writes the LLM output to
<site_path>/<alias>/index.html.
The resulting site can be served directly from the site_path directory. Errors during directory creation, file reading, or LLM generation are logged and returned as error messages.
Example
CREATE SITE "my_blog", "templates/blog", "Generate a modern blog homepage for a tech writer."
TALK "Site created at /my_blog. Access it via the web server."
After execution, a folder my_blog is created with an index.html containing the LLM‑generated page, ready to be served.
CREATE DRAFT Keyword
Syntax
CREATE DRAFT "to-address", "subject", "reply-text"
Parameters
"to-address"– Email address of the recipient."subject"– Subject line for the draft email."reply-text"– Body content for the draft. If a previous email exists in the user’s mailbox to the same address, its content is appended after a separator.
Description
CREATE DRAFT composes an email draft and saves it to the user’s mailbox. It first checks whether a prior email has been sent to the same recipient using the GET_LATEST_SENT_TO helper. If such an email exists, its body (converted to HTML line breaks) is appended to the new reply text, separated by <br><hr><br>. The combined content is then stored as a draft via the email service configured in the application (save_email_draft). The keyword returns a success message or an error string.
Example
CREATE DRAFT "john.doe@example.com", "Project Update", "Here is the latest status..."
TALK "Draft created and saved."
If an earlier email to john.doe@example.com exists, the draft will contain the new reply followed by the previous email content, allowing the user to continue the conversation seamlessly.
CREATE TASK
Create and assign tasks within the task management system.
Syntax
CREATE TASK title, description, assignee, due_date, priority
Parameters
| Parameter | Type | Description |
|---|---|---|
title | String | Task title/name |
description | String | Detailed task description |
assignee | String | Email or user ID of the assignee |
due_date | String | Due date in format “YYYY-MM-DD” or relative like “tomorrow”, “next week” |
priority | String | Task priority: “low”, “medium”, “high”, “urgent” |
Description
The CREATE TASK keyword creates tasks in the task engine system with automatic assignment to users or groups, due date tracking and reminders, priority-based organization, integration with the calendar system, email notifications to assignees, and progress tracking capabilities.
Examples
Basic Task Creation
CREATE TASK "Review proposal", "Review and provide feedback on Q4 proposal", "john@example.com", "2024-01-15", "high"
Task with Current User
user_email = GET "user.email"
CREATE TASK "Follow up", "Contact customer about renewal", user_email, "tomorrow", "medium"
Bulk Task Creation
team = ["alice@example.com", "bob@example.com", "carol@example.com"]
FOR EACH member IN team
CREATE TASK "Complete training", "Finish security awareness training", member, "next week", "medium"
NEXT
Task from User Input
task_info = HEAR "What task should I create?"
CREATE TASK task_info, "User requested task", "support@example.com", "today", "high"
TALK "Task created and assigned to support team"
Return Value
The keyword returns a task object containing the task_id as a unique task identifier, status indicating the task state (such as “created”, “assigned”, “in_progress”, or “completed”), created_at with the creation timestamp, url providing a link to the task in the web interface, and reminder_set indicating whether a reminder was configured.
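A small sketch reading those fields from the returned object (field names as listed above):
task = CREATE TASK "Review proposal", "Review Q4 proposal", "john@example.com", "tomorrow", "high"
TALK "Created task " + task.task_id + " (" + task.status + ")"
TALK "Track it here: " + task.url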
Task Statuses
Tasks progress through a defined lifecycle. The created status indicates initial creation, followed by assigned when the task has been assigned to a user. Once work begins, the status changes to in_progress. If the task is waiting on a dependency, it enters the blocked state. When finished, it reaches completed, or alternatively cancelled if the task was terminated without completion.
Integration Points
Calendar Integration
Tasks automatically appear in the assignee’s calendar when a due date is specified, calendar integration is enabled, and the user has calendar permissions.
Email Notifications
The system sends notifications for task assignment, due date reminders, status changes, and when comments are added.
Task Dependencies
Tasks can be linked together to create parent-child relationships:
parent_task = CREATE TASK "Project", "Main project", "pm@example.com", "next month", "high"
subtask = CREATE TASK "Research", "Initial research", "analyst@example.com", "next week", "medium"
LINK_TASKS parent_task.task_id, subtask.task_id
Priority Levels
| Priority | Description | SLA |
|---|---|---|
urgent | Immediate attention required | 4 hours |
high | Important, time-sensitive | 1 day |
medium | Standard priority | 3 days |
low | Non-urgent | 1 week |
Date Formats
The keyword supports multiple date formats. Absolute dates can be specified as “2024-01-15” or “01/15/2024”. Relative dates include “today”, “tomorrow”, “next week”, and “in 3 days”. Natural language formats like “Monday”, “next Friday”, and “end of month” are also supported.
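A quick sketch mixing the three styles (titles and assignees are illustrative):
CREATE TASK "Ship build", "Publish release artifacts", "dev@example.com", "2024-01-15", "high"
CREATE TASK "Check logs", "Review overnight errors", "ops@example.com", "in 3 days", "medium"
CREATE TASK "Plan sprint", "Prepare the backlog", "pm@example.com", "next Friday", "low"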
Error Handling
The keyword validates that the assignee exists in the system, checks that the date is in the future, verifies the priority is valid, returns an error if task creation fails, and handles permission issues gracefully.
Permissions
To create tasks, the user must have task creation permission, project member status, admin privileges, or delegation rights from the assignee.
Best Practices
Use clear, action-oriented titles that describe what needs to be done. Include detailed descriptions with acceptance criteria so the assignee understands the requirements. Set realistic deadlines that can actually be achieved. Reserve high and urgent priorities for tasks that truly warrant them rather than marking everything as urgent. Verify the assignee can handle the task before assignment. Follow up periodically to check task status and provide assistance if needed.
Advanced Usage
Task Templates
template = GET_TASK_TEMPLATE("customer_onboarding")
CREATE TASK template.title, template.description, assigned_user, due_date, template.priority
Conditional Creation
IF urgency = "high" AND department = "support" THEN
CREATE TASK "Urgent Support", issue_description, "support-lead@example.com", "today", "urgent"
ELSE
CREATE TASK "Support Request", issue_description, "support@example.com", "tomorrow", "medium"
END IF
Task with Attachments
task = CREATE TASK "Review document", "Please review attached", reviewer, deadline, "high"
' Note: Use document sharing systems for attachments
Related Keywords
The BOOK keyword schedules meetings instead of tasks. Use SET SCHEDULE to create recurring tasks. The SEND MAIL keyword sends task notifications, and ADD MEMBER adds users to task groups.
Database Tables
Tasks are stored across several database tables. The tasks table holds main task records. User assignments are tracked in task_assignments. Discussions happen in task_comments. Related files are referenced in task_attachments. The task_history table records status changes over time.
Implementation
The CREATE TASK keyword is implemented in src/basic/keywords/create_task.rs. It integrates with the task engine module for task management, the calendar engine for scheduling, the email module for notifications, and the storage module for attachments.
PRINT Keyword
Debug output keyword. PRINT is an alias for TALK - both send messages to the current conversation.
Note: PRINT and TALK are equivalent. Use TALK for user-facing messages and PRINT for debug clarity.
Syntax
PRINT expression
Parameters
| Parameter | Type | Description |
|---|---|---|
| expression | Any | The value to output (string, number, or any expression) |
Description
PRINT outputs a message to the current conversation. It is functionally identical to TALK but conventionally used for debugging and logging purposes.
For sending messages to specific recipients on other channels, use:
- TALK TO - Send to a specific recipient (WhatsApp, Teams, etc.)
- SEND FILE TO - Send files to a specific recipient
Examples
Basic Debug Output
x = 10
y = 20
PRINT "Debug: x = " + x + ", y = " + y
result = x + y
PRINT "Result: " + result
Logging During Processing
WEBHOOK "process-order"
order_id = body.order_id
PRINT "Processing order: " + order_id
' Process the order
customer = FIND "customers", "id=" + body.customer_id
PRINT "Found customer: " + customer.name
' More processing...
PRINT "Order processing complete"
result_status = "ok"
Variable Inspection
data = GET "https://api.example.com/data"
PRINT "API Response: " + data
items = FIND "products", "stock < 10"
PRINT "Low stock items count: " + UBOUND(items)
Equivalent Keywords
| Keyword | Description |
|---|---|
TALK | Same as PRINT - send message to conversation |
TALK TO | Send message to specific recipient |
Example: PRINT vs TALK TO
' Debug output (goes to current conversation)
PRINT "Starting order notification..."
' User-facing message to specific WhatsApp number
TALK TO "whatsapp:+5511999887766", "Your order is confirmed!"
' More debug output
PRINT "Notification sent successfully"
Best Practices
- Use TALK for production - More readable for user-facing messages
- Use PRINT for debugging - Makes it clear this is debug/log output
- Use TALK TO for channels - When sending to specific recipients
' Good: Clear intent
PRINT "Debug: Processing started"
TALK "Welcome! How can I help you?"
TALK TO "whatsapp:" + phone, "Your order is ready!"
' Also valid but less clear:
PRINT "Welcome! How can I help you?" ' Works but TALK is clearer
See Also
- TALK - Primary message output keyword
- TALK TO - Send to specific recipients
- SEND FILE TO - Send files to recipients
WAIT Keyword
The WAIT keyword pauses script execution for a specified duration.
It is used to introduce delays between actions, synchronize processes, or control timing in automation flows.
Syntax
WAIT seconds
Parameters
seconds — The number of seconds to pause execution.
Can be an integer or floating-point value.
The maximum allowed duration is 300 seconds (5 minutes).
Description
WAIT suspends the script for the specified duration.
During this time, the bot does not process other commands or messages.
This keyword is useful for pacing interactions, waiting for external events, or throttling API calls.
If the provided value is invalid (negative or non-numeric), the command raises a runtime error.
The system automatically caps the wait time to prevent excessively long pauses.
Example
' Wait for 2 seconds before continuing
TALK "Processing your request..."
WAIT 2
TALK "Done!"
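The description mentions throttling API calls; a sketch that paces a polling loop (endpoint is illustrative):
' Poll an API, pausing between requests to respect rate limits
FOR i = 1 TO 5
status = GET "https://api.example.com/job/status"
PRINT "Poll " + i + ": " + status
WAIT 2
NEXT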
Implementation Notes
- Implemented in Rust under src/basic/mod.rs and src/shared/utils.rs.
- Uses std::thread::sleep with a Duration derived from the provided seconds.
- During the wait, no other BASIC commands are executed.
Related Keywords
- SET SCHEDULE — Defines scheduled tasks for automation.
- PRINT — Outputs messages or debugging information.
- TALK — Sends messages to the user.
- HEAR — Receives user input after a delay.
Summary
WAIT is a simple but essential keyword for controlling timing in BASIC scripts.
It allows developers to create natural pauses, synchronize workflows, and manage execution pacing effectively.
FORMAT Keyword
The FORMAT keyword formats numbers, dates, and text for display. Use it when you need a quick, readable representation without writing custom code.
Syntax
RESULT = FORMAT(VALUE, PATTERN)
BASIC EXAMPLE
NUMBER = 1234.56
TEXT = "John"
DATE = "2024-03-15 14:30:00"
TALK FORMAT(NUMBER, "n") ' 1234.56
TALK FORMAT(TEXT, "Hello @!") ' Hello John!
TALK FORMAT(DATE, "dd/MM/yyyy") ' 15/03/2024
- VALUE – any number, date string (YYYY-MM-DD HH:MM:SS), or text.
- PATTERN – a short format string (see tables below).
Quick Reference
Numeric Patterns
| Pattern | Example | Output |
|---|---|---|
| n | FORMAT(1234.5, "n") | 1234.50 |
| F | FORMAT(1234.5, "F") | 1234.50 |
| f | FORMAT(1234.5, "f") | 1234 |
| 0% | FORMAT(0.85, "0%") | 85% |
| C2[en] | FORMAT(1234.5, "C2[en]") | $1,234.50 |
| C2[pt] | FORMAT(1234.5, "C2[pt]") | R$ 1.234,50 |
Date Patterns
| Code | Meaning | Example |
|---|---|---|
| yyyy | 4-digit year | 2024 |
| yy | 2-digit year | 24 |
| MM | month (01-12) | 03 |
| M | month (1-12) | 3 |
| dd | day (01-31) | 05 |
| d | day (1-31) | 5 |
| HH | 24-hour (00-23) | 14 |
| hh | 12-hour (01-12) | 02 |
| mm | minutes (00-59) | 05 |
| ss | seconds (00-59) | 09 |
| tt | AM/PM | PM |
Example
DATE = "2024-03-15 14:30:25"
TALK FORMAT(DATE, "dd/MM/yyyy HH:mm") ' 15/03/2024 14:30
Text Patterns
| Placeholder | Effect |
|---|---|
| @ | Insert original text |
| ! | Upper-case |
| & | Lower-case |
Example
NAME = "Maria"
TALK FORMAT(NAME, "Hello, !") ' Hello, MARIA
Practical Tips
- Test each pattern in isolation before combining.
- Locale codes (en, pt, fr, …) go inside C2[…] for currency.
- Dates must follow YYYY-MM-DD HH:MM:SS; otherwise formatting fails.
- Combine patterns by nesting calls:
TALK FORMAT(FORMAT(VALUE, "C2[en]"), "!") ' $1,234.50 (the "!" upper-case is a no-op on digits)
Common Pitfalls
- Using a date pattern on a non‑date string → returns the original string.
- Forgetting locale brackets (C2[en]) → defaults to system locale.
- Mixing placeholders (@, !, &) in the same pattern – only the last one applies.
Use FORMAT whenever you need a clean, user‑friendly output without extra code. It keeps scripts short and readable.
FIRST Keyword
Syntax
FIRST "text"
Parameters
"text"– A string expression from which the first word will be extracted.
Description
FIRST returns the first whitespace‑separated token of the provided string. If the string is empty or contains only whitespace, the result is an empty string. The keyword is useful for extracting a leading command or identifier from user input.
Example
SET command = FIRST user_input
TALK "You entered the command: " + command
If user_input is "search books about Rust", FIRST returns "search".
Implementation Notes
- The keyword splits the string on any whitespace (spaces, tabs, newlines) and returns the first element.
- It does not modify the original string.
- The keyword name is case-insensitive; the returned word preserves its original casing.
LAST Keyword
Syntax
LAST "text"
Parameters
"text"– A string expression from which the last word will be extracted.
Description
LAST returns the final whitespace‑separated token of the provided string. If the string is empty or contains only whitespace, the result is an empty string. This keyword is useful for retrieving the trailing part of a user’s input or any delimited text.
Example
SET command = LAST user_input
TALK "You entered the last word: " + command
If user_input is "search books about Rust", LAST returns "Rust".
Implementation Notes
- The keyword splits the string on any whitespace (spaces, tabs, newlines) and returns the last element.
- It does not modify the original string.
- The keyword name is case-insensitive; the returned word preserves its original casing.
FOR EACH Keyword
Syntax
FOR EACH $var IN $collection
// block of statements
NEXT $var
Parameters
- $var – Identifier that will hold each element of the collection during iteration.
- $collection – An array or iterable expression whose items will be traversed.
Description
FOR EACH iterates over every element of the supplied collection, assigning the current element to the loop variable $var for the duration of the block. The block is executed once per element. After the loop finishes, execution continues after the matching NEXT $var statement.
If the collection is not an array, the keyword raises a runtime error indicating the expected type.
Example
SET numbers = [1, 2, 3, 4, 5]
FOR EACH n IN numbers
TALK "Number: " + n
NEXT n
TALK "All numbers processed."
The script outputs each number in the list sequentially and then prints a final message.
Control Flow
- EXIT FOR can be used inside the block to break out of the loop early.
- Nested FOR EACH loops are supported; each must have a distinct loop variable (see the sketch below).
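A minimal sketch of nested loops with an early exit (the data is illustrative):
teams = ["alpha", "beta"]
members = ["ana", "bruno", "stop", "carla"]
FOR EACH team IN teams
FOR EACH member IN members
IF member = "stop" THEN
EXIT FOR ' Leaves only the inner loop
END IF
TALK team + ": " + member
NEXT member
NEXT team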
Implementation Notes
- The keyword evaluates the collection expression once before entering the loop.
- The loop variable is scoped to the block; it does not affect variables outside the loop.
EXIT FOR Keyword
Syntax
EXIT FOR
Parameters
None – This keyword takes no arguments.
Description
EXIT FOR terminates the execution of the nearest enclosing FOR EACH … IN … NEXT loop prematurely. When the interpreter encounters EXIT FOR, it stops iterating over the collection and continues execution after the NEXT statement that matches the loop variable.
Example
FOR EACH item IN my_list
IF item = "stop" THEN
EXIT FOR
ENDIF
TALK item
NEXT item
TALK "Loop ended."
In this script, the loop stops as soon as item equals "stop", and the subsequent TALK "Loop ended." is executed.
Usage Notes
- EXIT FOR can only be used inside a FOR EACH … IN … NEXT block.
- It does not accept any parameters; it simply signals an early exit.
- The keyword is case-insensitive; exit for works the same way.
SEND MAIL
Send email messages with optional attachments and HTML formatting.
Syntax
SEND MAIL to, subject, body
Parameters
| Parameter | Type | Description |
|---|---|---|
| to | String | Recipient email address(es), comma-separated for multiple |
| subject | String | Email subject line |
| body | String | Email body (plain text or HTML) |
Description
The SEND MAIL keyword sends emails using the SMTP configuration defined in config.csv. It supports:
- Plain text and HTML emails
- Multiple recipients
- CC and BCC (via extended syntax)
- File attachments
- Email templates
- Delivery tracking
Configuration
Email settings in config.csv:
name,value
email-from,noreply@example.com
email-server,smtp.example.com
email-port,587
email-user,smtp-user@example.com
email-pass,smtp-password
Examples
Simple Text Email
SEND MAIL "user@example.com", "Welcome!", "Thank you for signing up."
Multiple Recipients
recipients = "john@example.com, jane@example.com, bob@example.com"
SEND MAIL recipients, "Team Update", "Meeting tomorrow at 3 PM"
HTML Email
body = "<h1>Welcome!</h1><p>Thank you for joining us.</p>"
body = body + "<ul><li>Step 1: Complete profile</li>"
body = body + "<li>Step 2: Verify email</li></ul>"
SEND MAIL "user@example.com", "Getting Started", body
Dynamic Content
order_id = GET "order_id"
subject = "Order #" + order_id + " Confirmation"
body = "Hello " + user_name + ", your order has been confirmed."
SEND MAIL user_email, subject, body
With Error Handling
email = HEAR "Enter your email address:"
IF email CONTAINS "@" AND email CONTAINS "." THEN
SEND MAIL email, "Verification Code", "Your code is: 123456"
TALK "Email sent successfully!"
ELSE
TALK "Invalid email address"
END IF
Notification System
' Send notification to admin when issue detected
admin_email = GET BOT MEMORY "admin_email"
IF error_detected THEN
subject = "Bot Alert"
body = "Issue detected at " + NOW()
SEND MAIL admin_email, subject, body
END IF
Bulk Email with Personalization
subscribers = FIND "subscribers", "newsletter=true"
FOR EACH subscriber IN subscribers
body = "Dear " + subscriber.name + ", here's your weekly update..."
SEND MAIL subscriber.email, "Weekly Newsletter", body
WAIT 1 ' Rate limiting
NEXT
Extended Syntax
With CC and BCC
' Using structured format
email_data = {
"to": "primary@example.com",
"cc": "copy@example.com",
"bcc": "hidden@example.com",
"subject": "Report",
"body": "Please review attached report."
}
SEND MAIL email_data
With Attachments
' Attach file from drive
email_data = {
"to": "user@example.com",
"subject": "Invoice",
"body": "Please find invoice attached.",
"attachments": ["invoice.pdf"]
}
SEND MAIL email_data
Using Templates
' Load and fill template
template = LOAD_TEMPLATE "welcome_email"
template = REPLACE(template, "{{name}}", user_name)
template = REPLACE(template, "{{date}}", TODAY())
SEND MAIL user_email, "Welcome!", template
Email Validation
Always validate email addresses before sending:
email = HEAR "Your email:"
IF email CONTAINS "@" AND email CONTAINS "." THEN
parts = SPLIT(email, "@")
IF LENGTH(parts) = 2 THEN
domain = parts[1]
IF domain CONTAINS "." THEN
SEND MAIL email, "Test", "This is a test"
ELSE
TALK "Please enter a valid email"
END IF
ELSE
TALK "Please enter a valid email"
END IF
ELSE
TALK "Please enter a valid email"
END IF
Delivery Status
Check email delivery status:
status = SEND MAIL "user@example.com", "Test", "Message"
IF status = "sent" THEN
TALK "Email delivered successfully"
ELSE IF status = "queued" THEN
TALK "Email queued for delivery"
ELSE
TALK "Email delivery failed: " + status
END IF
Rate Limiting
Implement rate limiting to avoid spam:
last_sent = GET BOT MEMORY "last_email_time"
IF TIME_DIFF(NOW(), last_sent) < 60 THEN
TALK "Please wait before sending another email"
ELSE
SEND MAIL email, subject, body
SET BOT MEMORY "last_email_time", NOW()
END IF
Error Handling
Common error scenarios:
' Check email format before sending
IF recipient CONTAINS "@" AND recipient CONTAINS "." THEN
status = SEND MAIL recipient, subject, body
IF status = "sent" THEN
TALK "Email sent successfully"
ELSE IF status = "smtp_error" THEN
TALK "Email server is unavailable"
ELSE IF status = "auth_error" THEN
TALK "Email authentication failed"
LOG "Check SMTP credentials in config.csv"
ELSE
TALK "Failed to send email: " + status
END IF
ELSE
TALK "The email address is invalid"
END IF
Best Practices
- Validate Recipients: Always validate email addresses
- Rate Limit: Implement delays for bulk emails
- Handle Failures: Check the returned status and handle errors explicitly
- Log Attempts: Keep records of sent emails
- Test Configuration: Verify SMTP settings before production
- Use Templates: Maintain consistent formatting
- Respect Privacy: Use BCC for multiple recipients
- Include Unsubscribe: Add opt-out links for marketing emails
Security Considerations
- Never log email passwords
- Use environment variables for sensitive data
- Implement SPF, DKIM, and DMARC for deliverability
- Sanitize user input in email bodies (see the sketch below)
- Use TLS/SSL for SMTP connections
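A minimal sanitization sketch using REPLACE to neutralize angle brackets before embedding user text in an HTML email. This covers only the most basic case and is a starting point, not a complete sanitizer:
user_text = HEAR "Your message:"
' Escape characters that would otherwise be interpreted as HTML
safe_text = REPLACE(user_text, "<", "&lt;")
safe_text = REPLACE(safe_text, ">", "&gt;")
body = "<p>Customer wrote:</p><p>" + safe_text + "</p>"
SEND MAIL "support@example.com", "New message", body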
Troubleshooting
Email Not Sending
- Check SMTP configuration in config.csv
- Verify firewall allows port 587/465
- Test credentials manually
- Check email server logs
Authentication Failed
Check SMTP configuration:
- Verify credentials in config.csv
- Ensure SMTP server allows your connection
- Check if port 587/465 is open
- Verify TLS/SSL settings match server requirements
Emails Going to Spam
- Set proper FROM address
- Include text version with HTML
- Avoid spam trigger words
- Configure domain authentication (SPF/DKIM)
Related Keywords
- GET - Retrieve user data for emails
- FORMAT - Format email content
- WAIT - Rate limiting between emails
- SET SCHEDULE - Schedule email sending
Implementation
Located in src/basic/keywords/send_mail.rs
The implementation uses:
- lettre crate for SMTP
- Async email sending
- Connection pooling for performance
- Retry logic for failed attempts
- HTML sanitization for security
FIND
Search and retrieve data from database tables using filter criteria.
Syntax
result = FIND "table_name", "filter_criteria"
Parameters
- table_name - The name of the database table to search
- filter_criteria - Filter expression in the format "field=value"
Description
FIND searches database tables for records matching specified criteria. It returns an array of matching records that can be iterated over using FOR EACH loops.
Examples
Basic Search
' Find records with specific action
items = FIND "gb.rob", "ACTION=EMUL"
FOR EACH item IN items
TALK "Found: " + item.company
NEXT
Single Field Filter
' Find pending orders
orders = FIND "orders", "status=pending"
FOR EACH order IN orders
TALK "Order #" + order.id + " is pending"
NEXT
Working with Results
' Find and process customer records
customers = FIND "customers", "city=Seattle"
FOR EACH customer IN customers
TALK customer.name + " from " + customer.address
' Access fields with dot notation
email = customer.email
phone = customer.phone
' Update related data
SET "contacts", "id=" + customer.id, "last_contacted=" + NOW()
NEXT
Return Value
FIND returns an array of records from the specified table. Each record is an object with fields accessible via dot notation.
- Returns empty array if no matches found
- Returns array of matching records if successful
- Each record contains all columns from the table
Field Access
Access fields in returned records using dot notation:
items = FIND "products", "category=electronics"
FOR EACH item IN items
' Access fields directly
TALK item.name
TALK item.price
TALK item.description
' Use null coalescing for optional fields
website = item.website ?? ""
' Check field existence
IF item.discount != "" THEN
TALK "On sale: " + item.discount + "% off"
END IF
NEXT
Common Patterns
Process All Matching Records
tasks = FIND "tasks", "status=open"
FOR EACH task IN tasks
' Process each task
TALK "Processing task: " + task.title
' Update task status
SET "tasks", "id=" + task.id, "status=in_progress"
NEXT
Check If Records Exist
users = FIND "users", "email=john@example.com"
IF LENGTH(users) > 0 THEN
TALK "User exists"
ELSE
TALK "User not found"
END IF
Data Enrichment
companies = FIND "companies", "needs_update=true"
FOR EACH company IN companies
' Get additional data
website = company.website ?? ""
IF website == "" THEN
' Look up website
website = WEBSITE OF company.name
' Update record
SET "companies", "id=" + company.id, "website=" + website
END IF
' Fetch and process website data
page = GET website
' Process page content...
NEXT
Batch Processing with Delays
emails = FIND "email_queue", "sent=false"
FOR EACH email IN emails
' Send email
SEND MAIL email.to, email.subject, email.body
' Mark as sent
SET "email_queue", "id=" + email.id, "sent=true"
' Rate limiting
WAIT 1 ' Pause one second between sends (WAIT takes seconds)
NEXT
Filter Expressions
The filter parameter uses simple equality expressions:
"field=value"- Match exact value- Multiple conditions must be handled in BASIC code after retrieval
' Get all records then filter in BASIC
all_orders = FIND "orders", "status=active"
FOR EACH order IN all_orders
' Additional filtering in code
IF order.amount > 1000 AND order.priority == "high" THEN
' Process high-value orders
TALK "Priority order: " + order.id
END IF
NEXT
Working with Different Data Types
products = FIND "products", "active=true"
FOR EACH product IN products
' String fields
name = product.name
' Numeric fields
price = product.price
quantity = product.quantity
' Date fields
created = product.created_at
' Boolean-like fields (stored as strings)
IF product.featured == "true" THEN
TALK "Featured: " + name
END IF
NEXT
Error Handling
' Handle potential errors
items = FIND "inventory", "warehouse=main"
IF items == null THEN
TALK "Error accessing inventory data"
ELSE IF LENGTH(items) == 0 THEN
TALK "No items found in main warehouse"
ELSE
TALK "Found " + LENGTH(items) + " items"
' Process items...
END IF
Performance Considerations
- Limit Results: The system automatically limits to 10 results for safety
- Use Specific Filters: More specific filters reduce processing time
- Avoid Full Table Scans: Always provide a filter criterion
- Process in Batches: For large datasets, process in chunks
' Process records in batches
batch = FIND "large_table", "processed=false"
count = 0
FOR EACH record IN batch
' Process record
SET "large_table", "id=" + record.id, "processed=true"
count = count + 1
IF count >= 10 THEN
EXIT FOR ' Process max 10 at a time
END IF
NEXT
Integration with Other Keywords
With SET for Updates
users = FIND "users", "newsletter=true"
FOR EACH user IN users
' Update last_notified field
SET "users", "id=" + user.id, "last_notified=" + NOW()
NEXT
With LLM for Processing
articles = FIND "articles", "needs_summary=true"
FOR EACH article IN articles
summary = LLM "Summarize: " + article.content
SET "articles", "id=" + article.id, "summary=" + summary
NEXT
With CREATE SITE
companies = FIND "companies", "needs_site=true"
FOR EACH company IN companies
alias = LLM "Create URL alias for: " + company.name
CREATE SITE alias, "template", "Create site for " + company.name
SET "companies", "id=" + company.id, "site_url=" + alias
NEXT
Limitations
- Maximum 10 records returned per query (system limit)
- Filter supports simple equality only
- Complex queries require post-processing in BASIC
- Table must exist in the database
- User must have read permissions on the table
Best Practices
✅ Always check results - Verify FIND returned data before processing
✅ Use specific filters - Reduce result set size with precise criteria
✅ Handle empty results - Check LENGTH before iterating
✅ Update as you go - Mark records as processed to avoid reprocessing
❌ Don’t assume order - Results may not be sorted
❌ Don’t ignore limits - Remember the 10-record limit
❌ Don’t use without filter - Always provide filter criteria
See Also
- SET - Update database records
- GET - Retrieve single values
- FOR EACH - Iterate over results
- LLM - Process found data with AI
INSTR
The INSTR keyword returns the position of a substring within a string, following classic BASIC semantics.
Syntax
position = INSTR(string, substring)
position = INSTR(start, string, substring)
Parameters
| Parameter | Type | Description |
|---|---|---|
| start | number | Optional. Starting position for the search (1-based) |
| string | string | The string to search in |
| substring | string | The substring to find |
Return Value
- Returns the 1-based position of the first occurrence of substring in string
- Returns 0 if the substring is not found
- Returns 0 if either string is empty
Description
INSTR searches for the first occurrence of a substring within another string. Unlike zero-based indexing in many modern languages, INSTR uses 1-based positioning consistent with traditional BASIC.
When the optional start parameter is provided, the search begins at that position rather than at the beginning of the string.
Examples
Basic Usage
text = "Hello, General Bots!"
pos = INSTR(text, "General")
TALK "Found 'General' at position: " + pos
' Output: Found 'General' at position: 8
Checking if Substring Exists
email = HEAR "Enter your email:"
IF INSTR(email, "@") > 0 THEN
TALK "Valid email format"
ELSE
TALK "Email must contain @"
END IF
Starting Search at Position
text = "one two one three one"
first = INSTR(text, "one") ' Returns 1
second = INSTR(5, text, "one") ' Returns 9 (starts after first "one")
third = INSTR(10, text, "one") ' Returns 19
Extracting Data
data = "Name: John Smith"
colon_pos = INSTR(data, ":")
IF colon_pos > 0 THEN
' Get everything after ": "
name = MID(data, colon_pos + 2)
TALK "Extracted name: " + name
END IF
Case-Sensitive Search
text = "General Bots"
pos1 = INSTR(text, "bots") ' Returns 0 (not found - case matters)
pos2 = INSTR(text, "Bots") ' Returns 9 (found)
Finding Multiple Occurrences
text = "apple,banana,cherry,apple"
search = "apple"
count = 0
pos = 1
DO WHILE pos > 0
pos = INSTR(pos, text, search)
IF pos > 0 THEN
count = count + 1
pos = pos + 1 ' Move past current match
END IF
LOOP
TALK "Found '" + search + "' " + count + " times"
Validating Input Format
phone = HEAR "Enter phone number (XXX-XXX-XXXX):"
dash1 = INSTR(phone, "-")
dash2 = INSTR(dash1 + 1, phone, "-")
IF dash1 = 4 AND dash2 = 8 THEN
TALK "Phone format is correct"
ELSE
TALK "Invalid format. Use XXX-XXX-XXXX"
END IF
Comparison with Other Keywords
| Keyword | Purpose |
|---|---|
| INSTR | Find position of substring |
| FORMAT | Format strings with patterns |
| FIRST | Get first word of a string |
| LAST | Get last word of a string |
Notes
- 1-based indexing: Position 1 is the first character, not 0
- Case-sensitive: “ABC” and “abc” are different
- Empty strings: Returns 0 if either string is empty
- Not found: Returns 0 when substring doesn’t exist
Error Handling
text = HEAR "Enter text to search:"
search = HEAR "Enter search term:"
pos = INSTR(text, search)
IF pos = 0 THEN
TALK "'" + search + "' was not found in your text"
ELSE
TALK "Found at position " + pos
END IF
See Also
- FORMAT - String formatting
- SET - Variable assignment
- IS NUMERIC - Check if string is numeric
IS NUMERIC
The IS NUMERIC function tests whether a string value can be converted to a number. This is essential for input validation before performing mathematical operations.
Syntax
result = IS NUMERIC(value)
Parameters
| Parameter | Type | Description |
|---|---|---|
| value | string | The value to test for numeric content |
Return Value
- Returns true if the value can be parsed as a number
- Returns false if the value contains non-numeric characters
Description
IS NUMERIC examines a string to determine if it represents a valid numeric value. It recognizes:
- Integers: 42, -17, 0
- Decimals: 3.14, -0.5, .25
- Scientific notation: 1e10, 2.5E-3
Empty strings and strings containing letters or special characters (except -, ., e, E) return false.
Examples
Basic Validation
input = HEAR "Enter a number:"
IF IS NUMERIC(input) THEN
TALK "You entered: " + input
ELSE
TALK "That's not a valid number"
END IF
Bot Memory with Default Value
max_items = GET BOT MEMORY "max_items"
IF max_items = "" OR NOT IS NUMERIC(max_items) THEN
max_items = "10"
END IF
TALK "Processing up to " + max_items + " items"
Input Loop Until Valid
valid = false
DO WHILE NOT valid
age = HEAR "Enter your age:"
IF IS NUMERIC(age) THEN
valid = true
ELSE
TALK "Please enter a number"
END IF
LOOP
TALK "Your age is " + age
Combined Conditions with OR NOT
quantity = HEAR "How many items?"
IF quantity = "" OR NOT IS NUMERIC(quantity) THEN
TALK "Invalid quantity, using default of 1"
quantity = "1"
END IF
Validating Multiple Fields
price = HEAR "Enter price:"
quantity = HEAR "Enter quantity:"
IF IS NUMERIC(price) AND IS NUMERIC(quantity) THEN
total = price * quantity
TALK "Total: $" + total
ELSE
IF NOT IS NUMERIC(price) THEN
TALK "Price must be a number"
END IF
IF NOT IS NUMERIC(quantity) THEN
TALK "Quantity must be a number"
END IF
END IF
Configuration Validation
' Load timeout from config, validate it's numeric
timeout = GET BOT MEMORY "api_timeout"
IF NOT IS NUMERIC(timeout) THEN
timeout = "30"
SET BOT MEMORY "api_timeout", timeout
TALK "Set default timeout to 30 seconds"
END IF
Range Checking After Validation
rating = HEAR "Rate from 1-5:"
IF NOT IS NUMERIC(rating) THEN
TALK "Please enter a number"
ELSE IF rating < 1 OR rating > 5 THEN
TALK "Rating must be between 1 and 5"
ELSE
TALK "Thank you for your rating of " + rating
SET BOT MEMORY "last_rating", rating
END IF
What IS NUMERIC Accepts
| Input | Result | Notes |
|---|---|---|
"42" | true | Integer |
"-17" | true | Negative integer |
"3.14" | true | Decimal |
".5" | true | Leading decimal |
"1e10" | true | Scientific notation |
"2.5E-3" | true | Scientific with decimal |
"" | false | Empty string |
"abc" | false | Letters |
"12abc" | false | Mixed content |
"$100" | false | Currency symbol |
"1,000" | false | Thousands separator |
" 42 " | true | Whitespace trimmed |
Common Patterns
Default Value Pattern
value = GET BOT MEMORY key
IF value = "" OR NOT IS NUMERIC(value) THEN
value = default_value
END IF
Safe Division
divisor = HEAR "Enter divisor:"
IF NOT IS NUMERIC(divisor) THEN
TALK "Must be a number"
ELSE IF divisor = 0 THEN
TALK "Cannot divide by zero"
ELSE
result = 100 / divisor
TALK "Result: " + result
END IF
Percentage Validation
percent = HEAR "Enter percentage (0-100):"
IF IS NUMERIC(percent) THEN
IF percent >= 0 AND percent <= 100 THEN
TALK "Discount: " + percent + "%"
ELSE
TALK "Must be between 0 and 100"
END IF
ELSE
TALK "Enter a number without %"
END IF
Notes
- Whitespace: Leading and trailing spaces are trimmed before checking
- Empty strings: Always return false
- Currency: Does not recognize currency symbols ($, €, etc.)
- Separators: Does not recognize thousands separators (commas)
Error Prevention
Using IS NUMERIC prevents runtime errors when converting strings to numbers:
' Without validation - could cause error
value = HEAR "Enter number:"
result = value * 2 ' Error if value is "abc"
' With validation - safe
value = HEAR "Enter number:"
IF IS NUMERIC(value) THEN
result = value * 2
ELSE
TALK "Invalid input"
END IF
See Also
- GET BOT MEMORY - Retrieve stored values
- SET BOT MEMORY - Store values
- INSTR - Find substring position
- FORMAT - Format numbers as strings
SWITCH
The SWITCH statement provides multi-way branching based on a value, allowing clean handling of multiple conditions without nested IF statements.
Syntax
SWITCH expression
CASE value1
' statements for value1
CASE value2
' statements for value2
CASE value3, value4
' statements for value3 or value4
DEFAULT
' statements if no case matches
END SWITCH
Parameters
| Element | Description |
|---|---|
| expression | The value to evaluate |
| CASE value | A specific value to match |
| CASE value1, value2 | Multiple values for the same case |
| DEFAULT | Optional fallback when no case matches |
Description
SWITCH evaluates an expression once and compares it against multiple CASE values. When a match is found, the corresponding statements execute. Unlike some languages, General Bots BASIC does not require explicit BREAK statements - execution automatically stops after the matched case.
If no case matches and a DEFAULT block exists, those statements execute. If no case matches and there’s no DEFAULT, execution continues after END SWITCH.
Examples
Role-Based Knowledge Base Selection
role = GET role
SWITCH role
CASE "manager"
USE KB "management"
USE KB "reports"
CASE "developer"
USE KB "documentation"
USE KB "apis"
CASE "customer"
USE KB "products"
USE KB "support"
DEFAULT
USE KB "general"
END SWITCH
Menu Navigation
TALK "Select an option:"
TALK "1. Check balance"
TALK "2. Transfer funds"
TALK "3. View history"
TALK "4. Exit"
choice = HEAR "Enter your choice:"
SWITCH choice
CASE "1"
balance = GET BOT MEMORY "balance"
TALK "Your balance is: $" + balance
CASE "2"
TALK "Transfer initiated..."
' Transfer logic here
CASE "3"
history = FIND "recent transactions"
TALK history
CASE "4"
TALK "Goodbye!"
DEFAULT
TALK "Invalid option. Please choose 1-4."
END SWITCH
Multiple Values Per Case
day = GET day_of_week
SWITCH day
CASE "monday", "tuesday", "wednesday", "thursday", "friday"
TALK "It's a weekday. Office hours: 9am-5pm"
CASE "saturday", "sunday"
TALK "It's the weekend. We're closed."
DEFAULT
TALK "Unknown day"
END SWITCH
Language Selection
lang = GET user_language
SWITCH lang
CASE "en"
TALK "Hello! How can I help you today?"
CASE "es"
TALK "¡Hola! ¿Cómo puedo ayudarte hoy?"
CASE "pt"
TALK "Olá! Como posso ajudá-lo hoje?"
CASE "fr"
TALK "Bonjour! Comment puis-je vous aider?"
DEFAULT
TALK "Hello! How can I help you today?"
END SWITCH
Department Routing
department = HEAR "Which department? (sales, support, billing)"
SWITCH department
CASE "sales"
SET CONTEXT "You are a sales assistant. Focus on products and pricing."
USE KB "products"
USE KB "pricing"
CASE "support"
SET CONTEXT "You are a technical support agent. Help resolve issues."
USE KB "troubleshooting"
USE KB "faq"
CASE "billing"
SET CONTEXT "You are a billing specialist. Handle payment questions."
USE KB "invoices"
USE KB "payment_methods"
DEFAULT
TALK "I'll connect you with general assistance."
USE KB "general"
END SWITCH
Status Code Handling
status = GET api_response_status
SWITCH status
CASE "200"
TALK "Request successful!"
CASE "400"
TALK "Bad request. Please check your input."
CASE "401", "403"
TALK "Authentication error. Please log in again."
CASE "404"
TALK "Resource not found."
CASE "500", "502", "503"
TALK "Server error. Please try again later."
DEFAULT
TALK "Unexpected status: " + status
END SWITCH
Numeric Ranges (Using Categories)
score = GET test_score
grade = ""
' Convert score to grade category
IF score >= 90 THEN
grade = "A"
ELSE IF score >= 80 THEN
grade = "B"
ELSE IF score >= 70 THEN
grade = "C"
ELSE IF score >= 60 THEN
grade = "D"
ELSE
grade = "F"
END IF
SWITCH grade
CASE "A"
TALK "Excellent work!"
SET BOT MEMORY "achievement", "honor_roll"
CASE "B"
TALK "Good job!"
CASE "C"
TALK "Satisfactory performance."
CASE "D"
TALK "You passed, but could improve."
CASE "F"
TALK "Please see a tutor for help."
END SWITCH
Comparison with IF-ELSE
Using IF-ELSE (Verbose)
IF color = "red" THEN
TALK "Stop"
ELSE IF color = "yellow" THEN
TALK "Caution"
ELSE IF color = "green" THEN
TALK "Go"
ELSE
TALK "Unknown signal"
END IF
Using SWITCH (Cleaner)
SWITCH color
CASE "red"
TALK "Stop"
CASE "yellow"
TALK "Caution"
CASE "green"
TALK "Go"
DEFAULT
TALK "Unknown signal"
END SWITCH
Notes
- No fall-through: Each CASE is isolated; no BREAK needed
- Case sensitivity: String comparisons are case-sensitive
- Expression evaluated once: The switch expression is evaluated only once
- DEFAULT is optional: Without DEFAULT, unmatched values skip the block
- Multiple values: Use commas to match multiple values in one CASE
Best Practices
- Always include DEFAULT for robust error handling
- Use meaningful case values that are self-documenting
- Order cases logically - most common first or alphabetically
- Keep case blocks concise - extract complex logic to separate scripts
See Also
- SET CONTEXT - Set conversation context
- USE KB - Load knowledge base
- GET - Get variable values
- IF/THEN/ELSE - Conditional branching
WEBHOOK
Creates an instant HTTP endpoint for your bot. With WEBHOOK, you can expose any BASIC script as an API endpoint that external systems can call - perfect for integrations, notifications, and building custom APIs with LLM-powered responses.
Why WEBHOOK?
Traditional API development requires:
- Setting up a web framework
- Writing routing code
- Handling HTTP parsing
- Deploying infrastructure
With General Bots WEBHOOK, you write one line and your endpoint is live:
WEBHOOK "my-endpoint"
That’s it. Your script is now accessible at /api/{botname}/webhook/my-endpoint.
Syntax
WEBHOOK "endpoint-name"
Parameters
| Parameter | Type | Description |
|---|---|---|
| endpoint-name | String | Unique name for the webhook (alphanumeric, hyphens, underscores) |
Request Data Available
When your webhook is called, these variables are automatically available:
| Variable | Description | Example |
|---|---|---|
| params | Query string parameters | params.id, params.filter |
| body | JSON request body as object | body.customer.name |
| headers | HTTP headers | headers.authorization |
| method | HTTP method used | "POST", "GET" |
| path | Request path | "/webhook/my-endpoint" |
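A quick way to see these variables in action is an echo endpoint; a minimal sketch (the endpoint name and fields are illustrative):
' echo.bas - Reflect request data back to the caller
WEBHOOK "echo"
result_method = method
result_path = path
result_id = params.id
result_name = body.name
result_status = "ok"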
Examples
1. Simple Status Endpoint
' status.bas - Simple health check
WEBHOOK "status"
result_status = "healthy"
result_timestamp = NOW()
result_version = "1.0.0"
Call it:
curl https://bot.example.com/api/mybot/webhook/status
Response:
{"status": "healthy", "timestamp": "2024-01-20T10:30:00Z", "version": "1.0.0"}
2. WhatsApp Order Notification
Send order confirmations directly to customers on WhatsApp:
' order-notify.bas - Notify customer via WhatsApp
WEBHOOK "order-notify"
order_id = body.order_id
customer_phone = body.customer_phone
customer_name = body.customer_name
total = body.total
items = body.items
IF order_id = "" OR customer_phone = "" THEN
result_status = 400
result_error = "Missing order_id or customer_phone"
EXIT
END IF
' Build order summary
order_summary = "🛒 *Order Confirmed #" + order_id + "*\n\n"
order_summary = order_summary + "Hi " + customer_name + "!\n\n"
order_summary = order_summary + "Your order has been confirmed.\n"
order_summary = order_summary + "Total: $" + total + "\n\n"
order_summary = order_summary + "We'll notify you when it ships!"
' Send to WhatsApp using TALK TO
TALK TO "whatsapp:" + customer_phone, order_summary
' Save order to database
order_status = "confirmed"
created_at = NOW()
SAVE "orders", order_id, customer_name, customer_phone, total, order_status, created_at
result_status = "ok"
result_order_id = order_id
result_message = "Customer notified via WhatsApp"
Call it:
curl -X POST https://bot.example.com/api/mybot/webhook/order-notify \
-H "Content-Type: application/json" \
-d '{
"order_id": "ORD-12345",
"customer_phone": "+5511999887766",
"customer_name": "João",
"total": "299.90",
"items": ["Widget", "Gadget"]
}'
3. WhatsApp Document Delivery
Send invoices, reports, or documents to WhatsApp:
' send-invoice.bas - Generate and send invoice via WhatsApp
WEBHOOK "send-invoice"
order_id = body.order_id
customer_phone = body.customer_phone
customer_name = body.customer_name
IF order_id = "" OR customer_phone = "" THEN
result_status = 400
result_error = "Missing order_id or customer_phone"
EXIT
END IF
' Get order data
order = FIND "orders", "order_id=" + order_id
' Generate PDF invoice
invoice_date = FORMAT(NOW(), "dd/MM/yyyy")
GENERATE PDF "templates/invoice.html", order_id, customer_name, order.total, order.items, invoice_date, "invoices/" + order_id + ".pdf"
' Send PDF to WhatsApp with caption
SEND FILE TO "whatsapp:" + customer_phone, "invoices/" + order_id + ".pdf", "📄 Invoice #" + order_id + " - Thank you for your purchase!"
' Also send a follow-up message
TALK TO "whatsapp:" + customer_phone, "If you have any questions about your order, just reply to this message! 😊"
result_status = "ok"
result_message = "Invoice sent to WhatsApp"
4. WhatsApp Support Ticket System
Create support tickets and notify via WhatsApp:
' support-ticket.bas - Create ticket and notify customer
WEBHOOK "support-ticket"
customer_phone = body.phone
customer_name = body.name
issue = body.issue
priority = body.priority
IF customer_phone = "" OR issue = "" THEN
result_status = 400
result_error = "Missing phone or issue description"
EXIT
END IF
IF priority = "" THEN
priority = "normal"
END IF
' Create ticket
ticket_id = "TKT-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
ticket_status = "open"
created_at = NOW()
SAVE "support_tickets", ticket_id, customer_name, customer_phone, issue, priority, ticket_status, created_at
' Notify customer via WhatsApp
confirmation = "🎫 *Support Ticket Created*\n\n"
confirmation = confirmation + "Ticket: #" + ticket_id + "\n"
confirmation = confirmation + "Priority: " + priority + "\n\n"
confirmation = confirmation + "We received your request:\n_" + issue + "_\n\n"
confirmation = confirmation + "Our team will respond within 24 hours."
TALK TO "whatsapp:" + customer_phone, confirmation
' Notify support team
team_msg = "🆕 New ticket #" + ticket_id + "\n"
team_msg = team_msg + "From: " + customer_name + " (" + customer_phone + ")\n"
team_msg = team_msg + "Priority: " + priority + "\n"
team_msg = team_msg + "Issue: " + issue
TALK TO "whatsapp:+5511999000001", team_msg
result_status = "ok"
result_ticket_id = ticket_id
5. AI-Powered WhatsApp Assistant
Create an API that uses AI and responds via WhatsApp:
' ai-assistant.bas - AI assistant that responds via WhatsApp
WEBHOOK "ask-ai"
question = body.question
customer_phone = body.phone
context_type = body.context
IF question = "" OR customer_phone = "" THEN
result_status = 400
result_error = "Missing question or phone"
EXIT
END IF
' Load appropriate knowledge base
IF context_type = "products" THEN
USE KB "product-catalog"
ELSE IF context_type = "support" THEN
USE KB "support-docs"
ELSE
USE KB "general-faq"
END IF
' Set AI context
SET CONTEXT "You are a helpful assistant. Be concise and friendly. Use emojis occasionally."
' Get AI response
answer = LLM question
' Send response via WhatsApp
TALK TO "whatsapp:" + customer_phone, answer
' Log the interaction
log_question = question
log_answer = answer
log_phone = customer_phone
log_context = context_type
log_timestamp = NOW()
INSERT "ai_conversations", log_question, log_answer, log_phone, log_context, log_timestamp
result_status = "ok"
result_answer = answer
6. WhatsApp Broadcast for Promotions
Send promotional messages to multiple customers:
' promo-broadcast.bas - Send promotions to customer list
WEBHOOK "send-promo"
promo_title = body.title
promo_message = body.message
promo_image = body.image_url
customer_segment = body.segment
IF promo_message = "" THEN
result_status = 400
result_error = "Missing promotion message"
EXIT
END IF
IF customer_segment = "" THEN
customer_segment = "all"
END IF
' Get customers for this segment
customers = FIND "customers", "segment=" + customer_segment + " AND whatsapp_optin=true"
sent_count = 0
error_count = 0
' Build promo message with formatting
full_message = "🎉 *" + promo_title + "*\n\n"
full_message = full_message + promo_message + "\n\n"
full_message = full_message + "_Reply STOP to unsubscribe_"
FOR EACH customer IN customers
' Send to each customer
IF promo_image <> "" THEN
SEND FILE TO "whatsapp:" + customer.phone, promo_image, full_message
ELSE
TALK TO "whatsapp:" + customer.phone, full_message
END IF
sent_count = sent_count + 1
' Rate limiting - wait between messages
WAIT 1
NEXT customer
' Log the campaign
campaign_id = "CAMP-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
campaign_title = promo_title
campaign_sent = sent_count
campaign_date = NOW()
INSERT "campaigns", campaign_id, campaign_title, campaign_sent, customer_segment, campaign_date
result_status = "ok"
result_campaign_id = campaign_id
result_sent = sent_count
7. Payment Notification with WhatsApp Receipt
Handle payment webhooks and notify customers:
' payment-webhook.bas - Handle payment and notify via WhatsApp
WEBHOOK "payment"
event_type = body.type
payment_id = body.data.object.id
amount = body.data.object.amount
customer_id = body.data.object.customer
SELECT CASE event_type
CASE "payment_intent.succeeded"
' Get customer info
customer = FIND "customers", "stripe_id=" + customer_id
' Update order status
order_status = "paid"
paid_at = NOW()
UPDATE "orders", "payment_id=" + payment_id, order_status, paid_at
' Format amount (cents to dollars)
amount_formatted = amount / 100
' Send WhatsApp receipt
receipt = "✅ *Payment Received*\n\n"
receipt = receipt + "Amount: $" + amount_formatted + "\n"
receipt = receipt + "Payment ID: " + payment_id + "\n"
receipt = receipt + "Date: " + FORMAT(NOW(), "DD/MM/YYYY HH:mm") + "\n\n"
receipt = receipt + "Thank you for your purchase! 🙏"
TALK TO "whatsapp:" + customer.phone, receipt
CASE "payment_intent.payment_failed"
customer = FIND "customers", "stripe_id=" + customer_id
' Notify customer of failure
failure_msg = "⚠️ *Payment Failed*\n\n"
failure_msg = failure_msg + "We couldn't process your payment.\n"
failure_msg = failure_msg + "Please try again or use a different payment method.\n\n"
failure_msg = failure_msg + "Need help? Reply to this message!"
TALK TO "whatsapp:" + customer.phone, failure_msg
CASE ELSE
' Log unhandled event
TALK "Unhandled payment event: " + event_type
END SELECT
result_received = TRUE
8. Appointment Reminder System
Webhook to trigger appointment reminders:
' appointment-reminder.bas - Send appointment reminders via WhatsApp
WEBHOOK "send-reminder"
appointment_id = body.appointment_id
hours_before = body.hours_before
IF appointment_id = "" THEN
result_status = 400
result_error = "Missing appointment_id"
EXIT
END IF
IF hours_before = "" THEN
hours_before = 24
END IF
' Get appointment details
appointment = FIND "appointments", "id=" + appointment_id
' Format date/time nicely
appt_date = FORMAT(appointment.datetime, "dddd, MMMM dd")
appt_time = FORMAT(appointment.datetime, "HH:mm")
' Build reminder message
reminder = "📅 *Appointment Reminder*\n\n"
reminder = reminder + "Hi " + appointment.customer_name + "!\n\n"
reminder = reminder + "This is a reminder of your upcoming appointment:\n\n"
reminder = reminder + "📍 *" + appointment.service + "*\n"
reminder = reminder + "🗓️ " + appt_date + "\n"
reminder = reminder + "🕐 " + appt_time + "\n"
reminder = reminder + "📌 " + appointment.location + "\n\n"
reminder = reminder + "Reply *CONFIRM* to confirm or *CANCEL* to cancel."
' Send via WhatsApp
TALK TO "whatsapp:" + appointment.customer_phone, reminder
' Update reminder sent status
reminder_sent_at = NOW()
UPDATE "appointments", "id=" + appointment_id, reminder_sent_at
result_status = "ok"
result_message = "Reminder sent"
9. Form Submission with WhatsApp Follow-up
Handle web form submissions and follow up on WhatsApp:
' contact-form.bas - Handle contact form and follow up via WhatsApp
WEBHOOK "contact"
name = body.name
email = body.email
phone = body.phone
message = body.message
source = body.source
IF name = "" OR message = "" THEN
result_status = 400
result_error = "Name and message are required"
EXIT
END IF
' Use AI to categorize and generate response
SET CONTEXT "Categorize this message as: sales, support, feedback, or other. Then write a friendly acknowledgment."
ai_prompt = "Customer: " + name + "\nMessage: " + message
ai_response = LLM ai_prompt
' Save the submission
submission_id = "SUB-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
submission_status = "new"
created_at = NOW()
SAVE "submissions", submission_id, name, email, phone, message, source, ai_response, submission_status, created_at
' If phone provided, send WhatsApp confirmation
IF phone <> "" THEN
whatsapp_msg = "👋 Hi " + name + "!\n\n"
whatsapp_msg = whatsapp_msg + "Thanks for reaching out! We received your message:\n\n"
whatsapp_msg = whatsapp_msg + "_" + message + "_\n\n"
whatsapp_msg = whatsapp_msg + "We'll get back to you soon. In the meantime, feel free to reply here if you have any questions!"
TALK TO "whatsapp:" + phone, whatsapp_msg
END IF
' Send email confirmation too
IF email <> "" THEN
SEND MAIL email, "We received your message", "Hi " + name + ",\n\nThank you for contacting us. We'll respond within 24 hours.\n\nBest regards"
END IF
result_status = "ok"
result_submission_id = submission_id
10. Multi-Channel Notification Hub
Single webhook that routes to multiple channels:
' notify.bas - Multi-channel notification hub
WEBHOOK "notify"
channel = body.channel
recipient = body.recipient
message = body.message
file_url = body.file
caption = body.caption
IF recipient = "" OR message = "" THEN
result_status = 400
result_error = "Missing recipient or message"
EXIT
END IF
IF channel = "" THEN
channel = "whatsapp"
END IF
' Route to appropriate channel
SELECT CASE channel
CASE "whatsapp"
IF file_url <> "" THEN
SEND FILE TO "whatsapp:" + recipient, file_url, caption
ELSE
TALK TO "whatsapp:" + recipient, message
END IF
CASE "email"
subject = body.subject
IF subject = "" THEN
subject = "Notification"
END IF
IF file_url <> "" THEN
SEND MAIL recipient, subject, message, file_url
ELSE
SEND MAIL recipient, subject, message
END IF
CASE "teams"
TALK TO "teams:" + recipient, message
CASE "web"
' Send to web session
TALK TO "web:" + recipient, message
CASE ELSE
result_status = 400
result_error = "Unknown channel: " + channel
EXIT
END SELECT
' Log notification
log_channel = channel
log_recipient = recipient
log_message = message
log_timestamp = NOW()
INSERT "notification_log", log_channel, log_recipient, log_message, log_timestamp
result_status = "ok"
result_channel = channel
result_delivered = TRUE
Response Handling
Control the HTTP response by setting result_ prefixed variables:
Simple Response
result_status = "ok"
result_data = my_data
Custom Status Code
result_status = 201 ' Created
result_id = new_id
result_created = TRUE
Error Response
result_status = 400
result_error = "Invalid request"
result_details = "Missing required field: phone"
WhatsApp Message Formatting
WhatsApp supports rich text formatting:
| Format | Syntax | Example |
|---|---|---|
| Bold | *text* | *Important* |
| Italic | _text_ | _note_ |
| Strikethrough | ~text~ | ~old price~ |
| Monospace | `text` | `code` |
| Line break | \n | "Line 1\nLine 2" |
Example with Formatting
message = "🎉 *Order Confirmed!*\n\n"
message = message + "Order: #" + order_id + "\n"
message = message + "Total: ~$" + old_price + "~ *$" + new_price + "*\n"
message = message + "_Discount applied!_"
TALK TO "whatsapp:" + phone, message
Security Best Practices
1. Validate Webhook Signatures
WEBHOOK "secure-endpoint"
signature = headers.x_webhook_signature
secret = GET BOT MEMORY "webhook_secret"
IF signature = "" THEN
TALK "Invalid request - no signature"
result_status = 401
result_error = "Missing signature"
EXIT
END IF
' Verify the signature against the secret here, then continue with the request...
2. Validate Phone Numbers
phone = body.phone
' Remove non-numeric characters
clean_phone = REPLACE(phone, "+", "")
clean_phone = REPLACE(clean_phone, "-", "")
clean_phone = REPLACE(clean_phone, " ", "")
IF LEN(clean_phone) < 10 THEN
result_status = 400
result_error = "Invalid phone number"
EXIT
END IF
' Add country code if missing
IF LEFT(clean_phone, 2) <> "55" THEN
clean_phone = "55" + clean_phone
END IF
TALK TO "whatsapp:+" + clean_phone, message
3. Rate Limiting
WEBHOOK "rate-limited"
client_ip = headers.x_forwarded_for
rate_key = "rate:" + client_ip
current_count = GET BOT MEMORY rate_key
IF current_count = "" THEN
current_count = 0
END IF
IF current_count > 100 THEN
result_status = 429
result_error = "Rate limit exceeded"
result_retry_after = 60
EXIT
END IF
SET BOT MEMORY rate_key, current_count + 1
' Process request...
Use Cases Summary
| Use Case | Webhook Name | Description |
|---|---|---|
| Order Notifications | /order-notify | Confirm orders via WhatsApp |
| Invoice Delivery | /send-invoice | Send PDF invoices to WhatsApp |
| Support Tickets | /support-ticket | Create tickets, notify via WhatsApp |
| AI Assistant | /ask-ai | LLM answers sent to WhatsApp |
| Promotions | /send-promo | Broadcast promos to customers |
| Payment Alerts | /payment | Payment receipts via WhatsApp |
| Reminders | /send-reminder | Appointment reminders |
| Contact Forms | /contact | Form follow-up on WhatsApp |
| Multi-Channel | /notify | Route to any channel |
Technical Notes
- Webhooks register during script compilation
- Stored in the system_automations table with kind = Webhook
- Endpoint names must be unique per bot
- Request timeout: 30 seconds (keep processing fast)
- Maximum request body: 10MB
- HTTPS required in production
See Also
- TALK TO - Send messages to specific recipients
- SEND FILE TO - Send files to recipients
- SET SCHEDULE - Time-based automation
- ON - Database trigger events
- LLM - Language model queries
- USE KB - Knowledge base integration
TABLE Keyword
The TABLE keyword defines database tables directly in your .bas files. Tables are automatically created on the specified database connection when the script is compiled.
Syntax
TABLE TableName ON connection
FieldName dataType[(length[,precision])] [key] [references OtherTable]
...
END TABLE
Parameters
| Parameter | Description |
|---|---|
| TableName | Name of the table to create |
| connection | Connection name defined in config.csv (e.g., maria, sales_db) |
| FieldName | Name of the field/column |
| dataType | Data type (see supported types below) |
| length | Optional length for string/number types |
| precision | Optional decimal precision for number types |
| key | Marks field as primary key |
| references | Creates a foreign key reference to another table |
Supported Data Types
| Type | Description | SQL Mapping |
|---|---|---|
| string(n) | Variable-length string | VARCHAR(n) |
| number | Integer | INTEGER |
| number(n) | Big integer | BIGINT |
| number(n,p) | Decimal with precision | DECIMAL(n,p) |
| integer | Integer | INTEGER |
| double | Double precision float | DOUBLE PRECISION |
| double(n,p) | Decimal | DECIMAL(n,p) |
| date | Date only | DATE |
| datetime | Date and time | TIMESTAMP/DATETIME |
| boolean | True/false | BOOLEAN |
| text | Long text | TEXT |
| guid | UUID | UUID/CHAR(36) |
Connection Configuration
External database connections are configured in config.csv with the following format:
| Key | Description |
|---|---|
| conn-{name}-Server | Database server hostname or IP |
| conn-{name}-Name | Database name |
| conn-{name}-Username | Username for authentication |
| conn-{name}-Password | Password for authentication |
| conn-{name}-Port | Port number (optional, uses default) |
| conn-{name}-Driver | Database driver: mysql, mariadb, postgres, mssql |
Example config.csv
conn-maria-Server,192.168.1.100
conn-maria-Name,sales_database
conn-maria-Username,app_user
conn-maria-Password,secure_password
conn-maria-Port,3306
conn-maria-Driver,mariadb
Examples
Basic Table Definition
TABLE Contacts ON maria
Id number key
Nome string(150)
Email string(255)
Telefone string(20)
DataCadastro date
END TABLE
Table with Multiple Field Types
TABLE Produtos ON maria
Id number key
Nome string(150)
Sku string(20)
Preco double(10,2)
Estoque integer
Ativo boolean
DescricaoCurta string(4000)
DataValidade date
Categoria_id integer
END TABLE
Table with Foreign Key References
TABLE Pedidos ON maria
Id number key
Numero integer
Data date
Total double(15,2)
Contato_id number
Situacao_id integer
Vendedor_id number
END TABLE
TABLE PedidosItem ON maria
Id number key
Pedido_id number
Produto_id number
Quantidade integer
Valor double(10,2)
Desconto double(5,2)
END TABLE
Complete CRM Tables Example
' Contact management tables
TABLE Contatos ON maria
Id number key
Nome string(150)
Codigo string(50)
Situacao string(5)
NumeroDocumento string(25)
Telefone string(20)
Celular string(20)
Email string(50)
Endereco_geral_endereco string(100)
Endereco_geral_cep string(10)
Endereco_geral_bairro string(50)
Endereco_geral_municipio string(50)
Endereco_geral_uf string(5)
Vendedor_id number
DadosAdicionais_dataNascimento date
Financeiro_limiteCredito double
END TABLE
' Payment methods
TABLE FormaDePagamento ON maria
Id number key
Descricao string(255)
TipoPagamento integer
Situacao integer
Padrao integer
Taxas_aliquota double
Taxas_valor double
END TABLE
' Accounts receivable
TABLE ContasAReceber ON maria
Id number key
Situacao integer
Vencimento date
Valor double
Contato_id number
FormaPagamento_id number
Saldo double
DataEmissao date
NumeroDocumento string(50)
END TABLE
Using Tables After Creation
Once tables are defined, you can use standard BASIC keywords to work with the data:
Inserting Data
data = NEW OBJECT
data.Nome = "João Silva"
data.Email = "joao@example.com"
data.Telefone = "11999999999"
INSERT "Contatos", data
Finding Data
contacts = FIND "Contatos", "Situacao='A'"
FOR EACH contact IN contacts
TALK "Name: " + contact.Nome
NEXT
Updating Data
UPDATE "Contatos", "Id=123", "Telefone='11988888888'"
Deleting Data
DELETE "Contatos", "Id=123"
Notes
- Automatic Table Creation: Tables are created automatically when the .bas file is compiled. If the table already exists, no changes are made.
- Connection Required: The connection name must be configured in config.csv before using it in TABLE definitions.
- Primary Keys: Fields marked with key become the primary key. Multiple fields can be marked as key for composite primary keys (see the sketch below).
- Default Connection: If ON connection is omitted, the table is created on the default (internal) PostgreSQL database.
- SQL Injection Protection: All identifiers are sanitized to prevent SQL injection attacks.
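A minimal sketch of a composite primary key, marking two fields with key (the table and field names are illustrative):
TABLE OrderItems ON maria
Order_id number key
Product_id number key
Quantity integer
END TABLE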
See Also
- FIND - Query data from tables
- SAVE - Insert or update data
- INSERT - Insert new records
- UPDATE - Update existing records
- DELETE - Delete records
- config.csv - Connection configuration
KB Statistics Keywords
Knowledge Base Statistics keywords provide real-time information about your Qdrant vector database collections. Use these keywords to monitor document counts, storage usage, and indexing activity.
Overview
These keywords are useful for:
- Administration: Monitor KB health and growth
- Dashboards: Display statistics in admin interfaces
- Automation: Trigger actions based on KB state
- Compliance: Track document retention and storage
Available Keywords
| Keyword | Returns | Description |
|---|---|---|
| KB STATISTICS | JSON | Complete statistics for all collections |
| KB COLLECTION STATS | JSON | Statistics for a specific collection |
| KB DOCUMENTS COUNT | Integer | Total document count for bot |
| KB DOCUMENTS ADDED SINCE | Integer | Documents added in last N days |
| KB LIST COLLECTIONS | Array | List of collection names |
| KB STORAGE SIZE | Float | Total storage in MB |
KB STATISTICS
Returns comprehensive statistics about all knowledge base collections for the current bot.
Syntax
stats = KB STATISTICS
Return Value
JSON string containing:
{
"total_collections": 3,
"total_documents": 5000,
"total_vectors": 5000,
"total_disk_size_mb": 125.5,
"total_ram_size_mb": 62.3,
"documents_added_last_week": 150,
"documents_added_last_month": 620,
"collections": [
{
"name": "kb_bot-id_main",
"vectors_count": 3000,
"points_count": 3000,
"segments_count": 2,
"disk_data_size": 78643200,
"ram_data_size": 39321600,
"indexed_vectors_count": 3000,
"status": "green"
}
]
}
Example
REM Get and display KB statistics
stats = KB STATISTICS
statsObj = JSON PARSE stats
TALK "Your knowledge base has " + statsObj.total_documents + " documents"
TALK "Using " + FORMAT(statsObj.total_disk_size_mb, "#,##0.00") + " MB of storage"
IF statsObj.documents_added_last_week > 100 THEN
TALK "High activity! " + statsObj.documents_added_last_week + " documents added this week"
END IF
KB COLLECTION STATS
Returns detailed statistics for a specific Qdrant collection.
Syntax
stats = KB COLLECTION STATS collection_name
Parameters
| Parameter | Type | Description |
|---|---|---|
| collection_name | String | Name of the collection |
Return Value
JSON string with collection details:
{
"name": "kb_bot-id_products",
"vectors_count": 1500,
"points_count": 1500,
"segments_count": 1,
"disk_data_size": 52428800,
"ram_data_size": 26214400,
"indexed_vectors_count": 1500,
"status": "green"
}
Example
REM Check specific collection health
collections = KB LIST COLLECTIONS
FOR EACH collection IN collections
stats = KB COLLECTION STATS collection
collObj = JSON PARSE stats
IF collObj.status <> "green" THEN
TALK "Warning: Collection " + collection + " status is " + collObj.status
END IF
NEXT
KB DOCUMENTS COUNT
Returns the total number of documents indexed for the current bot.
Syntax
count = KB DOCUMENTS COUNT
Return Value
Integer representing total document count.
Example
docCount = KB DOCUMENTS COUNT
IF docCount = 0 THEN
TALK "Your knowledge base is empty. Upload some documents to get started!"
ELSE
TALK "You have " + FORMAT(docCount, "#,##0") + " documents in your knowledge base"
END IF
KB DOCUMENTS ADDED SINCE
Returns the number of documents added within the specified number of days.
Syntax
count = KB DOCUMENTS ADDED SINCE days
Parameters
| Parameter | Type | Description |
|---|---|---|
| days | Integer | Number of days to look back |
Return Value
Integer representing documents added in the time period.
Example
REM Activity report
lastDay = KB DOCUMENTS ADDED SINCE 1
lastWeek = KB DOCUMENTS ADDED SINCE 7
lastMonth = KB DOCUMENTS ADDED SINCE 30
TALK "Document Activity Report"
TALK "Last 24 hours: " + lastDay + " documents"
TALK "Last 7 days: " + lastWeek + " documents"
TALK "Last 30 days: " + lastMonth + " documents"
REM Calculate daily average
IF lastWeek > 0 THEN
avgDaily = lastWeek / 7
TALK "Daily average: " + FORMAT(avgDaily, "#,##0.0")
END IF
KB LIST COLLECTIONS
Returns an array of all collection names belonging to the current bot.
Syntax
collections = KB LIST COLLECTIONS
Return Value
Array of collection name strings.
Example
collections = KB LIST COLLECTIONS
IF LEN(collections) = 0 THEN
TALK "No collections found"
ELSE
TALK "Your collections:"
FOR EACH name IN collections
TALK " - " + name
NEXT
END IF
KB STORAGE SIZE
Returns the total disk storage used by all collections in megabytes.
Syntax
sizeMB = KB STORAGE SIZE
Return Value
Float representing storage size in MB.
Example
storageMB = KB STORAGE SIZE
TALK "Storage used: " + FORMAT(storageMB, "#,##0.00") + " MB"
REM Alert if storage is high
IF storageMB > 1000 THEN
TALK "Warning: Knowledge base exceeds 1 GB. Consider archiving old documents."
END IF
Complete Example: KB Dashboard
REM Knowledge Base Dashboard
REM Displays comprehensive statistics
DESCRIPTION "View knowledge base statistics and health"
TALK "📊 **Knowledge Base Dashboard**"
TALK ""
REM Get overall statistics
stats = KB STATISTICS
statsObj = JSON PARSE stats
REM Summary section
TALK "**Summary**"
TALK "Collections: " + statsObj.total_collections
TALK "Documents: " + FORMAT(statsObj.total_documents, "#,##0")
TALK "Vectors: " + FORMAT(statsObj.total_vectors, "#,##0")
TALK ""
REM Storage section
TALK "**Storage**"
TALK "Disk: " + FORMAT(statsObj.total_disk_size_mb, "#,##0.00") + " MB"
TALK "RAM: " + FORMAT(statsObj.total_ram_size_mb, "#,##0.00") + " MB"
TALK ""
REM Activity section
TALK "**Recent Activity**"
TALK "Last 7 days: " + FORMAT(statsObj.documents_added_last_week, "#,##0") + " documents"
TALK "Last 30 days: " + FORMAT(statsObj.documents_added_last_month, "#,##0") + " documents"
REM Calculate growth rate
IF statsObj.documents_added_last_month > 0 THEN
growthRate = (statsObj.documents_added_last_week / (statsObj.documents_added_last_month / 4)) * 100 - 100
IF growthRate > 0 THEN
TALK "Growth trend: +" + FORMAT(growthRate, "#,##0") + "% vs average"
ELSE
TALK "Growth trend: " + FORMAT(growthRate, "#,##0") + "% vs average"
END IF
END IF
REM Health check
TALK ""
TALK "**Health Status**"
allHealthy = true
FOR EACH coll IN statsObj.collections
IF coll.status <> "green" THEN
TALK "⚠️ " + coll.name + ": " + coll.status
allHealthy = false
END IF
NEXT
IF allHealthy THEN
TALK "✅ All collections healthy"
END IF
REM Store for dashboard
SET BOT MEMORY "kb_last_check", NOW()
SET BOT MEMORY "kb_total_docs", statsObj.total_documents
SET BOT MEMORY "kb_storage_mb", statsObj.total_disk_size_mb
Use Cases
1. Admin Monitoring Bot
REM Daily KB health check
SET SCHEDULE "kb-health" TO "0 8 * * *"
stats = KB STATISTICS
statsObj = JSON PARSE stats
IF statsObj.total_disk_size_mb > 5000 THEN
SEND MAIL "admin@example.com", "KB Storage Alert",
"Knowledge base storage exceeds 5 GB: " + statsObj.total_disk_size_mb + " MB"
END IF
END SCHEDULE
2. User-Facing Statistics
REM Show user their document count
docCount = KB DOCUMENTS COUNT
TALK "Your bot has learned from " + docCount + " documents"
TALK "Ask me anything about your content!"
3. Compliance Reporting
REM Monthly compliance report
lastMonth = KB DOCUMENTS ADDED SINCE 30
storageSize = KB STORAGE SIZE
report = "Monthly KB Report\n"
report = report + "Documents added: " + lastMonth + "\n"
report = report + "Total storage: " + FORMAT(storageSize, "#,##0.00") + " MB\n"
SEND MAIL "compliance@example.com", "Monthly KB Report", report
Notes
- Statistics are fetched in real-time from Qdrant
- Large collections may have slight delays in statistics updates
- Document counts from the database may differ slightly from vector counts if indexing is in progress
- Collection names follow the pattern kb_{bot_id}_{collection_name}
See Also
- USE KB - Load knowledge base for queries
- CLEAR KB - Clear knowledge base
- Vector Collections - Understanding collections
KB STATISTICS
The KB STATISTICS keyword retrieves comprehensive statistics about the bot’s knowledge base, including document counts, vector counts, storage usage, and collection information from the Qdrant vector database.
Syntax
stats = KB STATISTICS
Parameters
None. Returns statistics for the current bot’s knowledge base.
Description
KB STATISTICS queries the Qdrant vector database to gather comprehensive metrics about the bot’s knowledge base. This is useful for monitoring KB health, planning capacity, generating admin reports, and tracking document ingestion over time.
The keyword returns a JSON object containing:
- Total collections count
- Total documents across all collections
- Total vectors stored
- Disk and RAM usage
- Documents added in the last week/month
- Per-collection statistics
Use cases include:
- Admin dashboards and monitoring
- Capacity planning and alerts
- Usage reporting and analytics
- Knowledge base health checks
- Cost tracking for vector storage
Return Value
Returns a JSON string with the following structure:
| Property | Type | Description |
|---|---|---|
| total_collections | Number | Number of KB collections for this bot |
| total_documents | Number | Total document count across collections |
| total_vectors | Number | Total vectors stored in Qdrant |
| total_disk_size_mb | Number | Disk storage usage in MB |
| total_ram_size_mb | Number | RAM usage in MB |
| documents_added_last_week | Number | Documents added in past 7 days |
| documents_added_last_month | Number | Documents added in past 30 days |
| collections | Array | Detailed stats per collection |
Collection Stats Object
Each collection in the collections array contains:
| Property | Type | Description |
|---|---|---|
| name | String | Collection name |
| vectors_count | Number | Vectors in this collection |
| points_count | Number | Points (documents) count |
| segments_count | Number | Storage segments |
| disk_data_size | Number | Disk size in bytes |
| ram_data_size | Number | RAM size in bytes |
| indexed_vectors_count | Number | Indexed vectors |
| status | String | Collection status (green/yellow/red) |
Examples
Basic Statistics Retrieval
' Get KB statistics
stats_json = KB STATISTICS
' Parse the JSON response
stats = PARSE_JSON(stats_json)
TALK "Your knowledge base has:"
TALK " - " + stats.total_documents + " documents"
TALK " - " + stats.total_vectors + " vectors"
TALK " - " + FORMAT(stats.total_disk_size_mb, "#,##0.00") + " MB on disk"
Admin Dashboard Report
' Generate KB health report for administrators
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
report = "## Knowledge Base Report\n\n"
report = report + "**Generated:** " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "\n\n"
report = report + "### Summary\n"
report = report + "- Collections: " + stats.total_collections + "\n"
report = report + "- Total Documents: " + FORMAT(stats.total_documents, "#,##0") + "\n"
report = report + "- Total Vectors: " + FORMAT(stats.total_vectors, "#,##0") + "\n"
report = report + "- Disk Usage: " + FORMAT(stats.total_disk_size_mb, "#,##0.00") + " MB\n"
report = report + "- RAM Usage: " + FORMAT(stats.total_ram_size_mb, "#,##0.00") + " MB\n\n"
report = report + "### Recent Activity\n"
report = report + "- Added this week: " + stats.documents_added_last_week + "\n"
report = report + "- Added this month: " + stats.documents_added_last_month + "\n"
TALK report
Storage Alert System
' Check KB storage and alert if threshold exceeded
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
storage_threshold_mb = 1000 ' 1 GB warning threshold
critical_threshold_mb = 5000 ' 5 GB critical threshold
IF stats.total_disk_size_mb > critical_threshold_mb THEN
SEND MAIL admin_email,
"CRITICAL: KB Storage Alert",
"Knowledge base storage is at " + FORMAT(stats.total_disk_size_mb, "#,##0") + " MB. Immediate action required.",
[]
TALK "Critical storage alert sent to administrator"
ELSE IF stats.total_disk_size_mb > storage_threshold_mb THEN
SEND MAIL admin_email,
"Warning: KB Storage Growing",
"Knowledge base storage is at " + FORMAT(stats.total_disk_size_mb, "#,##0") + " MB. Consider cleanup.",
[]
TALK "Storage warning sent to administrator"
ELSE
TALK "Storage levels are healthy: " + FORMAT(stats.total_disk_size_mb, "#,##0") + " MB"
END IF
Collection Health Check
' Check health of each collection
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
unhealthy_collections = []
FOR EACH collection IN stats.collections
IF collection.status <> "green" THEN
unhealthy_collections = unhealthy_collections + [collection.name]
PRINT "Warning: Collection " + collection.name + " status is " + collection.status
END IF
END FOR
IF LEN(unhealthy_collections) > 0 THEN
TALK "Found " + LEN(unhealthy_collections) + " collections needing attention"
ELSE
TALK "All " + stats.total_collections + " collections are healthy"
END IF
Scheduled Statistics Report
' Weekly KB statistics email (run via SET SCHEDULE)
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
' Calculate week-over-week growth
weekly_growth = stats.documents_added_last_week
monthly_growth = stats.documents_added_last_month
avg_weekly = monthly_growth / 4
body = "Weekly Knowledge Base Statistics\n\n"
body = body + "Total Documents: " + FORMAT(stats.total_documents, "#,##0") + "\n"
body = body + "Documents Added This Week: " + weekly_growth + "\n"
body = body + "4-Week Average: " + FORMAT(avg_weekly, "#,##0.0") + "\n"
body = body + "Storage Used: " + FORMAT(stats.total_disk_size_mb, "#,##0.00") + " MB\n"
body = body + "\nCollections:\n"
FOR EACH coll IN stats.collections
body = body + " - " + coll.name + ": " + FORMAT(coll.points_count, "#,##0") + " docs\n"
END FOR
SEND MAIL admin_email, "Weekly KB Report - " + FORMAT(NOW(), "YYYY-MM-DD"), body, []
Usage Analytics Integration
' Log KB stats to analytics system
stats_json = KB STATISTICS
stats = PARSE_JSON(stats_json)
' Store metrics for trending
metrics = #{
"timestamp": FORMAT(NOW(), "YYYY-MM-DDTHH:mm:ss"),
"bot_id": bot_id,
"total_docs": stats.total_documents,
"total_vectors": stats.total_vectors,
"disk_mb": stats.total_disk_size_mb,
"ram_mb": stats.total_ram_size_mb,
"collections": stats.total_collections
}
INSERT "kb_metrics", metrics
PRINT "KB metrics logged at " + metrics.timestamp
Error Handling
ON ERROR RESUME NEXT
stats_json = KB STATISTICS
IF ERROR THEN
PRINT "Failed to get KB statistics: " + ERROR_MESSAGE
TALK "Sorry, I couldn't retrieve knowledge base statistics right now."
ELSE
IF stats_json = "" THEN
TALK "No knowledge base data available yet."
ELSE
stats = PARSE_JSON(stats_json)
TALK "KB contains " + stats.total_documents + " documents"
END IF
END IF
Related Keywords
- KB COLLECTION STATS — Get stats for a specific collection
- KB DOCUMENTS COUNT — Get total document count
- KB DOCUMENTS ADDED SINCE — Count recently added documents
- KB LIST COLLECTIONS — List all KB collections
- KB STORAGE SIZE — Get storage usage in MB
- CLEAR KB — Clear knowledge base content
- USE KB — Enable knowledge base for queries
Configuration
No specific configuration required. The keyword uses the Qdrant connection configured at the system level.
Ensure Qdrant is running and accessible:
name,value
qdrant-url,https://localhost:6334
Implementation Notes
- Implemented in Rust under src/basic/keywords/kb_statistics.rs
- Queries Qdrant REST API for collection statistics
- Filters collections by bot ID prefix (kb_{bot_id})
- Document counts come from both Qdrant and PostgreSQL
- Returns JSON string for flexible parsing
- May take 1-2 seconds for large knowledge bases
Summary
KB STATISTICS provides comprehensive metrics about the bot’s knowledge base, enabling administrators to monitor health, track growth, and plan capacity. Use it for dashboards, alerts, and reporting. For simpler queries, use the specialized keywords like KB DOCUMENTS COUNT or KB STORAGE SIZE.
KB COLLECTION STATS
The KB COLLECTION STATS keyword retrieves detailed statistics for a specific knowledge base collection, allowing granular monitoring of individual collections within the bot’s KB.
Syntax
stats = KB COLLECTION STATS "collection_name"
Parameters
| Parameter | Type | Description |
|---|---|---|
| collection_name | String | Name of the collection to query |
Description
KB COLLECTION STATS queries Qdrant for detailed metrics about a specific collection. This is useful when you need information about a particular knowledge domain rather than the entire KB.
Returns a JSON object containing:
- Collection name
- Vector and point counts
- Storage metrics (disk and RAM)
- Segment information
- Index status
- Collection health status
Return Value
Returns a JSON string with the following structure:
| Property | Type | Description |
|---|---|---|
| name | String | Collection name |
| vectors_count | Number | Total vectors in collection |
| points_count | Number | Total points (documents) |
| segments_count | Number | Number of storage segments |
| disk_data_size | Number | Disk usage in bytes |
| ram_data_size | Number | RAM usage in bytes |
| indexed_vectors_count | Number | Vectors that are indexed |
| status | String | Collection status (green/yellow/red) |
Examples
Basic Collection Stats
' Get stats for a specific collection
stats_json = KB COLLECTION STATS "kb_products"
stats = PARSE_JSON(stats_json)
TALK "Products collection has " + stats.points_count + " documents"
TALK "Storage: " + FORMAT(stats.disk_data_size / 1024 / 1024, "#,##0.00") + " MB"
Compare Multiple Collections
' Compare stats across collections
collections = ["kb_products", "kb_faqs", "kb_policies"]
TALK "Collection Statistics:"
FOR EACH coll_name IN collections
stats_json = KB COLLECTION STATS coll_name
stats = PARSE_JSON(stats_json)
disk_mb = stats.disk_data_size / 1024 / 1024
TALK " " + coll_name + ": " + stats.points_count + " docs, " + FORMAT(disk_mb, "#,##0.00") + " MB"
END FOR
Collection Health Monitoring
' Check if collection is healthy
stats_json = KB COLLECTION STATS collection_name
stats = PARSE_JSON(stats_json)
IF stats.status = "green" THEN
TALK "Collection " + collection_name + " is healthy"
ELSE IF stats.status = "yellow" THEN
TALK "Warning: Collection " + collection_name + " needs optimization"
ELSE
TALK "Error: Collection " + collection_name + " has issues - status: " + stats.status
END IF
Index Coverage Check
' Verify all vectors are indexed
stats_json = KB COLLECTION STATS "kb_main"
stats = PARSE_JSON(stats_json)
index_coverage = (stats.indexed_vectors_count / stats.vectors_count) * 100
IF index_coverage < 100 THEN
TALK "Warning: Only " + FORMAT(index_coverage, "#0.0") + "% of vectors are indexed"
TALK "Search performance may be degraded"
ELSE
TALK "All vectors are fully indexed"
END IF
Error Handling
ON ERROR RESUME NEXT
stats_json = KB COLLECTION STATS "kb_" + collection_name
IF ERROR THEN
IF INSTR(ERROR_MESSAGE, "not found") > 0 THEN
TALK "Collection '" + collection_name + "' does not exist"
ELSE
TALK "Error retrieving collection stats: " + ERROR_MESSAGE
END IF
ELSE
stats = PARSE_JSON(stats_json)
TALK "Collection has " + stats.points_count + " documents"
END IF
Related Keywords
- KB STATISTICS — Get overall KB statistics
- KB LIST COLLECTIONS — List all collections
- KB DOCUMENTS COUNT — Get total document count
- KB STORAGE SIZE — Get storage usage
Implementation Notes
- Implemented in Rust under src/basic/keywords/kb_statistics.rs
- Queries Qdrant REST API at /collections/{name}
- Collection name must match exactly (case-sensitive)
- Returns empty if the collection doesn't exist
Summary
KB COLLECTION STATS provides detailed metrics for a specific knowledge base collection. Use it for granular monitoring, comparing collections, or checking health of individual knowledge domains. For overall KB statistics, use KB STATISTICS instead.
KB DOCUMENTS COUNT
The KB DOCUMENTS COUNT keyword returns the total number of documents stored in the bot’s knowledge base.
Syntax
count = KB DOCUMENTS COUNT
Parameters
None. Returns the count for the current bot’s knowledge base.
Description
KB DOCUMENTS COUNT queries the database to return the total number of documents that have been indexed in the bot’s knowledge base. This is a lightweight operation compared to KB STATISTICS when you only need the document count.
Use cases include:
- Checking if knowledge base has content
- Displaying document counts in conversations
- Conditional logic based on KB size
- Simple monitoring and logging
Return Value
Returns an integer representing the total document count. Returns 0 if no documents exist or if an error occurs.
Examples
Basic Count Check
' Check how many documents are in KB
doc_count = KB DOCUMENTS COUNT
TALK "The knowledge base contains " + doc_count + " documents."
Conditional KB Usage
' Only use KB if it has content
doc_count = KB DOCUMENTS COUNT
IF doc_count > 0 THEN
USE KB
answer = SEARCH user_question
TALK answer
ELSE
TALK "The knowledge base is empty. Please add some documents first."
END IF
Admin Status Report
' Quick status check for administrators
doc_count = KB DOCUMENTS COUNT
IF doc_count = 0 THEN
status = "⚠️ Empty - No documents indexed"
ELSE IF doc_count < 10 THEN
status = "📄 Minimal - " + doc_count + " documents"
ELSE IF doc_count < 100 THEN
status = "📚 Growing - " + doc_count + " documents"
ELSE
status = "✅ Robust - " + doc_count + " documents"
END IF
TALK "Knowledge Base Status: " + status
Monitoring Growth
' Log document count for tracking
doc_count = KB DOCUMENTS COUNT
timestamp = FORMAT(NOW(), "YYYY-MM-DD HH:mm")
PRINT "[" + timestamp + "] KB document count: " + doc_count
' Store for trending
INSERT "kb_count_log", #{
"timestamp": NOW(),
"count": doc_count
}
Before/After Import Check
' Check count before and after document import
before_count = KB DOCUMENTS COUNT
' Import new documents
IMPORT "new-documents.zip"
after_count = KB DOCUMENTS COUNT
added = after_count - before_count
TALK "Import complete! Added " + added + " new documents."
TALK "Total documents now: " + after_count
Error Handling
ON ERROR RESUME NEXT
count = KB DOCUMENTS COUNT
IF ERROR THEN
PRINT "Error getting document count: " + ERROR_MESSAGE
count = 0
END IF
IF count > 0 THEN
TALK "Found " + count + " documents in the knowledge base."
ELSE
TALK "No documents found or unable to query knowledge base."
END IF
Related Keywords
- KB STATISTICS — Get comprehensive KB statistics
- KB DOCUMENTS ADDED SINCE — Count recently added documents
- KB STORAGE SIZE — Get storage usage
- KB LIST COLLECTIONS — List all collections
- CLEAR KB — Clear knowledge base
- USE KB — Enable KB for queries
Implementation Notes
- Implemented in Rust under src/basic/keywords/kb_statistics.rs
- Queries the PostgreSQL kb_documents table
- Filters by current bot ID
- Returns 0 on error (does not throw)
- Very fast operation (single COUNT query)
Summary
KB DOCUMENTS COUNT provides a quick way to get the total number of documents in the knowledge base. Use it for simple checks, conditional logic, and lightweight monitoring. For more detailed statistics, use KB STATISTICS instead.
KB DOCUMENTS ADDED SINCE
The KB DOCUMENTS ADDED SINCE keyword returns the count of documents added to the knowledge base within a specified number of days, useful for tracking ingestion activity and monitoring growth.
Syntax
count = KB DOCUMENTS ADDED SINCE days
Parameters
| Parameter | Type | Description |
|---|---|---|
| days | Number | Number of days to look back |
Description
KB DOCUMENTS ADDED SINCE queries the database to count how many documents were added to the bot’s knowledge base within the specified time window. This is useful for tracking ingestion rates, monitoring content growth, and generating activity reports.
Use cases include:
- Tracking daily/weekly document ingestion
- Monitoring automated content pipelines
- Activity reports and dashboards
- Alert systems for low/high activity
- Growth trend analysis
Return Value
Returns an integer representing the number of documents added within the specified period.
Examples
Basic Usage
' Count documents added in last 7 days
weekly_count = KB DOCUMENTS ADDED SINCE 7
TALK "Documents added this week: " + weekly_count
Daily Activity Check
' Check today's ingestion
today_count = KB DOCUMENTS ADDED SINCE 1
IF today_count = 0 THEN
TALK "No new documents added today"
ELSE
TALK today_count + " documents added today"
END IF
Growth Comparison
' Compare recent activity periods
last_week = KB DOCUMENTS ADDED SINCE 7
last_month = KB DOCUMENTS ADDED SINCE 30
weekly_average = last_month / 4
IF last_week > weekly_average * 1.5 THEN
TALK "Document ingestion is above average this week!"
ELSE IF last_week < weekly_average * 0.5 THEN
TALK "Document ingestion is below average this week"
ELSE
TALK "Document ingestion is on track"
END IF
Activity Alert System
' Alert if no documents added recently
recent_docs = KB DOCUMENTS ADDED SINCE 3
IF recent_docs = 0 THEN
SEND MAIL admin_email,
"KB Activity Alert",
"No documents have been added to the knowledge base in the last 3 days. Please check content pipelines.",
[]
TALK "Alert sent - no recent KB activity"
END IF
Scheduled Activity Report
' Weekly ingestion report (run via SET SCHEDULE)
day_1 = KB DOCUMENTS ADDED SINCE 1
day_7 = KB DOCUMENTS ADDED SINCE 7
day_30 = KB DOCUMENTS ADDED SINCE 30
report = "KB Ingestion Report\n\n"
report = report + "Last 24 hours: " + day_1 + " documents\n"
report = report + "Last 7 days: " + day_7 + " documents\n"
report = report + "Last 30 days: " + day_30 + " documents\n"
report = report + "\nDaily average (30 days): " + FORMAT(day_30 / 30, "#,##0.0") + "\n"
report = report + "Weekly average (30 days): " + FORMAT(day_30 / 4, "#,##0.0")
SEND MAIL admin_email, "Weekly KB Ingestion Report", report, []
Pipeline Monitoring
' Monitor automated document pipeline
expected_daily = 50 ' Expected documents per day
tolerance = 0.2 ' 20% tolerance
yesterday_count = KB DOCUMENTS ADDED SINCE 1
min_expected = expected_daily * (1 - tolerance)
max_expected = expected_daily * (1 + tolerance)
IF yesterday_count < min_expected THEN
TALK "Warning: Only " + yesterday_count + " documents ingested yesterday (expected ~" + expected_daily + ")"
LOG_WARN "Low document ingestion: " + yesterday_count
ELSE IF yesterday_count > max_expected THEN
TALK "Note: High ingestion yesterday - " + yesterday_count + " documents"
LOG_INFO "High document ingestion: " + yesterday_count
ELSE
TALK "Document pipeline operating normally: " + yesterday_count + " documents yesterday"
END IF
Use with Other KB Keywords
' Comprehensive KB activity check
total_docs = KB DOCUMENTS COUNT
recent_docs = KB DOCUMENTS ADDED SINCE 7
storage_mb = KB STORAGE SIZE
TALK "Knowledge Base Status:"
TALK " Total documents: " + FORMAT(total_docs, "#,##0")
TALK " Added this week: " + recent_docs
TALK " Storage used: " + FORMAT(storage_mb, "#,##0.00") + " MB"
IF recent_docs > 0 THEN
pct_new = (recent_docs / total_docs) * 100
TALK " " + FORMAT(pct_new, "#,##0.0") + "% of KB is from this week"
END IF
Error Handling
ON ERROR RESUME NEXT
count = KB DOCUMENTS ADDED SINCE 7
IF ERROR THEN
PRINT "Failed to get document count: " + ERROR_MESSAGE
count = 0
END IF
TALK "Documents added recently: " + count
Related Keywords
- KB STATISTICS — Comprehensive KB statistics
- KB DOCUMENTS COUNT — Total document count
- KB COLLECTION STATS — Per-collection statistics
- KB STORAGE SIZE — Storage usage
- KB LIST COLLECTIONS — List collections
Implementation Notes
- Implemented in Rust under src/basic/keywords/kb_statistics.rs
- Queries the PostgreSQL kb_documents table by created_at timestamp
- Filters by current bot ID
- Returns 0 if no documents found or on error
- The days parameter is converted to an interval for the SQL query
Summary
KB DOCUMENTS ADDED SINCE provides a simple way to track recent document ingestion activity. Use it for monitoring content pipelines, generating activity reports, and creating alerts for unusual activity levels. Combine with other KB keywords for comprehensive knowledge base monitoring.
KB LIST COLLECTIONS
The KB LIST COLLECTIONS keyword returns a list of all knowledge base collection names associated with the current bot.
Syntax
collections = KB LIST COLLECTIONS
Parameters
None. Returns collections for the current bot.
Description
KB LIST COLLECTIONS queries Qdrant to retrieve all collection names that belong to the current bot. Collections are filtered by the bot ID prefix (kb_{bot_id}), returning only collections owned by the calling bot.
Use cases include:
- Discovering available knowledge domains
- Building dynamic collection selection interfaces
- Admin dashboards and monitoring
- Iterating over collections for batch operations
- Validating collection existence before operations
Return Value
Returns an array of collection name strings. Returns an empty array if no collections exist.
Example return value:
["kb_products", "kb_faqs", "kb_policies", "kb_support"]
Examples
Basic Collection Listing
' List all KB collections
collections = KB LIST COLLECTIONS
TALK "Available knowledge bases:"
FOR EACH coll IN collections
TALK " - " + coll
END FOR
Check Collection Existence
' Verify a collection exists before using it
collections = KB LIST COLLECTIONS
target_collection = "kb_products"
found = false
FOR EACH coll IN collections
IF coll = target_collection THEN
found = true
EXIT FOR
END IF
END FOR
IF found THEN
TALK "Products knowledge base is available"
USE KB target_collection
ELSE
TALK "Products knowledge base not found"
END IF
Admin Collection Overview
' Generate overview of all collections
collections = KB LIST COLLECTIONS
IF LEN(collections) = 0 THEN
TALK "No knowledge base collections found."
ELSE
TALK "Found " + LEN(collections) + " collections:"
FOR EACH coll IN collections
stats_json = KB COLLECTION STATS coll
stats = PARSE_JSON(stats_json)
disk_mb = stats.disk_data_size / 1024 / 1024
TALK " " + coll + ": " + stats.points_count + " docs (" + FORMAT(disk_mb, "#,##0.00") + " MB)"
END FOR
END IF
Dynamic Collection Selection
' Let user choose a knowledge base
collections = KB LIST COLLECTIONS
TALK "Which knowledge base would you like to search?"
TALK "Available options:"
idx = 1
FOR EACH coll IN collections
' Remove kb_ prefix for display
display_name = REPLACE(coll, "kb_", "")
TALK idx + ". " + display_name
idx = idx + 1
END FOR
HEAR choice AS NUMBER
IF choice > 0 AND choice <= LEN(collections) THEN
selected = collections[choice - 1]
USE KB selected
TALK "Now searching in: " + selected
ELSE
TALK "Invalid selection"
END IF
Batch Operations on All Collections
' Get stats for all collections
collections = KB LIST COLLECTIONS
total_docs = 0
total_size = 0
FOR EACH coll IN collections
stats_json = KB COLLECTION STATS coll
stats = PARSE_JSON(stats_json)
total_docs = total_docs + stats.points_count
total_size = total_size + stats.disk_data_size
END FOR
TALK "Across " + LEN(collections) + " collections:"
TALK " Total documents: " + FORMAT(total_docs, "#,##0")
TALK " Total size: " + FORMAT(total_size / 1024 / 1024, "#,##0.00") + " MB"
Collection Health Check
' Check health of all collections
collections = KB LIST COLLECTIONS
issues = []
FOR EACH coll IN collections
stats_json = KB COLLECTION STATS coll
stats = PARSE_JSON(stats_json)
IF stats.status <> "green" THEN
issues = issues + [coll + " (" + stats.status + ")"]
END IF
END FOR
IF LEN(issues) > 0 THEN
TALK "Collections with issues:"
FOR EACH issue IN issues
TALK " ⚠️ " + issue
END FOR
ELSE
TALK "✅ All " + LEN(collections) + " collections are healthy"
END IF
Collection-Based Routing
' Route query to appropriate collection based on topic
collections = KB LIST COLLECTIONS
' Determine best collection for user's question
topic = LLM "Classify this question into one category: products, support, policies, or general. Question: " + user_question
topic = TRIM(LOWER(topic))
target = "kb_" + topic
' Check if collection exists
collection_found = false
FOR EACH coll IN collections
IF coll = target THEN
collection_found = true
EXIT FOR
END IF
END FOR
IF collection_found THEN
USE KB target
answer = SEARCH user_question
ELSE
' Fall back to searching all collections
USE KB
answer = SEARCH user_question
END IF
TALK answer
Error Handling
ON ERROR RESUME NEXT
collections = KB LIST COLLECTIONS
IF ERROR THEN
PRINT "Failed to list collections: " + ERROR_MESSAGE
collections = []
END IF
IF LEN(collections) = 0 THEN
TALK "No knowledge base collections available"
ELSE
TALK "Found " + LEN(collections) + " knowledge base collections"
END IF
Related Keywords
- KB STATISTICS — Comprehensive KB statistics
- KB COLLECTION STATS — Stats for specific collection
- KB DOCUMENTS COUNT — Total document count
- KB STORAGE SIZE — Storage usage in MB
- USE KB — Enable KB for queries
- CLEAR KB — Clear knowledge base content
Implementation Notes
- Implemented in Rust under src/basic/keywords/kb_statistics.rs
- Queries Qdrant REST API at /collections
- Filters results by bot ID prefix (kb_{bot_id})
- Returns an array of Dynamic strings for easy iteration
- Empty array returned if no collections exist or on error
- Collection names include the full prefix (e.g., kb_products)
Summary
KB LIST COLLECTIONS provides a way to discover all knowledge base collections belonging to the current bot. Use it for dynamic collection selection, admin dashboards, batch operations, or validating collection existence before performing operations. Combine with KB COLLECTION STATS to get detailed information about each collection.
KB STORAGE SIZE
The KB STORAGE SIZE keyword returns the total disk storage used by the bot’s knowledge base in megabytes.
Syntax
size_mb = KB STORAGE SIZE
Parameters
None. Returns the storage size for the current bot’s knowledge base.
Description
KB STORAGE SIZE queries the Qdrant vector database to calculate the total disk storage consumed by all of the bot’s knowledge base collections. This is useful for monitoring storage usage, capacity planning, and cost management.
Use cases include:
- Storage monitoring and alerts
- Capacity planning
- Cost tracking for vector storage
- Admin dashboards
- Cleanup decisions
Return Value
Returns a floating-point number representing storage size in megabytes (MB).
Examples
Basic Storage Check
' Get current KB storage usage
storage_mb = KB STORAGE SIZE
TALK "Knowledge base is using " + FORMAT(storage_mb, "#,##0.00") + " MB of storage"
Storage Threshold Alert
' Alert if storage exceeds threshold
storage_mb = KB STORAGE SIZE
max_storage_mb = 1000 ' 1 GB limit
IF storage_mb > max_storage_mb THEN
SEND MAIL admin_email,
"KB Storage Alert",
"Knowledge base storage (" + FORMAT(storage_mb, "#,##0") + " MB) has exceeded the " + max_storage_mb + " MB threshold.",
[]
TALK "Storage alert sent to administrator"
ELSE
remaining = max_storage_mb - storage_mb
TALK "Storage OK: " + FORMAT(storage_mb, "#,##0") + " MB used, " + FORMAT(remaining, "#,##0") + " MB remaining"
END IF
Storage Tiers Display
' Display storage status with tier indicators
storage_mb = KB STORAGE SIZE
IF storage_mb < 100 THEN
tier = "🟢 Light"
ELSE IF storage_mb < 500 THEN
tier = "🟡 Moderate"
ELSE IF storage_mb < 1000 THEN
tier = "🟠 Heavy"
ELSE
tier = "🔴 Critical"
END IF
TALK "Storage Status: " + tier
TALK "Current usage: " + FORMAT(storage_mb, "#,##0.00") + " MB"
Cost Estimation
' Estimate storage costs (example pricing)
storage_mb = KB STORAGE SIZE
storage_gb = storage_mb / 1024
cost_per_gb = 0.25 ' Example: $0.25 per GB per month
monthly_cost = storage_gb * cost_per_gb
TALK "Current storage: " + FORMAT(storage_gb, "#,##0.00") + " GB"
TALK "Estimated monthly cost: $" + FORMAT(monthly_cost, "#,##0.00")
Storage Growth Tracking
' Log storage for trend analysis
storage_mb = KB STORAGE SIZE
doc_count = KB DOCUMENTS COUNT
' Calculate average size per document
avg_size_kb = 0
IF doc_count > 0 THEN
avg_size_kb = (storage_mb * 1024) / doc_count
TALK "Average document size: " + FORMAT(avg_size_kb, "#,##0.00") + " KB"
END IF
' Store for trending
INSERT "storage_metrics", #{
"timestamp": NOW(),
"storage_mb": storage_mb,
"doc_count": doc_count,
"avg_size_kb": avg_size_kb
}
Comprehensive Storage Report
' Generate storage report
storage_mb = KB STORAGE SIZE
doc_count = KB DOCUMENTS COUNT
recent_docs = KB DOCUMENTS ADDED SINCE 30
' Calculate metrics
storage_gb = storage_mb / 1024
avg_doc_kb = IIF(doc_count > 0, (storage_mb * 1024) / doc_count, 0)
report = "## KB Storage Report\n\n"
report = report + "**Date:** " + FORMAT(NOW(), "YYYY-MM-DD") + "\n\n"
report = report + "### Storage Metrics\n"
report = report + "- Total Storage: " + FORMAT(storage_mb, "#,##0.00") + " MB"
report = report + " (" + FORMAT(storage_gb, "#,##0.00") + " GB)\n"
report = report + "- Total Documents: " + FORMAT(doc_count, "#,##0") + "\n"
report = report + "- Avg Size per Doc: " + FORMAT(avg_doc_kb, "#,##0.00") + " KB\n"
report = report + "- Docs Added (30 days): " + recent_docs + "\n"
TALK report
Cleanup Decision Helper
' Help decide if cleanup is needed
storage_mb = KB STORAGE SIZE
max_storage = 2000 ' 2 GB limit
usage_pct = (storage_mb / max_storage) * 100
IF usage_pct > 80 THEN
TALK "⚠️ Storage at " + FORMAT(usage_pct, "#0.0") + "% capacity"
TALK "Consider cleaning up old or unused documents"
TALK "Use CLEAR KB to remove content if needed"
ELSE IF usage_pct > 60 THEN
TALK "📊 Storage at " + FORMAT(usage_pct, "#0.0") + "% capacity"
TALK "Storage is healthy but monitor growth"
ELSE
TALK "✅ Storage at " + FORMAT(usage_pct, "#0.0") + "% capacity"
TALK "Plenty of room for more documents"
END IF
Error Handling
ON ERROR RESUME NEXT
storage_mb = KB STORAGE SIZE
IF ERROR THEN
PRINT "Error getting storage size: " + ERROR_MESSAGE
storage_mb = 0.0
END IF
IF storage_mb > 0 THEN
TALK "Storage usage: " + FORMAT(storage_mb, "#,##0.00") + " MB"
ELSE
TALK "Unable to determine storage usage"
END IF
Related Keywords
- KB STATISTICS — Comprehensive KB statistics including storage
- KB DOCUMENTS COUNT — Total document count
- KB DOCUMENTS ADDED SINCE — Recently added documents
- KB COLLECTION STATS — Per-collection statistics
- KB LIST COLLECTIONS — List all collections
- CLEAR KB — Clear knowledge base content
Configuration
No specific configuration required. Uses the Qdrant connection configured at the system level.
Implementation Notes
- Implemented in Rust under src/basic/keywords/kb_statistics.rs
- Queries Qdrant REST API for collection sizes
- Aggregates disk usage across all bot collections
- Returns value in megabytes (MB) as float
- Returns 0.0 on error (does not throw)
- May take 1-2 seconds for large knowledge bases
Summary
KB STORAGE SIZE provides a quick way to check how much disk storage the knowledge base is consuming. Use it for monitoring, capacity planning, cost estimation, and cleanup decisions. For more detailed storage breakdown by collection, use KB STATISTICS instead.
Multi-Agent Keywords
This section covers keywords for building multi-agent systems where multiple specialized bots collaborate to handle complex tasks.
Overview
Multi-agent orchestration enables:
- Task specialization - Each bot focuses on what it does best
- Collaborative problem-solving - Bots work together on complex tasks
- Scalable architectures - Add new specialists without modifying existing bots
- Resilient systems - Failures are isolated and handled gracefully
Keyword Summary
| Keyword | Syntax | Description |
|---|---|---|
| ADD BOT | ADD BOT "name" TRIGGER ON "keywords" | Add bot to session with triggers |
| DELEGATE TO BOT | result = DELEGATE "msg" TO BOT "name" | Send task to another bot |
| BROADCAST TO BOTS | BROADCAST "message" TO BOTS | Send message to all bots |
| TRANSFER CONVERSATION | TRANSFER CONVERSATION TO "botname" | Hand off conversation |
| BOT REFLECTION | BOT REFLECTION true | Enable agent self-analysis |
| BOT REFLECTION INSIGHTS | insights = BOT REFLECTION INSIGHTS() | Get reflection results |
ADD BOT
Adds a bot to the current session with optional triggers, tools, and schedules.
' Add bot with keyword triggers
ADD BOT "billing-bot" TRIGGER ON "billing,invoice,payment"
' Add bot with tool access
ADD BOT "analyst-bot" TOOLS "calculate,forecast,report"
' Add bot with scheduled execution
ADD BOT "monitor-bot" SCHEDULE "0 */1 * * *"
' Add bot with multiple configurations
ADD BOT "support-bot" TRIGGER ON "help,support" TOOLS "ticket,escalate"
Trigger Types
| Type | Description | Example |
|---|---|---|
| TRIGGER ON | Keyword-based activation | TRIGGER ON "billing,payment" |
| TOOLS | Tool-based activation | TOOLS "calculate,search" |
| SCHEDULE | Cron-based activation | SCHEDULE "0 9 * * *" |
DELEGATE TO BOT
Sends a task to another bot and optionally waits for a response.
' Fire-and-forget delegation
DELEGATE "Process this order" TO BOT "order-processor"
' Get response from delegation
result = DELEGATE "Calculate ROI" TO BOT "finance-bot"
TALK "Result: " + result
' Delegation with timeout
result = DELEGATE "Analyze report" TO BOT "analyst-bot" TIMEOUT 60
Parameters
| Parameter | Type | Description |
|---|---|---|
| message | String | Task or message to send |
| botname | String | Target bot name |
| TIMEOUT | Number | Optional timeout in seconds (default: 30) |
BROADCAST TO BOTS
Sends a message to all bots in the current session.
' Notify all bots of an event
BROADCAST "New customer signup: " + customerid TO BOTS
' Emergency signal
BROADCAST "MAINTENANCE_MODE" TO BOTS
' Data update notification
BROADCAST "PRICE_UPDATE:" + JSON(prices) TO BOTS
TRANSFER CONVERSATION
Hands off the entire conversation to another bot. The current bot exits.
' Simple transfer
TALK "Let me connect you with our billing specialist."
TRANSFER CONVERSATION TO "billing-bot"
' Transfer with context
SET CONTEXT "issue" AS "refund request"
SET CONTEXT "amount" AS "$150"
TRANSFER CONVERSATION TO "refunds-bot"
' Conditional transfer
IF issueType = "technical" THEN
TRANSFER CONVERSATION TO "tech-support-bot"
ELSE
TRANSFER CONVERSATION TO "general-support-bot"
END IF
BOT REFLECTION
Enables agent self-analysis for continuous improvement.
' Enable reflection
BOT REFLECTION true
' Disable reflection
BOT REFLECTION false
' Monitor specific metric
BOT REFLECTION ON "conversation_quality"
BOT REFLECTION ON "response_accuracy"
BOT REFLECTION ON "user_satisfaction"
Reflection Metrics
| Metric | Description |
|---|---|
| conversation_quality | Overall conversation effectiveness |
| response_accuracy | Correctness of responses |
| user_satisfaction | Estimated user satisfaction |
| tone_appropriateness | Whether tone matches context |
| resolution_rate | Whether issues were resolved |
BOT REFLECTION INSIGHTS
Retrieves the results of reflection analysis.
' Get insights
insights = BOT REFLECTION INSIGHTS()
' Access properties
PRINT "Quality Score: " + insights.qualityScore
PRINT "Issues: " + insights.issuesCount
' Iterate suggestions
FOR EACH suggestion IN insights.suggestions
PRINT "Suggestion: " + suggestion
NEXT suggestion
' Use for alerting
IF insights.qualityScore < 0.5 THEN
SEND MAIL admin, "Low Quality Alert", insights.summary
END IF
Insights Object
| Property | Type | Description |
|---|---|---|
| qualityScore | Number | Overall quality (0-1) |
| summary | String | Text summary |
| issues | Array | Identified issues |
| issuesCount | Number | Count of issues |
| suggestions | Array | Improvement suggestions |
| criticalIssues | Number | Critical problem count |
| timestamp | DateTime | When analyzed |
Common Patterns
Router Pattern
A central bot routes queries to specialists.
' router-bot/start.bas
HEAR userquery
' Classify the query
category = LLM "Classify into: billing, technical, sales, general. Query: " + userquery
SWITCH category
CASE "billing"
result = DELEGATE userquery TO BOT "billing-bot"
CASE "technical"
result = DELEGATE userquery TO BOT "tech-bot"
CASE "sales"
result = DELEGATE userquery TO BOT "sales-bot"
CASE ELSE
result = LLM userquery
END SWITCH
TALK result
Expert Panel Pattern
Multiple bots provide perspectives.
question = "Should we expand into Europe?"
' Get multiple expert opinions
marketView = DELEGATE question TO BOT "market-analyst"
financeView = DELEGATE question TO BOT "finance-expert"
riskView = DELEGATE question TO BOT "risk-assessor"
' Synthesize
synthesis = LLM "Combine these expert views: " + marketView + "\n" + financeView + "\n" + riskView
TALK synthesis
Escalation Pattern
Automatic escalation when confidence is low.
' First-line bot
confidence = LLM "Rate confidence (0-100) for: " + userquery
IF confidence < 50 THEN
TALK "Let me connect you with a specialist."
SET CONTEXT "escalation_reason" AS "low_confidence"
TRANSFER CONVERSATION TO "senior-support-bot"
ELSE
response = LLM userquery
TALK response
END IF
Configuration
config.csv Options
name,value
a2a-enabled,true
a2a-timeout,30
a2a-max-hops,5
a2a-retry-count,3
reflection-enabled,true
reflection-interval,10
reflection-min-messages,3
| Option | Default | Description |
|---|---|---|
| a2a-enabled | true | Enable agent-to-agent communication |
| a2a-timeout | 30 | Default delegation timeout (seconds) |
| a2a-max-hops | 5 | Maximum delegation chain depth |
| a2a-retry-count | 3 | Retry attempts on failure |
| reflection-enabled | true | Enable bot reflection |
| reflection-interval | 10 | Messages between reflections |
| reflection-min-messages | 3 | Minimum messages before reflection runs |
Best Practices
- Use descriptive bot names - billing-bot, not bot2
- Set appropriate timeouts - Long tasks need longer timeouts
- Handle failures gracefully - Always have fallback paths (see the sketch after this list)
- Avoid circular delegation - Bot A → Bot B → Bot A
- Keep chains short - Max 3-4 delegation hops
- Log delegations - Helps debug multi-agent flows
- Review reflection insights - Act on improvement suggestions
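A minimal sketch of the fallback path recommended above, combining DELEGATE's TIMEOUT option with the ON ERROR pattern used throughout this guide (the bot name and the empty-result check are illustrative assumptions):
' Delegate with a timeout; fall back to the local LLM on failure
ON ERROR RESUME NEXT
result = DELEGATE userquery TO BOT "billing-bot" TIMEOUT 20
IF ERROR OR result = "" THEN
PRINT "Delegation failed: " + ERROR_MESSAGE
result = LLM userquery ' fallback keeps the conversation alive
END IF
TALK result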
See Also
- ADD BOT - Detailed ADD BOT reference
- DELEGATE TO BOT - Delegation details
- BOT REFLECTION - Reflection details
- Multi-Agent Orchestration - Complete guide
- A2A Protocol - Protocol details
Social Media Keywords
General Bots provides native social media integration through BASIC keywords for posting content, scheduling, retrieving metrics, and managing posts across multiple platforms.
Platform Support
Supported platforms include Instagram, Facebook, LinkedIn, and Twitter/X. Each platform requires appropriate API credentials configured in your bot’s config.csv.
POST TO
Publish content to one or more social media platforms.
Single Platform
POST TO INSTAGRAM image, "Check out our new feature! #AI #Automation"
POST TO FACEBOOK image, caption
POST TO LINKEDIN image, caption
POST TO TWITTER image, caption
Multiple Platforms
Post to several platforms simultaneously:
POST TO "instagram,facebook,linkedin" image, caption
The keyword returns a post ID that can be used for metrics retrieval or deletion.
Example: Product Announcement
image = "/products/new-release.jpg"
caption = "Introducing our latest innovation! Available now. #NewProduct #Innovation"
post_id = POST TO "instagram,facebook" image, caption
SET BOT MEMORY "latest_post_id", post_id
TALK "Posted to Instagram and Facebook"
POST TO … AT (Scheduled)
Schedule posts for future publishing at a specific date and time.
POST TO INSTAGRAM AT "2025-02-01 10:00" image, caption
POST TO FACEBOOK AT "2025-02-15 09:00" image, "Coming soon!"
Campaign Scheduling
' Schedule a week of posts
images = LIST "/campaign/week1/"
dates = ["2025-02-03 09:00", "2025-02-04 09:00", "2025-02-05 09:00"]
captions = ["Day 1: kickoff #Campaign", "Day 2: behind the scenes #Campaign", "Day 3: wrap-up #Campaign"]
FOR i = 0 TO LEN(dates) - 1
POST TO "instagram,facebook" AT dates[i] images[i].path, captions[i]
NEXT i
TALK "Campaign scheduled: " + LEN(dates) + " posts"
GET METRICS
Retrieve engagement metrics for published posts.
Platform-Specific Metrics
' Instagram metrics
metrics = GET INSTAGRAM METRICS "post-id"
TALK "Likes: " + metrics.likes + ", Comments: " + metrics.comments
' Facebook metrics
fb_metrics = GET FACEBOOK METRICS "post-id"
TALK "Shares: " + fb_metrics.shares + ", Reactions: " + fb_metrics.reactions
' LinkedIn metrics
li_metrics = GET LINKEDIN METRICS "post-id"
TALK "Impressions: " + li_metrics.impressions
' Twitter metrics
tw_metrics = GET TWITTER METRICS "post-id"
TALK "Retweets: " + tw_metrics.retweets + ", Likes: " + tw_metrics.likes
Metrics Report
SET SCHEDULE "every monday at 9am"
post_id = GET BOT MEMORY "latest_post_id"
metrics = GET INSTAGRAM METRICS post_id
WITH report
.post_id = post_id
.likes = metrics.likes
.comments = metrics.comments
.reach = metrics.reach
.engagement_rate = ROUND((metrics.likes + metrics.comments) / metrics.reach * 100, 2)
.report_date = NOW()
END WITH
SEND MAIL TO "marketing@company.com" SUBJECT "Weekly Social Report" BODY report
GET POSTS
List posts from a platform.
' Get all Instagram posts
posts = GET INSTAGRAM POSTS
FOR EACH post IN posts
TALK post.id + ": " + post.caption
NEXT post
' Get Facebook posts
fb_posts = GET FACEBOOK POSTS
DELETE POST
Remove a scheduled or published post.
DELETE POST "post-id"
TALK "Post removed"
Conditional Deletion
' Delete posts with low engagement
posts = GET INSTAGRAM POSTS
FOR EACH post IN posts
metrics = GET INSTAGRAM METRICS post.id
IF metrics.likes < 10 AND DATEDIFF("day", post.created_at, NOW()) > 30 THEN
DELETE POST post.id
TALK "Deleted low-engagement post: " + post.id
END IF
NEXT post
Campaign Examples
Welcome Campaign
ON FORM SUBMIT "signup"
name = fields.name
email = fields.email
' Welcome email immediately
SEND TEMPLATE "welcome", "email", email, #{name: name}
' Schedule social proof post
IF fields.share_permission = "yes" THEN
caption = "Welcome to our community, " + name + "! 🎉 #NewMember #Community"
POST TO INSTAGRAM AT DATEADD(NOW(), 1, "hour") "/templates/welcome-card.png", caption
END IF
END ON
Social Media Campaign
' social-campaign.bas
SET SCHEDULE "every day at 10am"
' Rotate through content library
content_index = GET BOT MEMORY "content_index"
IF content_index = "" THEN content_index = 0
content_library = [
#{image: "/content/tip1.png", caption: "Pro tip: Automate your workflows! #Productivity"},
#{image: "/content/tip2.png", caption: "Save hours every week with automation #Efficiency"},
#{image: "/content/tip3.png", caption: "Let AI handle the repetitive tasks #AI #Automation"}
]
current = content_library[content_index MOD LEN(content_library)]
post_id = POST TO "instagram,linkedin" current.image, current.caption
SET BOT MEMORY "content_index", content_index + 1
SET BOT MEMORY "last_post_id", post_id
TALK "Posted content #" + (content_index + 1)
Engagement Monitoring
SET SCHEDULE "every 6 hours"
posts = GET INSTAGRAM POSTS
total_engagement = 0
post_count = 0
FOR EACH post IN posts
IF DATEDIFF("day", post.created_at, NOW()) <= 7 THEN
metrics = GET INSTAGRAM METRICS post.id
total_engagement = total_engagement + metrics.likes + metrics.comments
post_count = post_count + 1
END IF
NEXT post
avg_engagement = IIF(post_count > 0, ROUND(total_engagement / post_count, 0), 0)
IF avg_engagement < 50 THEN
SEND MAIL TO "marketing@company.com" SUBJECT "Low Engagement Alert" BODY "Average engagement this week: " + avg_engagement
END IF
Configuration
Add social media credentials to your bot’s config.csv:
key,value
instagram-access-token,your-instagram-token
instagram-account-id,your-account-id
facebook-access-token,your-facebook-token
facebook-page-id,your-page-id
linkedin-access-token,your-linkedin-token
linkedin-organization-id,your-org-id
twitter-api-key,your-api-key
twitter-api-secret,your-api-secret
twitter-access-token,your-access-token
twitter-access-secret,your-access-secret
Best Practices
Schedule posts strategically. Analyze your audience engagement patterns and post when your followers are most active.
Use hashtags effectively. Include relevant hashtags but avoid overloading—3 to 5 well-chosen tags typically perform better than 30 generic ones.
Monitor metrics regularly. Set up scheduled reports to track engagement trends and adjust your content strategy.
Handle rate limits gracefully. Social platforms enforce API rate limits. Space out bulk operations and implement retry logic.
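A minimal sketch of spacing out a bulk posting loop with WAIT (reusing the content_library shape from the campaign example above; the 30-second pause is an illustrative value, not a documented platform limit):
' Space out bulk posts to stay under platform rate limits
FOR EACH item IN content_library
POST TO "instagram,facebook" item.image, item.caption
WAIT 30 ' pause between posts; tune to each platform's limits
NEXT item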
Store post IDs. Save post identifiers in BOT MEMORY for later metrics retrieval or deletion.
post_id = POST TO INSTAGRAM image, caption
SET BOT MEMORY "post_" + FORMAT(NOW(), "yyyyMMdd"), post_id
See Also
- SET SCHEDULE - Automate posting schedules
- Template Variables - Dynamic content in captions
- SEND TEMPLATE - Multi-channel messaging
- GET BOT MEMORY - Store post tracking data
Lead Scoring Keywords
General Bots includes native lead scoring capabilities through BASIC keywords, enabling automated lead qualification, AI-enhanced scoring, and CRM integration directly from conversational flows.
Overview
Lead scoring assigns numeric values to prospects based on their attributes and behaviors. Higher scores indicate greater sales readiness. General Bots provides both rule-based and AI-enhanced scoring approaches.
SCORE LEAD
Calculate a lead score based on profile and behavior data using configurable rules.
Syntax
score = SCORE LEAD lead_data
Example
lead_data = NEW OBJECT
lead_data.email = "john@company.com"
lead_data.name = "John Smith"
lead_data.company = "Acme Corp"
lead_data.job_title = "VP of Engineering"
lead_data.industry = "Technology"
lead_data.company_size = "Enterprise"
score = SCORE LEAD lead_data
TALK "Score: " + score.score
TALK "Grade: " + score.grade
TALK "Status: " + score.status
TALK "Top recommendation: " + score.recommendations[0]
Return Object
The SCORE LEAD keyword returns an object containing:
| Property | Type | Description |
|---|---|---|
| score | Integer | Numeric score (0-100) |
| grade | String | Letter grade (A, B, C, D, F) |
| status | String | hot, warm, cold, or unqualified |
| breakdown | Object | Score components by category |
| recommendations | Array | Suggested next actions |
Score Breakdown
score = SCORE LEAD lead_data
TALK "Demographic score: " + score.breakdown.demographic
TALK "Firmographic score: " + score.breakdown.firmographic
TALK "Behavioral score: " + score.breakdown.behavioral
TALK "Engagement score: " + score.breakdown.engagement
AI SCORE LEAD
Use AI/LLM-enhanced scoring for more nuanced lead evaluation.
score = AI SCORE LEAD lead_data
TALK "AI Score: " + score.score
TALK "Confidence: " + score.breakdown.ai_confidence
TALK "Reasoning: " + score.breakdown.ai_reasoning
AI scoring considers factors that rule-based scoring might miss, such as company news, market conditions, and subtle signals in communication patterns.
When to Use AI Scoring
AI scoring works best for complex B2B scenarios where context matters significantly. Rule-based scoring is faster and sufficient for high-volume B2C leads with clear qualification criteria.
' Use AI for enterprise leads, rules for SMB
IF lead_data.company_size = "Enterprise" THEN
score = AI SCORE LEAD lead_data
ELSE
score = SCORE LEAD lead_data
END IF
GET LEAD SCORE
Retrieve an existing lead score from the database.
score = GET LEAD SCORE "lead-id"
TALK "Current score: " + score.score
TALK "Last updated: " + score.updated_at
QUALIFY LEAD
Check if a lead meets the qualification threshold for sales handoff.
Default Threshold (70)
result = QUALIFY LEAD "lead-id"
IF result.qualified THEN
TALK "Lead is qualified: " + result.status
' Notify sales team
SEND MAIL TO "sales@company.com" SUBJECT "New Qualified Lead" BODY result
ELSE
TALK "Lead needs more nurturing. Score: " + result.score
END IF
Custom Threshold
' Enterprise deals require higher qualification
result = QUALIFY LEAD "lead-id", 85
IF result.qualified THEN
TALK "Enterprise lead qualified for sales"
END IF
Qualification Result
| Property | Type | Description |
|---|---|---|
| qualified | Boolean | Meets threshold |
| score | Integer | Current score |
| threshold | Integer | Applied threshold |
| status | String | Current lead status |
| gap | Integer | Points needed if not qualified |
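The gap property tells a nurture flow exactly how far a lead is from handoff. A short sketch (the messaging is illustrative):
result = QUALIFY LEAD "lead-id"
IF NOT result.qualified THEN
TALK "Lead is " + result.gap + " points short of the " + result.threshold + " threshold"
TALK "Current status: " + result.status
END IF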
UPDATE LEAD SCORE
Manually adjust a lead’s score based on specific actions or behaviors.
Add Points
' Lead attended webinar
new_score = UPDATE LEAD SCORE "lead-id", 10, "Attended product webinar"
TALK "Score updated to: " + new_score.score
Deduct Points
' Lead unsubscribed from newsletter
new_score = UPDATE LEAD SCORE "lead-id", -15, "Unsubscribed from email"
Behavioral Scoring
ON "webinar:attended"
UPDATE LEAD SCORE params.lead_id, 15, "Webinar attendance"
END ON
ON "pricing:viewed"
UPDATE LEAD SCORE params.lead_id, 20, "Viewed pricing page"
END ON
ON "demo:requested"
UPDATE LEAD SCORE params.lead_id, 30, "Requested demo"
END ON
ON "email:bounced"
UPDATE LEAD SCORE params.lead_id, -25, "Email bounced"
END ON
Complete Lead Nurturing Flow
' lead-nurturing.bas
PARAM email AS string
PARAM name AS string
PARAM company AS string
PARAM source AS string
DESCRIPTION "Process and score new leads"
' Build lead profile
WITH lead
.email = email
.name = name
.company = company
.source = source
.created_at = NOW()
END WITH
' Initial scoring
score = SCORE LEAD lead
' Store lead
INSERT "leads", lead
SET BOT MEMORY "lead_" + email + "_score", score.score
' Route based on score
IF score.status = "hot" THEN
' Immediate sales notification
SEND MAIL TO "sales@company.com" SUBJECT "Hot Lead: " + name BODY score
SEND TEMPLATE "hot-lead-welcome", "email", email, #{name: name}
ELSEIF score.status = "warm" THEN
' Schedule nurture sequence
SEND TEMPLATE "welcome", "email", email, #{name: name}
SET SCHEDULE DATEADD(NOW(), 3, "day"), "nurture-day-3.bas"
ELSE
' Cold lead - educational content
SEND TEMPLATE "educational", "email", email, #{name: name}
END IF
TALK "Lead " + name + " processed with score " + score.score + " (" + score.status + ")"
Lead Scoring Configuration
Configure scoring weights in your bot’s config.csv:
key,value
lead-score-job-title-weight,20
lead-score-company-size-weight,15
lead-score-industry-weight,10
lead-score-engagement-weight,25
lead-score-behavioral-weight,30
lead-score-qualification-threshold,70
Title-Based Scoring
| Job Title Pattern | Points |
|---|---|
| C-Level (CEO, CTO, CFO) | 25 |
| VP / Vice President | 20 |
| Director | 15 |
| Manager | 10 |
| Individual Contributor | 5 |
Company Size Scoring
| Company Size | Points |
|---|---|
| Enterprise (1000+) | 20 |
| Mid-Market (100-999) | 15 |
| SMB (10-99) | 10 |
| Small (1-9) | 5 |
Behavioral Actions
| Action | Typical Points |
|---|---|
| Demo request | +30 |
| Pricing page view | +20 |
| Case study download | +15 |
| Webinar attendance | +15 |
| Blog subscription | +10 |
| Email open | +2 |
| Email click | +5 |
| Unsubscribe | -15 |
| Email bounce | -25 |
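Taken together, the tables above suggest how a rule-based score accumulates. The calculation below is illustrative only - SCORE LEAD performs the real computation internally, weighted by the config.csv values shown earlier:
' Illustrative arithmetic: a VP-level enterprise lead with two behaviors
title_points = 20 ' VP / Vice President
company_points = 20 ' Enterprise (1000+)
behavior_points = 30 + 20 ' Demo request + pricing page view
raw_score = title_points + company_points + behavior_points
IF raw_score > 100 THEN raw_score = 100 ' scores are reported on a 0-100 scale
TALK "Illustrative score: " + raw_score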
Scheduled Score Decay
Implement score decay for inactive leads:
' score-decay.bas
SET SCHEDULE "every day at 2am"
' Find leads with no activity in 30 days
stale_leads = FIND "leads", "last_activity < DATEADD(NOW(), -30, 'day') AND score > 20"
FOR EACH lead IN stale_leads
UPDATE LEAD SCORE lead.id, -5, "Inactivity decay"
NEXT lead
TALK "Processed " + LEN(stale_leads) + " stale leads"
Integration with CRM
Push qualified leads to external CRM systems:
result = QUALIFY LEAD lead_id
IF result.qualified THEN
' Push to Salesforce
crm_payload = NEW OBJECT
crm_payload.email = lead.email
crm_payload.name = lead.name
crm_payload.score = result.score
crm_payload.status = "Qualified"
POST "https://api.salesforce.com/leads", crm_payload
' Mark as synced
UPDATE "leads", "id = " + lead_id, #{crm_synced: true, synced_at: NOW()}
END IF
Best Practices
Start with simple rules. Begin with basic demographic and firmographic scoring, then add behavioral triggers as you gather data.
Align scoring with sales. Work with your sales team to define what makes a “qualified” lead. Their input ensures scores reflect actual sales readiness.
Review and adjust regularly. Analyze conversion rates by score range monthly. Adjust weights if high-scoring leads aren’t converting.
Combine rule-based and AI scoring. Use rule-based scoring for speed and consistency, AI scoring for complex enterprise deals requiring nuanced evaluation.
Implement score decay. Leads that go cold should have their scores decrease over time to keep the pipeline accurate.
Track score history. Store score changes with timestamps and reasons for audit trails and analysis.
' Log all score changes
ON "lead:score:changed"
INSERT "score_history", #{
lead_id: params.lead_id,
old_score: params.old_score,
new_score: params.new_score,
reason: params.reason,
changed_at: NOW()
}
END ON
See Also
- SEND TEMPLATE - Nurture campaign emails
- SET SCHEDULE - Automated scoring jobs
- ON Keyword - Event-driven score updates
- GET / POST - CRM integration
HTTP & API Operations
This section covers keywords for making HTTP requests and integrating with external APIs. These keywords enable bots to communicate with REST APIs, GraphQL endpoints, SOAP services, and any HTTP-based web service.
Overview
General Bots provides a complete set of HTTP keywords for API integration:
| Keyword | HTTP Method | Purpose |
|---|---|---|
| GET | GET | Retrieve data from URLs or files |
| POST | POST | Create resources, submit data |
| PUT | PUT | Replace/update entire resources |
| PATCH | PATCH | Partial resource updates |
| DELETE HTTP | DELETE | Remove resources |
| SET HEADER | — | Set request headers |
| GRAPHQL | POST | GraphQL queries and mutations |
| SOAP | POST | SOAP/XML web services |
Quick Examples
REST API Call
' GET request
data = GET "https://api.example.com/users/123"
TALK "User name: " + data.name
' POST request
result = POST "https://api.example.com/users" WITH
name = "John",
email = "john@example.com"
TALK "Created user ID: " + result.id
' PUT request (full update)
PUT "https://api.example.com/users/123" WITH
name = "John Doe",
email = "johndoe@example.com",
status = "active"
' PATCH request (partial update)
PATCH "https://api.example.com/users/123" WITH status = "inactive"
' DELETE request
DELETE HTTP "https://api.example.com/users/123"
With Authentication
' Set authorization header
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "Content-Type", "application/json"
' Make authenticated request
result = GET "https://api.example.com/protected/resource"
' Clear headers when done
SET HEADER "Authorization", ""
GraphQL Query
query = '
query GetUser($id: ID!) {
user(id: $id) {
name
email
orders { id total }
}
}
'
result = GRAPHQL "https://api.example.com/graphql", query WITH id = "123"
TALK "User: " + result.data.user.name
SOAP Service
' Call a SOAP web service
request = '
<GetWeather xmlns="http://weather.example.com">
<City>New York</City>
</GetWeather>
'
result = SOAP "https://weather.example.com/service", "GetWeather", request
TALK "Temperature: " + result.Temperature
Common Patterns
API Client Setup
' Configure API base URL and authentication
api_base = "https://api.myservice.com/v1"
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY "api_token"
SET HEADER "X-API-Version", "2025-01"
' Reusable request patterns built on the base URL
' GET users
users = GET api_base + "/users"
' GET specific user
user = GET api_base + "/users/" + user_id
' CREATE user
new_user = POST api_base + "/users", user_data
' UPDATE user
PUT api_base + "/users/" + user_id, updated_data
' DELETE user
DELETE HTTP api_base + "/users/" + user_id
Error Handling
ON ERROR RESUME NEXT
result = POST "https://api.example.com/orders", order_data
IF ERROR THEN
PRINT "API Error: " + ERROR_MESSAGE
TALK "Sorry, I couldn't process your order. Please try again."
ELSE IF result.error THEN
TALK "Order failed: " + result.error.message
ELSE
TALK "Order placed! ID: " + result.id
END IF
Retry Logic
max_retries = 3
retry_count = 0
success = false
WHILE retry_count < max_retries AND NOT success
ON ERROR RESUME NEXT
result = POST api_url, data
IF NOT ERROR AND NOT result.error THEN
success = true
ELSE
retry_count = retry_count + 1
WAIT 2 ' Wait 2 seconds before retry
END IF
WEND
IF success THEN
TALK "Request successful!"
ELSE
TALK "Request failed after " + max_retries + " attempts."
END IF
Pagination
' Fetch all pages of results
all_items = []
page = 1
has_more = true
WHILE has_more
result = GET api_base + "/items?page=" + page + "&limit=100"
FOR EACH item IN result.items
all_items = APPEND(all_items, item)
NEXT
has_more = result.has_more
page = page + 1
WEND
TALK "Fetched " + LEN(all_items) + " total items"
Request Headers
Common headers you might need to set:
| Header | Purpose | Example |
|---|---|---|
| Authorization | API authentication | Bearer token123 |
| Content-Type | Request body format | application/json |
| Accept | Response format preference | application/json |
| X-API-Key | API key authentication | key_abc123 |
| X-Request-ID | Request tracking | req-uuid-here |
SET HEADER "Authorization", "Bearer " + token
SET HEADER "Content-Type", "application/json"
SET HEADER "Accept", "application/json"
SET HEADER "X-Request-ID", GUID()
Response Handling
JSON Responses
Most APIs return JSON, automatically parsed:
result = GET "https://api.example.com/user"
' Access properties directly
TALK "Name: " + result.name
TALK "Email: " + result.email
' Access nested objects
TALK "City: " + result.address.city
' Access arrays
FOR EACH order IN result.orders
TALK "Order: " + order.id
NEXT
Check Response Status
result = POST api_url, data
IF result.status = 201 THEN
TALK "Resource created!"
ELSE IF result.status = 400 THEN
TALK "Bad request: " + result.error.message
ELSE IF result.status = 401 THEN
TALK "Authentication failed. Please log in again."
ELSE IF result.status = 404 THEN
TALK "Resource not found."
ELSE IF result.status >= 500 THEN
TALK "Server error. Please try again later."
END IF
Configuration
Configure HTTP settings in config.csv:
name,value
http-timeout,30
http-retry-count,3
http-retry-delay,1000
http-base-url,https://api.mycompany.com
http-user-agent,GeneralBots/1.0
http-max-redirects,10
http-verify-ssl,true
Security Best Practices
- Store credentials securely — Use Vault or environment variables for API keys
- Use HTTPS — Never send credentials over unencrypted connections
- Validate responses — Check status codes and handle errors
- Set timeouts — Prevent hanging on slow APIs
- Rate limit — Respect API rate limits to avoid being blocked (see the pacing sketch below)
- Log requests — Enable logging for debugging without exposing secrets
' Good: Token from secure storage
token = GET BOT MEMORY "api_token"
SET HEADER "Authorization", "Bearer " + token
' Bad: Hardcoded token
' SET HEADER "Authorization", "Bearer sk-abc123" ' NEVER DO THIS
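For the rate-limit practice above, a minimal pacing sketch: pause between consecutive calls and back off when the API signals throttling. The WAIT keyword and status check follow earlier examples; the 429 status code and the pause lengths are assumptions about the target API.
' Pace bulk calls to respect rate limits (429 handling assumed)
FOR EACH customer IN customers
result = GET api_base + "/customers/" + customer.id
IF result.status = 429 THEN
WAIT 30 ' Cool down before continuing; check the API's Retry-After guidance
ELSE
WAIT 1 ' Small gap between calls to stay under the limit
END IF
NEXT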
See Also
- GET — Retrieve data
- POST — Create resources
- PUT — Update resources
- PATCH — Partial updates
- DELETE HTTP — Delete resources
- SET HEADER — Set request headers
- GRAPHQL — GraphQL operations
- SOAP — SOAP web services
POST
The POST keyword sends HTTP POST requests to external APIs and web services, enabling bots to create resources, submit data, and integrate with third-party systems.
Syntax
result = POST url, data
result = POST url, data, content_type
POST url, param1, param2, param3, ...
Parameters
| Parameter | Type | Description |
|---|---|---|
| url | String | The target URL endpoint |
| data | String/Object | Request body (JSON string or object) |
| content_type | String | Optional content type (default: application/json) |
| param1, param2, ... | Any | Positional parameters for form-style requests |
Description
POST sends data to a specified URL using the HTTP POST method. This is the primary keyword for:
- Creating new resources in REST APIs
- Submitting form data
- Triggering webhooks
- Sending notifications to external services
- Integrating with third-party platforms
The response is returned as a parsed JSON object when possible, or as a string for other content types.
Examples
Basic JSON POST
' Create a new user via API
data = '{"name": "John Doe", "email": "john@example.com"}'
result = POST "https://api.example.com/users", data
TALK "User created with ID: " + result.id
Using WITH Syntax
' Create order using WITH keyword
result = POST "https://api.store.com/orders" WITH
customer_id = "cust-123",
items = ["item-1", "item-2"],
total = 99.99
TALK "Order " + result.order_id + " placed successfully!"
Form-Style Parameters
' Submit with positional parameters
POST "https://warehouse.internal/api/orders", order_id, items, shipping_address, "express"
With Custom Headers
' Set authorization header first
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "X-Request-ID", request_id
result = POST "https://api.service.com/data", payload
' Clear headers after request
SET HEADER "Authorization", ""
Webhook Integration
' Send Slack notification
POST "https://hooks.slack.com/services/xxx/yyy/zzz" WITH
channel = "#alerts",
text = "New order received: " + order_id,
username = "Order Bot"
Creating Records
' Create a support ticket
result = POST "https://helpdesk.example.com/api/tickets" WITH
title = "Customer inquiry",
description = user_message,
priority = "medium",
customer_email = customer.email
IF result.id THEN
TALK "Ticket #" + result.id + " created. Our team will respond within 24 hours."
ELSE
TALK "Sorry, I couldn't create the ticket. Please try again."
END IF
Handling Responses
Check Response Status
result = POST "https://api.example.com/resource", data
IF result.error THEN
TALK "Error: " + result.error.message
ELSE IF result.id THEN
TALK "Success! Created resource: " + result.id
END IF
Parse Nested Response
result = POST "https://api.payment.com/charge", payment_data
IF result.status = "succeeded" THEN
TALK "Payment of $" + result.amount + " processed!"
TALK "Transaction ID: " + result.transaction_id
ELSE
TALK "Payment failed: " + result.failure_reason
END IF
Common Use Cases
Send Email via API
POST "https://api.mailservice.com/send" WITH
to = customer_email,
subject = "Order Confirmation",
body = "Thank you for your order #" + order_id
Create Calendar Event
result = POST "https://calendar.api.com/events" WITH
title = "Meeting with " + contact_name,
start = meeting_time,
duration = 60,
attendees = [contact_email]
TALK "Meeting scheduled! Calendar invite sent."
Log Analytics Event
' Track user action
POST "https://analytics.example.com/track" WITH
event = "purchase_completed",
user_id = user.id,
order_value = total,
items_count = LEN(cart)
CRM Integration
' Create lead in CRM
result = POST "https://crm.example.com/api/leads" WITH
first_name = first_name,
last_name = last_name,
email = email,
phone = phone,
source = "chatbot",
notes = "Initial inquiry: " + user_query
SET USER MEMORY "crm_lead_id", result.id
Error Handling
ON ERROR RESUME NEXT
result = POST "https://api.example.com/resource", data
IF ERROR THEN
PRINT "POST failed: " + ERROR_MESSAGE
' Try backup endpoint
result = POST "https://backup-api.example.com/resource", data
END IF
IF result.error THEN
TALK "The service returned an error. Please try again later."
ELSE
TALK "Request successful!"
END IF
Content Types
| Content Type | Use Case |
|---|---|
| application/json | Default, most REST APIs |
| application/x-www-form-urlencoded | HTML form submissions |
| multipart/form-data | File uploads (use UPLOAD instead) |
| text/xml | SOAP services (use SOAP instead) |
' Explicit content type
result = POST "https://legacy.api.com/submit", form_data, "application/x-www-form-urlencoded"
Configuration
Timeouts
Configure request timeout in config.csv:
name,value
http-timeout,30
http-retry-count,3
http-retry-delay,1000
Base URL
Set a base URL for all HTTP requests:
name,value
http-base-url,https://api.mycompany.com
Then use relative paths:
result = POST "/users", user_data ' Resolves to https://api.mycompany.com/users
Implementation Notes
- Implemented in Rust under src/web_automation/http.rs
- Uses the reqwest library with an async runtime
- Automatically serializes objects to JSON
- Handles redirects (up to 10 hops)
- Validates SSL certificates by default
- Supports gzip/deflate response compression
Related Keywords
- GET — Retrieve data from URLs
- PUT — Update existing resources
- PATCH — Partial resource updates
- DELETE HTTP — Remove resources
- SET HEADER — Set request headers
- GRAPHQL — GraphQL queries and mutations
Summary
POST is essential for integrating bots with external services. Use it to create resources, submit data, trigger webhooks, and connect to any REST API. Combined with SET HEADER for authentication, it enables powerful integrations with CRMs, payment systems, notification services, and more.
PUT
The PUT keyword sends HTTP PUT requests to external APIs, used for replacing or updating entire resources.
Syntax
result = PUT url, data
PUT url WITH field1 = value1, field2 = value2
Parameters
| Parameter | Type | Description |
|---|---|---|
| url | String | The target URL endpoint |
| data | String | JSON string for request body |
| WITH | Clause | Field-value pairs for the request body |
Description
PUT sends data to a specified URL using the HTTP PUT method. In REST APIs, PUT is used to:
- Replace an entire resource with new data
- Create a resource at a specific URL if it doesn’t exist
- Update all fields of an existing resource
Unlike PATCH which updates partial data, PUT typically replaces the entire resource.
Examples
Basic PUT Request
' Update entire user profile
result = PUT "https://api.example.com/users/123" WITH
name = "John Doe",
email = "john.doe@example.com",
phone = "+1-555-0100",
status = "active"
IF result.success THEN
TALK "Profile updated successfully!"
ELSE
TALK "Update failed: " + result.error
END IF
Replace Configuration
' Replace entire configuration object
result = PUT "https://api.example.com/config/bot-settings" WITH
theme = "dark",
language = "en",
notifications = true,
auto_reply = false
TALK "Configuration saved"
Update Product
' Replace product details
result = PUT "https://api.store.com/products/SKU-001" WITH
name = "Premium Widget",
price = 49.99,
stock = 100,
category = "electronics",
description = "High-quality widget with premium features"
TALK "Product updated: " + result.name
With Authentication
' Set authorization header first
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "Content-Type", "application/json"
' Make authenticated PUT request
result = PUT "https://api.service.com/resources/456" WITH
title = "Updated Title",
content = new_content,
updated_by = user.id
' Clear headers after request
SET HEADER "Authorization", ""
Using JSON String
' PUT with JSON string body
json_body = '{"name": "Updated Name", "status": "published"}'
result = PUT "https://api.example.com/articles/789", json_body
TALK "Article updated!"
PUT vs PATCH vs POST
| Method | Purpose | Body Contains |
|---|---|---|
| POST | Create new resource | New resource data |
| PUT | Replace entire resource | Complete resource data |
| PATCH | Update partial resource | Only changed fields |
' POST - Create new
result = POST "https://api.example.com/users" WITH
name = "New User",
email = "new@example.com"
' Creates user, returns new ID
' PUT - Replace entire resource
result = PUT "https://api.example.com/users/123" WITH
name = "Updated Name",
email = "updated@example.com",
phone = "+1-555-0100"
' All fields required, replaces entire user
' PATCH - Update specific fields
result = PATCH "https://api.example.com/users/123" WITH
phone = "+1-555-0200"
' Only phone is updated, other fields unchanged
Common Use Cases
Update User Settings
' Save all user preferences
result = PUT "https://api.example.com/users/" + user.id + "/settings" WITH
email_notifications = true,
sms_notifications = false,
timezone = "America/New_York",
language = "en"
TALK "Your settings have been saved!"
Replace Document
' Upload new version of document (replaces existing)
document_content = READ "templates/contract.md"
result = PUT "https://api.docs.com/documents/" + doc_id WITH
title = "Service Agreement v2.0",
content = document_content,
version = "2.0",
last_modified = FORMAT(NOW(), "ISO8601")
TALK "Document replaced with new version"
Update Order Status
' Replace order with updated status
result = PUT "https://api.orders.com/orders/" + order_id WITH
customer_id = order.customer_id,
items = order.items,
total = order.total,
status = "shipped",
tracking_number = tracking_id,
shipped_at = FORMAT(NOW(), "ISO8601")
TALK "Order marked as shipped!"
Error Handling
ON ERROR RESUME NEXT
result = PUT "https://api.example.com/resource/123" WITH
field1 = value1,
field2 = value2
IF ERROR THEN
PRINT "PUT request failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't update that information."
ELSE IF result.error THEN
TALK "Update failed: " + result.error.message
ELSE
TALK "Update successful!"
END IF
Common HTTP Status Codes
| Status | Meaning | Action |
|---|---|---|
| 200 | Success, resource updated | Process response |
| 201 | Created (resource didn’t exist) | New resource created |
| 204 | Success, no content returned | Update complete |
| 400 | Bad request | Check request data |
| 401 | Unauthorized | Check authentication |
| 404 | Resource not found | Verify URL/ID |
| 409 | Conflict | Resource was modified |
| 422 | Validation error | Check field values |
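A short sketch of branching on these codes after a PUT, following the status-checking pattern used earlier in this chapter (it assumes the response exposes result.status as in those examples):
result = PUT api_base + "/users/" + user_id, updated_data
IF result.status = 200 OR result.status = 204 THEN
TALK "Profile updated."
ELSE IF result.status = 409 THEN
TALK "Someone else changed this record. Please reload and try again."
ELSE IF result.status = 422 THEN
TALK "Some fields are invalid: " + result.error.message
END IF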
Configuration
Configure HTTP settings in config.csv:
name,value
http-timeout,30
http-retry-count,3
http-retry-delay,1000
Implementation Notes
- Implemented in Rust under src/web_automation/http.rs
- Automatically serializes the WITH clause to JSON
- Supports custom headers via SET HEADER
- Returns parsed JSON response
- Handles redirects (up to 10 hops)
Related Keywords
- GET — Retrieve data from URLs
- POST — Create new resources
- PATCH — Partial resource updates
- DELETE HTTP — Remove resources
- SET HEADER — Set request headers
Summary
PUT replaces entire resources via HTTP PUT requests. Use it when you need to update all fields of a resource or create a resource at a specific URL. For partial updates where you only change specific fields, use PATCH instead. Always include all required fields when using PUT, as missing fields may be set to null or cause errors.
PATCH
The PATCH keyword sends HTTP PATCH requests to external APIs, used for partial updates to existing resources.
Syntax
result = PATCH url, data
PATCH url WITH field1 = value1, field2 = value2
Parameters
| Parameter | Type | Description |
|---|---|---|
| url | String | The target URL endpoint |
| data | String | JSON string for request body |
| WITH | Clause | Field-value pairs for the request body |
Description
PATCH sends partial data to a specified URL using the HTTP PATCH method. In REST APIs, PATCH is used to:
- Update specific fields without affecting others
- Make incremental changes to resources
- Modify only what has changed
Unlike PUT which replaces the entire resource, PATCH only updates the fields you specify.
Examples
Basic PATCH Request
' Update only the user's email
result = PATCH "https://api.example.com/users/123" WITH
email = "new.email@example.com"
IF result.success THEN
TALK "Email updated successfully!"
ELSE
TALK "Update failed: " + result.error
END IF
Update Status Only
' Change order status without modifying other fields
PATCH "https://api.orders.com/orders/" + order_id WITH
status = "shipped"
TALK "Order status updated to shipped"
Update Multiple Fields
' Update several fields at once
result = PATCH "https://api.example.com/products/SKU-001" WITH
price = 39.99,
stock = 150,
on_sale = true
TALK "Product updated: price, stock, and sale status"
With Authentication
' Set authorization header first
SET HEADER "Authorization", "Bearer " + api_token
SET HEADER "Content-Type", "application/json"
' Make authenticated PATCH request
result = PATCH "https://api.service.com/resources/456" WITH
title = "Updated Title"
' Clear headers after request
SET HEADER "Authorization", ""
Using JSON String
' PATCH with JSON string body
json_body = '{"status": "archived", "archived_at": "2025-01-15T10:00:00Z"}'
result = PATCH "https://api.example.com/documents/789", json_body
TALK "Document archived!"
PATCH vs PUT
| Aspect | PATCH | PUT |
|---|---|---|
| Purpose | Update specific fields | Replace entire resource |
| Body Contains | Only changed fields | All resource fields |
| Missing Fields | Unchanged | May be set to null |
| Use When | Changing 1-2 fields | Replacing whole object |
' PATCH - Only update what changed
result = PATCH "https://api.example.com/users/123" WITH
phone = "+1-555-0200"
' Only phone is updated, name/email/etc unchanged
' PUT - Must include all fields
result = PUT "https://api.example.com/users/123" WITH
name = "John Doe",
email = "john@example.com",
phone = "+1-555-0200",
status = "active"
' All fields required, replaces entire user
Common Use Cases
Toggle Feature Flag
' Enable a single feature
PATCH "https://api.example.com/users/" + user.id + "/settings" WITH
dark_mode = true
TALK "Dark mode enabled!"
Update Profile Field
' User wants to change their display name
TALK "What would you like your new display name to be?"
HEAR new_name
result = PATCH "https://api.example.com/users/" + user.id WITH
display_name = new_name
TALK "Your display name is now: " + new_name
Mark as Read
' Mark notification as read
PATCH "https://api.example.com/notifications/" + notification_id WITH
read = true,
read_at = FORMAT(NOW(), "ISO8601")
TALK "Notification marked as read"
Update Progress
' Update task completion percentage
PATCH "https://api.tasks.com/tasks/" + task_id WITH
progress = 75,
last_updated = FORMAT(NOW(), "ISO8601")
TALK "Task progress updated to 75%"
Increment Counter
' Update view count (if API supports increment)
result = PATCH "https://api.content.com/articles/" + article_id WITH
views = current_views + 1
' Or if API has increment syntax
PATCH "https://api.content.com/articles/" + article_id WITH
increment_views = 1
Soft Delete
' Mark record as deleted without removing it
PATCH "https://api.example.com/records/" + record_id WITH
deleted = true,
deleted_at = FORMAT(NOW(), "ISO8601"),
deleted_by = user.id
TALK "Record archived (can be restored if needed)"
Error Handling
ON ERROR RESUME NEXT
result = PATCH "https://api.example.com/resource/123" WITH
status = "updated"
IF ERROR THEN
PRINT "PATCH request failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't update that information."
ELSE IF result.error THEN
TALK "Update failed: " + result.error.message
ELSE
TALK "Update successful!"
END IF
Common HTTP Status Codes
| Status | Meaning | Action |
|---|---|---|
| 200 | Success, updated resource returned | Process response |
| 204 | Success, no content returned | Update complete |
| 400 | Bad request | Check field names/values |
| 401 | Unauthorized | Check authentication |
| 404 | Resource not found | Verify URL/ID |
| 409 | Conflict | Resource was modified by another |
| 422 | Validation error | Check field constraints |
Best Practices
- Update only changed fields — Don’t include unchanged data
- Check response — Verify the update was applied correctly
- Handle conflicts — Be prepared for concurrent modification errors
- Use optimistic locking — Include version/etag if API supports it
' With version checking (if API supports it)
SET HEADER "If-Match", current_etag
result = PATCH "https://api.example.com/resource/123" WITH
field = new_value
IF result.status = 409 THEN
TALK "Someone else modified this. Please refresh and try again."
END IF
Configuration
Configure HTTP settings in config.csv:
name,value
http-timeout,30
http-retry-count,3
http-retry-delay,1000
Implementation Notes
- Implemented in Rust under src/web_automation/http.rs
- Automatically serializes the WITH clause to JSON
- Supports custom headers via SET HEADER
- Returns parsed JSON response
- Content-Type defaults to application/json
Related Keywords
- GET — Retrieve data from URLs
- POST — Create new resources
- PUT — Replace entire resources
- DELETE HTTP — Remove resources
- SET HEADER — Set request headers
Summary
PATCH updates specific fields of a resource via HTTP PATCH requests. Use it when you only need to change one or a few fields without affecting the rest of the resource. This is more efficient than PUT and reduces the risk of accidentally overwriting data. Always specify only the fields that need to change.
DELETE HTTP
Deprecated: The DELETE HTTP syntax is kept for backwards compatibility. Use the unified DELETE keyword instead, which auto-detects HTTP URLs.
Redirect to DELETE
The DELETE keyword now automatically handles HTTP DELETE requests when given a URL:
' Preferred - unified DELETE
DELETE "https://api.example.com/resource/123"
' Also works (backwards compatibility)
DELETE HTTP "https://api.example.com/resource/123"
See Also
- DELETE — Unified delete keyword (recommended)
The unified DELETE keyword automatically detects:
- HTTP URLs → HTTP DELETE request
- Table + filter → Database delete
- File path → File delete
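As a sketch of the three modes side by side: the HTTP form is documented below, while the exact table-filter and file-path argument shapes shown here are assumptions based on the keyword's description, so check the DELETE reference before relying on them.
' HTTP URL → HTTP DELETE request
DELETE "https://api.example.com/resource/123"
' Table + filter → database delete (argument shape assumed)
DELETE "customers", "status = 'inactive'"
' File path → file delete (path form assumed)
DELETE "tmp/old-report.pdf"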
Quick Example
' Set authentication header
SET HEADER "Authorization", "Bearer " + api_token
' Delete resource via API
DELETE "https://api.example.com/users/456"
' Clear headers
CLEAR HEADERS
TALK "User deleted"
Migration
Replace this:
DELETE HTTP "https://api.example.com/resource/123"
With this:
DELETE "https://api.example.com/resource/123"
Both work, but the unified DELETE is cleaner and more intuitive.
SET HEADER
The SET HEADER keyword configures HTTP request headers for subsequent API calls, enabling authentication, content type specification, and custom headers.
Syntax
SET HEADER "header-name", "value"
SET HEADER "header-name", ""
Parameters
| Parameter | Type | Description |
|---|---|---|
| header-name | String | The HTTP header name (e.g., “Authorization”) |
| value | String | The header value (empty string to clear) |
Description
SET HEADER configures headers that will be sent with subsequent HTTP requests (GET, POST, PUT, PATCH, DELETE HTTP). Headers persist until explicitly cleared or the script ends.
Common uses include:
- Setting authentication tokens
- Specifying content types
- Adding API keys
- Setting custom request identifiers
- Configuring accept headers
Examples
Basic Authentication Header
' Set Bearer token for API authentication
SET HEADER "Authorization", "Bearer " + api_token
' Make authenticated request
result = GET "https://api.example.com/protected/resource"
' Clear header when done
SET HEADER "Authorization", ""
API Key Header
' Set API key in custom header
SET HEADER "X-API-Key", api_key
result = POST "https://api.service.com/data" WITH
query = user_query
SET HEADER "X-API-Key", ""
Multiple Headers
' Set multiple headers for a request
SET HEADER "Authorization", "Bearer " + token
SET HEADER "Content-Type", "application/json"
SET HEADER "Accept", "application/json"
SET HEADER "X-Request-ID", request_id
result = POST "https://api.example.com/orders" WITH
product_id = "SKU-001",
quantity = 5
' Clear all headers
SET HEADER "Authorization", ""
SET HEADER "Content-Type", ""
SET HEADER "Accept", ""
SET HEADER "X-Request-ID", ""
Content Type for Form Data
' Set content type for form submission
SET HEADER "Content-Type", "application/x-www-form-urlencoded"
result = POST "https://api.legacy.com/submit", form_data
SET HEADER "Content-Type", ""
Common Headers
| Header | Purpose | Example Value |
|---|---|---|
| Authorization | Authentication | Bearer token123 |
| Content-Type | Request body format | application/json |
| Accept | Expected response format | application/json |
| X-API-Key | API key authentication | key_abc123 |
| X-Request-ID | Request tracking/correlation | req-uuid-here |
| User-Agent | Client identification | MyBot/1.0 |
| Accept-Language | Preferred language | en-US |
| If-Match | Conditional update (ETag) | "abc123" |
| If-None-Match | Conditional fetch | "abc123" |
Authentication Patterns
Bearer Token (OAuth2/JWT)
' Most common for modern APIs
SET HEADER "Authorization", "Bearer " + access_token
result = GET "https://api.service.com/user/profile"
SET HEADER "Authorization", ""
Basic Authentication
' Encode credentials as Base64
credentials = BASE64_ENCODE(username + ":" + password)
SET HEADER "Authorization", "Basic " + credentials
result = GET "https://api.legacy.com/data"
SET HEADER "Authorization", ""
API Key in Header
' API key as custom header
SET HEADER "X-API-Key", api_key
' Or in Authorization header
SET HEADER "Authorization", "Api-Key " + api_key
result = POST "https://api.provider.com/query" WITH
question = user_input
Custom Token
' Some APIs use custom authentication schemes
SET HEADER "X-Auth-Token", auth_token
SET HEADER "X-Client-ID", client_id
result = GET "https://api.custom.com/resources"
Common Use Cases
Authenticated API Call
' Complete authenticated API interaction
SET HEADER "Authorization", "Bearer " + GET BOT MEMORY "api_token"
SET HEADER "Content-Type", "application/json"
result = POST "https://api.crm.com/leads" WITH
name = customer_name,
email = customer_email,
source = "chatbot"
IF result.id THEN
TALK "Lead created: " + result.id
ELSE
TALK "Error creating lead: " + result.error
END IF
' Always clean up
SET HEADER "Authorization", ""
SET HEADER "Content-Type", ""
Request Tracing
' Add request ID for debugging/tracing
request_id = GUID()
SET HEADER "X-Request-ID", request_id
SET HEADER "X-Correlation-ID", session.id
PRINT "Request ID: " + request_id
result = POST "https://api.example.com/process" WITH
data = payload
SET HEADER "X-Request-ID", ""
SET HEADER "X-Correlation-ID", ""
Conditional Requests
' Only fetch if resource changed (using ETag)
SET HEADER "If-None-Match", cached_etag
result = GET "https://api.example.com/data"
IF result.status = 304 THEN
TALK "Data unchanged, using cached version"
ELSE
' Process new data
cached_data = result.data
cached_etag = result.headers.etag
END IF
SET HEADER "If-None-Match", ""
Header Persistence
Headers persist across multiple requests until cleared:
' Set header once
SET HEADER "Authorization", "Bearer " + token
' Used in all these requests
result1 = GET "https://api.example.com/users"
result2 = GET "https://api.example.com/orders"
result3 = POST "https://api.example.com/actions" WITH action = "process"
' Clear when done with authenticated calls
SET HEADER "Authorization", ""
Best Practices
- Always clear sensitive headers — Remove authentication headers after use
- Use Vault for tokens — Never hardcode API keys or tokens
- Set Content-Type when needed — JSON is usually the default
- Add request IDs — Helps with debugging and support requests
- Check API documentation — Header names and formats vary by API
' Good practice pattern
' 1. Get token from secure storage
token = GET BOT MEMORY "api_token"
' 2. Set headers
SET HEADER "Authorization", "Bearer " + token
SET HEADER "X-Request-ID", GUID()
' 3. Make request
result = GET api_url
' 4. Clear sensitive headers
SET HEADER "Authorization", ""
SET HEADER "X-Request-ID", ""
Error Handling
ON ERROR RESUME NEXT
' Token might be expired
SET HEADER "Authorization", "Bearer " + old_token
result = GET "https://api.example.com/protected"
IF result.status = 401 THEN
' Token expired, refresh it
TALK "Refreshing authentication..."
new_token = REFRESH_TOKEN(refresh_token)
SET BOT MEMORY "api_token", new_token
SET HEADER "Authorization", "Bearer " + new_token
result = GET "https://api.example.com/protected"
END IF
SET HEADER "Authorization", ""
Configuration
HTTP defaults can be set in config.csv:
name,value
http-timeout,30
http-default-content-type,application/json
http-user-agent,GeneralBots/6.1.0
Implementation Notes
- Implemented in Rust under src/web_automation/http.rs
- Headers are stored in thread-local storage
- Case-insensitive header names (HTTP standard)
- Special characters in values are properly escaped
- Empty string clears the header
Related Keywords
- GET — Retrieve data from URLs
- POST — Create new resources
- PUT — Replace entire resources
- PATCH — Partial resource updates
- DELETE HTTP — Remove resources
- GRAPHQL — GraphQL operations
Summary
SET HEADER configures HTTP headers for API requests. Use it to add authentication tokens, specify content types, and include custom headers. Always clear sensitive headers after use and store credentials securely in Vault rather than hardcoding them. Headers persist until explicitly cleared, so you can set them once for multiple related requests.
GRAPHQL
The GRAPHQL keyword executes GraphQL queries and mutations against external APIs, enabling bots to interact with modern GraphQL-based services.
Syntax
result = GRAPHQL url, query
result = GRAPHQL url, query WITH variables
Parameters
| Parameter | Type | Description |
|---|---|---|
| url | String | The GraphQL endpoint URL |
| query | String | The GraphQL query or mutation |
| WITH | Clause | Optional variables for the query |
Description
GRAPHQL sends queries and mutations to GraphQL APIs. GraphQL allows you to request exactly the data you need in a single request, making it efficient for complex data fetching. The keyword handles query formatting, variable substitution, and response parsing.
Use cases include:
- Fetching specific fields from APIs
- Creating, updating, or deleting data via mutations
- Querying nested relationships in one request
- Interacting with modern API platforms
Examples
Basic Query
' Simple query without variables
query = '
query {
users {
id
name
email
}
}
'
result = GRAPHQL "https://api.example.com/graphql", query
FOR EACH user IN result.data.users
TALK user.name + ": " + user.email
NEXT
Query with Variables
' Query with variables
query = '
query GetUser($id: ID!) {
user(id: $id) {
id
name
email
orders {
id
total
status
}
}
}
'
result = GRAPHQL "https://api.example.com/graphql", query WITH id = user_id
TALK "User: " + result.data.user.name
TALK "Orders: " + LEN(result.data.user.orders)
Mutation
' Create a new record
mutation = '
mutation CreateUser($name: String!, $email: String!) {
createUser(input: {name: $name, email: $email}) {
id
name
email
createdAt
}
}
'
result = GRAPHQL "https://api.example.com/graphql", mutation WITH
name = user_name,
email = user_email
TALK "User created with ID: " + result.data.createUser.id
With Authentication
' Set authorization header for GraphQL
SET HEADER "Authorization", "Bearer " + api_token
query = '
query {
me {
id
name
role
}
}
'
result = GRAPHQL "https://api.example.com/graphql", query
SET HEADER "Authorization", ""
TALK "Logged in as: " + result.data.me.name
Common Use Cases
Fetch User Profile
' Get detailed user profile
query = '
query GetProfile($userId: ID!) {
user(id: $userId) {
id
name
email
avatar
settings {
theme
language
notifications
}
recentActivity {
action
timestamp
}
}
}
'
result = GRAPHQL api_url, query WITH userId = user.id
profile = result.data.user
TALK "Welcome back, " + profile.name + "!"
TALK "Theme: " + profile.settings.theme
Search Products
' Search with filters
query = '
query SearchProducts($term: String!, $category: String, $limit: Int) {
products(search: $term, category: $category, first: $limit) {
edges {
node {
id
name
price
inStock
}
}
totalCount
}
}
'
result = GRAPHQL "https://api.store.com/graphql", query WITH
term = search_term,
category = selected_category,
limit = 10
products = result.data.products.edges
TALK "Found " + result.data.products.totalCount + " products:"
FOR EACH edge IN products
product = edge.node
TALK "- " + product.name + ": $" + product.price
NEXT
Create Order
' Create order mutation
mutation = '
mutation CreateOrder($input: OrderInput!) {
createOrder(input: $input) {
id
orderNumber
total
status
estimatedDelivery
}
}
'
result = GRAPHQL "https://api.store.com/graphql", mutation WITH
input = '{"customerId": "' + customer_id + '", "items": ' + cart_items + '}'
order = result.data.createOrder
TALK "Order #" + order.orderNumber + " placed!"
TALK "Total: $" + order.total
TALK "Estimated delivery: " + order.estimatedDelivery
Update Record
' Update mutation
mutation = '
mutation UpdateUser($id: ID!, $input: UserUpdateInput!) {
updateUser(id: $id, input: $input) {
id
name
email
updatedAt
}
}
'
result = GRAPHQL api_url, mutation WITH
id = user.id,
input = '{"name": "' + new_name + '", "email": "' + new_email + '"}'
TALK "Profile updated!"
Delete Record
' Delete mutation
mutation = '
mutation DeleteItem($id: ID!) {
deleteItem(id: $id) {
success
message
}
}
'
result = GRAPHQL api_url, mutation WITH id = item_id
IF result.data.deleteItem.success THEN
TALK "Item deleted successfully"
ELSE
TALK "Delete failed: " + result.data.deleteItem.message
END IF
Error Handling
ON ERROR RESUME NEXT
result = GRAPHQL api_url, query WITH id = resource_id
IF ERROR THEN
PRINT "GraphQL request failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't fetch that data. Please try again."
ELSE IF result.errors THEN
' GraphQL returned errors
FOR EACH err IN result.errors
PRINT "GraphQL error: " + err.message
NEXT
TALK "The request encountered an error: " + result.errors[0].message
ELSE
' Success
TALK "Data retrieved successfully!"
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
| VALIDATION_ERROR | Invalid query syntax | Check query format |
| NOT_FOUND | Resource doesn’t exist | Verify ID/parameters |
| UNAUTHORIZED | Missing/invalid auth | Check authentication |
| FORBIDDEN | Insufficient permissions | Verify access rights |
| VARIABLE_REQUIRED | Missing required variable | Provide all variables |
GraphQL vs REST
| Aspect | GraphQL | REST |
|---|---|---|
| Data fetching | Request exact fields | Fixed response structure |
| Multiple resources | Single request | Multiple requests |
| Versioning | Evolving schema | API versions (v1, v2) |
| Use case | Complex nested data | Simple CRUD operations |
' GraphQL - One request for nested data
query = '
query {
user(id: "123") {
name
orders {
items {
product { name }
}
}
}
}
'
result = GRAPHQL url, query
' REST equivalent would need multiple calls:
' GET /users/123
' GET /users/123/orders
' GET /orders/{id}/items for each order
' GET /products/{id} for each item
Query Building Tips
Request Only What You Need
' Good - request specific fields
query = '
query {
user(id: "123") {
name
email
}
}
'
' Avoid - requesting everything
' query {
' user(id: "123") {
' id name email phone address avatar settings ...
' }
' }
Use Fragments for Reusable Fields
query = '
fragment UserFields on User {
id
name
email
}
query {
user(id: "123") {
...UserFields
}
users {
...UserFields
}
}
'
Configuration
Configure HTTP settings in config.csv:
name,value
http-timeout,30
http-retry-count,3
API keys are stored in Vault:
vault kv put gbo/graphql/example api_key="your-api-key"
Implementation Notes
- Implemented in Rust under src/web_automation/graphql.rs
- Sends POST requests with application/json content type
- Automatically formats query and variables
- Parses JSON response into accessible objects
- Supports custom headers via SET HEADER
- Handles both queries and mutations
Related Keywords
- POST — REST POST requests
- GET — REST GET requests
- SET HEADER — Set authentication headers
- SOAP — SOAP/XML web services
Summary
GRAPHQL executes queries and mutations against GraphQL APIs. Use it when you need precise control over the data you fetch, especially for nested relationships. GraphQL is more efficient than REST for complex data needs, requiring fewer round trips. Always handle both network errors and GraphQL-specific errors in the response.
SOAP
The SOAP keyword enables bots to communicate with legacy SOAP/XML web services, allowing integration with enterprise systems, government APIs, and older corporate infrastructure that still relies on SOAP protocols.
Syntax
result = SOAP "wsdl_url", "operation", params
Parameters
| Parameter | Type | Description |
|---|---|---|
| wsdl_url | String | URL to the WSDL file or SOAP endpoint |
| operation | String | Name of the SOAP operation to call |
| params | Object | Parameters to pass to the operation |
Description
SOAP sends a SOAP (Simple Object Access Protocol) request to a web service, automatically building the XML envelope and parsing the response. This enables integration with legacy enterprise systems that haven’t migrated to REST APIs.
Use cases include:
- Connecting to government tax and fiscal systems
- Integrating with legacy ERP systems (SAP, Oracle)
- Communicating with banking and payment systems
- Accessing healthcare HL7/SOAP interfaces
- Interfacing with older CRM systems
Examples
Basic SOAP Request
' Call a simple SOAP service
result = SOAP "https://api.example.com/service?wsdl", "GetUserInfo", #{
"userId": "12345"
}
TALK "User name: " + result.name
Tax Calculation Service
' Brazilian NF-e fiscal service example
nfe_params = #{
"CNPJ": company_cnpj,
"InvoiceNumber": invoice_number,
"Items": invoice_items,
"TotalValue": total_value
}
result = SOAP "https://nfe.fazenda.gov.br/NFeAutorizacao4/NFeAutorizacao4.asmx?wsdl",
"NfeAutorizacao",
nfe_params
IF result.status = "Authorized" THEN
TALK "Invoice authorized! Protocol: " + result.protocol
ELSE
TALK "Authorization failed: " + result.errorMessage
END IF
Currency Exchange Service
' Get exchange rates from central bank
params = #{
"fromCurrency": "USD",
"toCurrency": "BRL",
"date": FORMAT(NOW(), "YYYY-MM-DD")
}
result = SOAP "https://www.bcb.gov.br/webservice/cotacao.asmx?wsdl",
"GetCotacao",
params
rate = result.cotacao.valor
TALK "Today's USD/BRL rate: " + rate
Weather Service (Legacy)
' Access legacy weather SOAP service
weather_params = #{
"city": city_name,
"country": "BR"
}
result = SOAP "https://weather.example.com/service.asmx?wsdl",
"GetWeather",
weather_params
TALK "Weather in " + city_name + ": " + result.description
TALK "Temperature: " + result.temperature + "°C"
SAP Integration
' Query SAP for material information
sap_params = #{
"MaterialNumber": material_code,
"Plant": "1000"
}
result = SOAP "https://sap.company.com:8443/sap/bc/srt/wsdl/MATERIAL_INFO?wsdl",
"GetMaterialDetails",
sap_params
material = result.MaterialData
TALK "Material: " + material.Description
TALK "Stock: " + material.AvailableStock + " units"
TALK "Price: $" + material.StandardPrice
Working with Complex Types
Nested Objects
' SOAP request with nested structure
customer_data = #{
"Customer": #{
"Name": customer_name,
"Address": #{
"Street": street,
"City": city,
"ZipCode": zipcode,
"Country": "BR"
},
"Contact": #{
"Email": email,
"Phone": phone
}
}
}
result = SOAP "https://crm.company.com/CustomerService.asmx?wsdl",
"CreateCustomer",
customer_data
TALK "Customer created with ID: " + result.CustomerId
Array Parameters
' Send multiple items in SOAP request
order_items = [
#{ "SKU": "PROD-001", "Quantity": 2, "Price": 99.99 },
#{ "SKU": "PROD-002", "Quantity": 1, "Price": 49.99 },
#{ "SKU": "PROD-003", "Quantity": 5, "Price": 19.99 }
]
order_params = #{
"OrderHeader": #{
"CustomerId": customer_id,
"OrderDate": FORMAT(NOW(), "YYYY-MM-DD")
},
"OrderItems": order_items
}
result = SOAP "https://erp.company.com/OrderService?wsdl",
"CreateOrder",
order_params
TALK "Order " + result.OrderNumber + " created successfully!"
Response Handling
Parsing Complex Responses
' Handle structured SOAP response
result = SOAP "https://api.example.com/InvoiceService?wsdl",
"GetInvoices",
#{ "CustomerId": customer_id, "Year": 2024 }
' Access nested response data
FOR EACH invoice IN result.Invoices.Invoice
TALK "Invoice #" + invoice.Number + " - $" + invoice.Total
TALK " Date: " + invoice.Date
TALK " Status: " + invoice.Status
NEXT
Checking Response Status
result = SOAP service_url, operation, params
IF result.ResponseCode = "0" OR result.Success = true THEN
TALK "Operation completed successfully"
' Process result data
ELSE
TALK "Operation failed: " + result.ErrorMessage
END IF
Error Handling
ON ERROR RESUME NEXT
result = SOAP "https://legacy.system.com/service.asmx?wsdl",
"ProcessPayment",
payment_params
IF ERROR THEN
error_msg = ERROR_MESSAGE
IF INSTR(error_msg, "timeout") > 0 THEN
TALK "The service is taking too long. Please try again."
ELSE IF INSTR(error_msg, "WSDL") > 0 THEN
TALK "Cannot connect to the service. It may be down."
ELSE IF INSTR(error_msg, "authentication") > 0 THEN
TALK "Authentication failed. Please check credentials."
ELSE
TALK "Service error: " + error_msg
END IF
ELSE
IF result.TransactionId THEN
TALK "Payment processed! Transaction: " + result.TransactionId
END IF
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
| WSDL_PARSE_ERROR | Invalid WSDL format | Verify WSDL URL and format |
| SOAP_FAULT | Service returned fault | Check error message from service |
| TIMEOUT | Request took too long | Increase timeout or retry |
| CONNECTION_ERROR | Cannot reach service | Check network and URL |
| AUTHENTICATION_ERROR | Invalid credentials | Verify authentication headers |
Authentication
SOAP services commonly use several authentication methods. General Bots supports all major approaches.
Basic Authentication
The simplest form of authentication, sending username and password with each request:
' Basic HTTP authentication
SET HEADER "Authorization", "Basic " + BASE64(username + ":" + password)
result = SOAP service_url, operation, params
CLEAR HEADERS
WS-Security (Username Token)
WS-Security adds security tokens directly to the SOAP envelope. Configure in config.csv:
name,value
soap-wsse-enabled,true
soap-wsse-username,your_username
soap-wsse-password,your_password
soap-wsse-password-type,PasswordDigest
Password Types:
- PasswordText - Password sent in plain text (use only with HTTPS)
- PasswordDigest - Password hashed with nonce and timestamp (recommended)
Usage:
' WS-Security is applied automatically when configured
result = SOAP "https://secure.service.com/api?wsdl", "SecureOperation", params
' The SOAP envelope will include:
' <wsse:Security>
' <wsse:UsernameToken>
' <wsse:Username>your_username</wsse:Username>
' <wsse:Password Type="...">hashed_password</wsse:Password>
' <wsse:Nonce>...</wsse:Nonce>
' <wsu:Created>...</wsu:Created>
' </wsse:UsernameToken>
' </wsse:Security>
WS-Security with Timestamp
Add timestamp validation to prevent replay attacks:
name,value
soap-wsse-enabled,true
soap-wsse-username,your_username
soap-wsse-password,your_password
soap-wsse-timestamp,true
soap-wsse-timestamp-ttl,300
The timestamp-ttl sets validity in seconds (default: 300 = 5 minutes).
Certificate-Based Authentication (Mutual TLS)
For services requiring client certificates:
name,value
soap-client-cert,/path/to/client.pem
soap-client-key,/path/to/client.key
soap-client-key-password,optional_key_password
soap-ca-cert,/path/to/ca.pem
soap-verify-ssl,true
Certificate Formats Supported:
- PEM (.pem, .crt, .cer)
- PKCS#12 (.p12, .pfx): set soap-client-cert-type,p12
Example for Brazilian NFe:
name,value
soap-client-cert,/certs/certificado_a1.pfx
soap-client-cert-type,p12
soap-client-key-password,cert_password
soap-ca-cert,/certs/cadeia_nfe.pem
OAuth 2.0 Authentication
For modern SOAP services that support OAuth:
name,value
soap-oauth-enabled,true
soap-oauth-token-url,https://auth.service.com/oauth/token
soap-oauth-client-id,your_client_id
soap-oauth-client-secret,your_client_secret
soap-oauth-scope,soap_api
Or provide token directly:
' Get OAuth token first
token_response = POST "https://auth.service.com/oauth/token", #{
"grant_type": "client_credentials",
"client_id": client_id,
"client_secret": client_secret
}
' Use token for SOAP call
SET HEADER "Authorization", "Bearer " + token_response.access_token
result = SOAP service_url, operation, params
CLEAR HEADERS
API Key Authentication
Some SOAP services use API keys:
' API key in header
SET HEADER "X-API-Key", api_key
result = SOAP service_url, operation, params
CLEAR HEADERS
Or configure in config.csv:
name,value
soap-api-key,your_api_key
soap-api-key-header,X-API-Key
SAML Token Authentication
For enterprise SSO with SAML:
name,value
soap-saml-enabled,true
soap-saml-assertion-url,https://idp.company.com/saml/assertion
soap-saml-issuer,https://your-bot.example.com
Custom SOAP Headers
For services requiring custom security headers:
' Add custom SOAP header
SET HEADER "SOAPAction", "urn:processPayment"
SET HEADER "X-Custom-Auth", custom_auth_value
result = SOAP service_url, operation, params
CLEAR HEADERS
Authentication Examples by Industry
Government/Fiscal Services (NFe, NFS-e)
name,value
soap-client-cert,/certs/e-cnpj-a1.pfx
soap-client-cert-type,p12
soap-client-key-password,certificate_password
soap-ca-cert,/certs/ac-raiz.pem
soap-wsse-enabled,false
Banking/Financial Services
name,value
soap-wsse-enabled,true
soap-wsse-username,bank_user
soap-wsse-password,bank_password
soap-wsse-password-type,PasswordDigest
soap-wsse-timestamp,true
soap-client-cert,/certs/bank-client.pem
soap-client-key,/certs/bank-client.key
Healthcare (HL7/SOAP)
name,value
soap-wsse-enabled,true
soap-wsse-username,hl7_system_user
soap-wsse-password,hl7_system_password
soap-timeout,60
Legacy ERP (SAP, Oracle)
name,value
soap-auth-type,basic
soap-username,erp_integration_user
soap-password,erp_integration_password
soap-timeout,120
Configuration Reference
| Parameter | Description | Default |
|---|---|---|
| soap-timeout | Request timeout in seconds | 120 |
| soap-verify-ssl | Verify SSL certificates | true |
| soap-wsse-enabled | Enable WS-Security | false |
| soap-wsse-username | WS-Security username | Not set |
| soap-wsse-password | WS-Security password | Not set |
| soap-wsse-password-type | PasswordText or PasswordDigest | PasswordDigest |
| soap-wsse-timestamp | Include timestamp | false |
| soap-wsse-timestamp-ttl | Timestamp validity (seconds) | 300 |
| soap-client-cert | Path to client certificate | Not set |
| soap-client-key | Path to client private key | Not set |
| soap-client-key-password | Password for private key | Not set |
| soap-client-cert-type | Certificate type (pem, p12) | pem |
| soap-ca-cert | Path to CA certificate | Not set |
| soap-oauth-enabled | Enable OAuth authentication | false |
| soap-api-key | API key value | Not set |
| soap-api-key-header | Header name for API key | X-API-Key |
Practical Examples
Brazilian NFe (Electronic Invoice)
' Emit electronic invoice to Brazilian tax authority
nfe_data = #{
"infNFe": #{
"ide": #{
"cUF": "35",
"natOp": "VENDA",
"serie": "1",
"nNF": invoice_number
},
"emit": #{
"CNPJ": company_cnpj,
"xNome": company_name
},
"dest": #{
"CNPJ": customer_cnpj,
"xNome": customer_name
},
"det": invoice_items,
"total": #{
"vNF": total_value
}
}
}
result = SOAP "https://nfe.fazenda.sp.gov.br/ws/NFeAutorizacao4.asmx?wsdl",
"nfeAutorizacaoLote",
nfe_data
IF result.cStat = "100" THEN
TALK "NFe authorized! Key: " + result.chNFe
ELSE
TALK "Error: " + result.xMotivo
END IF
Healthcare HL7/SOAP
' Query patient information from healthcare system
patient_query = #{
"PatientId": patient_id,
"IncludeHistory": true
}
result = SOAP "https://hospital.example.com/PatientService?wsdl",
"GetPatientRecord",
patient_query
TALK "Patient: " + result.Patient.Name
TALK "DOB: " + result.Patient.DateOfBirth
TALK "Allergies: " + JOIN(result.Patient.Allergies, ", ")
Legacy CRM Integration
' Update customer in legacy Siebel CRM
update_data = #{
"AccountId": account_id,
"AccountName": new_name,
"PrimaryContact": #{
"FirstName": first_name,
"LastName": last_name,
"Email": email
},
"UpdatedBy": bot_user
}
result = SOAP "https://siebel.company.com/eai_enu/start.swe?SWEExtSource=WebService&wsdl",
"AccountUpdate",
update_data
TALK "CRM updated. Transaction ID: " + result.TransactionId
SOAP vs REST
| Aspect | SOAP | REST |
|---|---|---|
| Protocol | XML-based | JSON typically |
| Standards | WS-Security, WS-*, WSDL | OpenAPI, OAuth |
| Use Case | Enterprise, legacy | Modern APIs |
| Keyword | SOAP | POST, GET |
| Complexity | Higher | Lower |
When to use SOAP:
- Integrating with legacy enterprise systems
- Government/fiscal APIs requiring SOAP
- Systems with strict WS-Security requirements
- Banking and financial services
- Healthcare systems (HL7 SOAP)
Configuration
Beyond the authentication and timeout settings listed above, no additional configuration is required; the keyword builds the SOAP envelope automatically. Custom SOAP headers and namespaces are inferred from the WSDL.
Implementation Notes
- Implemented in Rust under src/basic/keywords/http_operations.rs
- Automatically fetches and parses WSDL
- Builds SOAP envelope from parameters
- Parses XML response into JSON-like object
- Timeout: 120 seconds by default
- Supports SOAP 1.1 and 1.2
Related Keywords
- POST — For REST API calls
- GET — For REST GET requests
- GRAPHQL — For GraphQL APIs
- SET HEADER — Set authentication headers
Summary
SOAP enables integration with legacy SOAP/XML web services that are still common in enterprise, government, and healthcare sectors. While REST is preferred for modern APIs, SOAP remains essential for connecting to fiscal systems (NFe, tax services), legacy ERPs (SAP, Oracle), and older enterprise infrastructure. The keyword handles XML envelope construction and parsing automatically, making SOAP integration as simple as REST calls.
Data Operations
This section covers keywords for working with structured data in databases, spreadsheets, and in-memory collections. These keywords enable bots to query, transform, and persist data across various storage backends.
Overview
General Bots provides a complete set of data operation keywords:
| Keyword | Purpose |
|---|---|
| SAVE | Persist data to storage |
| INSERT | Add new records to tables |
| UPDATE | Modify existing records |
| DELETE | Remove records from tables |
| MERGE | Upsert (insert or update) records |
| FILL | Populate templates with data |
| MAP | Transform collections |
| FILTER | Select matching items |
| AGGREGATE | Sum, count, average operations |
| JOIN | Combine related datasets |
| PIVOT | Reshape data tables |
| GROUP BY | Group records by field |
Quick Examples
Database Operations
' Insert a new record
INSERT INTO "customers" WITH
name = "John Doe",
email = "john@example.com",
created_at = NOW()
' Update existing records
UPDATE "customers" SET status = "active" WHERE email = "john@example.com"
' Delete records
DELETE FROM "customers" WHERE status = "inactive" AND last_login < "2024-01-01"
' Merge (upsert) - insert or update based on key
MERGE INTO "products" ON sku = "SKU-001" WITH
sku = "SKU-001",
name = "Widget",
price = 29.99,
stock = 100
Collection Transformations
' Map - transform each item
prices = [10, 20, 30, 40]
with_tax = MAP prices WITH item * 1.1
' Result: [11, 22, 33, 44]
' Filter - select matching items
orders = FIND "orders"
large_orders = FILTER orders WHERE total > 100
' Returns only orders with total > 100
' Aggregate - calculate summaries
total_sales = AGGREGATE orders SUM total
order_count = AGGREGATE orders COUNT
avg_order = AGGREGATE orders AVERAGE total
Data Analysis
' Group by category
sales_by_category = GROUP BY "sales" ON category
FOR EACH group IN sales_by_category
TALK group.category + ": $" + group.total
NEXT
' Join related tables
order_details = JOIN "orders" WITH "customers" ON customer_id = id
FOR EACH detail IN order_details
TALK detail.customer_name + " ordered " + detail.product
NEXT
' Pivot data for reports
monthly_pivot = PIVOT "sales" ROWS month COLUMNS product VALUES SUM(amount)
Data Sources
Supported Backends
| Backend | Use Case | Configuration |
|---|---|---|
| PostgreSQL | Primary database | database-url in config.csv |
| SQLite | Local/embedded | database-provider,sqlite |
| In-memory | Temporary data | Default for collections |
| CSV files | Import/export | Via READ/WRITE AS TABLE |
| Excel | Spreadsheet data | Via READ AS TABLE |
Connection Configuration
name,value
database-provider,postgres
database-url,postgres://user:pass@localhost/botdb
database-pool-size,10
database-timeout,30
Multiple Connections
' Use default connection
customers = FIND "customers"
' Use named connection
legacy_data = FIND "orders" ON "legacy_db"
warehouse_stock = FIND "inventory" ON "warehouse_db"
Common Patterns
CRUD Operations
' CREATE
customer_id = INSERT INTO "customers" WITH
name = customer_name,
email = customer_email,
phone = customer_phone
TALK "Customer created with ID: " + customer_id
' READ
customer = FIND "customers" WHERE id = customer_id
TALK "Found: " + customer.name
' UPDATE
UPDATE "customers" SET
last_contact = NOW(),
contact_count = contact_count + 1
WHERE id = customer_id
' DELETE
DELETE FROM "customers" WHERE id = customer_id AND confirmed = true
Batch Operations
' Insert multiple records from data source
new_orders = READ "imports/orders.csv" AS TABLE
FOR EACH order IN new_orders
INSERT INTO "orders" WITH
product = order.product,
quantity = order.quantity,
price = order.price
NEXT
' Bulk update
UPDATE "products" SET on_sale = true WHERE category = "electronics"
Data Transformation Pipeline
' Load raw data
raw_sales = READ "imports/sales-data.csv" AS TABLE
' Clean and transform
cleaned = FILTER raw_sales WHERE amount > 0 AND date IS NOT NULL
' Enrich with calculations
enriched = MAP cleaned WITH
tax = item.amount * 0.1,
total = item.amount * 1.1,
quarter = QUARTER(item.date)
' Aggregate for reporting
quarterly_totals = GROUP BY enriched ON quarter
summary = AGGREGATE quarterly_totals SUM total
' Save results
WRITE summary TO "reports/quarterly-summary.csv" AS TABLE
INSERT INTO "sales_reports" VALUES summary
Lookup and Reference
' Simple lookup
product = FIND "products" WHERE sku = user_sku
IF product THEN
TALK "Price: $" + product.price
ELSE
TALK "Product not found"
END IF
' Lookup with join
order_with_customer = FIND "orders"
JOIN "customers" ON orders.customer_id = customers.id
WHERE orders.id = order_id
TALK "Order for " + order_with_customer.customer_name
Query Syntax
WHERE Clauses
' Equality
FIND "users" WHERE status = "active"
' Comparison
FIND "orders" WHERE total > 100
FIND "products" WHERE stock <= 10
' Multiple conditions
FIND "customers" WHERE country = "US" AND created_at > "2024-01-01"
FIND "items" WHERE category = "electronics" OR category = "accessories"
' NULL checks
FIND "leads" WHERE assigned_to IS NULL
FIND "orders" WHERE shipped_at IS NOT NULL
' Pattern matching
FIND "products" WHERE name LIKE "%widget%"
' IN lists
FIND "orders" WHERE status IN ["pending", "processing", "shipped"]
ORDER BY
' Single column sort
FIND "products" ORDER BY price ASC
' Multiple column sort
FIND "orders" ORDER BY priority DESC, created_at ASC
' With limit
recent_orders = FIND "orders" ORDER BY created_at DESC LIMIT 10
Aggregations
' Count records
total_customers = AGGREGATE "customers" COUNT
' Sum values
total_revenue = AGGREGATE "orders" SUM total
' Average
avg_order_value = AGGREGATE "orders" AVERAGE total
' Min/Max
cheapest = AGGREGATE "products" MIN price
most_expensive = AGGREGATE "products" MAX price
' With grouping
sales_by_region = AGGREGATE "sales" SUM amount GROUP BY region
Error Handling
ON ERROR RESUME NEXT
result = INSERT INTO "orders" VALUES order_data
IF ERROR THEN
PRINT "Database error: " + ERROR_MESSAGE
IF INSTR(ERROR_MESSAGE, "duplicate") > 0 THEN
TALK "This order already exists."
ELSE IF INSTR(ERROR_MESSAGE, "constraint") > 0 THEN
TALK "Invalid data. Please check all fields."
ELSE
TALK "Sorry, I couldn't save your order. Please try again."
END IF
ELSE
TALK "Order saved successfully!"
END IF
Transaction Handling
' Start transaction
BEGIN TRANSACTION
' Multiple operations
INSERT INTO "orders" VALUES order_data
UPDATE "inventory" SET stock = stock - quantity WHERE product_id = product_id
INSERT INTO "order_items" VALUES items
' Commit if all succeeded
IF NOT ERROR THEN
COMMIT
TALK "Order completed!"
ELSE
ROLLBACK
TALK "Order failed. All changes reverted."
END IF
Performance Tips
Use Indexes
Ensure database tables have appropriate indexes for frequently queried columns:
-- In database setup
CREATE INDEX idx_orders_customer ON orders(customer_id);
CREATE INDEX idx_orders_date ON orders(created_at);
CREATE INDEX idx_products_sku ON products(sku);
Limit Results
' Avoid loading entire tables
' Bad:
all_orders = FIND "orders"
' Good:
recent_orders = FIND "orders" WHERE created_at > date_limit LIMIT 100
Batch Operations
' Process large datasets in batches
page = 0
batch_size = 100
WHILE true
batch = FIND "records" LIMIT batch_size OFFSET page * batch_size
IF LEN(batch) = 0 THEN
EXIT WHILE
END IF
FOR EACH record IN batch
' Process record
NEXT
page = page + 1
WEND
Configuration
Configure data operations in config.csv:
name,value
database-provider,postgres
database-url,postgres://localhost/botdb
database-pool-size,10
database-timeout,30
database-log-queries,false
database-max-rows,10000
Security Considerations
- Parameterized queries — All keywords use parameterized queries to prevent SQL injection
- Row limits — Default limit on returned rows prevents memory exhaustion
- Access control — Bots can only access their own data by default
- Audit logging — All data modifications logged for compliance
- Encryption — Sensitive data encrypted at rest
See Also
- SAVE — Persist data
- INSERT — Add records
- UPDATE — Modify records
- DELETE — Remove records
- MERGE — Upsert operations
- FILL — Template population
- MAP — Transform collections
- FILTER — Select items
- AGGREGATE — Summaries
- JOIN — Combine datasets
- PIVOT — Reshape data
- GROUP BY — Group records
- TABLE — Create tables
SAVE
Saves data to a database table using upsert (insert or update) semantics.
Syntax
SAVE "table", id, data
Parameters
| Parameter | Type | Description |
|---|---|---|
| table | String | The name of the database table |
| id | String/Number | The unique identifier for the record |
| data | Object | A map/object containing field names and values |
Description
SAVE performs an upsert operation:
- If a record with the given id exists, it updates the record
- If no record exists, it inserts a new one
The id parameter maps to the id column in the table.
Examples
Basic Save with Object
' Create data object using Rhai map syntax
data = #{
"customer_name": "João Silva",
"email": "joao@example.com",
"phone": "+5511999887766",
"status": "active"
}
SAVE "customers", "CUST-001", data
Save Order Data
order_id = "ORD-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
order_data = #{
"customer_id": customer_id,
"customer_name": customer_name,
"total": total,
"status": "pending",
"created_at": NOW()
}
SAVE "orders", order_id, order_data
TALK "Order " + order_id + " saved successfully!"
Update Existing Record
' If order exists, this updates it; otherwise creates it
update_data = #{
"status": "shipped",
"shipped_at": NOW(),
"tracking_number": tracking
}
SAVE "orders", order_id, update_data
With WhatsApp Notification
WEBHOOK "new-customer"
customer_id = "CUST-" + FORMAT(NOW(), "YYYYMMDDHHmmss")
phone = body.phone
name = body.name
customer_data = #{
"name": name,
"phone": phone,
"source": "webhook",
"created_at": NOW()
}
SAVE "customers", customer_id, customer_data
' Notify via WhatsApp
TALK TO "whatsapp:" + phone, "Welcome " + name + "! Your account has been created."
result_status = "ok"
result_customer_id = customer_id
Building Data Dynamically
' Start with empty map and add fields
data = #{}
data.name = customer_name
data.email = customer_email
data.phone = customer_phone
data.registered_at = NOW()
IF has_referral THEN
data.referral_code = referral_code
data.discount = 10
END IF
SAVE "customers", customer_id, data
Saving Multiple Related Records
WEBHOOK "create-order"
' Save order
order_id = body.order_id
order_data = #{
"customer_id": body.customer_id,
"total": body.total,
"status": "pending"
}
SAVE "orders", order_id, order_data
' Save each line item
FOR EACH item IN body.items
line_id = order_id + "-" + item.sku
line_data = #{
"order_id": order_id,
"sku": item.sku,
"quantity": item.quantity,
"price": item.price
}
SAVE "order_items", line_id, line_data
NEXT item
' Notify customer
TALK TO "whatsapp:" + body.customer_phone, "Order #" + order_id + " confirmed!"
result_status = "ok"
Return Value
Returns an object with:
- command: "save"
- table: The table name
- id: The record ID
- rows_affected: Number of rows affected (1 for insert/update)
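For example, a brief sketch reading these fields (assuming the object shape above):
result = SAVE "customers", "CUST-001", data
TALK "Saved " + result.id + " to " + result.table + " (" + result.rows_affected + " row)"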
Notes
- Table must exist in the database
- The id column is used as the primary key for conflict detection
- All string values are automatically sanitized to prevent SQL injection
- Column names are validated to prevent injection
Comparison with INSERT and UPDATE
| Keyword | Behavior |
|---|---|
SAVE | Upsert - inserts if new, updates if exists |
INSERT | Always creates new record (may fail if ID exists) |
UPDATE | Only updates existing records (no-op if not found) |
' SAVE is preferred for most cases
SAVE "customers", id, data ' Insert or update
' Use INSERT when you need a new record guaranteed
INSERT "logs", log_entry ' Always creates new
' Use UPDATE for targeted updates
UPDATE "orders", "status=pending", update_data ' Update matching rows
See Also
- INSERT - Insert new records
- UPDATE - Update existing records
- DELETE - Delete records
- FIND - Query records
INSERT
The INSERT keyword adds new records to database tables, enabling bots to store data collected from conversations and integrations.
Syntax
INSERT INTO "table_name" WITH field1 = value1, field2 = value2
result = INSERT INTO "table_name" WITH field1 = value1, field2 = value2
INSERT INTO "table_name" ON connection WITH field1 = value1
Parameters
| Parameter | Type | Description |
|---|---|---|
table_name | String | Name of the target database table |
WITH | Clause | Field-value pairs for the new record |
ON connection | String | Optional named database connection |
Description
INSERT creates a new record in a database table. The WITH clause specifies the field names and values for the new row. The keyword returns the newly created record, including any auto-generated fields like id.
Use cases include:
- Storing user information collected during conversations
- Logging interactions and events
- Creating orders, tickets, or other business records
- Saving form submissions
Examples
Basic Insert
' Insert a new customer record
INSERT INTO "customers" WITH
name = "John Doe",
email = "john@example.com",
phone = "+1-555-0100"
TALK "Customer record created!"
Insert with Return Value
' Insert and capture the new record
result = INSERT INTO "customers" WITH
name = customer_name,
email = customer_email,
created_at = NOW()
TALK "Customer created with ID: " + result.id
Insert from Conversation
' Collect data from user and insert
TALK "What is your name?"
HEAR user_name
TALK "What is your email?"
HEAR user_email
TALK "What is your phone number?"
HEAR user_phone
result = INSERT INTO "contacts" WITH
name = user_name,
email = user_email,
phone = user_phone,
source = "chatbot",
created_at = NOW()
TALK "Thanks " + user_name + "! Your contact ID is " + result.id
Insert Order
' Create a new order
result = INSERT INTO "orders" WITH
customer_id = user.id,
product_id = selected_product.id,
quantity = order_quantity,
total = selected_product.price * order_quantity,
status = "pending",
created_at = NOW()
TALK "Order #" + result.id + " created for $" + result.total
Insert with Foreign Key
' Insert related records
customer = INSERT INTO "customers" WITH
name = customer_name,
email = customer_email
address = INSERT INTO "addresses" WITH
customer_id = customer.id,
street = street_address,
city = city_name,
postal_code = zip_code,
country = "US"
TALK "Customer and address saved!"
Insert to Named Connection
' Insert to a specific database
INSERT INTO "audit_log" ON "analytics_db" WITH
event = "user_signup",
user_id = user.id,
timestamp = NOW(),
ip_address = session.ip
Batch Insert
' Insert multiple records from a data source
new_contacts = READ "imports/contacts.csv" AS TABLE
inserted_count = 0
FOR EACH contact IN new_contacts
INSERT INTO "contacts" WITH
name = contact.name,
email = contact.email,
phone = contact.phone,
imported_at = NOW()
inserted_count = inserted_count + 1
NEXT
TALK "Imported " + inserted_count + " contacts"
Common Use Cases
Log User Interaction
' Log every conversation for analytics
INSERT INTO "conversation_logs" WITH
user_id = user.id,
session_id = session.id,
message = user_message,
response = bot_response,
timestamp = NOW()
Create Support Ticket
' Create a support ticket from conversation
result = INSERT INTO "tickets" WITH
customer_id = user.id,
subject = ticket_subject,
description = ticket_description,
priority = "medium",
status = "open",
created_at = NOW()
TALK "Ticket #" + result.id + " created. Our team will respond within 24 hours."
Save Form Submission
' Save a lead form submission
result = INSERT INTO "leads" WITH
first_name = form.first_name,
last_name = form.last_name,
email = form.email,
company = form.company,
interest = form.product_interest,
source = "website_chatbot",
created_at = NOW()
' Notify sales team
SEND MAIL "sales@company.com", "New Lead: " + form.first_name, "A new lead has been captured via chatbot."
Record Event
' Record a business event
INSERT INTO "events" WITH
event_type = "purchase",
user_id = user.id,
data = '{"product_id": "' + product_id + '", "amount": ' + amount + '}',
occurred_at = NOW()
Error Handling
ON ERROR RESUME NEXT
result = INSERT INTO "customers" WITH
name = customer_name,
email = customer_email
IF ERROR THEN
PRINT "Insert failed: " + ERROR_MESSAGE
IF INSTR(ERROR_MESSAGE, "duplicate") > 0 THEN
TALK "This email is already registered."
ELSE IF INSTR(ERROR_MESSAGE, "constraint") > 0 THEN
TALK "Please provide all required information."
ELSE
TALK "Sorry, I couldn't save your information. Please try again."
END IF
ELSE
TALK "Information saved successfully!"
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
DUPLICATE_KEY | Unique constraint violated | Check for existing record first |
NOT_NULL_VIOLATION | Required field missing | Include all required fields |
FOREIGN_KEY_VIOLATION | Referenced record doesn’t exist | Verify foreign key values |
CHECK_VIOLATION | Value fails check constraint | Validate data before insert |
TABLE_NOT_FOUND | Table doesn’t exist | Verify table name |
Validation Before Insert
' Validate data before inserting
IF LEN(email) < 5 OR INSTR(email, "@") = 0 THEN
TALK "Please provide a valid email address."
ELSE IF LEN(name) < 2 THEN
TALK "Please provide your full name."
ELSE
result = INSERT INTO "contacts" WITH
name = name,
email = email,
created_at = NOW()
TALK "Contact saved!"
END IF
INSERT vs MERGE
| Keyword | Purpose | Use When |
|---|---|---|
INSERT | Create new record | Adding new data |
MERGE | Insert or update | Record may already exist |
' INSERT - Always creates new record (may fail if duplicate)
INSERT INTO "users" WITH email = "john@example.com", name = "John"
' MERGE - Creates or updates based on key
MERGE INTO "users" ON email = "john@example.com" WITH
email = "john@example.com",
name = "John Updated"
Configuration
Database connection is configured in config.csv:
name,value
database-provider,postgres
database-pool-size,10
database-timeout,30
Database credentials are stored in Vault, not in config files.
Implementation Notes
- Implemented in Rust under src/database/operations.rs
- Uses parameterized queries to prevent SQL injection
- Auto-generates id if not specified (serial/UUID)
- Timestamps can be set with the NOW() function
- Returns the complete inserted record including defaults
Related Keywords
- UPDATE — Modify existing records
- DELETE — Remove records
- MERGE — Insert or update (upsert)
- FIND — Query records
- TABLE — Create tables
Summary
INSERT creates new records in database tables. Use it to store user data, log events, create orders, and save form submissions. Always validate data before inserting and handle potential errors like duplicates and constraint violations. For cases where a record may already exist, consider using MERGE instead.
UPDATE
The UPDATE keyword modifies existing records in database tables, enabling bots to change stored data based on conditions.
Syntax
UPDATE "table_name" SET field1 = value1 WHERE condition
UPDATE "table_name" SET field1 = value1, field2 = value2 WHERE condition
UPDATE "table_name" ON connection SET field1 = value1 WHERE condition
Parameters
| Parameter | Type | Description |
|---|---|---|
table_name | String | Name of the target database table |
SET | Clause | Field-value pairs to update |
WHERE | Clause | Condition to select records to update |
ON connection | String | Optional named database connection |
Description
UPDATE modifies existing records in a database table that match the specified WHERE condition. The SET clause specifies which fields to change and their new values. Without a WHERE clause, all records in the table would be updated (which is usually not desired).
Use cases include:
- Updating user profiles
- Changing order status
- Recording timestamps for actions
- Incrementing counters
- Marking items as read/processed
Examples
Basic Update
' Update a customer's email
UPDATE "customers" SET email = "new.email@example.com" WHERE id = 123
TALK "Email updated successfully!"
Update Multiple Fields
' Update multiple fields at once
UPDATE "orders" SET
status = "shipped",
shipped_at = NOW(),
tracking_number = tracking_id
WHERE id = order_id
TALK "Order #" + order_id + " marked as shipped"
Update with Variable Values
' Update from conversation data
TALK "What is your new phone number?"
HEAR new_phone
UPDATE "customers" SET phone = new_phone WHERE id = user.id
TALK "Your phone number has been updated to " + new_phone
Increment Counter
' Increment a counter field
UPDATE "products" SET view_count = view_count + 1 WHERE id = product_id
Update Based on Condition
' Mark old sessions as expired
UPDATE "sessions" SET
status = "expired",
expired_at = NOW()
WHERE last_activity < DATEADD(NOW(), -30, "minute")
TALK "Inactive sessions have been expired"
Update with Named Connection
' Update on specific database
UPDATE "audit_log" ON "analytics_db" SET
reviewed = true,
reviewed_by = admin.id
WHERE id = log_entry_id
Common Use Cases
Update User Profile
' User wants to update their profile
TALK "What would you like to update? (name, email, phone)"
HEAR field_to_update
TALK "What is the new value?"
HEAR new_value
SWITCH field_to_update
CASE "name"
UPDATE "users" SET name = new_value WHERE id = user.id
CASE "email"
UPDATE "users" SET email = new_value WHERE id = user.id
CASE "phone"
UPDATE "users" SET phone = new_value WHERE id = user.id
CASE ELSE
TALK "Unknown field. Please choose name, email, or phone."
END SWITCH
TALK "Your " + field_to_update + " has been updated!"
Change Order Status
' Update order through its lifecycle
UPDATE "orders" SET
status = "processing",
processed_at = NOW()
WHERE id = order_id AND status = "pending"
TALK "Order is now being processed"
Mark as Read
' Mark notification as read
UPDATE "notifications" SET
read = true,
read_at = NOW()
WHERE user_id = user.id AND id = notification_id
TALK "Notification marked as read"
Record Last Activity
' Update last activity timestamp
UPDATE "users" SET last_active = NOW() WHERE id = user.id
Soft Delete
' Soft delete (mark as deleted without removing)
UPDATE "records" SET
deleted = true,
deleted_at = NOW(),
deleted_by = user.id
WHERE id = record_id
TALK "Record archived"
Batch Update
' Update multiple records matching condition
UPDATE "subscriptions" SET
status = "active",
renewed_at = NOW()
WHERE expires_at > NOW() AND auto_renew = true
TALK "Active subscriptions renewed"
Error Handling
ON ERROR RESUME NEXT
UPDATE "customers" SET email = new_email WHERE id = customer_id
IF ERROR THEN
PRINT "Update failed: " + ERROR_MESSAGE
IF INSTR(ERROR_MESSAGE, "duplicate") > 0 THEN
TALK "This email is already in use by another account."
ELSE IF INSTR(ERROR_MESSAGE, "constraint") > 0 THEN
TALK "The value you entered is not valid."
ELSE
TALK "Sorry, I couldn't update your information. Please try again."
END IF
ELSE
TALK "Information updated successfully!"
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
DUPLICATE_KEY | Unique constraint violated | Value already exists |
CHECK_VIOLATION | Value fails check constraint | Validate before update |
NOT_NULL_VIOLATION | Setting required field to null | Provide a value |
NO_ROWS_AFFECTED | WHERE matched no records | Verify condition |
Safety Considerations
Always Use WHERE Clause
' DANGEROUS - updates ALL records!
' UPDATE "users" SET status = "inactive"
' SAFE - updates only matching records
UPDATE "users" SET status = "inactive" WHERE last_login < "2024-01-01"
Verify Before Update
' Check record exists before updating
record = FIND "orders" WHERE id = order_id
IF record THEN
UPDATE "orders" SET status = "cancelled" WHERE id = order_id
TALK "Order cancelled"
ELSE
TALK "Order not found"
END IF
Limit Scope
' Update only records the user owns
UPDATE "documents" SET
title = new_title
WHERE id = document_id AND owner_id = user.id
UPDATE vs MERGE
| Keyword | Purpose | Use When |
|---|---|---|
UPDATE | Modify existing records | Record definitely exists |
MERGE | Insert or update | Record may or may not exist |
' UPDATE - Only modifies if exists
UPDATE "users" SET name = "John" WHERE email = "john@example.com"
' MERGE - Creates if not exists, updates if exists
MERGE INTO "users" ON email = "john@example.com" WITH
email = "john@example.com",
name = "John"
Configuration
Database connection is configured in config.csv:
name,value
database-provider,postgres
database-pool-size,10
database-timeout,30
Database credentials are stored in Vault, not in config files.
Implementation Notes
- Implemented in Rust under src/database/operations.rs
- Uses parameterized queries to prevent SQL injection
- Returns number of affected rows
- WHERE clause is required by default for safety
- Supports all comparison operators (=, <, >, <=, >=, <>)
- Supports AND/OR in WHERE conditions
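For instance, a sketch combining these operators (table and column names are illustrative):
UPDATE "tickets" SET priority = "high" WHERE status <> "closed" AND age_days >= 7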
Related Keywords
- INSERT — Add new records
- DELETE — Remove records
- MERGE — Insert or update (upsert)
- FIND — Query records
- TABLE — Create tables
Summary
UPDATE modifies existing database records that match a WHERE condition. Use it to change user data, update statuses, record timestamps, and modify stored information. Always include a WHERE clause to avoid accidentally updating all records. For cases where you’re unsure if a record exists, consider using MERGE instead.
DELETE
The DELETE keyword is a unified command that automatically detects context and handles HTTP requests, database operations, and file deletions through a single interface.
Syntax
' HTTP DELETE - auto-detected by URL
DELETE "https://api.example.com/resource/123"
' Database DELETE - table with filter
DELETE "table_name", "filter_condition"
' File DELETE - path without URL
DELETE "path/to/file.txt"
Parameters
| Context | Parameter 1 | Parameter 2 | Description |
|---|---|---|---|
| HTTP | URL (string) | - | DELETE request to the URL |
| Database | Table name | Filter condition | Delete matching records |
| File | File path | - | Delete the file |
Description
DELETE is a smart, unified keyword that detects what you want to delete based on the arguments:
- HTTP DELETE: If the first argument starts with http:// or https://, sends an HTTP DELETE request
- Database DELETE: If two arguments are provided (table, filter), performs a SQL DELETE
- File DELETE: Otherwise, treats the argument as a file path
This eliminates the need for separate DELETE HTTP and DELETE FILE commands; just use DELETE.
Examples
HTTP DELETE
' Delete a resource via REST API
DELETE "https://api.example.com/users/123"
TALK "User deleted from API"
' Delete with authentication (set headers first)
SET HEADER "Authorization", "Bearer " + api_token
DELETE "https://api.example.com/posts/" + post_id
CLEAR HEADERS
TALK "Post deleted"
Database DELETE
' Delete by ID
DELETE "customers", "id = 123"
TALK "Customer deleted"
' Delete with variable
DELETE "orders", "id = " + order_id + " AND user_id = " + user.id
TALK "Order cancelled"
' Delete with multiple conditions
DELETE "sessions", "user_id = " + user.id + " AND status = 'expired'"
TALK "Expired sessions cleared"
' Delete old records
DELETE "logs", "created_at < '2024-01-01'"
TALK "Old logs purged"
File DELETE
' Delete a file
DELETE "temp/report.pdf"
TALK "File deleted"
' Delete uploaded file
DELETE "uploads/" + filename
TALK "Upload removed"
Common Use Cases
REST API Resource Deletion
' Delete item from external service
TALK "Removing item from inventory system..."
SET HEADER "Authorization", "Bearer " + inventory_api_key
SET HEADER "Content-Type", "application/json"
result = DELETE "https://inventory.example.com/api/items/" + item_id
CLEAR HEADERS
IF result THEN
TALK "Item removed from inventory"
ELSE
TALK "Failed to remove item"
END IF
User Account Deletion
' Complete account deletion flow
TALK "Are you sure you want to delete your account? Type 'DELETE' to confirm."
HEAR confirmation
IF confirmation = "DELETE" THEN
' Delete related records first
DELETE "orders", "customer_id = " + user.id
DELETE "addresses", "customer_id = " + user.id
DELETE "preferences", "user_id = " + user.id
' Delete the user
DELETE "users", "id = " + user.id
TALK "Your account has been deleted."
ELSE
TALK "Account deletion cancelled."
END IF
Cleanup Temporary Files
' Clean up temp files after processing
temp_files = ["temp/doc1.pdf", "temp/doc2.pdf", "temp/merged.pdf"]
FOR EACH f IN temp_files
DELETE f
NEXT
TALK "Temporary files cleaned up"
Cancel Order via API
' Cancel order in external system
order_api_url = "https://orders.example.com/api/orders/" + order_id
SET HEADER "Authorization", "Bearer " + api_key
DELETE order_api_url
CLEAR HEADERS
' Also remove from local database
DELETE "local_orders", "external_id = '" + order_id + "'"
TALK "Order cancelled"
Remove Expired Data
' Scheduled cleanup task
' Delete expired tokens
DELETE "tokens", "expires_at < NOW()"
' Delete old notifications
DELETE "notifications", "read = true AND created_at < DATEADD(NOW(), -90, 'day')"
' Delete abandoned carts
DELETE "carts", "updated_at < DATEADD(NOW(), -7, 'day') AND checkout_completed = false"
TALK "Cleanup complete"
Error Handling
ON ERROR RESUME NEXT
DELETE "orders", "id = " + order_id
IF ERROR THEN
error_msg = ERROR_MESSAGE
IF INSTR(error_msg, "foreign key") > 0 THEN
TALK "Cannot delete: this record is referenced by other data."
ELSE IF INSTR(error_msg, "not found") > 0 THEN
TALK "Record not found."
ELSE IF INSTR(error_msg, "permission") > 0 THEN
TALK "You don't have permission to delete this."
ELSE
TALK "Delete failed: " + error_msg
END IF
ELSE
TALK "Deleted successfully!"
END IF
ON ERROR GOTO 0
Common Errors
| Error | Cause | Solution |
|---|---|---|
FOREIGN_KEY_VIOLATION | Database record referenced elsewhere | Delete child records first |
FILE_NOT_FOUND | File doesn’t exist | Check file path |
HTTP 404 | API resource not found | Verify URL and resource ID |
HTTP 401/403 | Authentication failed | Check API credentials |
PERMISSION_DENIED | Insufficient privileges | Check permissions |
Context Detection
The DELETE keyword automatically detects context:
| Argument Pattern | Detected As |
|---|---|
"https://..." or "http://..." | HTTP DELETE |
Two arguments: "table", "filter" | Database DELETE |
| Single argument without URL prefix | File DELETE |
' HTTP - starts with http/https
DELETE "https://api.example.com/resource/1"
' Database - two arguments
DELETE "users", "id = 123"
' File - single argument, no URL prefix
DELETE "temp/file.txt"
Safety Considerations
Always Use Filters for Database
' DANGEROUS - would delete all records!
' DELETE "users", ""
' SAFE - specific condition
DELETE "users", "id = " + user_id
Verify Before Deleting
' Check record exists and belongs to user
record = FIND "documents", "id = " + doc_id + " AND owner_id = " + user.id
IF record THEN
DELETE "documents", "id = " + doc_id
TALK "Document deleted"
ELSE
TALK "Document not found or access denied"
END IF
Confirm Destructive Actions
TALK "Delete " + item_name + "? This cannot be undone. Type 'yes' to confirm."
HEAR confirmation
IF LOWER(confirmation) = "yes" THEN
DELETE "items", "id = " + item_id
TALK "Deleted"
ELSE
TALK "Cancelled"
END IF
Consider Soft Delete
' Instead of permanent delete, mark as deleted
UPDATE "records", #{ "deleted": true, "deleted_at": NOW() }, "id = " + record_id
TALK "Record archived (can be restored)"
Return Values
| Context | Returns |
|---|---|
| HTTP | Response body as string |
| Database | Number of deleted rows |
| File | true on success, error message on failure |
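A short sketch of capturing these return values (variable names are illustrative):
' Database context returns the number of deleted rows
deleted_rows = DELETE "sessions", "status = 'expired'"
TALK "Removed " + deleted_rows + " expired sessions"
' File context returns true on success
ok = DELETE "temp/report.pdf"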
Configuration
No specific configuration required. Uses:
- HTTP: Standard HTTP client
- Database: Connection from config.csv
- Files: Bot’s .gbdrive storage
Implementation Notes
- Implemented in data_operations.rs
- Auto-detects URL vs table vs file
- HTTP DELETE supports custom headers via SET HEADER
- Database DELETE uses parameterized queries (SQL injection safe)
- File DELETE works within bot’s storage sandbox
Related Keywords
- INSERT — Add new records
- UPDATE — Modify existing records
- FIND — Query records
- POST — HTTP POST requests
- PUT — HTTP PUT requests
- READ — Read file contents
- WRITE — Write file contents
Summary
DELETE is a unified keyword that intelligently handles HTTP API deletions, database record removal, and file deletion through a single interface. It auto-detects context based on arguments: URLs trigger HTTP DELETE, table+filter triggers database DELETE, and paths trigger file DELETE. Always use filters for database operations, verify ownership before deleting user data, and confirm destructive actions. For recoverable deletions, consider soft delete instead.
MERGE
Combines data from multiple sources or upserts records into a database table.
Syntax
MERGE table, data, key_column
MERGE table, data, key_columns, update_columns
Parameters
| Parameter | Type | Description |
|---|---|---|
table | String | Target database table name |
data | Array/Object | Data to merge (single record or array of records) |
key_column | String | Column(s) to match existing records |
update_columns | Array | Optional specific columns to update on match |
Description
MERGE performs an “upsert” operation: it inserts new records or updates existing ones based on matching key columns. This is useful for synchronizing data from external sources, importing bulk data, or maintaining data consistency.
Examples
Basic Merge (Single Record)
contact = #{
email: "john@example.com",
name: "John Smith",
phone: "+1-555-0123"
}
MERGE "contacts", contact, "email"
TALK "Contact merged successfully"
Bulk Merge
new_products = GET "https://api.supplier.com/products"
MERGE "products", new_products, "sku"
TALK "Merged " + LEN(new_products) + " products"
Merge with Specific Update Columns
price_updates = [
#{sku: "ABC123", price: 29.99},
#{sku: "DEF456", price: 49.99},
#{sku: "GHI789", price: 19.99}
]
MERGE "products", price_updates, "sku", ["price"]
TALK "Prices updated"
Composite Key Match
attendance = #{
employee_id: "EMP001",
date: TODAY(),
status: "present",
check_in: NOW()
}
MERGE "attendance", attendance, "employee_id,date"
Sync from External API
SET SCHEDULE "every 6 hours"
' Fetch latest data from CRM
customers = GET "https://crm.example.com/api/customers"
' Merge into local database
MERGE "customers", customers, "crm_id"
TALK "Synced " + LEN(customers) + " customer records"
Return Value
Returns an object with merge statistics:
| Property | Description |
|---|---|
inserted | Number of new records created |
updated | Number of existing records updated |
unchanged | Number of records that matched but had no changes |
total | Total records processed |
result = MERGE "products", data, "sku"
TALK "Inserted: " + result.inserted + ", Updated: " + result.updated
Behavior
On Match (Key Exists)
- Updates all columns in the data (or only update_columns if specified)
- Preserves columns not in the data
- Updates the updated_at timestamp if the column exists
On No Match (New Record)
- Inserts new row with all provided columns
- Sets the created_at timestamp if the column exists
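A minimal sketch of the two match modes (the SKU and prices are illustrative):
' Full update on match: every provided column is written
MERGE "products", #{sku: "ABC123", price: 24.99, name: "Widget Pro"}, "sku"
' Partial update on match: only price changes, name is preserved
MERGE "products", #{sku: "ABC123", price: 22.99, name: "ignored"}, "sku", ["price"]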
Common Patterns
Daily Data Import
SET SCHEDULE "every day at 2am"
data = GET "https://data.provider.com/daily-export"
result = MERGE "imported_data", data, "external_id"
IF result.inserted > 0 THEN
SEND MAIL "admin@company.com", "Data Import",
"Imported " + result.inserted + " new records"
END IF
Inventory Sync
inventory = GET "https://warehouse.api/stock-levels"
MERGE "products", inventory, "sku", ["quantity", "last_restock"]
User Profile Updates
profile = #{
user_id: current_user_id,
preferences: user_preferences,
last_active: NOW()
}
MERGE "user_profiles", profile, "user_id"
See Also
- INSERT - Insert new records only
- UPDATE - Update existing records only
- SAVE - Simple data persistence
- FIND - Query data before merging
FILL
Populates a document template with data from variables or objects.
Syntax
result = FILL template, data
FILL template, data TO output_path
Parameters
| Parameter | Type | Description |
|---|---|---|
template | String | Path to template file (Word, Excel, PDF, or text) |
data | Object | Key-value pairs for placeholder replacement |
output_path | String | Optional destination path for filled document |
Description
FILL replaces placeholders in document templates with actual data values. Placeholders use double curly braces like {{name}} or {{company}}. This is useful for generating personalized documents, contracts, invoices, and reports.
Examples
Basic Template Fill
data = #{
name: "John Smith",
company: "Acme Corp",
date: FORMAT(TODAY(), "MMMM d, yyyy")
}
result = FILL "templates/contract.docx", data
TALK "Document generated: " + result.path
Invoice Generation
invoice_data = #{
invoice_number: "INV-2025-001",
customer_name: customer.name,
customer_email: customer.email,
items: order_items,
subtotal: subtotal,
tax: tax_amount,
total: total_amount,
due_date: FORMAT(DATEADD("day", 30, TODAY()), "yyyy-MM-dd")
}
FILL "templates/invoice.docx", invoice_data TO "invoices/INV-2025-001.docx"
TALK "Invoice generated and saved"
Certificate Generation
certificate = #{
recipient: participant.name,
course: "AI Fundamentals",
completion_date: FORMAT(TODAY(), "MMMM d, yyyy"),
instructor: "Dr. Sarah Johnson",
certificate_id: GUID()
}
FILL "templates/certificate.docx", certificate TO "certificates/" + certificate.certificate_id + ".docx"
Email Template
email_data = #{
first_name: user.first_name,
order_id: order.id,
tracking_number: shipment.tracking,
delivery_date: shipment.estimated_delivery
}
body = FILL "templates/shipping-notification.txt", email_data
SEND MAIL user.email, "Your order has shipped!", body
Supported Template Formats
| Format | Extension | Placeholder Style |
|---|---|---|
| Word | .docx | {{placeholder}} |
| Excel | .xlsx | {{placeholder}} |
| Text | .txt | {{placeholder}} |
| HTML | .html | {{placeholder}} |
| Markdown | .md | {{placeholder}} |
Return Value
Returns an object containing:
| Property | Description |
|---|---|
path | Path to the generated document |
content | Document content (for text formats) |
size | File size in bytes |
Template Example
A template file might look like:
SERVICE AGREEMENT
This agreement is entered into on {{date}} between:
Company: {{company_name}}
Contact: {{contact_name}}
Email: {{contact_email}}
SERVICES:
{{service_description}}
TERMS:
Duration: {{duration}} months
Payment: ${{monthly_amount}} per month
Start Date: {{start_date}}
Signature: _____________________
Advanced: Lists and Tables
For repeating data, use array placeholders:
data = #{
customer: "Acme Corp",
items: [
#{name: "Widget", qty: 10, price: 29.99},
#{name: "Gadget", qty: 5, price: 49.99}
],
total: 549.85
}
FILL "templates/order.docx", data TO "orders/order-123.docx"
In the template, use {{#items}}...{{/items}} for loops.
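A template fragment using that loop syntax might look like this (a sketch matching the data above):
ORDER FOR {{customer}}
{{#items}}
- {{name}} x {{qty}} @ ${{price}}
{{/items}}
TOTAL: ${{total}}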
See Also
- GENERATE PDF - Convert filled documents to PDF
- MERGE PDF - Combine multiple documents
- UPLOAD - Upload generated documents
- SEND MAIL - Email generated documents
MAP
Transforms each element of an array by applying a function or expression.
Syntax
result = MAP(array, expression)
result = MAP(array, field)
Parameters
| Parameter | Type | Description |
|---|---|---|
array | Array | The source array to transform |
expression | String | Expression to apply to each element, or field name to extract |
Description
MAP creates a new array by applying a transformation to each element of the input array. This is useful for extracting specific fields from objects, formatting data, or performing calculations on each item.
Examples
Extract Field from Objects
users = FIND "users", "status=active"
names = MAP(users, "name")
TALK "Active users: " + JOIN(names, ", ")
Transform Values
prices = [100, 200, 300, 400]
with_tax = MAP(prices, "item * 1.1")
FOR EACH price IN with_tax
TALK "Price with tax: $" + price
NEXT
Format Data
orders = FIND "orders", "date=today"
summaries = MAP(orders, "'Order #' + item.id + ': $' + item.total")
FOR EACH summary IN summaries
TALK summary
NEXT
Extract Nested Properties
contacts = FIND "contacts", "company=Acme"
emails = MAP(contacts, "email")
email_list = JOIN(emails, "; ")
TALK "Emails: " + email_list
Uppercase Names
products = ["widget", "gadget", "gizmo"]
upper_products = MAP(products, "UPPER(item)")
TALK JOIN(upper_products, ", ")
' Output: "WIDGET, GADGET, GIZMO"
Return Value
Returns a new array with the same length as the input, containing transformed values.
- Original array is not modified
- Null values in the source are preserved as null
- If transformation fails for an element, that element becomes null
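A small sketch of the non-mutating behavior:
prices = [100, 200]
doubled = MAP(prices, "item * 2")
' doubled is [200, 400]; prices is still [100, 200]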
Common Patterns
Extract IDs for API Calls
records = FIND "items", "sync=pending"
ids = MAP(records, "id")
' Use ids for batch API operations
Create Display Labels
products = FIND "products", "in_stock=true"
labels = MAP(products, "item.name + ' ($' + item.price + ')'")
Calculate Derived Values
line_items = FIND "cart_items", "cart_id=123"
totals = MAP(line_items, "item.quantity * item.unit_price")
See Also
- FILTER - Filter array elements
- FOR EACH - Iterate with more control
- JOIN - Combine mapped results into string
- AGGREGATE - Calculate summary from mapped values
FILTER
Selects elements from an array that match a specified condition.
Syntax
result = FILTER(array, condition)
Parameters
| Parameter | Type | Description |
|---|---|---|
array | Array | The source array to filter |
condition | String | Expression that evaluates to true/false for each element |
Description
FILTER creates a new array containing only the elements from the input array that satisfy the given condition. The condition is evaluated for each element, and only elements where the condition is true are included in the result.
Examples
Filter by Field Value
orders = FIND "orders", "year=2025"
large_orders = FILTER(orders, "item.total > 1000")
TALK "Found " + LEN(large_orders) + " orders over $1000"
Filter by String Match
contacts = FIND "contacts", "active=true"
gmail_users = FILTER(contacts, "INSTR(item.email, 'gmail.com') > 0")
FOR EACH contact IN gmail_users
TALK contact.name + " - " + contact.email
NEXT
Filter by Status
tasks = FIND "tasks", "assigned_to=me"
pending = FILTER(tasks, "item.status = 'pending'")
completed = FILTER(tasks, "item.status = 'completed'")
TALK "Pending: " + LEN(pending) + ", Completed: " + LEN(completed)
Filter Numbers
scores = [85, 92, 67, 78, 95, 88, 72]
passing = FILTER(scores, "item >= 70")
honors = FILTER(scores, "item >= 90")
TALK "Passing: " + LEN(passing) + ", Honors: " + LEN(honors)
Complex Conditions
products = FIND "products", "category=electronics"
featured = FILTER(products, "item.in_stock = true AND item.rating >= 4.0")
TALK "Featured products:"
FOR EACH product IN featured
TALK "- " + product.name + " (★" + product.rating + ")"
NEXT
Return Value
Returns a new array containing only elements where the condition evaluated to true.
- Original array is not modified
- Returns empty array if no elements match
- Preserves order of matching elements
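A small sketch of these guarantees:
scores = [85, 60, 95]
high = FILTER(scores, "item >= 90")
' high = [95]; scores itself is unchanged
none = FILTER(scores, "item > 100")
' none = [] - no matches yields an empty array, not an error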
Condition Operators
| Operator | Description | Example |
|---|---|---|
= | Equals | "item.status = 'active'" |
!= | Not equals | "item.type != 'archived'" |
> | Greater than | "item.amount > 100" |
>= | Greater or equal | "item.score >= 70" |
< | Less than | "item.quantity < 10" |
<= | Less or equal | "item.age <= 30" |
AND | Logical and | "item.active = true AND item.verified = true" |
OR | Logical or | "item.priority = 'high' OR item.urgent = true" |
Common Patterns
Filter then Count
users = FIND "users", "registered=true"
premium = FILTER(users, "item.plan = 'premium'")
TALK "Premium users: " + LEN(premium)
Filter then Map
orders = FIND "orders", "status=shipped"
recent = FILTER(orders, "item.ship_date > DATEADD('day', -7, NOW())")
tracking = MAP(recent, "tracking_number")
Chain Multiple Filters
products = FIND "products", "active=true"
in_stock = FILTER(products, "item.quantity > 0")
on_sale = FILTER(in_stock, "item.discount > 0")
featured = FILTER(on_sale, "item.rating >= 4.5")
See Also
- FIND - Retrieve data from database
- MAP - Transform filtered results
- FOR EACH - Iterate over filtered array
- AGGREGATE - Calculate summary from filtered data
AGGREGATE
The AGGREGATE keyword performs calculations on collections of data, computing sums, counts, averages, and other statistical operations.
Syntax
result = AGGREGATE collection SUM field
result = AGGREGATE collection COUNT
result = AGGREGATE collection AVERAGE field
result = AGGREGATE collection MIN field
result = AGGREGATE collection MAX field
result = AGGREGATE "table_name" SUM field WHERE condition
Parameters
| Parameter | Type | Description |
|---|---|---|
collection | Array/String | Data array or table name |
SUM | Operation | Calculate total of numeric field |
COUNT | Operation | Count number of items |
AVERAGE | Operation | Calculate arithmetic mean |
MIN | Operation | Find minimum value |
MAX | Operation | Find maximum value |
field | String | Field name to aggregate |
WHERE | Clause | Optional filter condition |
Description
AGGREGATE performs mathematical and statistical calculations on data collections. It can work with in-memory arrays or query database tables directly. Use it to compute totals, counts, averages, and find extreme values.
Use cases include:
- Calculating order totals
- Counting records
- Computing averages for reports
- Finding highest/lowest values
- Summarizing data for dashboards
Examples
Sum Values
' Calculate total sales
orders = FIND "orders" WHERE status = "completed"
total_sales = AGGREGATE orders SUM amount
TALK "Total sales: $" + FORMAT(total_sales, "#,##0.00")
Count Records
' Count active users
active_count = AGGREGATE "users" COUNT WHERE status = "active"
TALK "We have " + active_count + " active users"
Calculate Average
' Calculate average order value
avg_order = AGGREGATE "orders" AVERAGE amount WHERE created_at > "2025-01-01"
TALK "Average order value: $" + FORMAT(avg_order, "#,##0.00")
Find Minimum and Maximum
' Find price range
products = FIND "products" WHERE category = "electronics"
min_price = AGGREGATE products MIN price
max_price = AGGREGATE products MAX price
TALK "Prices range from $" + min_price + " to $" + max_price
Multiple Aggregations
' Calculate multiple statistics
orders = FIND "orders" WHERE customer_id = user.id
total_spent = AGGREGATE orders SUM amount
order_count = AGGREGATE orders COUNT
avg_order = AGGREGATE orders AVERAGE amount
largest_order = AGGREGATE orders MAX amount
TALK "Your order summary:"
TALK "- Total orders: " + order_count
TALK "- Total spent: $" + FORMAT(total_spent, "#,##0.00")
TALK "- Average order: $" + FORMAT(avg_order, "#,##0.00")
TALK "- Largest order: $" + FORMAT(largest_order, "#,##0.00")
Common Use Cases
Sales Dashboard
' Calculate sales metrics
today = FORMAT(NOW(), "YYYY-MM-DD")
this_month = FORMAT(NOW(), "YYYY-MM") + "-01"
today_sales = AGGREGATE "orders" SUM amount WHERE DATE(created_at) = today
month_sales = AGGREGATE "orders" SUM amount WHERE created_at >= this_month
today_count = AGGREGATE "orders" COUNT WHERE DATE(created_at) = today
month_count = AGGREGATE "orders" COUNT WHERE created_at >= this_month
TALK "📊 Sales Dashboard"
TALK "Today: $" + FORMAT(today_sales, "#,##0.00") + " (" + today_count + " orders)"
TALK "This month: $" + FORMAT(month_sales, "#,##0.00") + " (" + month_count + " orders)"
Inventory Summary
' Calculate inventory metrics
total_items = AGGREGATE "products" COUNT
total_value = AGGREGATE "products" SUM (price * stock)
low_stock = AGGREGATE "products" COUNT WHERE stock < 10
out_of_stock = AGGREGATE "products" COUNT WHERE stock = 0
TALK "Inventory Summary:"
TALK "- Total products: " + total_items
TALK "- Total value: $" + FORMAT(total_value, "#,##0.00")
TALK "- Low stock items: " + low_stock
TALK "- Out of stock: " + out_of_stock
Customer Metrics
' Calculate customer statistics
total_customers = AGGREGATE "customers" COUNT
new_this_month = AGGREGATE "customers" COUNT WHERE created_at >= this_month
avg_lifetime_value = AGGREGATE "customers" AVERAGE lifetime_value
TALK "Customer Metrics:"
TALK "- Total customers: " + total_customers
TALK "- New this month: " + new_this_month
TALK "- Avg lifetime value: $" + FORMAT(avg_lifetime_value, "#,##0.00")
Rating Analysis
' Analyze product ratings
reviews = FIND "reviews" WHERE product_id = product.id
avg_rating = AGGREGATE reviews AVERAGE rating
review_count = AGGREGATE reviews COUNT
five_stars = AGGREGATE reviews COUNT WHERE rating = 5
TALK "Product rating: " + FORMAT(avg_rating, "#.#") + " stars"
TALK "Based on " + review_count + " reviews"
TALK five_stars + " customers gave 5 stars"
Aggregate from Array
' Aggregate in-memory data
prices = [29.99, 49.99, 19.99, 99.99, 39.99]
total = AGGREGATE prices SUM
count = AGGREGATE prices COUNT
average = AGGREGATE prices AVERAGE
minimum = AGGREGATE prices MIN
maximum = AGGREGATE prices MAX
TALK "Sum: $" + FORMAT(total, "#,##0.00")
TALK "Count: " + count
TALK "Average: $" + FORMAT(average, "#,##0.00")
TALK "Range: $" + minimum + " - $" + maximum
Aggregate with Expressions
' Calculate computed values
total_revenue = AGGREGATE "order_items" SUM (quantity * unit_price)
total_discount = AGGREGATE "order_items" SUM (quantity * unit_price * discount_percent / 100)
net_revenue = total_revenue - total_discount
TALK "Gross revenue: $" + FORMAT(total_revenue, "#,##0.00")
TALK "Discounts: $" + FORMAT(total_discount, "#,##0.00")
TALK "Net revenue: $" + FORMAT(net_revenue, "#,##0.00")
Conditional Aggregation
' Aggregate with different conditions
pending_total = AGGREGATE "orders" SUM amount WHERE status = "pending"
shipped_total = AGGREGATE "orders" SUM amount WHERE status = "shipped"
delivered_total = AGGREGATE "orders" SUM amount WHERE status = "delivered"
TALK "Order totals by status:"
TALK "- Pending: $" + FORMAT(pending_total, "#,##0.00")
TALK "- Shipped: $" + FORMAT(shipped_total, "#,##0.00")
TALK "- Delivered: $" + FORMAT(delivered_total, "#,##0.00")
Error Handling
ON ERROR RESUME NEXT
total = AGGREGATE "orders" SUM amount WHERE customer_id = user.id
IF ERROR THEN
PRINT "Aggregation failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't calculate your totals."
ELSE IF total = 0 THEN
TALK "You haven't placed any orders yet."
ELSE
TALK "Your total purchases: $" + FORMAT(total, "#,##0.00")
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
INVALID_FIELD | Field doesn’t exist | Check field name spelling |
TYPE_ERROR | Non-numeric field for SUM/AVG | Use numeric fields only |
EMPTY_COLLECTION | No data to aggregate | Handle zero/null results |
TABLE_NOT_FOUND | Table doesn’t exist | Verify table name |
Null Handling
' AGGREGATE ignores NULL values by default
avg_rating = AGGREGATE "products" AVERAGE rating
' NULL ratings are not included in the average
' Count non-null values
rated_count = AGGREGATE "products" COUNT WHERE rating IS NOT NULL
total_count = AGGREGATE "products" COUNT
TALK rated_count + " of " + total_count + " products have ratings"
Performance Tips
- Use WHERE clauses — Filter before aggregating for better performance
- Index aggregate fields — Ensure database indexes on frequently aggregated columns
- Limit data scope — Aggregate only the date range or subset needed
- Cache results — Store aggregated values for expensive calculations
' Efficient: Filter first
total = AGGREGATE "orders" SUM amount WHERE date > "2025-01-01"
' Less efficient: Aggregate all, then filter
' all_orders = FIND "orders"
' recent = FILTER(all_orders, "item.date > '2025-01-01'")
' total = AGGREGATE recent SUM amount
Configuration
Database connection is configured in config.csv:
name,value
database-provider,postgres
database-pool-size,10
database-timeout,30
Database credentials are stored in Vault, not in config files.
Implementation Notes
- Implemented in Rust under src/database/aggregate.rs
- Uses SQL aggregate functions when querying tables
- Handles NULL values according to SQL standards
- Supports expressions in aggregate calculations
- Returns 0 for COUNT on empty sets, NULL for SUM/AVG/MIN/MAX
Related Keywords
- FIND — Query data before aggregating
- GROUP BY — Group data before aggregating
- FILTER — Filter in-memory collections
- MAP — Transform data before aggregating
Summary
AGGREGATE calculates sums, counts, averages, and min/max values from data collections. Use it for dashboards, reports, and any situation where you need to summarize data. It works with both database tables (using SQL) and in-memory arrays. Always handle empty results and use WHERE clauses to improve performance on large datasets.
JOIN
Combines elements of an array into a single string with a specified separator.
Syntax
result = JOIN(array, separator)
Parameters
| Parameter | Type | Description |
|---|---|---|
array | Array | The array of elements to join |
separator | String | The delimiter to place between elements |
Description
JOIN concatenates all elements of an array into a single string, inserting the specified separator between each element. This is commonly used for creating comma-separated lists, building display strings, or formatting data for output.
Examples
Basic Usage
names = ["Alice", "Bob", "Charlie"]
result = JOIN(names, ", ")
TALK result
' Output: "Alice, Bob, Charlie"
Creating Hashtag Lists
tags = ["GeneralBots", "AI", "Automation", "NoCode"]
hashtags = JOIN(tags, " #")
hashtags = "#" + hashtags
TALK hashtags
' Output: "#GeneralBots #AI #Automation #NoCode"
Building File Paths
parts = ["documents", "reports", "2025", "sales.pdf"]
path = JOIN(parts, "/")
TALK "File: " + path
' Output: "File: documents/reports/2025/sales.pdf"
Email Recipients
recipients = ["john@example.com", "jane@example.com", "bob@example.com"]
to_list = JOIN(recipients, "; ")
SEND MAIL to_list, "Team Update", "Please review the attached report."
Display Lists
items = FIND "products", "category=electronics"
product_names = []
FOR EACH item IN items
product_names = APPEND(product_names, item.name)
NEXT
TALK "Available products: " + JOIN(product_names, ", ")
Return Value
Returns a string containing all array elements concatenated with the separator.
- If the array is empty, returns an empty string
- If the array has one element, returns that element as a string
- Null values in the array are converted to empty strings
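A quick sketch of the edge cases:
empty_list = []
TALK "[" + JOIN(empty_list, ", ") + "]"
' Output: "[]" - an empty array yields an empty string
single = ["solo"]
TALK JOIN(single, ", ")
' Output: "solo" - one element, no separator added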
Common Separators
| Separator | Use Case |
|---|---|
", " | Readable comma-separated lists |
"," | CSV data |
"\n" | Multi-line output |
" " | Space-separated words |
" | " | Table columns |
"/" | File paths |
"; " | Email recipients |
See Also
- SPLIT - Split a string into an array (opposite of JOIN)
- FOR EACH - Iterate over arrays
- FILTER - Filter arrays before joining
PIVOT
Transforms rows into columns, creating a cross-tabulation summary of data.
Syntax
result = PIVOT data, row_column, column_column, value_column
result = PIVOT data, row_column, column_column, value_column, aggregate
Parameters
| Parameter | Type | Description |
|---|---|---|
data | Array | The data array to pivot |
row_column | String | Column to use for row headers |
column_column | String | Column to use for column headers |
value_column | String | Column containing values to aggregate |
aggregate | String | Aggregate function: SUM, AVG, COUNT, MIN, MAX (default: SUM) |
Description
PIVOT reorganizes data from a normalized format into a cross-tabulation format, making it easier to compare values across two dimensions. This is useful for creating summary reports and dashboards.
Examples
Basic Pivot
sales = FIND "sales", "year=2025"
summary = PIVOT sales, "region", "quarter", "amount"
' Result: regions as rows, quarters as columns
' Shows total sales for each region/quarter combination
With Aggregate Function
orders = FIND "orders", "status=completed"
avg_order = PIVOT orders, "product", "quarter", "total", "AVG"
FOR EACH row IN avg_order
TALK row.row_header + ": Q1=$" + row.Q1 + ", Q2=$" + row.Q2
NEXT
Sales by Region and Product
data = FIND "sales", "year=2025"
pivot_table = PIVOT data, "region", "product", "revenue", "SUM"
TALK "Revenue by Region and Product:"
FOR EACH region IN pivot_table
TALK region.row_header + ":"
TALK " Widgets: $" + region.Widgets
TALK " Gadgets: $" + region.Gadgets
NEXT
Return Value
Returns an array of objects where:
- Each object represents a row
- row_header contains the row label
- Dynamic properties contain pivoted column values
Use Cases
| Scenario | Row | Column | Value |
|---|---|---|---|
| Sales dashboard | Region | Quarter | Revenue |
| Attendance report | Employee | Month | Days |
| Product comparison | Product | Store | Units sold |
| Time tracking | Project | Week | Hours |
See Also
- GROUP BY - Group data by columns
- AGGREGATE - Calculate summary values
- TABLE - Display formatted tables
GROUP BY
Groups data by specified columns and optionally applies aggregate functions.
Syntax
result = GROUP BY data, column
result = GROUP BY data, column, aggregates
Parameters
| Parameter | Type | Description |
|---|---|---|
data | Array | The data array to group |
column | String | Column name to group by |
aggregates | Object | Optional aggregate functions to apply |
Description
GROUP BY organizes rows of data into groups based on matching values in a specified column. When combined with aggregate functions, it calculates summary values for each group.
Examples
Basic Grouping
orders = FIND "orders", "status=completed"
grouped = GROUP BY orders, "category"
FOR EACH group IN grouped
TALK "Category: " + group.key + " - Count: " + group.count
NEXT
With Aggregates
sales = FIND "sales", "year=2025"
summary = GROUP BY sales, "region", #{
total: "SUM(amount)",
average: "AVG(amount)",
count: "COUNT(*)"
}
FOR EACH region IN summary
TALK region.key + ": $" + region.total + " (" + region.count + " sales)"
NEXT
Multiple Level Grouping
' First group by category, then by month
products = FIND "orders", "year=2025"
by_category = GROUP BY products, "category"
FOR EACH cat IN by_category
TALK "Category: " + cat.key
by_month = GROUP BY cat.items, "month"
FOR EACH month IN by_month
TALK " " + month.key + ": " + month.count + " orders"
NEXT
NEXT
Return Value
Returns an array of group objects, each containing:
| Property | Description |
|---|---|
key | The grouping value |
items | Array of items in this group |
count | Number of items in group |
| Additional | Any requested aggregates |
Supported Aggregates
| Function | Description |
|---|---|
SUM(column) | Sum of values |
AVG(column) | Average of values |
MIN(column) | Minimum value |
MAX(column) | Maximum value |
COUNT(*) | Number of rows |
See Also
- AGGREGATE - Single aggregate calculations
- PIVOT - Cross-tabulation of data
- FILTER - Filter data before grouping
- FIND - Retrieve data to group
Media & Messaging Keywords
Keywords for displaying media content and sending messages across various channels.
Overview
These keywords handle media playback, QR code generation, and messaging operations that extend beyond the basic TALK/HEAR conversation flow.
Keywords in This Section
| Keyword | Description |
|---|---|
| PLAY | Display videos, images, documents, and presentations |
| QR CODE | Generate QR code images from data |
| SEND SMS | Send SMS text messages |
Quick Reference
Media Display
' Play video with controls
PLAY "training.mp4" WITH OPTIONS "controls"
' Display image fullscreen
PLAY "banner.png" WITH OPTIONS "fullscreen"
' Show PDF document
PLAY "contract.pdf"
' Display PowerPoint presentation
PLAY "slides.pptx"
QR Code Generation
' Generate basic QR code
qr_path = QR CODE "https://example.com"
SEND FILE qr_path
' Generate with custom size
qr_path = QR CODE "payment-data", 512
' WiFi QR code
wifi_data = "WIFI:T:WPA;S:MyNetwork;P:password123;;"
qr_path = QR CODE wifi_data
SMS Messaging
' Send basic SMS
SEND SMS "+1234567890", "Hello from General Bots!"
' Send with specific provider
SEND SMS phone, message, "twilio"
' Two-factor authentication
otp = RANDOM(100000, 999999)
SEND SMS user.phone, "Your code: " + otp
Channel Behavior
These keywords adapt their behavior based on the active channel:
| Keyword | Web | WhatsApp | Teams | SMS |
|---|---|---|---|---|
| PLAY | Modal player | Send as media | Adaptive card | N/A |
| QR CODE | Display inline | Send as image | Embed in card | N/A |
| SEND SMS | N/A | N/A | N/A | Direct send |
Configuration
SMS Providers
Configure in config.csv:
name,value
sms-provider,twilio
twilio-account-sid,YOUR_SID
twilio-auth-token,YOUR_TOKEN
twilio-phone-number,+15551234567
Supported Providers
- Twilio - Global coverage, reliable
- AWS SNS - AWS integration, cost-effective
- Vonage - Good international rates
- MessageBird - European coverage
Common Patterns
Interactive Media Training
TALK "Welcome to the training module!"
PLAY "intro-video.mp4" WITH OPTIONS "controls"
HEAR ready AS TEXT "Type 'next' when ready:"
PLAY "chapter-1.pptx"
HEAR quiz AS TEXT "What did you learn?"
' Process quiz response
QR Code Payment Flow
HEAR amount AS NUMBER "Enter payment amount:"
payment_data = GENERATE_PAYMENT_CODE(amount)
qr_path = QR CODE payment_data, 400
TALK "Scan to pay $" + amount + ":"
SEND FILE qr_path
SMS Verification
otp = RANDOM(100000, 999999)
REMEMBER "otp_" + user.id, otp, "5 minutes"
SEND SMS user.phone, "Your code: " + otp
HEAR code AS TEXT "Enter verification code:"
IF code = RECALL("otp_" + user.id) THEN
TALK "✅ Verified!"
ELSE
TALK "❌ Invalid code"
END IF
See Also
- Universal Messaging - Multi-channel messaging
- SEND MAIL - Email messaging
- TALK - Basic text output
- File Operations - File handling
PLAY
Open a content projector/player to display various media types including videos, images, documents, and presentations.
Syntax
' Basic playback
PLAY file_or_url
' With options
PLAY file_or_url WITH OPTIONS options_string
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
file_or_url | String | Yes | Path to file or URL to display |
options_string | String | No | Comma-separated playback options |
Supported Options
| Option | Description |
|---|---|
autoplay | Start playback automatically |
loop | Loop content continuously |
fullscreen | Open in fullscreen mode |
muted | Start with audio muted |
controls | Show playback controls |
nocontrols | Hide playback controls |
Supported Content Types
Video
| Extension | Format |
|---|---|
.mp4 | MPEG-4 Video |
.webm | WebM Video |
.ogg | Ogg Video |
.mov | QuickTime |
.avi | AVI Video |
.mkv | Matroska |
.m4v | M4V Video |
Audio
| Extension | Format |
|---|---|
.mp3 | MP3 Audio |
.wav | WAV Audio |
.flac | FLAC Audio |
.aac | AAC Audio |
.m4a | M4A Audio |
.ogg | Ogg Audio |
Images
| Extension | Format |
|---|---|
.jpg .jpeg | JPEG Image |
.png | PNG Image |
.gif | GIF (animated) |
.webp | WebP Image |
.svg | SVG Vector |
.bmp | Bitmap |
Documents
| Extension | Format |
|---|---|
.pdf | PDF Document |
.docx .doc | Word Document |
.pptx .ppt | PowerPoint |
.xlsx .xls | Excel Spreadsheet |
.odt | OpenDocument Text |
.odp | OpenDocument Presentation |
Code
| Extension | Language |
|---|---|
.rs | Rust |
.py | Python |
.js .ts | JavaScript/TypeScript |
.java | Java |
.go | Go |
.rb | Ruby |
.md | Markdown |
.html | HTML |
Examples
Play a Video
' Play a video file
PLAY "training-video.mp4"
' Play with autoplay and loop
PLAY "background.mp4" WITH OPTIONS "autoplay,loop,muted"
' Play from URL
PLAY "https://example.com/videos/demo.mp4"
Display an Image
' Show an image
PLAY "product-photo.jpg"
' Show image fullscreen
PLAY "banner.png" WITH OPTIONS "fullscreen"
Show a Presentation
' Display PowerPoint presentation
PLAY "quarterly-report.pptx"
' Fullscreen presentation mode
PLAY "sales-deck.pptx" WITH OPTIONS "fullscreen"
Display a Document
' Show PDF document
PLAY "contract.pdf"
' Show Word document
PLAY "proposal.docx"
Interactive Training Module
TALK "Welcome to the training module!"
TALK "Let's start with an introduction video."
PLAY "intro-video.mp4" WITH OPTIONS "controls"
HEAR ready AS TEXT "Type 'continue' when you're ready to proceed:"
IF LOWER(ready) = "continue" THEN
TALK "Great! Now let's review the key concepts."
PLAY "concepts-slides.pptx"
HEAR understood AS TEXT "Did you understand the concepts? (yes/no)"
IF LOWER(understood) = "yes" THEN
TALK "Excellent! Here's your certificate."
PLAY "certificate.pdf"
ELSE
TALK "Let's review the material again."
PLAY "concepts-detailed.mp4"
END IF
END IF
Product Showcase
' Show product images in sequence
products = FIND "products", "featured=true"
FOR EACH product IN products
TALK "Now showing: " + product.name
PLAY product.image_path
WAIT 3000 ' Wait 3 seconds between images
NEXT
Code Review
' Display code for review
TALK "Let's review the implementation:"
PLAY "src/main.rs"
HEAR feedback AS TEXT "Any comments on this code?"
INSERT "code_reviews", file_path, feedback, NOW()
Audio Playback
' Play audio message
TALK "Here's a voice message from your team:"
PLAY "team-message.mp3" WITH OPTIONS "controls"
' Play background music
PLAY "ambient.mp3" WITH OPTIONS "autoplay,loop,muted"
Dynamic Content Display
' Display content based on file type
HEAR file_name AS TEXT "Enter the file name to display:"
name_lower = LOWER(file_name)
IF RIGHT(name_lower, 4) = ".mp4" OR RIGHT(name_lower, 5) = ".webm" THEN
PLAY file_name WITH OPTIONS "controls,autoplay"
ELSE IF RIGHT(name_lower, 4) = ".pdf" THEN
PLAY file_name
ELSE IF RIGHT(name_lower, 4) = ".jpg" OR RIGHT(name_lower, 4) = ".png" THEN
PLAY file_name WITH OPTIONS "fullscreen"
ELSE
TALK "Unsupported file type"
END IF
Embedded Video from URL
' Play YouTube video (via embed URL)
PLAY "https://www.youtube.com/embed/dQw4w9WgXcQ"
' Play Vimeo video
PLAY "https://player.vimeo.com/video/123456789"
Onboarding Flow
' Multi-step onboarding with media
TALK "Welcome to our platform! Let's get you started."
' Step 1: Welcome video
TALK "First, watch this quick introduction:"
PLAY "onboarding/welcome.mp4" WITH OPTIONS "controls"
HEAR step1_done AS TEXT "Press Enter when done..."
' Step 2: Feature overview
TALK "Here's an overview of our key features:"
PLAY "onboarding/features.pptx"
HEAR step2_done AS TEXT "Press Enter when done..."
' Step 3: Quick start guide
TALK "Finally, here's your quick start guide:"
PLAY "onboarding/quickstart.pdf"
TALK "You're all set! 🎉"
Error Handling
' Check if file exists before playing
file_path = "presentation.pptx"
IF FILE_EXISTS(file_path) THEN
PLAY file_path
ELSE
TALK "Sorry, the file could not be found."
TALK "Please check the file path and try again."
END IF
Player Behavior
Web Interface
When used in the web interface, PLAY opens a modal overlay with:
- Appropriate player for the content type
- Close button to dismiss
- Optional playback controls
- Fullscreen toggle
WhatsApp/Messaging Channels
On messaging channels, PLAY sends the file directly:
- Videos/images: Sent as media messages
- Documents: Sent as file attachments
- URLs: Sent as links with preview
Desktop Application
In the desktop app, PLAY uses the native media player or viewer appropriate for the content type.
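Because behavior differs per channel, the same PLAY call can be tuned to its context. A minimal sketch, assuming a channel session variable that exposes the current channel name (this variable is illustrative, not a documented keyword):
' Hypothetical: adapt PLAY options to the rendering context
IF channel = "whatsapp" THEN
' Messaging channels receive the file directly as a media message
PLAY "promo.mp4"
ELSE
' The web interface opens a modal overlay, so controls help
PLAY "promo.mp4" WITH OPTIONS "controls,fullscreen"
END IF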
File Locations
Files can be referenced from:
| Location | Example |
|---|---|
| Bot’s .gbdrive | documents/report.pdf |
| User’s folder | users/john@email.com/uploads/photo.jpg |
| Absolute URL | https://cdn.example.com/video.mp4 |
| Relative path | ./assets/logo.png |
Limitations
- Maximum file size depends on channel (WhatsApp: 16MB for media, 100MB for documents)
- Some formats may require conversion for web playback
- Streaming large files requires adequate bandwidth
- Protected/DRM content is not supported
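A guard sketch that respects the WhatsApp media limit above, using LIST metadata (documented under File Operations):
' Check size before playing a video on a messaging channel (16 MB media limit)
files = LIST "videos/"
FOR EACH file IN files
IF file.name = "promo.mp4" THEN
IF file.size > 16 * 1048576 THEN
TALK "This video exceeds the WhatsApp media limit; here's a link instead:"
TALK "https://cdn.example.com/promo.mp4"
ELSE
PLAY "videos/promo.mp4"
END IF
END IF
NEXT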
See Also
- SEND FILE - Send files as attachments
- TALK - Display text messages
- UPLOAD - Upload files to storage
- DOWNLOAD - Download files from URLs
Implementation
The PLAY keyword is implemented in src/basic/keywords/play.rs with content type detection and appropriate player selection for each media format.
QR CODE
Generate QR code images from text or data.
Syntax
' Basic QR code generation
path = QR CODE data
' With custom size (pixels)
path = QR CODE data, size
' With size and output path
path = QR CODE data, size, output_path
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
data | String | Yes | The data to encode in the QR code (URL, text, etc.) |
size | Integer | No | Image size in pixels (default: 256) |
output_path | String | No | Custom output file path |
Return Value
Returns the file path to the generated QR code image (PNG format).
Examples
Basic QR Code
' Generate a QR code for a URL
qr_path = QR CODE "https://example.com"
TALK "Scan this QR code:"
SEND FILE qr_path
QR Code with Custom Size
' Generate a larger QR code (512x512 pixels)
qr_path = QR CODE "https://mywebsite.com/signup", 512
SEND FILE qr_path
Dynamic Content
HEAR user_id AS TEXT "Enter your user ID:"
' Generate QR code with dynamic data
profile_url = "https://app.example.com/profile/" + user_id
qr_path = QR CODE profile_url, 300
TALK "Here's your profile QR code:"
SEND FILE qr_path
Event Check-in
' Generate unique check-in codes for events
event_id = "EVT-2025-001"
attendee_email = user.email
checkin_data = "CHECKIN:" + event_id + ":" + attendee_email
qr_path = QR CODE checkin_data, 400
TALK "Show this QR code at the event entrance:"
SEND FILE qr_path
Payment QR Code
' Generate a PIX payment QR code (Brazil)
' Simplified illustration: merchant_key is assumed to be defined elsewhere,
' and a real PIX payload also embeds the amount and a CRC16 checksum.
HEAR amount AS NUMBER "Enter payment amount:"
pix_payload = "00020126580014br.gov.bcb.pix0136" + merchant_key
pix_payload = pix_payload + "5204000053039865802BR"
pix_payload = pix_payload + "5913MerchantName6008CityName62070503***"
qr_path = QR CODE pix_payload, 400
TALK "Scan to pay R$ " + amount + ":"
SEND FILE qr_path
WiFi QR Code
' Generate WiFi connection QR code
wifi_ssid = "MyNetwork"
wifi_password = "SecurePass123"
wifi_type = "WPA"
wifi_data = "WIFI:T:" + wifi_type + ";S:" + wifi_ssid + ";P:" + wifi_password + ";;"
qr_path = QR CODE wifi_data, 300
TALK "Scan to connect to WiFi:"
SEND FILE qr_path
Contact Card (vCard)
' Generate QR code with contact information
vcard = "BEGIN:VCARD\n"
vcard = vcard + "VERSION:3.0\n"
vcard = vcard + "N:Doe;John\n"
vcard = vcard + "TEL:+1234567890\n"
vcard = vcard + "EMAIL:john@example.com\n"
vcard = vcard + "END:VCARD"
qr_path = QR CODE vcard, 350
TALK "Scan to add contact:"
SEND FILE qr_path
Custom Output Location
' Save QR code to specific path
output_file = "work/qrcodes/user_" + user.id + ".png"
qr_path = QR CODE "https://example.com", 256, output_file
TALK "QR code saved to: " + qr_path
Supported Data Types
The QR CODE keyword can encode various types of data:
| Type | Format | Example |
|---|---|---|
| URL | https://... | https://example.com |
| Plain Text | Any text | Hello World |
| WiFi | WIFI:T:WPA;S:ssid;P:pass;; | Network credentials |
| vCard | BEGIN:VCARD...END:VCARD | Contact information |
| Email | mailto:email@example.com | Email link |
| Phone | tel:+1234567890 | Phone number |
| SMS | sms:+1234567890?body=Hello | SMS with message |
| Geo | geo:lat,lon | Geographic coordinates |
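For example, the tel: and geo: formats from the table encode directly:
' Dialer and map-location QR codes (formats from the table above)
call_qr = QR CODE "tel:+15551234567", 256
SEND FILE call_qr
geo_qr = QR CODE "geo:37.7749,-122.4194", 256
SEND FILE geo_qr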
Size Guidelines
| Use Case | Recommended Size |
|---|---|
| Mobile scanning | 256-300px |
| Print (business card) | 300-400px |
| Print (poster) | 512-1024px |
| Digital display | 256-512px |
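A short sketch that picks a size from these guidelines:
HEAR use_case AS TEXT "Where will this QR code be used? (mobile/card/poster)"
IF use_case = "poster" THEN
size = 1024
ELSE IF use_case = "card" THEN
size = 400
ELSE
size = 300
END IF
qr_path = QR CODE data, size
SEND FILE qr_path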
Error Handling
' Check if QR code was generated
qr_path = QR CODE data
IF qr_path = "" THEN
TALK "Failed to generate QR code"
ELSE
SEND FILE qr_path
END IF
File Storage
Generated QR codes are stored in the bot’s .gbdrive storage:
- Default location: work/qrcodes/
- Format: PNG
- Naming: UUID-based unique filenames
Limitations
- Maximum data length depends on QR code version (up to ~4,296 alphanumeric characters)
- Larger data requires larger image sizes for reliable scanning
- Binary data should be Base64 encoded
See Also
- SEND FILE - Send generated QR codes
- TALK - Display messages with QR codes
- FORMAT - Format data before encoding
Implementation
The QR CODE keyword is implemented in src/basic/keywords/qrcode.rs using the qrcode and image crates for generation.
SEND SMS
Send SMS text messages to phone numbers using various providers.
Syntax
' Basic SMS sending
SEND SMS phone, message
' With specific provider
SEND SMS phone, message, provider
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
phone | String | Yes | Recipient phone number (E.164 format recommended) |
message | String | Yes | The text message to send (max 160 chars for single SMS) |
provider | String | No | SMS provider: twilio, aws_sns, vonage, messagebird |
Return Value
Returns true if the SMS was sent successfully, false otherwise.
Configuration
Configure SMS provider credentials in config.csv:
name,value
sms-provider,twilio
twilio-account-sid,YOUR_ACCOUNT_SID
twilio-auth-token,YOUR_AUTH_TOKEN
twilio-phone-number,+15551234567
Provider-Specific Configuration
Twilio:
sms-provider,twilio
twilio-account-sid,ACxxxxx
twilio-auth-token,your_token
twilio-phone-number,+15551234567
AWS SNS:
sms-provider,aws_sns
aws-access-key-id,AKIAXXXXXXXX
aws-secret-access-key,your_secret
aws-region,us-east-1
Vonage (Nexmo):
sms-provider,vonage
vonage-api-key,your_api_key
vonage-api-secret,your_secret
vonage-from-number,+15551234567
MessageBird:
sms-provider,messagebird
messagebird-access-key,your_access_key
messagebird-originator,YourBrand
Examples
Basic SMS
HEAR phone AS TEXT "Enter phone number:"
SEND SMS phone, "Hello from General Bots!"
TALK "SMS sent successfully!"
Order Confirmation
' Send order confirmation via SMS
order_id = "ORD-2025-001"
phone = customer.phone
message = "Your order " + order_id + " has been confirmed. "
message = message + "Estimated delivery: 2-3 business days."
result = SEND SMS phone, message
IF result THEN
TALK "Confirmation SMS sent to " + phone
ELSE
TALK "Failed to send SMS. We'll email you instead."
SEND MAIL customer.email, "Order Confirmation", message
END IF
Two-Factor Authentication
' Generate and send OTP
otp = RANDOM(100000, 999999)
REMEMBER "otp_" + user.id, otp, "5 minutes"
message = "Your verification code is: " + otp + ". Valid for 5 minutes."
SEND SMS user.phone, message
HEAR entered_code AS TEXT "Enter the code sent to your phone:"
stored_otp = RECALL "otp_" + user.id
IF entered_code = stored_otp THEN
TALK "✅ Phone verified successfully!"
SET USER MEMORY "phone_verified", true
ELSE
TALK "❌ Invalid code. Please try again."
END IF
Appointment Reminder
' Send appointment reminder
appointment_date = FORMAT(appointment.datetime, "MMMM D, YYYY")
appointment_time = FORMAT(appointment.datetime, "h:mm A")
message = "Reminder: Your appointment is on " + appointment_date
message = message + " at " + appointment_time + ". Reply YES to confirm."
SEND SMS patient.phone, message
' Set up response handler
ON "sms:received" FROM patient.phone
IF UPPER(params.message) = "YES" THEN
UPDATE "appointments", appointment.id, "status", "confirmed"
SEND SMS patient.phone, "Thank you! Your appointment is confirmed."
END IF
END ON
Multi-Language SMS
' Send SMS in user's preferred language
lang = GET USER MEMORY "language"
IF lang = "es" THEN
message = "Gracias por tu compra. Tu pedido está en camino."
ELSE IF lang = "pt" THEN
message = "Obrigado pela sua compra. Seu pedido está a caminho."
ELSE
message = "Thank you for your purchase. Your order is on the way."
END IF
SEND SMS user.phone, message
Using Different Providers
' Use specific provider for different regions
IF LEFT(phone, 2) = "+1" THEN
' Use Twilio for US/Canada
SEND SMS phone, message, "twilio"
ELSE IF LEFT(phone, 3) = "+55" THEN
' Use local provider for Brazil
SEND SMS phone, message, "vonage"
ELSE
' Default provider
SEND SMS phone, message
END IF
Emergency Alert
' Send emergency notification to multiple recipients
alert_message = "⚠️ ALERT: System maintenance in 30 minutes. Save your work."
contacts = FIND "emergency_contacts", "notify=true"
FOR EACH contact IN contacts
SEND SMS contact.phone, alert_message
WAIT 100 ' Small delay between messages
NEXT
TALK "Emergency alert sent to " + COUNT(contacts) + " contacts"
Delivery Tracking
' Send delivery status updates
ON "delivery:status_changed"
order = FIND "orders", "id=" + params.order_id
SWITCH params.status
CASE "shipped"
message = "📦 Your order has shipped! Tracking: " + params.tracking_number
CASE "out_for_delivery"
message = "🚚 Your package is out for delivery today!"
CASE "delivered"
message = "✅ Your package has been delivered. Enjoy!"
DEFAULT
message = "Order update: " + params.status
END SWITCH
SEND SMS order.phone, message
END ON
Phone Number Formats
The keyword accepts various phone number formats:
| Format | Example | Recommended |
|---|---|---|
| E.164 | +14155551234 | ✅ Yes |
| National | (415) 555-1234 | ⚠️ Converted |
| Digits only | 4155551234 | ⚠️ Needs country |
Best Practice: Always use E.164 format (+ followed by country code and number).
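A minimal normalization sketch, assuming a US national number (real-world normalization needs country context):
' Strip common separators and prepend the country code if missing
digits = REPLACE(REPLACE(REPLACE(REPLACE(phone, "(", ""), ")", ""), "-", ""), " ", "")
IF LEFT(digits, 1) <> "+" THEN
digits = "+1" + digits
END IF
SEND SMS digits, message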
Message Length
| Type | Characters | Notes |
|---|---|---|
| Single SMS | 160 | Standard ASCII |
| Unicode SMS | 70 | Emojis, non-Latin scripts |
| Concatenated | 153 × segments | Long messages split |
' Check message length before sending
IF LEN(message) > 160 THEN
TALK "Warning: Message will be sent as multiple SMS"
END IF
SEND SMS phone, message
Error Handling
' Handle SMS errors gracefully
TRY
result = SEND SMS phone, message
IF NOT result THEN
' Log the failure
INSERT "sms_failures", phone, message, NOW()
' Fallback to email if available
IF user.email <> "" THEN
SEND MAIL user.email, "Notification", message
END IF
END IF
CATCH error
TALK "SMS service unavailable: " + error.message
END TRY
Cost Considerations
SMS messages incur costs per message sent. Consider:
- Using SEND WHATSAPP for free messaging when possible
- Batching non-urgent messages
- Using templates to keep messages under 160 characters
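As a sketch of the first suggestion, assuming a has_whatsapp flag set elsewhere in user memory:
' Prefer the free channel when the user is reachable there
IF GET USER MEMORY "has_whatsapp" = true THEN
SEND WHATSAPP user.phone, message
ELSE
SEND SMS user.phone, message
END IF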
Compliance
When sending SMS messages, ensure compliance with:
- TCPA (US) - Require consent before sending
- GDPR (EU) - Document consent and provide opt-out
- LGPD (Brazil) - Similar consent requirements
' Check opt-in before sending
IF GET USER MEMORY "sms_opt_in" = true THEN
SEND SMS phone, message
ELSE
TALK "User has not opted in to SMS notifications"
END IF
See Also
- SEND WHATSAPP - WhatsApp messaging
- SEND MAIL - Email messaging
- SEND TEMPLATE - Template messages
- Universal Messaging - Multi-channel messaging
Implementation
The SEND SMS keyword is implemented in src/basic/keywords/sms.rs with support for multiple providers through a unified interface.
START MEET / JOIN MEET Keywords
The START MEET and JOIN MEET keywords enable bots to create and participate in video meetings, bringing AI capabilities directly into video conferencing.
Keywords
| Keyword | Purpose |
|---|---|
START MEET | Create a new meeting room and get join link |
JOIN MEET | Add the bot to an existing meeting |
LEAVE MEET | Remove the bot from a meeting |
INVITE TO MEET | Send meeting invitations to participants |
START MEET
Creates a new video meeting room and optionally adds the bot as a participant.
Syntax
room = START MEET "room-name"
room = START MEET "room-name" WITH BOT
room = START MEET "room-name" WITH OPTIONS options
Parameters
| Parameter | Type | Description |
|---|---|---|
room-name | String | Display name for the meeting room |
WITH BOT | Flag | Automatically add the bot to the meeting |
options | JSON | Meeting configuration options |
Options Object
' Options can be set as a JSON string
options = '{"recording": true, "transcription": true, "max_participants": 50}'
Example
' Create a simple meeting
room = START MEET "Team Sync"
TALK "Meeting created! Join here: " + room.url
' Create meeting with bot participant
room = START MEET "AI-Assisted Workshop" WITH BOT
TALK "I've joined the meeting and I'm ready to help!"
TALK "Join link: " + room.url
' Create meeting with full options
options = '{"recording": true, "transcription": true, "bot_persona": "note-taker"}'
room = START MEET "Project Review" WITH OPTIONS options
Return Value
Returns a room object with:
| Property | Description |
|---|---|
room.id | Unique room identifier |
room.url | Join URL for participants |
room.name | Room display name |
room.created | Creation timestamp |
room.host_token | Host access token |
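A brief sketch that persists these properties for later steps in the flow:
room = START MEET "Weekly Review"
SET BOT MEMORY "review_room_id", room.id
TALK "Join here: " + room.url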
JOIN MEET
Adds the bot to an existing meeting room.
Syntax
JOIN MEET room_id
JOIN MEET room_id AS "persona"
JOIN MEET room_url
Parameters
| Parameter | Type | Description |
|---|---|---|
room_id | String | Meeting room ID |
room_url | String | Meeting join URL |
persona | String | Bot’s display name in the meeting |
Example
' Join by room ID
JOIN MEET "room-abc123"
' Join with custom persona
JOIN MEET "room-abc123" AS "Meeting Assistant"
' Join by URL
JOIN MEET "https://meet.gb/abc-123"
' Join and announce
JOIN MEET meeting_room AS "AI Note Taker"
TALK TO MEET "Hello everyone! I'm here to take notes. Just say 'note that' followed by anything important."
LEAVE MEET
Removes the bot from the current meeting.
Syntax
LEAVE MEET
LEAVE MEET room_id
Example
' Leave current meeting
LEAVE MEET
' Leave specific meeting (when bot is in multiple)
LEAVE MEET "room-abc123"
' Graceful exit
TALK TO MEET "Thanks everyone! I'll send the meeting notes shortly."
WAIT 2
LEAVE MEET
INVITE TO MEET
Sends meeting invitations to participants.
Syntax
INVITE TO MEET room, participants
INVITE TO MEET room, participants, message
Parameters
| Parameter | Type | Description |
|---|---|---|
room | Object/String | Room object or room ID |
participants | Array | List of email addresses |
message | String | Optional custom invitation message |
Example
' Create room and invite team
room = START MEET "Sprint Planning" WITH BOT
participants = ["alice@company.com", "bob@company.com", "carol@company.com"]
INVITE TO MEET room, participants
TALK "Invitations sent to " + LEN(participants) + " participants"
' With custom message
INVITE TO MEET room, participants, "Join us for sprint planning! The AI assistant will be taking notes."
TALK TO MEET
Sends a message to all meeting participants (text-to-speech or chat).
Syntax
TALK TO MEET "message"
TALK TO MEET "message" AS CHAT
TALK TO MEET "message" AS VOICE
Example
' Send as both chat and voice (default)
TALK TO MEET "Let's start with the agenda review."
' Chat only (no voice)
TALK TO MEET "Here's the link to the document: https://..." AS CHAT
' Voice only (no chat message)
TALK TO MEET "I've noted that action item." AS VOICE
HEAR FROM MEET
Listens for speech or chat messages from meeting participants.
Syntax
HEAR FROM MEET INTO variable
HEAR FROM MEET INTO variable TIMEOUT seconds
Example
' Listen for meeting input
HEAR FROM MEET INTO participant_message
IF INSTR(participant_message, "note that") > 0 THEN
note = REPLACE(participant_message, "note that", "")
notes = notes + "\n- " + note
TALK TO MEET "Got it! I've noted: " + note
END IF
Complete Example: AI Meeting Assistant
' AI Meeting Assistant Bot
' Joins meetings, takes notes, and provides summaries
TALK "Would you like me to join your meeting? Share the room ID or say 'create new'."
HEAR user_input
IF user_input = "create new" THEN
TALK "What should we call this meeting?"
HEAR meeting_name
room = START MEET meeting_name WITH BOT
TALK "Meeting created! Share this link: " + room.url
TALK "Who should I invite? (comma-separated emails, or 'skip')"
HEAR invites
IF invites <> "skip" THEN
participants = SPLIT(invites, ",")
INVITE TO MEET room, participants
TALK "Invitations sent!"
END IF
ELSE
' Joining an existing room: set defaults used later in the flow
room_id = user_input
meeting_name = "Meeting " + room_id
participants = []
JOIN MEET room_id AS "AI Assistant"
TALK "I've joined the meeting!"
END IF
' Initialize notes
notes = "# Meeting Notes\n\n"
notes = notes + "**Date:** " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "\n\n"
notes = notes + "## Key Points\n\n"
TALK TO MEET "Hello! I'm your AI assistant. Say 'note that' to capture important points, or 'summarize' when you're done."
' Meeting loop
meeting_active = true
WHILE meeting_active
HEAR FROM MEET INTO message TIMEOUT 300
IF message = "" THEN
' Timeout - check if meeting still active
CONTINUE
END IF
' Process commands
IF INSTR(LOWER(message), "note that") > 0 THEN
note_content = REPLACE(LOWER(message), "note that", "")
notes = notes + "- " + TRIM(note_content) + "\n"
TALK TO MEET "Noted!" AS VOICE
ELSE IF INSTR(LOWER(message), "action item") > 0 THEN
action = REPLACE(LOWER(message), "action item", "")
notes = notes + "- **ACTION:** " + TRIM(action) + "\n"
TALK TO MEET "Action item recorded!" AS VOICE
ELSE IF INSTR(LOWER(message), "summarize") > 0 THEN
' Generate AI summary
summary = LLM "Summarize these meeting notes concisely:\n\n" + notes
TALK TO MEET "Here's the summary: " + summary
ELSE IF INSTR(LOWER(message), "end meeting") > 0 THEN
meeting_active = false
END IF
WEND
' Save and share notes
filename = "meeting-notes-" + FORMAT(NOW(), "YYYYMMDD-HHmm") + ".md"
WRITE notes TO filename
TALK TO MEET "Meeting ended. I'll send the notes to all participants."
LEAVE MEET
' Email notes to participants (if any were invited)
IF LEN(participants) > 0 THEN
SEND MAIL participants, "Meeting Notes: " + meeting_name, notes
END IF
TALK "Notes saved and sent to all participants!"
Example: Quick Standup Bot
' Daily Standup Bot
room = START MEET "Daily Standup" WITH BOT
team = ["dev1@company.com", "dev2@company.com", "dev3@company.com"]
INVITE TO MEET room, team, "Time for standup! Join now."
TALK TO MEET "Good morning team! Let's do a quick round. I'll call on each person."
updates = ""
FOR EACH member IN team
TALK TO MEET member + ", what did you work on yesterday and what's planned for today?"
HEAR FROM MEET INTO update TIMEOUT 120
updates = updates + "**" + member + ":** " + update + "\n\n"
NEXT
TALK TO MEET "Great standup everyone! I'll post the summary to Slack."
' Post to Slack
POST "https://slack.com/api/chat.postMessage" WITH
channel = "#dev-standup",
text = "📋 **Standup Summary**\n\n" + updates
LEAVE MEET
Configuration
Configure Meet integration in config.csv:
name,value
meet-provider,livekit
meet-server-url,wss://localhost:7880
meet-api-key,vault:gbo/meet/api_key
meet-api-secret,vault:gbo/meet/api_secret
meet-bot-default-persona,AI Assistant
meet-recording-enabled,true
meet-transcription-enabled,true
meet-max-participants,50
Bot Capabilities in Meetings
When a bot joins a meeting, it can:
| Capability | Description |
|---|---|
| Listen | Transcribe speech from participants |
| Speak | Text-to-speech announcements |
| Chat | Send text messages to meeting chat |
| Record | Capture meeting recording |
| Screen Share | Display content (dashboards, docs) |
| React | Send emoji reactions |
See Also
- Meet App - User interface for Meet
- BOOK_MEETING - Schedule meetings with calendar integration
- Calls API - API reference for video calls
- Multi-Agent Keywords - Bot collaboration features
File Operations
This section covers keywords for working with files in the bot’s storage system. These keywords enable bots to read, write, copy, move, and manage files stored in the bot’s drive bucket.
Overview
General Bots provides a complete set of file operation keywords:
| Keyword | Purpose |
|---|---|
| READ | Load content from files |
| WRITE | Save content to files |
| DELETE FILE | Remove files |
| COPY | Copy files within storage |
| MOVE | Move or rename files |
| LIST | List files in a directory |
| COMPRESS | Create ZIP archives |
| EXTRACT | Extract archive contents |
| UPLOAD | Upload files from URLs or users |
| DOWNLOAD | Send files to users |
| GENERATE PDF | Create PDF documents |
| MERGE PDF | Combine multiple PDFs |
Quick Examples
Basic File Operations
' Read a file
content = READ "documents/report.txt"
TALK content
' Write to a file
WRITE "Hello, World!" TO "greeting.txt"
' Append to a file
WRITE "New line\n" TO "log.txt" APPEND
' Delete a file
DELETE FILE "temp/old-file.txt"
' Copy a file
COPY "templates/form.docx" TO "user-forms/form-copy.docx"
' Move/rename a file
MOVE "inbox/message.txt" TO "archive/message.txt"
' List files in a directory
files = LIST "documents/"
FOR EACH file IN files
TALK file.name + " (" + file.size + " bytes)"
NEXT
Working with CSV Data
' Read CSV as structured data
customers = READ "data/customers.csv" AS TABLE
FOR EACH customer IN customers
TALK customer.name + ": " + customer.email
NEXT
' Write data as CSV from database query
orders = FIND "orders" WHERE status = "pending" LIMIT 100
WRITE orders TO "exports/orders.csv" AS TABLE
File Upload and Download
' Accept file from user
TALK "Please send me a document."
HEAR user_file
result = UPLOAD user_file TO "uploads/" + user.id
TALK "File saved: " + result.filename
' Send file to user
DOWNLOAD "reports/summary.pdf" AS "Monthly Summary.pdf"
TALK "Here's your report!"
PDF Operations
' Generate PDF from template
GENERATE PDF "templates/invoice.html" TO "invoices/inv-001.pdf" WITH
customer = "John Doe",
amount = 150.00,
date = FORMAT(NOW(), "YYYY-MM-DD")
' Merge multiple PDFs
MERGE PDF ["cover.pdf", "report.pdf", "appendix.pdf"] TO "complete-report.pdf"
Archive Operations
' Create a ZIP archive
COMPRESS ["doc1.pdf", "doc2.pdf", "images/"] TO "package.zip"
' Extract archive contents
EXTRACT "uploaded.zip" TO "extracted/"
Storage Structure
Files are stored in the bot’s drive bucket with the following structure:
bot-name/
├── documents/
├── templates/
├── exports/
├── uploads/
│ └── user-123/
├── reports/
├── temp/
└── archives/
Path Rules
| Path | Description |
|---|---|
file.txt | Root of bot’s storage |
folder/file.txt | Subdirectory |
folder/sub/file.txt | Nested subdirectory |
../file.txt | Not allowed — no parent traversal |
/absolute/path | Not allowed — paths are always relative |
' Valid paths
content = READ "documents/report.pdf"
WRITE data TO "exports/2025/january.csv"
' Invalid paths (will error)
' READ "../other-bot/file.txt" ' Parent traversal blocked
' READ "/etc/passwd" ' Absolute paths blocked
Supported File Types
Text Files
| Extension | Description |
|---|---|
.txt | Plain text |
.md | Markdown |
.json | JSON data |
.csv | Comma-separated values |
.xml | XML data |
.html | HTML documents |
.yaml | YAML configuration |
Documents
| Extension | Description | Auto-Extract |
|---|---|---|
.pdf | PDF documents | ✓ Text extracted |
.docx | Word documents | ✓ Text extracted |
.xlsx | Excel spreadsheets | ✓ As table data |
.pptx | PowerPoint | ✓ Text from slides |
Media
| Extension | Description |
|---|---|
.jpg, .png, .gif | Images |
.mp3, .wav | Audio |
.mp4, .mov | Video |
Archives
| Extension | Description |
|---|---|
.zip | ZIP archives |
.tar.gz | Compressed tarballs |
Common Patterns
Template Processing
' Load template and fill placeholders
template = READ "templates/welcome-email.html"
email_body = REPLACE(template, "{{name}}", customer.name)
email_body = REPLACE(email_body, "{{date}}", FORMAT(NOW(), "MMMM DD, YYYY"))
email_body = REPLACE(email_body, "{{order_id}}", order.id)
SEND MAIL customer.email, "Welcome!", email_body
Data Export
' Export query results to CSV
results = FIND "orders" WHERE status = "completed" AND date > "2025-01-01"
WRITE results TO "exports/completed-orders.csv" AS TABLE
' Generate download link
link = DOWNLOAD "exports/completed-orders.csv" AS LINK
TALK "Download your export: " + link
Backup and Archive
' Create dated backup
backup_name = "backups/data-" + FORMAT(NOW(), "YYYYMMDD") + ".json"
data = GET BOT MEMORY "important_data"
WRITE JSON_STRINGIFY(data) TO backup_name
' Archive old files
old_files = LIST "reports/2024/"
COMPRESS old_files TO "archives/reports-2024.zip"
' Clean up originals
FOR EACH file IN old_files
DELETE FILE file.path
NEXT
File Validation
' Check file exists before processing
files = LIST "uploads/" + user.id + "/"
document_found = false
FOR EACH file IN files
IF file.name = expected_filename THEN
document_found = true
EXIT FOR
END IF
NEXT
IF document_found THEN
content = READ "uploads/" + user.id + "/" + expected_filename
' Process content...
ELSE
TALK "I couldn't find that document. Please upload it again."
END IF
Organize Uploads
' Organize uploaded files by type
HEAR uploaded_file
file_type = uploaded_file.mime_type
IF INSTR(file_type, "image") > 0 THEN
folder = "images"
ELSE IF INSTR(file_type, "pdf") > 0 THEN
folder = "documents"
ELSE IF INSTR(file_type, "spreadsheet") > 0 OR INSTR(file_type, "excel") > 0 THEN
folder = "spreadsheets"
ELSE
folder = "other"
END IF
result = UPLOAD uploaded_file TO folder + "/" + FORMAT(NOW(), "YYYY/MM")
TALK "File saved to " + folder + "!"
Error Handling
ON ERROR RESUME NEXT
content = READ "documents/important.pdf"
IF ERROR THEN
PRINT "File error: " + ERROR_MESSAGE
TALK "Sorry, I couldn't access that file. It may have been moved or deleted."
ELSE
TALK "File loaded successfully!"
' Process content...
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
FILE_NOT_FOUND | File doesn’t exist | Check path, list directory first |
PERMISSION_DENIED | Access blocked | Check file permissions |
PATH_TRAVERSAL | Invalid path with .. | Use only relative paths |
FILE_TOO_LARGE | Exceeds size limit | Increase limit or split file |
INVALID_FORMAT | Unsupported file type | Convert or use different format |
Configuration
Configure file operations in config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
drive-read-timeout,30
drive-write-timeout,60
drive-max-file-size,52428800
drive-allowed-extensions,pdf,docx,xlsx,jpg,png,csv,json
Size Limits
| Operation | Default Limit | Configurable |
|---|---|---|
| Read file | 50 MB | Yes |
| Write file | 50 MB | Yes |
| Upload file | 50 MB | Yes |
| Total storage | 10 GB per bot | Yes |
| Files per directory | 10,000 | Yes |
Security Considerations
- Path validation — All paths are sanitized to prevent directory traversal
- File type restrictions — Executable files blocked by default
- Size limits — Prevents storage exhaustion attacks
- Access control — Files isolated per bot
- Malware scanning — Uploaded files scanned before storage
See Also
- READ — Load file content
- WRITE — Save content to files
- DELETE FILE — Remove files
- COPY — Copy files
- MOVE — Move/rename files
- LIST — List directory contents
- COMPRESS — Create archives
- EXTRACT — Extract archives
- UPLOAD — Upload files
- DOWNLOAD — Send files to users
- GENERATE PDF — Create PDFs
- MERGE PDF — Combine PDFs
READ
The READ keyword loads content from files stored in the bot’s drive storage, enabling bots to access documents, data files, and other stored resources.
Syntax
content = READ "filename"
content = READ "path/to/filename"
data = READ "filename.csv" AS TABLE
lines = READ "filename.txt" AS LINES
Parameters
| Parameter | Type | Description |
|---|---|---|
filename | String | Path to the file in the bot’s storage |
AS TABLE | Flag | Parse CSV/Excel files as structured data |
AS LINES | Flag | Return content as array of lines |
Description
READ retrieves file content from the bot’s configured storage (drive bucket). It supports:
- Text files (.txt, .md, .json, .xml, .csv)
- Documents (.pdf, .docx) — automatically extracts text
- Spreadsheets (.xlsx, .csv) — can parse as structured data
- Binary files — returned as a base64-encoded string
The file path is relative to the bot’s storage root. Use forward slashes for subdirectories.
Examples
Basic File Read
' Read a text file
content = READ "welcome-message.txt"
TALK content
Read from Subdirectory
' Read file from nested folder
template = READ "templates/email/welcome.html"
Read JSON Data
' Read and parse JSON configuration
config_text = READ "config.json"
config = JSON_PARSE(config_text)
TALK "Current theme: " + config.theme
Read CSV as Table
' Load CSV data as structured table
products = READ "inventory/products.csv" AS TABLE
FOR EACH product IN products
TALK product.name + ": $" + product.price
NEXT
Read as Lines
' Read file as array of lines
faq_lines = READ "faq.txt" AS LINES
TALK "We have " + LEN(faq_lines) + " FAQ entries"
FOR EACH line IN faq_lines
IF INSTR(line, user_question) > 0 THEN
TALK "Found relevant FAQ: " + line
END IF
NEXT
Read PDF Document
' Extract text from PDF
contract_text = READ "documents/contract.pdf"
TALK "Contract length: " + LEN(contract_text) + " characters"
' Use LLM to analyze
summary = LLM "Summarize the key points of this contract:\n\n" + contract_text
TALK summary
Read Excel Spreadsheet
' Load Excel data
sales_data = READ "reports/sales-q1.xlsx" AS TABLE
total = 0
FOR EACH row IN sales_data
total = total + row.amount
NEXT
TALK "Total Q1 sales: $" + FORMAT(total, "#,##0.00")
Working with Different File Types
Text Files
' Plain text - returned as string
notes = READ "notes.txt"
readme = READ "README.md"
JSON Files
' JSON - returned as string, use JSON_PARSE for object
json_text = READ "data.json"
data = JSON_PARSE(json_text)
CSV Files
' CSV as string
csv_raw = READ "data.csv"
' CSV as structured table (recommended)
csv_data = READ "data.csv" AS TABLE
first_row = csv_data[0]
Documents
' PDF - text extracted automatically
pdf_content = READ "report.pdf"
' Word documents - text extracted automatically
doc_content = READ "proposal.docx"
Error Handling
ON ERROR RESUME NEXT
content = READ "optional-file.txt"
IF ERROR THEN
PRINT "File not found, using default"
content = "Default content"
END IF
Check File Exists
' List directory to check if file exists
files = LIST "documents/"
found = false
FOR EACH file IN files
IF file.name = "report.pdf" THEN
found = true
EXIT FOR
END IF
NEXT
IF found THEN
content = READ "documents/report.pdf"
ELSE
TALK "Report not found. Would you like me to generate one?"
END IF
Common Use Cases
Load Email Template
' Read HTML template and fill variables
template = READ "templates/order-confirmation.html"
' Replace placeholders
email_body = REPLACE(template, "{{customer_name}}", customer.name)
email_body = REPLACE(email_body, "{{order_id}}", order.id)
email_body = REPLACE(email_body, "{{total}}", FORMAT(order.total, "$#,##0.00"))
SEND MAIL customer.email, "Order Confirmation", email_body
Process Data File
' Read customer list and send personalized messages
customers = READ "campaigns/target-customers.csv" AS TABLE
FOR EACH customer IN customers
IF customer.opted_in = "yes" THEN
message = "Hi " + customer.first_name + ", check out our new products!"
SEND SMS customer.phone, message
END IF
NEXT
TALK "Campaign sent to " + LEN(customers) + " customers"
Load Bot Configuration
' Read bot settings from file
settings_text = READ "bot-settings.json"
settings = JSON_PARSE(settings_text)
' Apply settings
SET BOT MEMORY "greeting", settings.greeting
SET BOT MEMORY "language", settings.language
SET BOT MEMORY "max_retries", settings.max_retries
Knowledge Base Lookup
' Read FAQ document for quick lookups
faq_content = READ "knowledge/faq.md"
' Search for relevant section
IF INSTR(user_question, "return") > 0 THEN
' Extract return policy section
start_pos = INSTR(faq_content, "## Return Policy")
end_pos = INSTR(faq_content, "##", start_pos + 1)
policy = MID(faq_content, start_pos, end_pos - start_pos)
TALK policy
END IF
File Path Rules
| Path | Description |
|---|---|
file.txt | Root of bot’s storage |
folder/file.txt | Subdirectory |
folder/sub/file.txt | Nested subdirectory |
../file.txt | Not allowed — no parent traversal |
/absolute/path | Not allowed — paths are always relative |
Configuration
Configure storage settings in config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
drive-read-timeout,30
Implementation Notes
- Implemented in Rust under src/file/mod.rs
- Automatically detects file encoding (UTF-8, UTF-16, etc.)
- PDF extraction uses the pdf-extract crate
- DOCX extraction parses XML content
- Maximum file size: 50 MB (configurable)
- Files are cached in memory for repeated reads
Related Keywords
- WRITE — Save content to files
- LIST — List files in a directory
- DOWNLOAD — Download files from URLs
- UPLOAD — Upload files to storage
- DELETE FILE — Remove files
- GET — Read from URLs or files
Summary
READ is the primary keyword for accessing stored files. It handles text extraction from various document formats, supports structured data parsing for CSV/Excel files, and integrates seamlessly with the bot’s storage system. Use it to load templates, process data files, access configuration, and work with uploaded documents.
WRITE
The WRITE keyword saves content to files in the bot’s drive storage, enabling bots to create documents, export data, and persist information.
Syntax
WRITE content TO "filename"
WRITE data TO "filename.csv" AS TABLE
WRITE lines TO "filename.txt" AS LINES
WRITE content TO "filename" APPEND
Parameters
| Parameter | Type | Description |
|---|---|---|
content | String | The content to write to the file |
filename | String | Path to the file in the bot’s storage |
AS TABLE | Flag | Write structured data as CSV format |
AS LINES | Flag | Write array as separate lines |
APPEND | Flag | Add to existing file instead of overwriting |
Description
WRITE saves content to the bot’s configured storage (drive bucket). It supports:
- Text files (.txt, .md, .json, .xml, .csv)
- Creating new files or overwriting existing ones
- Appending to existing files
- Writing structured data as CSV
- Automatic directory creation
The file path is relative to the bot’s storage root. Use forward slashes for subdirectories.
Examples
Basic File Write
' Write a simple text file
message = "Welcome to our service!"
WRITE message TO "welcome.txt"
TALK "File saved successfully!"
Write to Subdirectory
' Write file to nested folder (directories created automatically)
report = "Monthly Report\n\nSales: $10,000\nExpenses: $3,000"
WRITE report TO "reports/2025/january.md"
Write JSON Data
' Create JSON configuration file
config_json = '{"theme": "dark", "language": "en", "notifications": true}'
WRITE config_json TO "settings.json"
Write CSV as Table
' Export data as CSV - use FIND to get data from database
orders = FIND "orders" WHERE status = "completed" LIMIT 100
WRITE orders TO "exports/orders.csv" AS TABLE
TALK "Exported " + LEN(orders) + " orders to CSV"
Write Lines
' Write array as separate lines
log_entries = [
"2025-01-15 10:00 - User logged in",
"2025-01-15 10:05 - Order placed",
"2025-01-15 10:10 - Payment processed"
]
WRITE log_entries TO "logs/activity.log" AS LINES
Append to File
' Add entry to existing log file
new_entry = FORMAT(NOW(), "YYYY-MM-DD HH:mm") + " - " + event_description + "\n"
WRITE new_entry TO "logs/events.log" APPEND
Common Use Cases
Generate Report
' Create a formatted report
report = "# Sales Report\n\n"
report = report + "**Date:** " + FORMAT(NOW(), "MMMM DD, YYYY") + "\n\n"
report = report + "## Summary\n\n"
report = report + "- Total Sales: $" + FORMAT(total_sales, "#,##0.00") + "\n"
report = report + "- Orders: " + order_count + "\n"
report = report + "- Average Order: $" + FORMAT(total_sales / order_count, "#,##0.00") + "\n"
filename = "reports/sales-" + FORMAT(NOW(), "YYYYMMDD") + ".md"
WRITE report TO filename
TALK "Report saved to " + filename
Export Customer Data
' Export customer list to CSV
customers = FIND "customers" WHERE status = "active"
WRITE customers TO "exports/active-customers.csv" AS TABLE
' Email the export
SEND MAIL "manager@company.com", "Customer Export", "See attached file", "exports/active-customers.csv"
Save Meeting Notes
' Save notes from a conversation
notes = "# Meeting Notes\n\n"
notes = notes + "**Date:** " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "\n"
notes = notes + "**Participants:** " + participants + "\n\n"
notes = notes + "## Discussion\n\n"
notes = notes + meeting_content + "\n\n"
notes = notes + "## Action Items\n\n"
notes = notes + action_items
filename = "meetings/" + FORMAT(NOW(), "YYYYMMDD") + "-" + meeting_topic + ".md"
WRITE notes TO filename
TALK "Meeting notes saved!"
Create Backup
' Backup current data
data = GET BOT MEMORY "important_data"
backup_name = "backups/data-" + FORMAT(NOW(), "YYYYMMDD-HHmmss") + ".json"
WRITE JSON_STRINGIFY(data) TO backup_name
TALK "Backup created: " + backup_name
Build Log File
' Append to daily log
log_line = FORMAT(NOW(), "HH:mm:ss") + " | " + user_id + " | " + action + " | " + details
log_file = "logs/" + FORMAT(NOW(), "YYYY-MM-DD") + ".log"
WRITE log_line + "\n" TO log_file APPEND
Generate HTML Page
' Create a simple HTML report
html = "<!DOCTYPE html>\n"
html = html + "<html><head><title>Report</title></head>\n"
html = html + "<body>\n"
html = html + "<h1>Daily Summary</h1>\n"
html = html + "<p>Generated: " + FORMAT(NOW(), "YYYY-MM-DD HH:mm") + "</p>\n"
html = html + "<ul>\n"
FOR EACH item IN summary_items
html = html + "<li>" + item + "</li>\n"
NEXT
html = html + "</ul>\n"
html = html + "</body></html>"
WRITE html TO "reports/daily-summary.html"
Writing Different Formats
Plain Text
WRITE "Hello, World!" TO "greeting.txt"
Markdown
doc = "# Title\n\n## Section 1\n\nContent here.\n"
WRITE doc TO "document.md"
JSON
json_text = '{"name": "Test", "value": 123}'
WRITE json_text TO "data.json"
CSV (Manual)
csv = "name,email,phone\n"
csv = csv + "Alice,alice@example.com,555-0100\n"
csv = csv + "Bob,bob@example.com,555-0101\n"
WRITE csv TO "contacts.csv"
CSV (From Table)
' Write query results as CSV
data = FIND "contacts" WHERE active = true
WRITE data TO "contacts.csv" AS TABLE
Error Handling
ON ERROR RESUME NEXT
WRITE content TO "protected/file.txt"
IF ERROR THEN
PRINT "Write failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't save the file. Please try again."
ELSE
TALK "File saved successfully!"
END IF
File Path Rules
| Path | Description |
|---|---|
file.txt | Root of bot’s storage |
folder/file.txt | Subdirectory (created if needed) |
folder/sub/file.txt | Nested subdirectory |
../file.txt | Not allowed — no parent traversal |
/absolute/path | Not allowed — paths are always relative |
Overwrite vs Append
| Mode | Behavior |
|---|---|
| Default | Overwrites existing file completely |
APPEND | Adds content to end of existing file |
' Overwrite (default)
WRITE "New content" TO "file.txt"
' Append
WRITE "Additional content\n" TO "file.txt" APPEND
Configuration
Configure storage settings in config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
drive-write-timeout,60
drive-max-file-size,52428800
Implementation Notes
- Implemented in Rust under src/file/mod.rs
- Automatically creates parent directories
- Uses UTF-8 encoding for text files
- Maximum file size: 50MB (configurable)
- Atomic writes to prevent corruption
- Returns confirmation on success
Related Keywords
- READ — Load content from files
- LIST — List files in a directory
- DELETE FILE — Remove files
- COPY — Copy files
- MOVE — Move or rename files
- UPLOAD — Upload files to storage
Summary
WRITE is the primary keyword for creating and saving files. Use it to generate reports, export data, create backups, build logs, and persist any content. Combined with AS TABLE for CSV exports and APPEND for log files, it provides flexible file creation capabilities for any bot workflow.
DELETE FILE
Deprecated: The DELETE FILE keyword has been unified into the DELETE keyword. Use DELETE instead.
Unified DELETE Keyword
The DELETE keyword now automatically detects file paths and handles file deletion:
' Delete a file - just use DELETE
DELETE "path/to/file.txt"
' DELETE auto-detects:
' - URLs → HTTP DELETE
' - table, filter → Database DELETE
' - path → File DELETE
Migration
Old Syntax (Deprecated)
' Old way - no longer needed
DELETE FILE "temp/report.pdf"
New Syntax (Recommended)
' New way - unified DELETE
DELETE "temp/report.pdf"
Examples
' Delete a temporary file
DELETE "temp/processed.csv"
' Delete uploaded file
DELETE "uploads/" + filename
' Delete with error handling
ON ERROR RESUME NEXT
DELETE "temp/large-file.pdf"
IF ERROR THEN
TALK "Could not delete file: " + ERROR MESSAGE
END IF
ON ERROR GOTO 0
See Also
- DELETE — Unified delete keyword (HTTP, Database, File)
- READ — Read file contents
- WRITE — Write file contents
- COPY — Copy files
- MOVE — Move/rename files
COPY
The COPY keyword duplicates files within the bot’s drive storage, creating copies in the same or different directories.
Syntax
COPY "source" TO "destination"
result = COPY "source" TO "destination"
Parameters
| Parameter | Type | Description |
|---|---|---|
source | String | Path to the file to copy |
destination | String | Path for the new copy |
Description
COPY creates a duplicate of a file in the bot’s storage. The original file remains unchanged. If the destination directory doesn’t exist, it’s created automatically.
Use cases include:
- Creating backups before modifications
- Duplicating templates for new users
- Archiving files while keeping originals accessible
- Organizing files into multiple locations
Examples
Basic File Copy
' Copy a file to a new location
COPY "templates/report.docx" TO "user-reports/report-copy.docx"
TALK "File copied successfully!"
Copy with Same Name
' Copy to different directory, keeping the same filename
COPY "documents/contract.pdf" TO "archive/contract.pdf"
Copy Before Editing
' Create backup before modifying
COPY "config/settings.json" TO "config/settings.json.backup"
' Now safe to modify original
content = READ "config/settings.json"
modified = REPLACE(content, "old_value", "new_value")
WRITE modified TO "config/settings.json"
TALK "Settings updated. Backup saved."
Copy Template for User
' Create user-specific copy of template
user_folder = "users/" + user.id
COPY "templates/welcome-kit.pdf" TO user_folder + "/welcome-kit.pdf"
TALK "Your welcome kit is ready!"
Copy with Timestamp
' Create timestamped copy
timestamp = FORMAT(NOW(), "YYYYMMDD-HHmmss")
COPY "reports/daily.csv" TO "archive/daily-" + timestamp + ".csv"
TALK "Report archived with timestamp"
Batch Copy
' Copy multiple files
files_to_copy = ["doc1.pdf", "doc2.pdf", "doc3.pdf"]
FOR EACH file IN files_to_copy
COPY "source/" + file TO "destination/" + file
NEXT
TALK "Copied " + LEN(files_to_copy) + " files"
Return Value
Returns an object with copy details:
| Property | Description |
|---|---|
result.source | Original file path |
result.destination | New file path |
result.size | File size in bytes |
result.copied_at | Timestamp of copy operation |
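A brief sketch reading these properties back:
result = COPY "templates/form.docx" TO "archive/form.docx"
TALK "Copied " + result.size + " bytes to " + result.destination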
Error Handling
ON ERROR RESUME NEXT
COPY "documents/important.pdf" TO "backup/important.pdf"
IF ERROR THEN
PRINT "Copy failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't copy that file."
ELSE
TALK "File copied successfully!"
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
FILE_NOT_FOUND | Source doesn’t exist | Verify source path |
PERMISSION_DENIED | Access blocked | Check permissions |
DESTINATION_EXISTS | File already exists | Use different name or delete first |
STORAGE_FULL | No space available | Clean up storage |
Behavior Notes
- Overwrites by default: If destination exists, it’s replaced
- Creates directories: Parent folders created automatically
- Preserves metadata: File type and creation date preserved
- Atomic operation: Copy completes fully or not at all
Configuration
No specific configuration required. Uses bot’s standard drive settings from config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
Related Keywords
- MOVE — Move or rename files
- DELETE FILE — Remove files
- READ — Read file contents
- WRITE — Write file contents
- LIST — List directory contents
Summary
COPY creates duplicates of files in storage. Use it for backups, templates, archiving, and organizing files. The original file is preserved, and destination directories are created automatically.
MOVE
The MOVE keyword relocates or renames files within the bot’s drive storage.
Syntax
MOVE "source" TO "destination"
result = MOVE "source" TO "destination"
Parameters
| Parameter | Type | Description |
|---|---|---|
source | String | Current path of the file |
destination | String | New path for the file |
Description
MOVE transfers a file from one location to another within the bot’s storage. The original file is removed after the move completes. This keyword can also be used to rename files by moving them to a new name in the same directory.
Use cases include:
- Organizing files into folders
- Renaming files
- Archiving processed files
- Moving uploads to permanent storage
Examples
Basic File Move
' Move a file to a different folder
MOVE "inbox/document.pdf" TO "processed/document.pdf"
TALK "File moved to processed folder"
Rename a File
' Rename by moving to same directory with new name
MOVE "reports/report.pdf" TO "reports/sales-report-2025.pdf"
TALK "File renamed successfully"
Move After Processing
' Process file then move to archive
content = READ "incoming/data.csv"
' ... process the data ...
MOVE "incoming/data.csv" TO "archive/data-" + FORMAT(NOW(), "YYYYMMDD") + ".csv"
TALK "Data processed and archived"
Organize User Uploads
' Move uploaded file to user's folder
HEAR uploaded_file
temp_path = UPLOAD uploaded_file TO "temp"
permanent_path = "users/" + user.id + "/documents/" + uploaded_file.name
MOVE temp_path.path TO permanent_path
TALK "File saved to your documents"
Move with Category
' Organize files by type
file_type = GET_FILE_TYPE(filename)
SWITCH file_type
CASE "pdf"
MOVE "uploads/" + filename TO "documents/" + filename
CASE "jpg", "png"
MOVE "uploads/" + filename TO "images/" + filename
CASE "csv", "xlsx"
MOVE "uploads/" + filename TO "data/" + filename
CASE ELSE
MOVE "uploads/" + filename TO "other/" + filename
END SWITCH
TALK "File organized into " + file_type + " folder"
Batch Move
' Move all files from one folder to another
files = LIST "temp/"
FOR EACH file IN files
MOVE "temp/" + file.name TO "permanent/" + file.name
NEXT
TALK "Moved " + LEN(files) + " files"
Return Value
Returns an object with move details:
| Property | Description |
|---|---|
result.source | Original file path |
result.destination | New file path |
result.size | File size in bytes |
result.moved_at | Timestamp of move operation |
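For example, the result can feed an audit log via WRITE ... APPEND:
result = MOVE "inbox/report.pdf" TO "archive/report.pdf"
log_line = result.moved_at + " | " + result.source + " -> " + result.destination + "\n"
WRITE log_line TO "logs/file-moves.log" APPEND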
Error Handling
ON ERROR RESUME NEXT
MOVE "documents/report.pdf" TO "archive/report.pdf"
IF ERROR THEN
PRINT "Move failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't move that file."
ELSE
TALK "File moved successfully!"
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
FILE_NOT_FOUND | Source doesn’t exist | Verify source path |
PERMISSION_DENIED | Access blocked | Check permissions |
DESTINATION_EXISTS | Target file exists | Delete target first or use different name |
SAME_PATH | Source equals destination | Use different destination |
Move vs Copy
| Operation | Source After | Use When |
|---|---|---|
MOVE | Deleted | Relocating or renaming |
COPY | Preserved | Creating duplicates |
' MOVE: Original is gone
MOVE "a/file.txt" TO "b/file.txt"
' Only exists at b/file.txt now
' COPY: Original remains
COPY "a/file.txt" TO "b/file.txt"
' Exists at both locations
Behavior Notes
- Atomic operation: Move completes fully or not at all
- Creates directories: Parent folders created automatically
- Overwrites by default: Destination replaced if exists
- Cross-folder: Can move between any directories in storage
Configuration
No specific configuration required. Uses bot’s standard drive settings from config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
Related Keywords
- COPY — Duplicate files
- DELETE FILE — Remove files
- READ — Read file contents
- WRITE — Write file contents
- LIST — List directory contents
- UPLOAD — Upload files to storage
Summary
MOVE relocates or renames files within storage. The original file is removed after the move. Use it to organize files, rename documents, archive processed data, and manage user uploads. Destination directories are created automatically.
LIST
The LIST keyword retrieves a directory listing from the bot’s drive storage, returning information about files and subdirectories.
Syntax
files = LIST "path/"
files = LIST "path/" FILTER "*.pdf"
files = LIST "path/" RECURSIVE
Parameters
| Parameter | Type | Description |
|---|---|---|
path | String | Directory path to list (must end with /) |
FILTER | String | Optional glob pattern to filter results |
RECURSIVE | Flag | Include files in subdirectories |
Description
LIST returns an array of file and directory information from the specified path in the bot’s storage. Each item in the result includes metadata such as name, size, type, and modification date.
Use cases include:
- Browsing user uploads
- Finding files matching patterns
- Checking if files exist
- Building file inventories
- Processing batches of files
Examples
Basic Directory Listing
' List all files in a directory
files = LIST "documents/"
FOR EACH file IN files
TALK file.name + " (" + file.size + " bytes)"
NEXT
Filter by Extension
' List only PDF files
pdfs = LIST "documents/" FILTER "*.pdf"
TALK "Found " + LEN(pdfs) + " PDF files"
FOR EACH pdf IN pdfs
TALK "- " + pdf.name
NEXT
Recursive Listing
' List all files including subdirectories
all_files = LIST "uploads/" RECURSIVE
TALK "Total files: " + LEN(all_files)
Check File Exists
' Check if a specific file exists
files = LIST "reports/"
found = false
FOR EACH file IN files
IF file.name = "monthly-report.pdf" THEN
found = true
EXIT FOR
END IF
NEXT
IF found THEN
TALK "Report found!"
ELSE
TALK "Report not found. Would you like me to generate one?"
END IF
Find Recent Files
' List files modified in last 24 hours
files = LIST "inbox/"
yesterday = DATEADD(NOW(), -1, "day")
recent = FILTER files WHERE modified > yesterday
TALK "You have " + LEN(recent) + " new files since yesterday"
Calculate Folder Size
' Sum up total size of files in folder
files = LIST "backups/" RECURSIVE
total_size = 0
FOR EACH file IN files
total_size = total_size + file.size
NEXT
size_mb = total_size / 1048576
TALK "Backup folder size: " + FORMAT(size_mb, "#,##0.00") + " MB"
Process All Files of Type
' Process all CSV files in a folder
csv_files = LIST "imports/" FILTER "*.csv"
FOR EACH csv_file IN csv_files
data = READ "imports/" + csv_file.name AS TABLE
' Process each file...
MOVE "imports/" + csv_file.name TO "processed/" + csv_file.name
NEXT
TALK "Processed " + LEN(csv_files) + " CSV files"
Return Value
Returns an array of file objects. Each object contains:
| Property | Type | Description |
|---|---|---|
name | String | File or directory name |
path | String | Full path relative to storage root |
size | Number | File size in bytes (0 for directories) |
type | String | file or directory |
mime_type | String | MIME type (e.g., application/pdf) |
modified | DateTime | Last modification timestamp |
created | DateTime | Creation timestamp |
Example Result
files = LIST "documents/"
' files[0] might be:
' {
' name: "report.pdf",
' path: "documents/report.pdf",
' size: 245678,
' type: "file",
' mime_type: "application/pdf",
' modified: "2025-01-15T10:30:00Z",
' created: "2025-01-10T09:00:00Z"
' }
Filter Patterns
| Pattern | Matches |
|---|---|
* | All files |
*.pdf | All PDF files |
*.csv | All CSV files |
report* | Files starting with “report” |
*2025* | Files containing “2025” |
*.jpg,*.png | Multiple extensions |
' Multiple extensions
images = LIST "photos/" FILTER "*.jpg,*.png,*.gif"
' Wildcard in name
reports = LIST "exports/" FILTER "sales-*"
Error Handling
ON ERROR RESUME NEXT
files = LIST "nonexistent-folder/"
IF ERROR THEN
PRINT "List failed: " + ERROR_MESSAGE
TALK "That folder doesn't exist."
ELSE IF LEN(files) = 0 THEN
TALK "The folder is empty."
ELSE
TALK "Found " + LEN(files) + " items"
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
PATH_NOT_FOUND | Directory doesn’t exist | Check path spelling |
NOT_A_DIRECTORY | Path is a file, not folder | Add trailing / |
PERMISSION_DENIED | Access blocked | Check permissions |
Behavior Notes
- Trailing slash required: Paths must end with / to indicate a directory
- Excludes hidden files: Files starting with . are excluded by default
- Sorted alphabetically: Results are sorted by name
- Non-recursive by default: Only lists immediate contents unless RECURSIVE is specified
Configuration
No specific configuration required. Uses bot’s standard drive settings from config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
Related Keywords
- READ — Read file contents
- WRITE — Write file contents
- COPY — Copy files
- MOVE — Move or rename files
- DELETE FILE — Remove files
- UPLOAD — Upload files to storage
Summary
LIST retrieves directory contents from storage, returning detailed metadata about each file and subdirectory. Use it to browse files, find matching documents, check existence, calculate sizes, and process batches of files. Filter patterns and recursive options help narrow results to exactly what you need.
COMPRESS
The COMPRESS keyword creates ZIP archives from files and directories in the bot’s storage, enabling bots to bundle multiple files for download or transfer.
Syntax
COMPRESS files TO "archive.zip"
result = COMPRESS files TO "archive.zip"
COMPRESS "folder/" TO "archive.zip"
Parameters
| Parameter | Type | Description |
|---|---|---|
files | Array/String | List of file paths or a single folder path |
TO | Clause | Destination path for the archive |
Description
COMPRESS creates a ZIP archive containing the specified files or directory contents. The archive is stored in the bot’s drive storage and can be downloaded, emailed, or transferred.
Use cases include:
- Bundling multiple documents for download
- Creating backups
- Packaging exports for users
- Archiving old files
- Preparing files for email attachments
Examples
Compress Multiple Files
' Create archive from list of files
files = ["report.pdf", "data.csv", "images/logo.png"]
COMPRESS files TO "package.zip"
TALK "Files compressed into package.zip"
Compress a Folder
' Compress entire folder contents
COMPRESS "documents/project/" TO "project-backup.zip"
TALK "Project folder compressed"
Compress with Result
' Get compression result details
result = COMPRESS files TO "exports/archive.zip"
TALK "Archive created: " + result.filename
TALK "Size: " + FORMAT(result.size / 1024, "#,##0") + " KB"
TALK "Files included: " + result.file_count
Compress for Download
' Create archive and send to user
files = LIST "reports/" FILTER "*.pdf"
file_paths = []
FOR EACH file IN files
file_paths = APPEND(file_paths, "reports/" + file.name)
NEXT
result = COMPRESS file_paths TO "all-reports.zip"
DOWNLOAD "all-reports.zip" AS "Your Reports.zip"
TALK "Here are all your reports in a single download!"
Compress with Timestamp
' Create dated archive
timestamp = FORMAT(NOW(), "YYYYMMDD-HHmmss")
archive_name = "backup-" + timestamp + ".zip"
COMPRESS "data/" TO "backups/" + archive_name
TALK "Backup created: " + archive_name
Common Use Cases
Create Document Package
' Bundle documents for a customer
customer_files = [
"contracts/" + customer_id + "/agreement.pdf",
"contracts/" + customer_id + "/terms.pdf",
"invoices/" + customer_id + "/latest.pdf"
]
result = COMPRESS customer_files TO "temp/customer-package.zip"
DOWNLOAD "temp/customer-package.zip" AS "Your Documents.zip"
TALK "Here's your complete document package!"
Archive Old Data
' Archive and remove old files
old_files = LIST "logs/" FILTER "*" WHERE modified < DATEADD(NOW(), -90, "day")
file_paths = []
FOR EACH file IN old_files
file_paths = APPEND(file_paths, "logs/" + file.name)
NEXT
IF LEN(file_paths) > 0 THEN
archive_name = "logs-archive-" + FORMAT(NOW(), "YYYYMM") + ".zip"
COMPRESS file_paths TO "archives/" + archive_name
' Remove original files
FOR EACH path IN file_paths
DELETE FILE path
NEXT
TALK "Archived " + LEN(file_paths) + " old log files"
END IF
Export User Data
' GDPR data export
user_folder = "users/" + user.id + "/"
COMPRESS user_folder TO "exports/user-data-" + user.id + ".zip"
link = DOWNLOAD "exports/user-data-" + user.id + ".zip" AS LINK
TALK "Your data export is ready: " + link
TALK "This link expires in 24 hours."
Email Attachment Bundle
' Create attachment for email
attachments = [
"reports/summary.pdf",
"reports/details.xlsx",
"reports/charts.png"
]
COMPRESS attachments TO "temp/report-bundle.zip"
SEND MAIL recipient_email, "Monthly Report Bundle",
"Please find attached the complete monthly report package.",
"temp/report-bundle.zip"
TALK "Report bundle sent to " + recipient_email
Return Value
Returns an object with archive details:
| Property | Description |
|---|---|
| result.path | Full path to the archive |
| result.filename | Archive filename |
| result.size | Archive size in bytes |
| result.file_count | Number of files in archive |
| result.created_at | Creation timestamp |
Error Handling
ON ERROR RESUME NEXT
result = COMPRESS files TO "archive.zip"
IF ERROR THEN
PRINT "Compression failed: " + ERROR_MESSAGE
IF INSTR(ERROR_MESSAGE, "not found") > 0 THEN
TALK "One or more files could not be found."
ELSE IF INSTR(ERROR_MESSAGE, "storage") > 0 THEN
TALK "Not enough storage space for the archive."
ELSE
TALK "Sorry, I couldn't create the archive. Please try again."
END IF
ELSE
TALK "Archive created successfully!"
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
| FILE_NOT_FOUND | Source file doesn’t exist | Verify file paths |
| STORAGE_FULL | Insufficient space | Clean up storage |
| EMPTY_ARCHIVE | No files to compress | Check file list |
| PERMISSION_DENIED | Access blocked | Check permissions |
Compression Options
The default compression uses standard ZIP format with deflate compression. This balances file size reduction with compatibility.
Size Limits
| Limit | Default | Notes |
|---|---|---|
| Max archive size | 500 MB | Configurable |
| Max files per archive | 10,000 | Practical limit |
| Max single file | 100 MB | Per file in archive |
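Because the default archive limit is 500 MB, a script can total candidate file sizes with LIST before compressing. A minimal sketch, reusing the data/ folder from the earlier backup example:
' Pre-check total size against the 500 MB default limit
files = LIST "data/" FILTER "*"
total = 0
FOR EACH file IN files
total = total + file.size
NEXT
IF total > 524288000 THEN
TALK "Contents exceed the 500 MB archive limit."
ELSE
COMPRESS "data/" TO "backups/data.zip"
END IF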
Configuration
No specific configuration required. Uses bot’s standard drive settings from config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
Implementation Notes
- Implemented in Rust under src/file/archive.rs
- Uses standard ZIP format for compatibility
- Preserves directory structure in archive
- Supports recursive folder compression
- Progress tracking for large archives
- Atomic operation (creates temp file, then moves)
Related Keywords
- EXTRACT — Extract archive contents
- LIST — List files to compress
- DOWNLOAD — Send archive to user
- DELETE FILE — Remove files after archiving
- COPY — Copy files before archiving
Summary
COMPRESS creates ZIP archives from files and folders. Use it to bundle documents for download, create backups, package exports, and prepare email attachments. The archive preserves directory structure and can be immediately downloaded or processed. Combine with LIST to dynamically select files and DOWNLOAD to deliver archives to users.
EXTRACT
The EXTRACT keyword unpacks ZIP archives to a specified destination in the bot’s storage, enabling bots to process uploaded archives and access their contents.
Syntax
EXTRACT "archive.zip" TO "destination/"
result = EXTRACT "archive.zip" TO "destination/"
Parameters
| Parameter | Type | Description |
|---|---|---|
| archive | String | Path to the ZIP archive to extract |
| TO | Clause | Destination folder for extracted contents |
Description
EXTRACT unpacks a ZIP archive and places its contents in the specified destination folder. The folder is created automatically if it doesn’t exist. Directory structure within the archive is preserved.
Use cases include:
- Processing user-uploaded archives
- Unpacking data imports
- Restoring backups
- Accessing bundled resources
- Handling bulk file uploads
Examples
Basic Extraction
' Extract archive to a folder
EXTRACT "uploads/documents.zip" TO "extracted/"
TALK "Archive extracted successfully"
Extract with Result
' Get extraction details
result = EXTRACT "backup.zip" TO "restored/"
TALK "Extracted " + result.file_count + " files"
TALK "Total size: " + FORMAT(result.total_size / 1024, "#,##0") + " KB"
Extract User Upload
' Handle uploaded archive from user
TALK "Please upload a ZIP file with your documents."
HEAR uploaded_file
IF uploaded_file.type = "application/zip" THEN
upload_result = UPLOAD uploaded_file TO "temp"
' Extract to user's folder
user_folder = "users/" + user.id + "/imports/" + FORMAT(NOW(), "YYYYMMDD") + "/"
result = EXTRACT upload_result.path TO user_folder
TALK "Extracted " + result.file_count + " files from your archive!"
' List extracted files
files = LIST user_folder
FOR EACH file IN files
TALK "- " + file.name
NEXT
ELSE
TALK "Please upload a ZIP file."
END IF
Extract and Process
' Extract data files and process them
result = EXTRACT "imports/data-batch.zip" TO "temp/batch/"
csv_files = LIST "temp/batch/" FILTER "*.csv"
FOR EACH csv_file IN csv_files
data = READ "temp/batch/" + csv_file.name AS TABLE
' Process each row
FOR EACH row IN data
INSERT INTO "imports" WITH
source_file = csv_file.name,
data = row,
imported_at = NOW()
NEXT
TALK "Processed: " + csv_file.name
NEXT
' Clean up temp files
DELETE FILE "temp/batch/"
TALK "Import complete: processed " + LEN(csv_files) + " files"
Restore Backup
' Restore from backup archive
TALK "Enter the backup filename to restore (e.g., backup-20250115.zip)"
HEAR backup_name
backup_path = "backups/" + backup_name
files = LIST "backups/"
found = false
FOR EACH file IN files
IF file.name = backup_name THEN
found = true
EXIT FOR
END IF
NEXT
IF found THEN
result = EXTRACT backup_path TO "restored/"
TALK "Backup restored: " + result.file_count + " files"
ELSE
TALK "Backup file not found. Available backups:"
FOR EACH file IN files
TALK "- " + file.name
NEXT
END IF
Common Use Cases
Bulk Document Upload
' Handle bulk document upload
TALK "Upload a ZIP file containing your documents."
HEAR archive
upload = UPLOAD archive TO "temp"
result = EXTRACT upload.path TO "documents/bulk-" + FORMAT(NOW(), "YYYYMMDDHHmmss") + "/"
TALK "Successfully uploaded " + result.file_count + " documents!"
' Clean up temp file
DELETE FILE upload.path
Process Image Pack
' Extract and catalog images
result = EXTRACT "uploads/images.zip" TO "temp/images/"
images = LIST "temp/images/" FILTER "*.jpg,*.png,*.gif"
FOR EACH image IN images
' Move to permanent storage with organized path
MOVE "temp/images/" + image.name TO "media/images/" + image.name
' Record in database
INSERT INTO "media" WITH
filename = image.name,
path = "media/images/" + image.name,
size = image.size,
uploaded_at = NOW()
NEXT
TALK "Cataloged " + LEN(images) + " images"
Template Installation
' Install a template pack
result = EXTRACT "templates/new-theme.zip" TO "themes/custom/"
TALK "Template installed with " + result.file_count + " files"
' Verify required files
required = ["style.css", "config.json", "templates/"]
missing = []
FOR EACH req IN required
files = LIST "themes/custom/" FILTER req
IF LEN(files) = 0 THEN
missing = APPEND(missing, req)
END IF
NEXT
IF LEN(missing) > 0 THEN
TALK "Warning: Missing required files: " + JOIN(missing, ", ")
ELSE
TALK "Template is complete and ready to use!"
END IF
Return Value
Returns an object with extraction details:
| Property | Description |
|---|---|
| result.destination | Destination folder path |
| result.file_count | Number of files extracted |
| result.folder_count | Number of folders created |
| result.total_size | Total size of extracted files in bytes |
| result.files | Array of extracted file paths |
| result.extracted_at | Extraction timestamp |
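None of the examples above read the result.files array, so here is a minimal sketch that walks it (the archive path is illustrative):
' Walk the extracted file paths returned by EXTRACT
result = EXTRACT "uploads/pack.zip" TO "unpacked/"
FOR EACH path IN result.files
TALK "Extracted: " + path
NEXT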
Error Handling
ON ERROR RESUME NEXT
result = EXTRACT "uploads/data.zip" TO "extracted/"
IF ERROR THEN
PRINT "Extraction failed: " + ERROR_MESSAGE
IF INSTR(ERROR_MESSAGE, "corrupt") > 0 THEN
TALK "The archive appears to be corrupted. Please upload again."
ELSE IF INSTR(ERROR_MESSAGE, "not found") > 0 THEN
TALK "Archive file not found."
ELSE IF INSTR(ERROR_MESSAGE, "storage") > 0 THEN
TALK "Not enough storage space to extract the archive."
ELSE
TALK "Sorry, I couldn't extract the archive. Please try again."
END IF
ELSE
TALK "Extraction complete!"
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
| FILE_NOT_FOUND | Archive doesn’t exist | Verify archive path |
| INVALID_ARCHIVE | Not a valid ZIP file | Check file format |
| CORRUPT_ARCHIVE | Archive is damaged | Request new upload |
| STORAGE_FULL | Insufficient space | Clean up storage |
| PERMISSION_DENIED | Access blocked | Check permissions |
Security Considerations
- Path validation: Extracted paths are validated to prevent directory traversal attacks
- Size limits: Maximum extracted size is enforced to prevent storage exhaustion
- File type filtering: Executable files can be blocked if configured
- Malware scanning: Uploaded archives can be scanned before extraction
Size Limits
| Limit | Default | Notes |
|---|---|---|
| Max archive size | 100 MB | For uploaded archives |
| Max extracted size | 500 MB | Total after extraction |
| Max files | 10,000 | Files in archive |
| Max path depth | 10 | Nested folder depth |
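Because uploaded archives default to a 100 MB limit, a script can check an archive’s size before extracting. A minimal sketch, with an illustrative filename:
' Check archive size against the 100 MB default before extracting
info = LIST "uploads/" FILTER "data.zip"
IF LEN(info) = 0 THEN
TALK "Archive not found."
ELSE IF info[0].size > 104857600 THEN
TALK "This archive exceeds the 100 MB extraction limit."
ELSE
result = EXTRACT "uploads/data.zip" TO "extracted/"
TALK "Extracted " + result.file_count + " files"
END IF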
Configuration
No specific configuration required. Uses bot’s standard drive settings from config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
Implementation Notes
- Implemented in Rust under src/file/archive.rs
- Supports standard ZIP format
- Preserves directory structure
- Handles nested folders
- Progress tracking for large archives
- Atomic extraction (temp folder, then move)
- Cleans up on failure
Related Keywords
- COMPRESS — Create ZIP archives
- UPLOAD — Upload archives from users
- LIST — List extracted files
- MOVE — Organize extracted files
- DELETE FILE — Clean up after extraction
Summary
EXTRACT unpacks ZIP archives to a destination folder. Use it to process uploaded archives, restore backups, handle bulk imports, and access bundled resources. The archive’s directory structure is preserved, and the destination folder is created automatically. Combine with UPLOAD to accept user archives and LIST to discover extracted contents.
UPLOAD
The UPLOAD keyword transfers files from external URLs or local paths to the bot’s drive storage, enabling bots to collect documents, images, and other files from users or external sources.
Syntax
result = UPLOAD url
result = UPLOAD url TO "destination"
result = UPLOAD url TO "destination" AS "filename"
UPLOAD file_data TO "destination"
Parameters
| Parameter | Type | Description |
|---|---|---|
| url | String | Source URL to download and upload |
| destination | String | Target folder in bot’s storage |
| filename | String | Custom filename (optional) |
| file_data | Binary | File data from user input or API response |
Description
UPLOAD retrieves a file from a URL or accepts file data and stores it in the bot’s configured storage (drive bucket). It supports:
- Downloading files from external URLs
- Accepting file uploads from chat users
- Storing API response attachments
- Organizing files into folders
- Automatic filename detection or custom naming
The destination path is relative to the bot’s storage root. Directories are created automatically if they don’t exist.
Examples
Basic URL Upload
' Download and store a file from URL
result = UPLOAD "https://example.com/report.pdf"
TALK "File saved as: " + result.filename
Upload to Specific Folder
' Upload to a specific directory
result = UPLOAD "https://cdn.example.com/image.png" TO "images/products"
TALK "Image stored at: " + result.path
Upload with Custom Filename
' Upload with a custom name
result = UPLOAD "https://api.example.com/export/data" TO "exports" AS "monthly-report.xlsx"
TALK "Report saved as: " + result.filename
Handle User File Upload
' When user sends a file via WhatsApp/chat
TALK "Please send me the document you'd like to upload."
HEAR user_file
IF user_file.type = "file" THEN
result = UPLOAD user_file TO "user-uploads/" + user.id
TALK "Got it! I've saved your file: " + result.filename
ELSE
TALK "That doesn't look like a file. Please try again."
END IF
Upload from API Response
' Download attachment from external API
invoice_url = GET "https://api.billing.com/invoices/" + invoice_id + "/pdf"
result = UPLOAD invoice_url.download_url TO "invoices/" + customer_id
TALK "Invoice downloaded and saved!"
SEND MAIL customer_email, "Your Invoice", "Please find your invoice attached.", result.path
Return Value
UPLOAD returns an object with:
| Property | Description |
|---|---|
| result.path | Full path in storage |
| result.filename | Name of the saved file |
| result.size | File size in bytes |
| result.type | MIME type of the file |
| result.url | Internal URL to access the file |
Common Use Cases
Collect User Documents
' Document collection flow
TALK "I need a few documents to process your application."
TALK "First, please upload your ID document."
HEAR id_doc
id_result = UPLOAD id_doc TO "applications/" + application_id + "/documents" AS "id-document"
TALK "Great! Now please upload proof of address."
HEAR address_doc
address_result = UPLOAD address_doc TO "applications/" + application_id + "/documents" AS "proof-of-address"
TALK "Thank you! I've received:"
TALK "✓ ID Document: " + id_result.filename
TALK "✓ Proof of Address: " + address_result.filename
Archive External Content
' Download and archive web content
urls = [
"https://example.com/report-2024.pdf",
"https://example.com/report-2025.pdf"
]
FOR EACH url IN urls
result = UPLOAD url TO "archive/reports"
TALK "Archived: " + result.filename
NEXT
TALK "All reports archived successfully!"
Profile Photo Upload
TALK "Would you like to update your profile photo? Send me an image."
HEAR photo
IF photo.type = "image" THEN
result = UPLOAD photo TO "profiles" AS user.id + "-avatar"
SET USER MEMORY "avatar_url", result.url
TALK "Profile photo updated! Looking good! 📸"
ELSE
TALK "Please send an image file."
END IF
Backup External Data
' Backup data from external service
backup_url = "https://api.service.com/export?format=json&date=" + FORMAT(NOW(), "YYYY-MM-DD")
SET HEADER "Authorization", "Bearer " + api_token
result = UPLOAD backup_url TO "backups" AS "backup-" + FORMAT(NOW(), "YYYYMMDD") + ".json"
TALK "Backup complete: " + FORMAT(result.size / 1024, "#,##0") + " KB"
Receipt Collection
' Expense report receipt upload
TALK "Please upload your receipt for the expense."
HEAR receipt
result = UPLOAD receipt TO "expenses/" + expense_id + "/receipts"
' Update expense record
UPDATE "expenses" SET receipt_path = result.path WHERE id = expense_id
TALK "Receipt attached to expense #" + expense_id
Supported File Types
| Category | Extensions |
|---|---|
| Documents | .pdf, .docx, .doc, .txt, .md, .rtf |
| Spreadsheets | .xlsx, .xls, .csv |
| Images | .jpg, .jpeg, .png, .gif, .webp, .svg |
| Archives | .zip, .tar, .gz, .rar |
| Audio | .mp3, .wav, .ogg, .m4a |
| Video | .mp4, .mov, .avi, .webm |
| Data | .json, .xml, .yaml |
Error Handling
ON ERROR RESUME NEXT
result = UPLOAD "https://example.com/large-file.zip" TO "downloads"
IF ERROR THEN
PRINT "Upload failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't download that file. The server might be unavailable."
ELSE IF result.size > 50000000 THEN
TALK "Warning: This is a large file (" + FORMAT(result.size / 1048576, "#,##0") + " MB)"
ELSE
TALK "File uploaded successfully!"
END IF
Validate File Type
HEAR user_file
allowed_types = ["application/pdf", "image/jpeg", "image/png"]
IF NOT CONTAINS(allowed_types, user_file.mime_type) THEN
TALK "Sorry, I only accept PDF and image files."
ELSE
result = UPLOAD user_file TO "uploads"
TALK "File accepted!"
END IF
Size Limits
| Limit | Default | Configurable |
|---|---|---|
| Maximum file size | 50 MB | Yes |
| Maximum files per folder | 10,000 | Yes |
| Total storage per bot | 10 GB | Yes |
Configuration
Configure upload settings in config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
upload-max-size,52428800
upload-allowed-types,pdf,docx,xlsx,jpg,png
upload-timeout,120
Security Considerations
- Files are scanned for malware before storage
- Executable files (.exe, .sh, .bat) are blocked by default
- File paths are sanitized to prevent directory traversal
- Original filenames are preserved but sanitized
- Large files are chunked for reliable upload
Implementation Notes
- Implemented in Rust under src/file/mod.rs
- Uses streaming upload for large files
- Supports resume for interrupted uploads
- Automatic retry on network failures (up to 3 attempts)
- Progress tracking available for large files
- Deduplication based on content hash (optional)
Related Keywords
- DOWNLOAD — Download files to user
- READ — Read file contents
- WRITE — Write content to files
- LIST — List files in storage
- DELETE FILE — Remove files
- COPY — Copy files within storage
Summary
UPLOAD is essential for collecting files from users and external sources. Use it to accept document uploads, archive web content, collect receipts and photos, and store API response attachments. Combined with folder organization and custom naming, it provides flexible file collection for any bot workflow.
DOWNLOAD
The DOWNLOAD keyword retrieves files from the bot’s storage and sends them to users or saves them to external locations, enabling bots to share documents, export data, and deliver files through chat channels.
Syntax
DOWNLOAD "filename"
DOWNLOAD "filename" TO user
DOWNLOAD "filename" AS "display_name"
url = DOWNLOAD "filename" AS LINK
Parameters
| Parameter | Type | Description |
|---|---|---|
| filename | String | Path to the file in the bot’s storage |
| TO user | Flag | Send file to specific user (default: current user) |
| AS "name" | String | Custom display name for the file |
| AS LINK | Flag | Return a download URL instead of sending file |
Description
DOWNLOAD retrieves a file from the bot’s configured storage (drive bucket) and delivers it to the user through their chat channel. It supports:
- Sending files directly in chat (WhatsApp, Telegram, web, etc.)
- Custom display names for downloaded files
- Generating shareable download links
- Sending files to specific users
- Automatic MIME type detection
The file path is relative to the bot’s storage root. Use forward slashes for subdirectories.
Examples
Basic File Download
' Send a file to the current user
DOWNLOAD "documents/user-guide.pdf"
TALK "Here's the user guide you requested!"
Download with Custom Name
' Send file with a friendly display name
DOWNLOAD "reports/rpt-2025-01.pdf" AS "January 2025 Report.pdf"
Generate Download Link
' Get a shareable URL instead of sending directly
link = DOWNLOAD "exports/data.xlsx" AS LINK
TALK "Download your data here: " + link
' Link expires after 24 hours by default
Send to Specific User
' Send file to a different user
DOWNLOAD "contracts/agreement.pdf" TO manager_email
TALK "I've sent the contract to your manager for review."
Download After Processing
' Generate a report and send it
report_content = "# Sales Report\n\n" + sales_data
WRITE report_content TO "temp/report.md"
' Convert to PDF (if configured)
GENERATE PDF "temp/report.md" TO "temp/report.pdf"
DOWNLOAD "temp/report.pdf" AS "Sales Report.pdf"
TALK "Here's your sales report!"
Common Use Cases
Send Invoice
' Lookup and send customer invoice
invoice_path = "invoices/" + customer_id + "/" + invoice_id + ".pdf"
DOWNLOAD invoice_path AS "Invoice-" + invoice_id + ".pdf"
TALK "Here's your invoice. Let me know if you have any questions!"
Export Data
' Export user's data to file and send
user_data = FIND "orders" WHERE customer_id = user.id
WRITE user_data TO "exports/user-" + user.id + "-orders.csv" AS TABLE
DOWNLOAD "exports/user-" + user.id + "-orders.csv" AS "My Orders.csv"
TALK "Here's a complete export of your order history."
Share Meeting Notes
' Send meeting notes from earlier session
meeting_date = FORMAT(NOW(), "YYYY-MM-DD")
notes_file = "meetings/" + meeting_date + "-notes.md"
IF FILE_EXISTS(notes_file) THEN
DOWNLOAD notes_file AS "Meeting Notes - " + meeting_date + ".md"
TALK "Here are the notes from today's meeting!"
ELSE
TALK "I don't have any meeting notes for today."
END IF
Provide Template
' Send a template file for user to fill out
TALK "I'll send you the application form. Please fill it out and send it back."
DOWNLOAD "templates/application-form.docx" AS "Application Form.docx"
Generate and Share Report
' Create report on demand
TALK "Generating your monthly report..."
' Build report content
report = "# Monthly Summary\n\n"
report = report + "**Period:** " + month_name + " " + year + "\n\n"
report = report + "## Key Metrics\n\n"
report = report + "- Revenue: $" + FORMAT(revenue, "#,##0.00") + "\n"
report = report + "- Orders: " + order_count + "\n"
report = report + "- New Customers: " + new_customers + "\n"
' Save and send
filename = "reports/monthly-" + FORMAT(NOW(), "YYYYMM") + ".md"
WRITE report TO filename
DOWNLOAD filename AS "Monthly Report - " + month_name + ".md"
Send Multiple Files
' Send several related files
files = ["contract.pdf", "terms.pdf", "schedule.pdf"]
TALK "I'm sending you the complete documentation package:"
FOR EACH file IN files
DOWNLOAD "documents/" + file
WAIT 1 ' Brief pause between files
NEXT
TALK "All documents sent! Please review and let me know if you have questions."
Return Values
Direct Download (default)
Returns a confirmation object:
| Property | Description |
|---|---|
| result.sent | Boolean indicating success |
| result.filename | Name of file sent |
| result.size | File size in bytes |
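None of the examples above inspect the confirmation object, so here is a minimal sketch (the file path is illustrative):
' Confirm delivery using the returned object
result = DOWNLOAD "docs/manual.pdf"
IF result.sent THEN
TALK "Sent " + result.filename + " (" + FORMAT(result.size / 1024, "#,##0") + " KB)"
END IF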
Download as Link
Returns a URL string:
link = DOWNLOAD "file.pdf" AS LINK
' Returns: "https://storage.example.com/download/abc123?expires=..."
Channel-Specific Behavior
| Channel | Behavior |
|---|---|
| WhatsApp | Sends as document attachment |
| Telegram | Sends as document or media based on type |
| Web Chat | Triggers browser download |
| Email | Attaches to email message |
| SMS | Sends download link (files not supported) |
File Type Handling
| File Type | Display |
|---|---|
| PDF | Document with preview |
| Images | Inline image display |
| Audio | Audio player |
| Video | Video player |
| Other | Generic document icon |
' Images display inline in most channels
DOWNLOAD "photos/product.jpg"
' PDFs show with document preview
DOWNLOAD "docs/manual.pdf"
Error Handling
ON ERROR RESUME NEXT
DOWNLOAD "reports/missing-file.pdf"
IF ERROR THEN
PRINT "Download failed: " + ERROR_MESSAGE
TALK "Sorry, I couldn't find that file. It may have been moved or deleted."
END IF
Check File Exists First
files = LIST "invoices/" + customer_id + "/"
found = false
FOR EACH file IN files
IF file.name = invoice_id + ".pdf" THEN
found = true
EXIT FOR
END IF
NEXT
IF found THEN
DOWNLOAD "invoices/" + customer_id + "/" + invoice_id + ".pdf"
ELSE
TALK "Invoice not found. Please check the invoice number."
END IF
Link Options
When using AS LINK, you can configure link behavior:
' Default link (expires in 24 hours)
link = DOWNLOAD "file.pdf" AS LINK
' Custom expiration (in config.csv)
' download-link-expiry,3600 (1 hour)
Size Limits
| Channel | Limit | Notes |
|---|---|---|
| WhatsApp | 100 MB | Documents, 16 MB for media |
| Telegram | 50 MB | Standard, 2 GB for premium |
| Web Chat | No limit | Browser handles download |
| Email | 25 MB | Typical email limit |
' For large files, use link instead
file_info = LIST "exports/" FILTER "large-file.zip"
IF file_info[0].size > 50000000 THEN
link = DOWNLOAD "exports/large-file.zip" AS LINK
TALK "This file is large. Download it here: " + link
ELSE
DOWNLOAD "exports/large-file.zip"
END IF
Configuration
Configure download settings in config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
download-link-expiry,86400
download-link-base-url,https://files.mybot.com
download-max-size,104857600
Implementation Notes
- Implemented in Rust under src/file/mod.rs
- Uses streaming for large file transfers
- Automatic MIME type detection
- Supports range requests for resumable downloads
- Files are served through secure signed URLs
- Access logging for audit trails
Related Keywords
- UPLOAD — Upload files to storage
- READ — Read file contents
- WRITE — Write content to files
- LIST — List files in storage
- GENERATE PDF — Create PDF documents
Summary
DOWNLOAD is essential for delivering files to users through chat. Use it to send invoices, share reports, provide templates, and export data. Combined with AS LINK for large files and custom display names, it provides flexible file delivery for any bot workflow.
GENERATE PDF
The GENERATE PDF keyword creates PDF documents from HTML templates or Markdown content, enabling bots to produce professional reports, invoices, certificates, and other documents.
Note: This keyword uses spaces, not underscores. Write GENERATE PDF, not GENERATE_PDF.
Syntax
result = GENERATE PDF template, data, "output.pdf"
Parameters
| Parameter | Type | Description |
|---|---|---|
| template | String | Path to HTML template or Markdown file |
| data | Object | Template variables to substitute |
| output | String | Output path for the generated PDF |
Description
GENERATE PDF renders an HTML or Markdown template into a PDF document, substituting placeholders with provided values. The generated PDF is stored in the bot’s drive storage and can be downloaded, emailed, or processed further.
Use cases include:
- Generating invoices and receipts
- Creating reports and summaries
- Producing certificates and credentials
- Building contracts and agreements
- Creating personalized documents
Examples
Basic PDF Generation
' Generate PDF from template with data
data = #{
"title": "Invoice",
"date": FORMAT(NOW(), "MMMM DD, YYYY")
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/inv-001.pdf"
TALK "Invoice generated!"
With Template Variables
' Generate PDF with data substitution
data = #{
"customer_name": customer.name,
"customer_email": customer.email,
"invoice_number": invoice_id,
"date": FORMAT(NOW(), "MMMM DD, YYYY"),
"items": order_items,
"subtotal": order_subtotal,
"tax": order_tax,
"total": order_total
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/inv-" + invoice_id + ".pdf"
TALK "Invoice #" + invoice_id + " generated!"
Generate and Download
' Create PDF and send to user
data = #{
"title": "Monthly Report",
"period": FORMAT(NOW(), "MMMM YYYY"),
"data": report_data
}
result = GENERATE PDF "templates/report.html", data, "temp/report.pdf"
DOWNLOAD result.url AS "Monthly Report.pdf"
TALK "Here's your report!"
Generate and Email
' Create PDF and email it
data = #{
"party_a": company_name,
"party_b": customer_name,
"effective_date": FORMAT(NOW(), "MMMM DD, YYYY"),
"terms": contract_terms
}
result = GENERATE PDF "templates/contract.html", data, "contracts/" + contract_id + ".pdf"
SEND MAIL customer_email, "Your Contract",
"Please find attached your contract for review.",
[result.localName]
TALK "Contract sent to " + customer_email
Template Format
HTML Template
<!DOCTYPE html>
<html>
<head>
<style>
body { font-family: Arial, sans-serif; }
.header { text-align: center; margin-bottom: 20px; }
.invoice-number { color: #666; }
table { width: 100%; border-collapse: collapse; }
th, td { border: 1px solid #ddd; padding: 8px; }
.total { font-weight: bold; font-size: 1.2em; }
</style>
</head>
<body>
<div class="header">
<h1>INVOICE</h1>
<p class="invoice-number">{{invoice_number}}</p>
</div>
<p><strong>Date:</strong> {{date}}</p>
<p><strong>Customer:</strong> {{customer_name}}</p>
<table>
<tr>
<th>Item</th>
<th>Quantity</th>
<th>Price</th>
</tr>
{{#each items}}
<tr>
<td>{{this.name}}</td>
<td>{{this.quantity}}</td>
<td>${{this.price}}</td>
</tr>
{{/each}}
</table>
<p class="total">Total: ${{total}}</p>
</body>
</html>
Markdown Template
# {{title}}
**Date:** {{date}}
**Prepared for:** {{customer_name}}
## Summary
{{summary}}
## Details
{{#each items}}
- **{{this.name}}:** {{this.description}}
{{/each}}
---
Generated by General Bots
Template Placeholders
| Syntax | Description |
|---|---|
| {{variable}} | Simple variable substitution |
| {{#each items}}...{{/each}} | Loop over array |
| {{#if condition}}...{{/if}} | Conditional rendering |
| {{#unless condition}}...{{/unless}} | Negative conditional |
| {{this.property}} | Access property in loop |
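The {{#if}} block is listed above but not used in any example. A minimal sketch follows, where templates/receipt.html is a hypothetical template containing a {{#if discount}}...{{/if}} section:
' Drive conditional rendering from script data (template name is illustrative)
data = #{
"customer_name": customer.name,
"total": FORMAT(order.total, "#,##0.00"),
"discount": order.discount > 0
}
result = GENERATE PDF "templates/receipt.html", data, "receipts/" + order.id + ".pdf"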
Common Use Cases
Invoice Generation
' Generate a complete invoice
items = FIND "order_items" WHERE order_id = order.id
data = #{
"invoice_number": "INV-" + FORMAT(order.id, "00000"),
"date": FORMAT(NOW(), "MMMM DD, YYYY"),
"due_date": FORMAT(DATEADD(NOW(), 30, "day"), "MMMM DD, YYYY"),
"customer_name": customer.name,
"customer_address": customer.address,
"items": items,
"subtotal": FORMAT(order.subtotal, "#,##0.00"),
"tax": FORMAT(order.tax, "#,##0.00"),
"total": FORMAT(order.total, "#,##0.00")
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/" + order.id + ".pdf"
TALK "Invoice generated: " + result.localName
Certificate Generation
' Generate completion certificate
data = #{
"recipient_name": user.name,
"course_name": course.title,
"completion_date": FORMAT(NOW(), "MMMM DD, YYYY"),
"certificate_id": GUID(),
"instructor_name": course.instructor
}
result = GENERATE PDF "templates/certificate.html", data, "certificates/" + user.id + "-" + course.id + ".pdf"
DOWNLOAD result.url AS "Certificate - " + course.title + ".pdf"
TALK "Congratulations! Here's your certificate!"
Report Generation
' Generate monthly sales report
sales_data = FIND "sales" WHERE
date >= DATEADD(NOW(), -30, "day")
summary = AGGREGATE sales_data SUM amount
count = AGGREGATE sales_data COUNT
data = #{
"title": "Monthly Sales Report",
"period": FORMAT(NOW(), "MMMM YYYY"),
"total_sales": FORMAT(summary, "$#,##0.00"),
"transaction_count": count,
"sales_data": sales_data,
"generated_at": FORMAT(NOW(), "YYYY-MM-DD HH:mm")
}
result = GENERATE PDF "templates/sales-report.html", data, "reports/sales-" + FORMAT(NOW(), "YYYYMM") + ".pdf"
TALK "Sales report generated!"
Contract Generation
' Generate service agreement
data = #{
"contract_number": contract_id,
"client_name": client.name,
"client_company": client.company,
"service_description": selected_service.description,
"monthly_fee": FORMAT(selected_service.price, "$#,##0.00"),
"start_date": FORMAT(start_date, "MMMM DD, YYYY"),
"term_months": contract_term,
"end_date": FORMAT(DATEADD(start_date, contract_term, "month"), "MMMM DD, YYYY")
}
result = GENERATE PDF "templates/service-agreement.html", data, "contracts/sa-" + contract_id + ".pdf"
TALK "Service agreement ready for signature!"
Return Value
Returns an object with generation details:
| Property | Description |
|---|---|
| result.url | Full URL to the generated PDF (S3/MinIO path) |
| result.localName | Local filename of the generated PDF |
Error Handling
ON ERROR RESUME NEXT
data = #{
"customer_name": customer_name,
"total": order_total
}
result = GENERATE PDF "templates/invoice.html", data, "invoices/test.pdf"
IF ERROR THEN
TALK "PDF generation failed: " + ERROR MESSAGE
IF INSTR(ERROR MESSAGE, "template") > 0 THEN
TALK "Template file not found."
ELSE IF INSTR(ERROR MESSAGE, "storage") > 0 THEN
TALK "Not enough storage space."
ELSE
TALK "Sorry, I couldn't generate the document. Please try again."
END IF
ELSE
TALK "PDF generated successfully!"
END IF
ON ERROR GOTO 0
Common Errors
| Error | Cause | Solution |
|---|---|---|
| TEMPLATE_NOT_FOUND | Template file doesn’t exist | Verify template path |
| INVALID_TEMPLATE | Template has syntax errors | Check template format |
| MISSING_VARIABLE | Required placeholder not provided | Include all variables |
| STORAGE_FULL | Insufficient space | Clean up storage |
| RENDER_ERROR | HTML/CSS rendering issue | Simplify template |
Styling Tips
Supported CSS
- Basic typography (fonts, sizes, colors)
- Box model (margins, padding, borders)
- Tables and layouts
- Page breaks (page-break-before, page-break-after)
- Print media queries (@media print)
Page Setup
<style>
@page {
size: A4;
margin: 2cm;
}
.page-break {
page-break-after: always;
}
@media print {
.no-print { display: none; }
}
</style>
Configuration
No specific configuration required. Uses bot’s standard drive settings from config.csv:
name,value
drive-provider,seaweedfs
drive-url,http://localhost:8333
drive-bucket,my-bot
Implementation Notes
- Implemented in Rust under src/file/pdf.rs
- Uses headless browser rendering for HTML
- Supports embedded images (base64 or relative paths)
- Handles Unicode and special characters
- Maximum PDF size: 50 MB
- Template caching for performance
Related Keywords
- MERGE PDF — Combine multiple PDFs
- FILL — Fill templates with data (alternative approach)
- READ — Read template content
- DOWNLOAD — Send PDF to user
- SEND MAIL — Email PDF as attachment
- WRITE — Create template dynamically
Summary
GENERATE PDF creates professional PDF documents from HTML or Markdown templates with variable substitution. Use it for invoices, reports, certificates, contracts, and any document that needs a polished format. Templates support loops, conditionals, and styling for flexible document generation. Combine with DOWNLOAD to deliver PDFs to users or SEND MAIL to email them as attachments.
Syntax reminder: Always use GENERATE PDF (with a space), not GENERATE_PDF.
MERGE PDF
The MERGE PDF keyword combines multiple PDF files into a single document, enabling bots to consolidate reports, compile documents, and create comprehensive file packages.
Syntax
result = MERGE PDF files, "output.pdf"
Parameters
| Parameter | Type | Description |
|---|---|---|
| files | Array/String | Array of PDF file paths or a single path |
| output | String | Output filename for the merged PDF |
Description
MERGE PDF takes multiple PDF files and combines them into a single document in the order specified. This is useful for creating comprehensive reports, combining related documents, or building document packages for clients.
Use cases include:
- Combining invoice and receipt PDFs
- Merging report sections into complete reports
- Creating document packages for clients
- Consolidating scanned documents
- Building compliance document bundles
Examples
Basic PDF Merge
' Merge two PDF files
files = ["report-part1.pdf", "report-part2.pdf"]
result = MERGE PDF files, "complete-report.pdf"
TALK "Report merged: " + result.localName
Merge Multiple Documents
' Merge multiple documents into one package
documents = [
"contracts/agreement.pdf",
"documents/terms.pdf",
"documents/privacy-policy.pdf",
"documents/appendix-a.pdf"
]
result = MERGE PDF documents, "client-package-" + client_id + ".pdf"
TALK "Document package created!"
DOWNLOAD result.url AS "Complete Package.pdf"
Dynamic Document Collection
' Find and merge all invoices for a month
invoice_files = []
invoices = FIND "invoices" WHERE month = current_month
FOR EACH inv IN invoices
invoice_files = invoice_files + ["invoices/" + inv.filename]
NEXT
result = MERGE PDF invoice_files, "monthly-invoices-" + FORMAT(NOW(), "YYYYMM") + ".pdf"
TALK "Merged " + LEN(invoice_files) + " invoices into one document"
Merge with Generated PDFs
' Generate PDFs first, then merge them
cover = GENERATE PDF "templates/cover.html", cover_data, "temp/cover.pdf"
body = GENERATE PDF "templates/report.html", report_data, "temp/body.pdf"
appendix = GENERATE PDF "templates/appendix.html", appendix_data, "temp/appendix.pdf"
files = [cover.localName, body.localName, appendix.localName]
result = MERGE PDF files, "reports/full-report-" + report_id + ".pdf"
TALK "Complete report generated with " + LEN(files) + " sections"
Merge and Email
' Create document package and email to client
documents = [
"proposals/proposal-" + deal_id + ".pdf",
"documents/service-agreement.pdf",
"documents/pricing-schedule.pdf"
]
result = MERGE PDF documents, "packages/" + client_name + "-proposal.pdf"
SEND MAIL client_email,
"Your Proposal Package",
"Please find attached your complete proposal package.",
[result.localName]
TALK "Proposal package sent to " + client_email
Return Value
Returns an object with merge details:
| Property | Description |
|---|---|
| result.url | Full URL to the merged PDF (S3/MinIO path) |
| result.localName | Local filename of the merged PDF |
Common Use Cases
Monthly Report Compilation
' Compile all weekly reports into monthly report
weekly_reports = [
"reports/week1.pdf",
"reports/week2.pdf",
"reports/week3.pdf",
"reports/week4.pdf"
]
' Generate cover page
cover = GENERATE PDF "templates/monthly-cover.html", #{
"month": FORMAT(NOW(), "MMMM YYYY"),
"generated": FORMAT(NOW(), "YYYY-MM-DD")
}, "temp/cover.pdf"
' Merge cover with weekly reports
all_files = [cover.localName] + weekly_reports
result = MERGE PDF all_files, "reports/monthly-" + FORMAT(NOW(), "YYYYMM") + ".pdf"
TALK "Monthly report compiled!"
Client Onboarding Package
' Create onboarding document package for new client
package_files = [
"templates/welcome-letter.pdf",
"contracts/service-agreement-" + contract_id + ".pdf",
"documents/user-guide.pdf",
"documents/faq.pdf",
"documents/support-contacts.pdf"
]
result = MERGE PDF package_files, "onboarding/" + client_id + "-welcome-package.pdf"
SEND MAIL client_email,
"Welcome to Our Service!",
"Please find your complete onboarding package attached.",
[result.localName]
TALK "Onboarding package sent to " + client_name
Compliance Document Bundle
' Bundle all compliance documents for audit
compliance_docs = FIND "compliance_documents" WHERE year = audit_year
file_list = []
FOR EACH doc IN compliance_docs
file_list = file_list + [doc.file_path]
NEXT
' Add table of contents
toc = GENERATE PDF "templates/compliance-toc.html", #{
"documents": compliance_docs,
"audit_year": audit_year
}, "temp/toc.pdf"
all_files = [toc.localName] + file_list
result = MERGE PDF all_files, "audits/compliance-bundle-" + audit_year + ".pdf"
TALK "Compliance bundle ready with " + LEN(compliance_docs) + " documents"
Invoice Bundle for Accounting
' Create quarterly invoice bundle
quarter_start = DATEADD(NOW(), -3, "month")
invoices = FIND "generated_invoices" WHERE created_at >= quarter_start
invoice_files = []
FOR EACH inv IN invoices
invoice_files = invoice_files + ["invoices/" + inv.pdf_filename]
NEXT
IF LEN(invoice_files) > 0 THEN
result = MERGE PDF invoice_files, "accounting/Q" + quarter + "-invoices.pdf"
TALK "Bundled " + LEN(invoice_files) + " invoices for Q" + quarter
ELSE
TALK "No invoices found for this quarter"
END IF
Error Handling
ON ERROR RESUME NEXT
files = ["doc1.pdf", "doc2.pdf", "doc3.pdf"]
result = MERGE PDF files, "merged.pdf"
IF ERROR THEN
error_msg = ERROR_MESSAGE
IF INSTR(error_msg, "not found") > 0 THEN
TALK "One or more PDF files could not be found."
ELSE IF INSTR(error_msg, "invalid") > 0 THEN
TALK "One of the files is not a valid PDF."
ELSE IF INSTR(error_msg, "storage") > 0 THEN
TALK "Not enough storage space for the merged file."
ELSE
TALK "Merge failed: " + error_msg
END IF
ELSE
TALK "PDFs merged successfully!"
END IF
Validating Files Before Merge
' Check files exist before attempting merge
files_to_merge = ["report1.pdf", "report2.pdf", "report3.pdf"]
valid_files = []
FOR EACH f IN files_to_merge
IF FILE_EXISTS(f) THEN
valid_files = valid_files + [f]
ELSE
PRINT "Warning: " + f + " not found, skipping"
END IF
NEXT
IF LEN(valid_files) > 0 THEN
result = MERGE PDF valid_files, "merged-output.pdf"
TALK "Merged " + LEN(valid_files) + " of " + LEN(files_to_merge) + " files"
ELSE
TALK "No valid PDF files found to merge"
END IF
Common Errors
| Error | Cause | Solution |
|---|---|---|
| FILE_NOT_FOUND | Source PDF doesn’t exist | Verify file paths |
| INVALID_PDF | File is not a valid PDF | Check file format |
| EMPTY_INPUT | No files provided | Ensure array has files |
| STORAGE_FULL | Insufficient disk space | Clean up storage |
| PERMISSION_DENIED | Cannot read source file | Check file permissions |
Best Practices
File Organization
' Organize files in logical order before merge
sections = [
"01-cover.pdf",
"02-executive-summary.pdf",
"03-introduction.pdf",
"04-analysis.pdf",
"05-recommendations.pdf",
"06-appendices.pdf"
]
result = MERGE PDF sections, "final-report.pdf"
Temporary File Cleanup
' Clean up temporary files after merge
temp_files = []
' Generate temporary PDFs
FOR i = 1 TO 5
temp_file = "temp/section-" + i + ".pdf"
GENERATE PDF "templates/section.html", section_data[i], temp_file
temp_files = temp_files + [temp_file]
NEXT
' Merge all sections
result = MERGE PDF temp_files, "final-document.pdf"
' Clean up temp files
FOR EACH tf IN temp_files
DELETE FILE tf
NEXT
TALK "Document created and temp files cleaned up"
Large Document Sets
' For very large document sets, batch if needed
all_files = get_all_pdf_files() ' Assume this returns many files
IF LEN(all_files) > 100 THEN
' Process in batches
batch_size = 50
batch_outputs = []
FOR batch_num = 0 TO (LEN(all_files) - 1) / batch_size
start_idx = batch_num * batch_size
batch_files = SLICE(all_files, start_idx, start_idx + batch_size)
batch_output = "temp/batch-" + batch_num + ".pdf"
MERGE PDF batch_files, batch_output
batch_outputs = batch_outputs + [batch_output]
NEXT
' Final merge of batches
result = MERGE PDF batch_outputs, "complete-archive.pdf"
ELSE
result = MERGE PDF all_files, "complete-archive.pdf"
END IF
Configuration
No specific configuration required. Uses the bot’s standard drive storage settings from config.csv.
Output files are stored in the bot’s .gbdrive storage location.
Implementation Notes
- Implemented in Rust under src/basic/keywords/file_operations.rs
- Maintains PDF metadata and bookmarks where possible
- Preserves page sizes and orientations
- Handles password-protected PDFs (if password provided)
- Maximum combined size: 500 MB
- Processing timeout: 120 seconds
Related Keywords
- GENERATE PDF — Create PDFs from templates
- READ — Read file contents
- DOWNLOAD — Send files to users
- COPY — Copy files
- DELETE FILE — Remove files
- LIST — List files in directory
Summary
MERGE PDF combines multiple PDF files into a single document, making it easy to create comprehensive document packages, compile reports, and bundle related files. Use it with GENERATE PDF to create multi-section reports or with existing files to build client packages. The keyword handles the complexity of PDF merging while providing a simple array-based interface.
Chapter 07: Extending General Bots
Architecture and deployment reference for developers.
Overview
BotServer is built in Rust with a modular architecture. Extend it by creating custom keywords, services, or entire applications.
Architecture
┌─────────────────────────────────────────┐
│ Web Server (Axum) │
├─────────────────────────────────────────┤
│ BASIC Runtime (Rhai) │
├──────────┬──────────┬──────────┬────────┤
│ LLM │ Storage │ Vector │ Cache │
│ Service │ (MinIO) │ (Qdrant) │(Valkey)│
├──────────┴──────────┴──────────┴────────┤
│ PostgreSQL │
└─────────────────────────────────────────┘
Deployment Options
| Method | Use Case | Guide |
|---|---|---|
| Local | Development | Installation |
| Docker | Production | Docker Deployment |
| LXC | Isolated components | Container Deployment |
Module Structure
| Module | Purpose |
|---|---|
| web_server | HTTP/WebSocket handling |
| basic | BASIC language runtime |
| llm | LLM provider integration |
| drive | Object storage |
| shared | Database models |
Creating Custom Keywords
// In src/basic/keywords/my_keyword.rs
pub fn my_keyword(context: &mut EvalContext) -> Result<Dynamic, Box<EvalError>> {
    // Your keyword logic
    Ok(Dynamic::from("result"))
}
Register in keywords/mod.rs and rebuild.
Chapter Contents
- Architecture Overview - System design
- Building from Source - Compilation guide
- Container Deployment (LXC) - Linux containers
- Docker Deployment - Docker setup
- Scaling - Load balancing
- Infrastructure - Hardware planning
- Observability - Monitoring
- Custom Keywords - Extending BASIC
- Services - Service layer
See Also
- Installation - Getting started
- BASIC Reference - Scripting language
Architecture Overview
BotServer follows a modular architecture designed for scalability, maintainability, and extensibility. Each module handles specific responsibilities and communicates through well-defined interfaces. This chapter provides a comprehensive tour of the system architecture and how components work together.
Core Architecture
Three diagrams accompany this section: the data flow architecture, the system architecture, and the module dependency graph. Together they illustrate the major components and their relationships.
Module Organization
The codebase is organized into modules that group related functionality together. Each module has clear responsibilities and well-defined interfaces with other modules.
Core Modules
The auth/ module handles authentication and authorization throughout the system. It manages user accounts and group memberships, implements role-based access control (RBAC), handles JWT token generation and validation, provides OAuth integration for external identity providers, and supports two-factor authentication for enhanced security.
The automation/ module provides the workflow automation engine. It handles process automation for complex multi-step operations, manages scheduled tasks that run at specified intervals, enables event-driven automation that responds to system events, orchestrates workflows across multiple services, and integrates with external systems for extended capabilities.
The basic/ module implements the BASIC dialect interpreter and runtime environment. It provides keyword implementations for all BASIC commands, handles script compilation from source to executable form, manages variables and their scopes, implements flow control structures like loops and conditionals, integrates with external tools the LLM can invoke, and provides comprehensive error handling with helpful messages.
The bootstrap/ module handles system initialization and startup procedures. It verifies all required components are available, sequences service startup in the correct order, runs database migrations to update schema, deploys default templates for new installations, performs health checks to ensure system readiness, and loads configuration from files and environment variables.
The bot/ module manages bot instances and their interactions. It handles the bot lifecycle including creation, mounting, and unmounting. It processes conversations between users and bots, handles user input and routes it appropriately, coordinates response generation from various sources, manages multi-bot deployments on a single server, and ensures session isolation between different users and bots.
Communication Modules
The channels/ module provides multi-channel messaging adapters that allow bots to communicate across different platforms. Supported channels include the web interface for browser-based chat, WhatsApp Business API for messaging app integration, Microsoft Teams for enterprise collaboration, Slack for team communication, Instagram for social media engagement, SMS for text messaging, and voice for telephone interactions.
The meet/ module enables real-time communication features. It provides video conferencing capabilities for face-to-face meetings, voice calling for audio-only communication, screen sharing for presentations and collaboration, recording functionality for meeting archives, transcription services for accessibility, and meeting scheduling integration with calendars.
The web_server/ module implements the HTTP server and web interface. It serves static files for the UI, handles WebSocket connections for real-time chat, routes REST API requests to appropriate handlers, manages CORS policies for browser security, and processes requests and responses throughout the system.
AI and Knowledge Modules
The llm/ module provides large language model integration. It handles model selection based on configuration and requirements, formats prompts according to model expectations, manages token counting and context limits, streams responses for real-time display, tracks API costs for budgeting, and implements model fallbacks when primary providers are unavailable.
The llm_models/ module contains specific implementations for different model providers. OpenAI integration supports GPT-5 and o3 models. Anthropic integration provides access to Claude Sonnet 4.5 and Opus 4.5 models. Google integration enables Gemini model usage. Meta integration supports Llama models for local deployment. Local model support allows self-hosted inference. Custom model implementations can be added for specialized providers.
The prompt_manager/ module provides centralized prompt management capabilities. It maintains prompt templates for consistent interactions, handles variable substitution in prompts, optimizes prompts for specific models, supports version control of prompt changes, enables A/B testing of different approaches, and tracks prompt performance metrics.
The context/ module manages conversation context throughout interactions. It optimizes the context window to fit within model limits, manages conversation history retention, compresses context when necessary to preserve information, filters context for relevance to current queries, and tracks multi-turn conversations across messages.
Storage and Data Modules
The drive/ module handles file and document management. It supports file upload and download operations, processes documents for indexing and search, maintains version control of files, manages sharing permissions between users, enforces quota limits on storage usage, and indexes content for search functionality.
The drive_monitor/ module provides storage monitoring and synchronization. It detects changes to files for automatic processing, synchronizes content across storage locations, resolves conflicts when multiple changes occur, manages backups of important data, and provides analytics on storage usage patterns.
The package_manager/ module handles bot package management. It loads packages from storage into the runtime, resolves dependencies between packages, manages package versions and updates, supports hot reload of changed packages without restart, and validates packages before deployment.
Processing Modules
The engines/ module contains various processing engines for different tasks. The rule engine evaluates business rules and conditions. The workflow engine orchestrates complex processes. The event processor handles system and external events. The message queue manages asynchronous communication. The job scheduler executes background tasks.
The calendar_engine/ module provides calendar and scheduling functionality. It manages events and appointments, checks availability for scheduling, coordinates meetings between participants, sends reminders for upcoming events, and handles timezone conversions correctly.
The task_engine/ module implements the task management system. It creates tasks from user requests or automation, assigns tasks to appropriate parties, tracks task status through completion, manages dependencies between tasks, and sends notifications about task updates.
The email/ module provides email integration capabilities. It sends email via SMTP protocols, receives email via IMAP connections, manages email templates for consistent formatting, tracks email delivery and opens, and handles bounced emails appropriately.
Utility Modules
The session/ module manages user sessions throughout their interactions. It creates sessions for new users, persists session state to storage, enforces session timeouts for security, handles concurrent sessions from the same user, and recovers sessions after server restarts.
The config/ module handles configuration management. It loads configuration from files and databases, reads environment variables for deployment settings, supports hot reload of configuration changes, validates configuration values, and provides sensible defaults for optional settings.
The shared/ module contains shared utilities and models used across the system. It defines database models for persistence, provides common types used throughout the codebase, implements helper functions for repeated tasks, centralizes constants and magic values, and defines error types for consistent error handling.
The compliance/ module implements regulatory compliance features. It ensures GDPR compliance for data protection, enforces data retention policies, maintains comprehensive audit logging, provides privacy controls for sensitive data, and manages user consent records.
The nvidia/ module provides GPU acceleration support for local model inference. It integrates with CUDA for GPU computation, runs model inference on GPU hardware, batches requests for efficient processing, and optimizes performance for available hardware.
The ui_tree/ module manages UI component trees for the interface. It maintains a virtual DOM for efficient updates, manages component lifecycles, handles state across components, processes events from user interactions, and optimizes rendering performance.
The web_automation/ module provides web scraping and automation capabilities. It automates browser interactions for data gathering, extracts content from web pages, fills forms programmatically, captures screenshots for documentation, and monitors pages for changes.
Data Flow
Request Processing Pipeline
When a user sends a message, it flows through several processing stages:
1. The Channel Adapter receives the user input from the appropriate platform.
2. The Session Manager identifies the existing session or creates a new one.
3. The Context Manager loads conversation history and relevant context.
4. The BASIC Interpreter executes the dialog script that handles the message.
5. If needed, LLM Integration processes natural language to understand intent.
6. The Knowledge Base provides relevant information from loaded documents.
7. The Response Generator formats the output for the user.
8. The Channel Adapter delivers the response back through the original platform.
Storage Architecture
The primary database uses PostgreSQL to store structured data including user accounts, bot configurations, session data, conversation history, and system metadata. The Diesel ORM provides type-safe database access.
Object storage using Drive provides S3-compatible storage for files including user uploads, processed documents, media files, system backups, and application logs.
The cache layer provides fast access to frequently needed data. It stores session information for quick retrieval, caches commonly accessed data, implements rate limiting counters, holds temporary processing data, and supports pub/sub messaging between components.
The vector database uses Qdrant to store document embeddings for semantic search. It maintains the semantic search index, stores knowledge base vectors, and performs similarity matching for relevant content retrieval.
Security Architecture
Authentication Flow
The authentication process follows a secure sequence. Users provide credentials through the login interface. The auth module validates credentials against stored records. Upon successful validation, a JWT token is issued. Each subsequent request includes this token for verification. A session is established to maintain state. Permissions are checked before any operation is performed.
Data Protection
Data protection operates at multiple layers. Encryption at rest protects data stored in the database and files. Encryption in transit using TLS/SSL protects data during transmission. Sensitive data masking prevents exposure in logs and displays. PII detection identifies and protects personal information. Secure key management protects cryptographic keys from exposure.
Access Control
Access control mechanisms ensure appropriate authorization. Role-based permissions determine what actions users can perform. Resource-level authorization controls access to specific objects. API rate limiting prevents abuse and ensures fair usage. IP allowlisting restricts access to known addresses when configured. Comprehensive audit logging records all significant actions.
Deployment Architecture
Container Structure
Production deployments typically use containers for isolation and portability. The main application container runs the BotServer binary. PostgreSQL runs in a separate database container. Drive storage uses an S3-compatible container like MinIO. The cache layer uses Valkey in its own container. Qdrant provides vector database functionality in another container. Nginx serves as a reverse proxy for external traffic.
Scaling Strategy
The system scales to handle increased load through several mechanisms. Horizontal scaling adds more web server instances behind a load balancer. Read replicas for the database handle query load. Distributed cache spreads session data across nodes. Load balancing distributes requests across available instances. Auto-scaling policies adjust capacity based on demand.
High Availability
High availability configurations ensure continuous operation. Multi-zone deployment protects against facility failures. Database replication maintains copies of data. Storage redundancy prevents data loss. Health monitoring detects problems quickly. Automatic failover redirects traffic when components fail.
Performance Optimization
Caching Strategy
Caching improves response times throughout the system. Response caching stores generated responses for reuse. Query result caching avoids repeated database queries. Static asset caching serves files directly from cache. API response caching stores external API results. Knowledge base caching keeps frequently accessed content in memory.
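The core of each of these caches is a keyed store with expiry. Here is a minimal std-only Rust sketch of the idea; production traffic goes through the dedicated cache layer instead.
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct TtlCache {
    ttl: Duration,
    entries: HashMap<String, (Instant, String)>,
}

impl TtlCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }
    fn get(&self, key: &str) -> Option<&String> {
        // Entries older than the TTL are treated as misses.
        self.entries
            .get(key)
            .filter(|(stored, _)| stored.elapsed() < self.ttl)
            .map(|(_, value)| value)
    }
    fn put(&mut self, key: String, value: String) {
        self.entries.insert(key, (Instant::now(), value));
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(60));
    cache.put("greeting".into(), "Hello!".into());
    // A second request within the TTL is served from cache,
    // skipping the database or LLM call entirely.
    assert_eq!(cache.get("greeting").map(String::as_str), Some("Hello!"));
}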
Async Processing
Asynchronous processing improves throughput and responsiveness. Background jobs handle long-running tasks without blocking. Message queues decouple producers from consumers. Event-driven architecture responds to changes efficiently. Non-blocking I/O maximizes resource utilization. Worker pools distribute processing across threads.
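The producer/consumer decoupling can be shown with a plain channel: the producer enqueues and returns immediately while a background worker drains the queue. This is a std-only sketch, not the platform's actual job system.
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<String>();
    // A consumer thread drains the queue in the background,
    // so producers never block on the slow work.
    let worker = thread::spawn(move || {
        for job in rx {
            // A long-running task (e.g., document OCR) would run here.
            println!("processing: {job}");
        }
    });

    for i in 0..3 {
        tx.send(format!("job-{i}")).unwrap(); // producer side returns immediately
    }
    drop(tx); // closing the channel lets the worker loop end
    worker.join().unwrap();
}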
Resource Management
Careful resource management ensures efficient operation. Connection pooling reuses database connections. Memory management prevents leaks and excessive usage. Token optimization minimizes LLM API costs. Query optimization reduces database load. Lazy loading defers work until necessary.
Monitoring and Observability
Metrics Collection
Comprehensive metrics provide visibility into system behavior. System metrics track CPU, memory, and disk usage. Application metrics measure request rates and latencies. Business metrics track user engagement and outcomes. User analytics show usage patterns. Performance tracking identifies bottlenecks.
Logging
Structured logging supports debugging and analysis. All logs use consistent structured formats. Log aggregation collects logs from all components. Error tracking captures and groups exceptions. Audit trails record security-relevant events. Debug logging provides detailed information when needed.
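With a structured logger, fields are attached as key-value pairs rather than interpolated into strings, which is what makes aggregation and error grouping possible. A brief sketch assuming the tracing and tracing-subscriber crates (with the json feature enabled); the platform's actual logging stack may differ.
use tracing::{error, info};

fn main() {
    // Emit one JSON object per event so downstream collectors can parse fields.
    tracing_subscriber::fmt().json().init();
    info!(session_id = "abc-123", channel = "whatsapp", "message processed");
    // Error trackers can group on the stable message plus structured fields.
    error!(retry = 3, "database connection lost");
}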
Health Checks
Health checks ensure system availability and readiness. Liveness probes confirm the application is running. Readiness probes verify the application can serve requests. Dependency checks validate external services are available. Performance monitoring tracks response times. The alert system notifies operators of problems.
Extension Points
Plugin System
The system provides extension points for customization. Custom keywords extend the BASIC language with new capabilities. External tools integrate third-party services. API integrations connect to external systems. Custom channels add support for new platforms. Model providers integrate additional LLM services.
Webhook Support
Webhooks enable event-driven integrations. Incoming webhooks accept notifications from external systems. Outgoing webhooks notify external systems of events. Event subscriptions define what events trigger webhooks. Callback handling processes webhook responses. Retry mechanisms ensure delivery despite transient failures.
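Retry with backoff is the essential part of outgoing delivery. Below is a blocking Rust sketch that assumes the reqwest crate (blocking feature) and a hypothetical endpoint URL; it is not the platform's delivery code.
use std::thread;
use std::time::Duration;

fn deliver(url: &str, body: &str) -> bool {
    let client = reqwest::blocking::Client::new();
    for attempt in 0..3 {
        match client
            .post(url)
            .header("Content-Type", "application/json")
            .body(body.to_string())
            .send()
        {
            Ok(resp) if resp.status().is_success() => return true,
            // Exponential backoff between attempts smooths transient failures.
            _ => thread::sleep(Duration::from_secs(1u64 << attempt)),
        }
    }
    false // give up; a later redelivery pass can pick the event up again
}

fn main() {
    // Hypothetical endpoint and payload, for illustration only.
    let ok = deliver("https://example.com/hooks/bot-events", r#"{"event":"session.closed"}"#);
    println!("delivered: {ok}");
}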
API Integration
Multiple API protocols support different integration needs. The REST API provides standard HTTP access. GraphQL support is planned for flexible queries. WebSocket connections enable real-time bidirectional communication. gRPC support is planned for high-performance integrations. OpenAPI specifications document all endpoints.
Development Workflow
Local Development
Setting up a local development environment follows a straightforward process. First, clone the repository to your machine. Install required dependencies using Cargo and system packages. Configure environment variables for local services. Run database migrations to set up the schema. Start the required services like PostgreSQL and cache. Load default templates for testing.
Testing Strategy
Testing ensures code quality at multiple levels. Unit tests verify individual functions and methods. Integration tests check interactions between components. End-to-end tests validate complete user workflows. Load testing measures performance under stress. Security testing identifies vulnerabilities.
CI/CD Pipeline
Continuous integration and deployment automates quality assurance. Automated testing runs on every commit. Code quality checks enforce standards. Security scanning identifies known vulnerabilities. The build process produces deployable artifacts. Deployment automation pushes releases to environments.
Future Architecture Plans
Planned Enhancements
Future development will expand system capabilities. Microservices migration will enable independent scaling of components. Kubernetes native deployment will simplify orchestration. Multi-region support will improve global performance. Edge deployment will reduce latency for distributed users. Serverless functions will enable elastic scaling for specific workloads.
Performance Goals
Performance targets guide optimization efforts. Response times should be sub-100ms for typical requests. The system should support 10,000 or more concurrent users. Uptime should reach 99.99% for production deployments. Elastic scaling should handle traffic spikes automatically. Global CDN integration should improve worldwide access times.
Building from Source
This guide covers building BotServer from source, including dependencies, feature flags, and platform-specific considerations.
Prerequisites
System Requirements
- Operating System: Linux, macOS, or Windows
- Rust: 1.70 or later (2021 edition)
- Memory: 4GB RAM minimum (8GB recommended)
- Disk Space: 8GB for development environment
Install Rust
If you don’t have Rust installed:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
Verify installation:
rustc --version
cargo --version
System Dependencies
Linux (Ubuntu/Debian)
sudo apt update
sudo apt install -y \
build-essential \
pkg-config \
libssl-dev \
libpq-dev \
cmake
Linux (Fedora/RHEL)
sudo dnf install -y \
gcc \
gcc-c++ \
make \
pkg-config \
openssl-devel \
postgresql-devel \
cmake
macOS
brew install postgresql openssl cmake
Windows
Install Visual Studio Build Tools with C++ support, then:
# Install PostgreSQL (for libpq)
choco install postgresql
Clone Repository
git clone https://github.com/GeneralBots/BotServer.git
cd BotServer
Build Configurations
Standard Build
Build with default features (includes desktop support):
cargo build --release
The compiled binary will be at target/release/botserver.
Minimal Build
Build without any optional features:
cargo build --release --no-default-features
This excludes:
- Desktop GUI (Tauri)
- Vector database (Qdrant)
- Email integration (IMAP)
Feature-Specific Builds
With Vector Database
Enable Qdrant vector database support:
cargo build --release --features vectordb
With Email Support
Enable IMAP email integration:
cargo build --release --features email
Desktop Application
Build as desktop app with Tauri (default):
cargo build --release --features desktop
All Features
Build with all optional features:
cargo build --release --all-features
Feature Flags
BotServer supports the following features defined in Cargo.toml:
[features]
default = ["desktop"]
vectordb = ["qdrant-client"]
email = ["imap"]
desktop = ["dep:tauri", "dep:tauri-plugin-dialog", "dep:tauri-plugin-opener"]
Feature Details
| Feature | Dependencies | Purpose |
|---|---|---|
| desktop | tauri, tauri-plugin-dialog, tauri-plugin-opener | Native desktop application with system integration |
| vectordb | qdrant-client | Semantic search with Qdrant vector database |
| email | imap | IMAP email integration for reading emails |
Build Profiles
Debug Build
For development with debug symbols and no optimizations:
cargo build
Binary location: target/debug/botserver
Release Build
Optimized for production with LTO and size optimization:
cargo build --release
Binary location: target/release/botserver
The release profile in Cargo.toml uses aggressive optimization:
[profile.release]
lto = true # Link-time optimization
opt-level = "z" # Optimize for size
strip = true # Strip symbols
panic = "abort" # Abort on panic (smaller binary)
codegen-units = 1 # Better optimization (slower build)
Platform-Specific Builds
Linux
Standard build works on most distributions:
cargo build --release
For static linking (portable binary):
RUSTFLAGS='-C target-feature=+crt-static' cargo build --release --target x86_64-unknown-linux-gnu
macOS
Build for current architecture:
cargo build --release
Build universal binary (Intel + Apple Silicon):
rustup target add x86_64-apple-darwin aarch64-apple-darwin
cargo build --release --target x86_64-apple-darwin
cargo build --release --target aarch64-apple-darwin
lipo -create \
target/x86_64-apple-darwin/release/botserver \
target/aarch64-apple-darwin/release/botserver \
-output botserver-universal
Windows
Build with MSVC toolchain:
cargo build --release
Binary location: target\release\botserver.exe
Cross-Compilation
Install Cross-Compilation Tools
cargo install cross
Build for Linux from macOS/Windows
cross build --release --target x86_64-unknown-linux-gnu
Build for Windows from Linux/macOS
cross build --release --target x86_64-pc-windows-gnu
Troubleshooting
OpenSSL Errors
If you encounter OpenSSL linking errors:
Linux:
sudo apt install libssl-dev
macOS:
export OPENSSL_DIR=$(brew --prefix openssl)
cargo build --release
Windows:
# Use vcpkg
vcpkg install openssl:x64-windows
set OPENSSL_DIR=C:\vcpkg\installed\x64-windows
cargo build --release
PostgreSQL Library Errors
If libpq is not found:
Linux:
sudo apt install libpq-dev
macOS:
brew install postgresql
export PQ_LIB_DIR=$(brew --prefix postgresql)/lib
Windows:
# Ensure PostgreSQL is in PATH
set PQ_LIB_DIR=C:\Program Files\PostgreSQL\15\lib
Out of Memory During Build
Reduce parallel jobs:
cargo build --release -j 2
Or set the job limit via an environment variable:
CARGO_BUILD_JOBS=2 cargo build --release
Linker Errors
Ensure you have a C/C++ compiler:
Linux:
sudo apt install build-essential
macOS:
xcode-select --install
Windows: Install Visual Studio Build Tools with C++ support.
Verify Build
After building, verify the binary works:
./target/release/botserver --version
Expected output: botserver 6.0.8 or similar.
Development Builds
Watch Mode
Auto-rebuild on file changes:
cargo install cargo-watch
cargo watch -x 'build --release'
Check Without Building
Fast syntax and type checking:
cargo check
With specific features:
cargo check --features vectordb,email
Testing
Run All Tests
cargo test
Run Tests for Specific Module
cargo test --package botserver --lib bootstrap::tests
Run Integration Tests
cargo test --test '*'
Code Quality
Format Code
cargo fmt
Lint Code
cargo clippy -- -D warnings
Check Dependencies
cargo tree
Find duplicate dependencies:
cargo tree --duplicates
Security Audit
Run security audit to check for known vulnerabilities in dependencies:
cargo install cargo-audit
cargo audit
This should be run regularly during development to ensure dependencies are secure.
Build Artifacts
After a successful release build, you’ll have:
- target/release/botserver - Main executable
- target/release/build/ - Build script outputs
- target/release/deps/ - Compiled dependencies
Size Optimization
The release profile already optimizes for size. To further reduce:
Strip Binary Manually
strip target/release/botserver
Use UPX Compression
upx --best --lzma target/release/botserver
Note: UPX may cause issues with some systems. Test thoroughly.
Incremental Compilation
For faster development builds:
export CARGO_INCREMENTAL=1
cargo build
Note: This is enabled by default for debug builds.
Clean Build
Remove all build artifacts:
cargo clean
LXC Build
Build inside LXC container:
# Create build container
lxc-create -n botserver-build -t download -- -d ubuntu -r jammy -a amd64
# Configure container with build resources
cat >> /var/lib/lxc/botserver-build/config << EOF
lxc.cgroup2.memory.max = 4G
lxc.cgroup2.cpu.max = 400000 100000
EOF
# Start container
lxc-start -n botserver-build
# Install build dependencies
lxc-attach -n botserver-build -- bash -c "
apt-get update
apt-get install -y build-essential pkg-config libssl-dev libpq-dev cmake curl git
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source \$HOME/.cargo/env
"
# Build BotServer
lxc-attach -n botserver-build -- bash -c "
git clone https://github.com/GeneralBots/BotServer /build
cd /build
source \$HOME/.cargo/env
cargo build --release --no-default-features
"
# Copy binary from container
lxc-attach -n botserver-build -- cat /build/target/release/botserver > /usr/local/bin/botserver
chmod +x /usr/local/bin/botserver
Installation
After building, install system-wide:
sudo install -m 755 target/release/botserver /usr/local/bin/
Or create a symlink:
ln -s $(pwd)/target/release/botserver ~/.local/bin/botserver
Next Steps
After building:
- Run the bootstrap process to install dependencies
- Configure the .env file with database credentials
- Start BotServer and access the web interface
- Create your first bot from templates
See Chapter 01: Run and Talk for next steps.
Container Deployment (LXC)
BotServer uses LXC (Linux Containers) for isolated component deployment with system-level containerization.
What is LXC?
- System containers - Full Linux userspace (lightweight VMs)
- Shared kernel - More efficient than virtual machines
- Isolation - Separate processes, networking, filesystems
- Resource control - CPU, memory, I/O limits
Automatic Setup
./botserver --container
This automatically:
- Detects LXC/LXD availability
- Initializes LXD if needed
- Creates Debian 12 containers per component
- Mounts directories for persistent data
- Configures networking and ports
- Installs and starts services
Container Architecture
Container Naming
{tenant}-tables → PostgreSQL
{tenant}-drive → S3-compatible storage
{tenant}-cache → Valkey cache
{tenant}-llm → LLM server (optional)
{tenant}-email → Mail server (optional)
Default tenant: default → default-tables, default-drive, etc.
Directory Mounting
Host: botserver-stack/tables/data/ → Container: /opt/gbo/data/
Host: botserver-stack/tables/conf/ → Container: /opt/gbo/conf/
Host: botserver-stack/tables/logs/ → Container: /opt/gbo/logs/
Data persists even if containers are deleted.
Port Forwarding
| Container Port | Host Port | Service |
|---|---|---|
| 5432 | 5432 | PostgreSQL |
| 9000 | 9000 | Drive API |
| 9001 | 9001 | Drive Console |
| 6379 | 6379 | Cache |
Common Operations
# List containers
lxc list
# Execute command in container
lxc exec default-tables -- psql -U gbuser botserver
# View logs
lxc exec default-tables -- journalctl -u tables
# Stop/Start
lxc stop default-tables
lxc start default-tables
# Delete (data in mounts persists)
lxc delete default-tables --force
Resource Limits
lxc config set default-tables limits.cpu 2
lxc config set default-tables limits.memory 4GB
Snapshots
# Create
lxc snapshot default-tables backup-2024-01-15
# List
lxc info default-tables
# Restore
lxc restore default-tables backup-2024-01-15
Troubleshooting
| Issue | Solution |
|---|---|
| LXC not installed | sudo snap install lxd && sudo lxd init --auto |
| Permission denied | sudo usermod -aG lxd $USER && newgrp lxd |
| Container won’t start | lxc console default-tables --show-log |
| Port in use | sudo netstat -tulpn \| grep PORT |
Container vs Local
| Use Containers When | Use Local When |
|---|---|
| Clean isolation needed | Maximum performance |
| Multiple instances | LXC not available |
| Easy cleanup/reinstall | Simple deployment |
| Security isolation | Direct service access |
Migration
Local → Container
pg_dump botserver > backup.sql
./botserver --container
lxc exec default-tables -- psql -U gbuser botserver < backup.sql
Container → Local
lxc exec default-tables -- pg_dump -U gbuser botserver > backup.sql
./botserver uninstall tables
./botserver install tables --local
psql -U gbuser botserver < backup.sql
See Also
- Installation - Local setup
- Docker Deployment - Docker alternative
- Architecture - System design
Docker Deployment
Note: Docker support is currently experimental.
Deployment Options
| Option | Description | Best For |
|---|---|---|
| All-in-One | Single container with all components | Development, testing |
| Microservices | Separate containers per component | Production, scaling |
Option 1: All-in-One Container
Quick Start
docker run -d \
--name botserver \
-p 8000:8000 \
-p 9000:9000 \
-v botserver-data:/opt/gbo/data \
-e ADMIN_PASS=your-secure-password \
pragmatismo/botserver:latest
Docker Compose
version: '3.8'

services:
  botserver:
    image: pragmatismo/botserver:latest
    restart: unless-stopped
    ports:
      - "8000:8000"
      - "9000:9000"
      - "9001:9001"
    volumes:
      - botserver-data:/opt/gbo/data
      - ./work:/opt/gbo/work
    environment:
      - ADMIN_PASS=${ADMIN_PASS:-changeme}
      - DOMAIN=${DOMAIN:-localhost}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
      timeout: 10s
      retries: 3

volumes:
  botserver-data:
Resources: 2 CPU cores, 4GB RAM minimum
Option 2: Microservices
version: '3.8'

services:
  postgres:
    image: postgres:16-alpine
    volumes:
      - postgres-data:/var/lib/postgresql/data
    environment:
      POSTGRES_USER: botserver
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_DB: botserver
    networks:
      - gb-network

  minio:
    image: minio/minio:latest
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - minio-data:/data
    environment:
      MINIO_ROOT_USER: ${DRIVE_ACCESSKEY}
      MINIO_ROOT_PASSWORD: ${DRIVE_SECRET}
    networks:
      - gb-network

  qdrant:
    image: qdrant/qdrant:latest
    ports:
      - "6333:6333"
    volumes:
      - qdrant-data:/qdrant/storage
    networks:
      - gb-network

  botserver:
    image: pragmatismo/botserver:latest
    depends_on:
      - postgres
      - minio
      - qdrant
    ports:
      - "8000:8000"
    volumes:
      - ./work:/opt/gbo/work
    environment:
      DATABASE_URL: postgres://botserver:${DB_PASSWORD}@postgres:5432/botserver
      DRIVE_URL: http://minio:9000
      DRIVE_ACCESSKEY: ${DRIVE_ACCESSKEY}
      DRIVE_SECRET: ${DRIVE_SECRET}
      QDRANT_URL: http://qdrant:6333
      ADMIN_PASS: ${ADMIN_PASS}
    networks:
      - gb-network

networks:
  gb-network:

volumes:
  postgres-data:
  minio-data:
  qdrant-data:
Environment File (.env)
DB_PASSWORD=secure-db-password
DRIVE_ACCESSKEY=minioadmin
DRIVE_SECRET=secure-minio-secret
ADMIN_PASS=admin-password
DOMAIN=your-domain.com
Kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: botserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: botserver
  template:
    metadata:
      labels:
        app: botserver
    spec:
      containers:
        - name: botserver
          image: pragmatismo/botserver:latest
          ports:
            - containerPort: 8000
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "2Gi"
              cpu: "1000m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: botserver
spec:
  selector:
    app: botserver
  ports:
    - port: 80
      targetPort: 8000
  type: LoadBalancer
Health Endpoints
| Service | Endpoint |
|---|---|
| BotServer | GET /health |
| PostgreSQL | pg_isready |
| MinIO | GET /minio/health/live |
| Qdrant | GET /health |
Troubleshooting
| Issue | Solution |
|---|---|
| Container won’t start | docker logs gb-botserver |
| DB connection failed | docker exec -it gb-botserver psql $DATABASE_URL -c "SELECT 1" |
| Memory issues | Increase limits in compose or add deploy.resources.limits.memory |
Migration from Non-Docker
# 1. Backup data
pg_dump botserver > backup.sql
mc cp --recursive /path/to/drive minio/backup/
# 2. Start Docker containers
# 3. Restore
docker exec -i gb-postgres psql -U botserver < backup.sql
docker exec gb-minio mc cp --recursive /backup minio/drive/
See Also
- Installation - Local installation
- Container Deployment (LXC) - Linux containers
- Scaling - Load balancing and scaling
Scaling and Load Balancing
General Bots is designed to scale from a single instance to a distributed cluster using LXC containers. This chapter covers auto-scaling, load balancing, sharding strategies, and failover systems.
Scaling Architecture
General Bots uses a horizontal scaling approach with LXC containers:
                    ┌─────────────────┐
                    │   Caddy Proxy   │
                    │ (Load Balancer) │
                    └────────┬────────┘
                             │
         ┌───────────────────┼───────────────────┐
         │                   │                   │
         ▼                   ▼                   ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│  LXC Container  │ │  LXC Container  │ │  LXC Container  │
│   botserver-1   │ │   botserver-2   │ │   botserver-3   │
└────────┬────────┘ └────────┬────────┘ └────────┬────────┘
         │                   │                   │
         └───────────────────┼───────────────────┘
                             │
         ┌───────────────────┼───────────────────┐
         │                   │                   │
         ▼                   ▼                   ▼
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│   PostgreSQL    │ │      Redis      │ │     Qdrant      │
│    (Primary)    │ │    (Cluster)    │ │    (Cluster)    │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Auto-Scaling Configuration
config.csv Parameters
Configure auto-scaling behavior in your bot’s config.csv:
# Auto-scaling settings
scale-enabled,true
scale-min-instances,1
scale-max-instances,10
scale-cpu-threshold,70
scale-memory-threshold,80
scale-request-threshold,1000
scale-cooldown-seconds,300
scale-check-interval,30
| Parameter | Description | Default |
|---|---|---|
| scale-enabled | Enable auto-scaling | false |
| scale-min-instances | Minimum container count | 1 |
| scale-max-instances | Maximum container count | 10 |
| scale-cpu-threshold | CPU % to trigger scale-up | 70 |
| scale-memory-threshold | Memory % to trigger scale-up | 80 |
| scale-request-threshold | Requests/min to trigger scale-up | 1000 |
| scale-cooldown-seconds | Wait time between scaling events | 300 |
| scale-check-interval | Seconds between metric checks | 30 |
Scaling Rules
Define custom scaling rules:
# Scale up when average response time exceeds 2 seconds
scale-rule-response-time,2000
scale-rule-response-action,up
# Scale down when CPU drops below 30%
scale-rule-cpu-low,30
scale-rule-cpu-low-action,down
# Scale up on queue depth
scale-rule-queue-depth,100
scale-rule-queue-action,up
LXC Container Management
Creating Scaled Instances
# Create additional botserver containers
for i in {2..5}; do
lxc launch images:debian/12 botserver-$i
lxc config device add botserver-$i port-$((8080+i)) proxy \
listen=tcp:0.0.0.0:$((8080+i)) connect=tcp:127.0.0.1:8080
done
Container Resource Limits
Set resource limits per container:
# CPU limits (number of cores)
lxc config set botserver-1 limits.cpu 4
# Memory limits
lxc config set botserver-1 limits.memory 8GB
# Disk I/O priority (0-10)
lxc config set botserver-1 limits.disk.priority 5
# Network bandwidth (ingress/egress)
lxc config device set botserver-1 eth0 limits.ingress 100Mbit
lxc config device set botserver-1 eth0 limits.egress 100Mbit
Auto-Scaling Script
Create /opt/gbo/scripts/autoscale.sh:
#!/bin/bash
# Configuration
MIN_INSTANCES=1
MAX_INSTANCES=10
CPU_THRESHOLD=70
SCALE_COOLDOWN=300
LAST_SCALE_FILE="/tmp/last_scale_time"
get_avg_cpu() {
local total=0
local count=0
for container in $(lxc list -c n --format csv | grep "^botserver-"); do
cpu=$(lxc exec $container -- cat /proc/loadavg | awk '{print $1}')
total=$(echo "$total + $cpu" | bc)
count=$((count + 1))
done
echo "scale=2; $total / $count * 100" | bc
}
get_instance_count() {
lxc list -c n --format csv | grep -c "^botserver-"
}
can_scale() {
if [ ! -f "$LAST_SCALE_FILE" ]; then
return 0
fi
last_scale=$(cat "$LAST_SCALE_FILE")
now=$(date +%s)
diff=$((now - last_scale))
[ $diff -gt $SCALE_COOLDOWN ]
}
scale_up() {
current=$(get_instance_count)
if [ $current -ge $MAX_INSTANCES ]; then
echo "Already at max instances ($MAX_INSTANCES)"
return 1
fi
new_id=$((current + 1))
echo "Scaling up: creating botserver-$new_id"
lxc launch images:debian/12 botserver-$new_id
lxc config set botserver-$new_id limits.cpu 4
lxc config set botserver-$new_id limits.memory 8GB
# Copy configuration
lxc file push /opt/gbo/conf/botserver.env botserver-$new_id/opt/gbo/conf/
# Start botserver
lxc exec botserver-$new_id -- /opt/gbo/bin/botserver &
# Update load balancer
update_load_balancer
date +%s > "$LAST_SCALE_FILE"
echo "Scale up complete"
}
scale_down() {
current=$(get_instance_count)
if [ $current -le $MIN_INSTANCES ]; then
echo "Already at min instances ($MIN_INSTANCES)"
return 1
fi
# Remove highest numbered instance
target="botserver-$current"
echo "Scaling down: removing $target"
# Drain connections
lxc exec $target -- /opt/gbo/bin/botserver drain
sleep 30
# Stop and delete
lxc stop $target
lxc delete $target
# Update load balancer
update_load_balancer
date +%s > "$LAST_SCALE_FILE"
echo "Scale down complete"
}
update_load_balancer() {
    # Build the backend list from the running containers
    upstreams=""
    for container in $(lxc list -c n --format csv | grep "^botserver-"); do
        ip=$(lxc list $container -c 4 --format csv | cut -d' ' -f1)
        upstreams="${upstreams}    to ${ip}:8080"$'\n'
    done
    # Regenerate the reverse_proxy snippet imported by the Caddyfile
    cat > /opt/gbo/conf/caddy/upstream.conf << EOF
reverse_proxy {
${upstreams}    lb_policy round_robin
    health_uri /api/health
    health_interval 10s
}
EOF
    # Reload Caddy
    lxc exec proxy-1 -- caddy reload --config /etc/caddy/Caddyfile
}
# Main loop
while true; do
avg_cpu=$(get_avg_cpu)
echo "Average CPU: $avg_cpu%"
if can_scale; then
if (( $(echo "$avg_cpu > $CPU_THRESHOLD" | bc -l) )); then
scale_up
elif (( $(echo "$avg_cpu < 30" | bc -l) )); then
scale_down
fi
fi
sleep 30
done
Load Balancing
Caddy Configuration
Primary load balancer configuration (/opt/gbo/conf/caddy/Caddyfile):
{
admin off
auto_https on
}
(common) {
encode gzip zstd
header {
-Server
X-Content-Type-Options "nosniff"
X-Frame-Options "DENY"
Referrer-Policy "strict-origin-when-cross-origin"
}
}
bot.example.com {
import common
# Health check endpoint (no load balancing)
handle /api/health {
reverse_proxy localhost:8080
}
# WebSocket connections (sticky sessions)
handle /ws* {
reverse_proxy botserver-1:8080 botserver-2:8080 botserver-3:8080 {
lb_policy cookie
lb_try_duration 5s
health_uri /api/health
health_interval 10s
health_timeout 5s
}
}
# API requests (round robin)
handle /api/* {
reverse_proxy botserver-1:8080 botserver-2:8080 botserver-3:8080 {
lb_policy round_robin
lb_try_duration 5s
health_uri /api/health
health_interval 10s
fail_duration 30s
}
}
# Static files (any instance)
handle {
reverse_proxy botserver-1:8080 botserver-2:8080 botserver-3:8080 {
lb_policy first
}
}
}
Load Balancing Policies
| Policy | Description | Use Case |
|---|---|---|
| round_robin | Rotate through backends | General API requests |
| first | Use first available | Static content |
| least_conn | Fewest active connections | Long-running requests |
| ip_hash | Consistent by client IP | Session affinity |
| cookie | Sticky sessions via cookie | WebSocket, stateful |
| random | Random selection | Testing |
Rate Limiting
Configure rate limits in config.csv:
# Rate limiting
rate-limit-enabled,true
rate-limit-requests,100
rate-limit-window,60
rate-limit-burst,20
rate-limit-by,ip
# Per-endpoint limits
rate-limit-api-chat,30
rate-limit-api-files,50
rate-limit-api-auth,10
Rate limiting in Caddy:
bot.example.com {
# Global rate limit
rate_limit {
zone global {
key {remote_host}
events 100
window 1m
}
}
# Stricter limit for auth endpoints
handle /api/auth/* {
rate_limit {
zone auth {
key {remote_host}
events 10
window 1m
}
}
reverse_proxy botserver:8080
}
}
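Under the hood, limits like those above are usually enforced with a token bucket: the bucket refills at the sustained rate and its capacity bounds the burst. A std-only Rust sketch of the algorithm follows; actual enforcement here is handled by Caddy and the config.csv settings.
use std::time::Instant;

struct TokenBucket {
    capacity: f64,       // maximum burst size
    tokens: f64,
    refill_per_sec: f64, // sustained rate
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }
    fn try_acquire(&mut self) -> bool {
        // Refill proportionally to the time elapsed since the last check.
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false // the caller should answer HTTP 429
        }
    }
}

fn main() {
    // 100 requests per 60-second window, with a burst of 20.
    let mut bucket = TokenBucket::new(20.0, 100.0 / 60.0);
    let allowed = (0..25).filter(|_| bucket.try_acquire()).count();
    println!("allowed {allowed} of 25 burst requests");
}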
Sharding Strategies
Database Sharding Options
Option 1: Tenant-Based Sharding
Each tenant gets their own database:
        ┌─────────────────┐
        │  Router/Proxy   │
        └────────┬────────┘
                 │
    ┌────────────┼────────────┐
    │            │            │
    ▼            ▼            ▼
┌───────┐    ┌───────┐    ┌───────┐
│Tenant1│    │Tenant2│    │Tenant3│
│  DB   │    │  DB   │    │  DB   │
└───────┘    └───────┘    └───────┘
Configuration:
# Tenant sharding
shard-strategy,tenant
shard-tenant-db-prefix,gb_tenant_
shard-auto-create,true
Option 2: Hash-Based Sharding
Distribute data by hash of primary key:
User ID: 12345
Hash: 12345 % 4 = 1
Shard: shard-1
Configuration:
# Hash sharding
shard-strategy,hash
shard-count,4
shard-key,user_id
shard-algorithm,modulo
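Routing is a pure function of the shard key. The small Rust sketch below shows modulo routing matching the configuration above; note that std's DefaultHasher is not guaranteed stable across releases, so a production router should pin a fixed hash algorithm.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn shard_for(key: &str, shard_count: u64) -> u64 {
    // Hash non-numeric keys first, then reduce modulo the shard count.
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    hasher.finish() % shard_count
}

fn main() {
    // Numeric keys can use plain modulo: 12345 % 4 = 1 -> shard-1.
    println!("user 12345 -> shard-{}", 12345u64 % 4);
    println!("session key -> shard-{}", shard_for("session-abc", 4));
}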
Option 3: Range-Based Sharding
Partition by ID ranges:
# Range sharding
shard-strategy,range
shard-ranges,0-999999:shard1,1000000-1999999:shard2,2000000-:shard3
Option 4: Geographic Sharding
Route by user location:
# Geographic sharding
shard-strategy,geo
shard-geo-us,postgres-us.example.com
shard-geo-eu,postgres-eu.example.com
shard-geo-asia,postgres-asia.example.com
shard-default,postgres-us.example.com
Vector Database Sharding (Qdrant)
Qdrant supports automatic sharding:
# Qdrant sharding
qdrant-shard-count,4
qdrant-replication-factor,2
qdrant-write-consistency,majority
Collection creation with sharding:
// In vectordb code
let collection_config = CreateCollection {
    collection_name: format!("kb_{}", bot_id),
    vectors_config: VectorsConfig::Single(VectorParams {
        size: 384,
        distance: Distance::Cosine,
    }),
    shard_number: Some(4),
    replication_factor: Some(2),
    write_consistency_factor: Some(1),
    ..Default::default()
};
Redis Cluster
For high-availability caching:
# Redis cluster
cache-mode,cluster
cache-nodes,redis-1:6379,redis-2:6379,redis-3:6379
cache-replicas,1
Failover Systems
Health Checks
Configure health check endpoints:
# Health check configuration
health-enabled,true
health-endpoint,/api/health
health-interval,10
health-timeout,5
health-retries,3
Health check response:
{
  "status": "healthy",
  "version": "6.1.0",
  "uptime": 86400,
  "checks": {
    "database": "ok",
    "cache": "ok",
    "vectordb": "ok",
    "llm": "ok"
  },
  "metrics": {
    "cpu": 45.2,
    "memory": 62.1,
    "connections": 150
  }
}
Automatic Failover
Database Failover (PostgreSQL)
Using Patroni for PostgreSQL HA:
# patroni.yml
scope: botserver-cluster
name: postgres-1

restapi:
  listen: 0.0.0.0:8008
  connect_address: postgres-1:8008

etcd:
  hosts: etcd-1:2379,etcd-2:2379,etcd-3:2379

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
    retry_timeout: 10
    maximum_lag_on_failover: 1048576
    postgresql:
      use_pg_rewind: true
      parameters:
        max_connections: 200
        shared_buffers: 2GB

postgresql:
  listen: 0.0.0.0:5432
  connect_address: postgres-1:5432
  data_dir: /var/lib/postgresql/data
  authentication:
    superuser:
      username: postgres
      password: ${POSTGRES_PASSWORD}
    replication:
      username: replicator
      password: ${REPLICATION_PASSWORD}
Cache Failover (Redis Sentinel)
# Redis Sentinel configuration
cache-mode,sentinel
cache-sentinel-master,mymaster
cache-sentinel-nodes,sentinel-1:26379,sentinel-2:26379,sentinel-3:26379
Circuit Breaker
Prevent cascade failures:
# Circuit breaker settings
circuit-breaker-enabled,true
circuit-breaker-threshold,5
circuit-breaker-timeout,30
circuit-breaker-half-open-requests,3
States:
- Closed: Normal operation
- Open: Failing, reject requests immediately
- Half-Open: Testing if service recovered
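The state machine is small enough to sketch directly. The std-only Rust below illustrates the three transitions using the threshold and timeout values from the config above; it is a sketch, not the platform's implementation (for instance, it does not cap half-open probe requests).
use std::time::{Duration, Instant};

#[derive(Clone, Copy)]
enum State { Closed, Open(Instant), HalfOpen }

struct Breaker {
    state: State,
    failures: u32,
    threshold: u32,
    timeout: Duration,
}

impl Breaker {
    fn allow(&mut self) -> bool {
        match self.state {
            State::Closed | State::HalfOpen => true,
            State::Open(since) if since.elapsed() >= self.timeout => {
                self.state = State::HalfOpen; // probe the service again
                true
            }
            State::Open(_) => false, // fail fast, no call is made
        }
    }
    fn record(&mut self, success: bool) {
        if success {
            self.state = State::Closed;
            self.failures = 0;
        } else {
            self.failures += 1;
            if self.failures >= self.threshold {
                self.state = State::Open(Instant::now());
            }
        }
    }
}

fn main() {
    let mut b = Breaker { state: State::Closed, failures: 0, threshold: 5, timeout: Duration::from_secs(30) };
    for _ in 0..5 { b.record(false); } // five failures trip the breaker
    assert!(!b.allow());               // now rejecting immediately
    println!("breaker is open");
}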
Graceful Degradation
Configure fallback behavior:
# Fallback configuration
fallback-llm-enabled,true
fallback-llm-provider,local
fallback-llm-model,DeepSeek-R1-Distill-Qwen-1.5B
fallback-cache-enabled,true
fallback-cache-mode,memory
fallback-vectordb-enabled,true
fallback-vectordb-mode,keyword-search
Monitoring Scaling
Metrics Collection
Key metrics to monitor:
# Scaling metrics
metrics-scaling-enabled,true
metrics-container-count,true
metrics-scaling-events,true
metrics-load-distribution,true
Alerting Rules
Configure alerts for scaling issues:
# alerting-rules.yml
groups:
  - name: scaling
    rules:
      - alert: HighCPUUsage
        expr: avg(cpu_usage) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage detected"
      - alert: MaxInstancesReached
        expr: container_count >= max_instances
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Maximum instances reached, cannot scale up"
      - alert: ScalingFailed
        expr: scaling_errors > 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Scaling operation failed"
Best Practices
Scaling
- Start small - Begin with auto-scaling disabled, monitor patterns first
- Set appropriate thresholds - Too low causes thrashing, too high causes poor performance
- Use cooldown periods - Prevent rapid scale up/down cycles
- Test failover - Regularly test your failover procedures
- Monitor costs - More instances = higher infrastructure costs
Load Balancing
- Use sticky sessions for WebSockets - Required for real-time features
- Enable health checks - Remove unhealthy instances automatically
- Configure timeouts - Prevent hanging connections
- Use connection pooling - Reduce connection overhead
Sharding
- Choose the right strategy - Tenant-based is simplest for SaaS
- Plan for rebalancing - Have procedures to move data between shards
- Avoid cross-shard queries - Design to minimize these
- Monitor shard balance - Uneven distribution causes hotspots
Next Steps
- Container Deployment - LXC container basics
- Architecture Overview - System design
- Monitoring Dashboard - Observe your cluster
Infrastructure Design
This chapter covers the complete infrastructure design for General Bots, including scaling, security, secrets management, observability, and high availability.
Architecture Overview
General Bots uses a modular architecture where each component runs in isolated LXC containers. This provides isolation where each service has its own filesystem and process space, scalability through adding more containers to handle increased load, security since compromised components cannot affect others, and portability allowing containers to move between hosts easily.
Component Diagram
High Availability Architecture
Production-ready infrastructure with automatic scaling, load balancing, and multi-tenant isolation.
Encryption at Rest
All data stored by General Bots is encrypted at rest using AES-256-GCM.
Database Encryption
PostgreSQL uses Transparent Data Encryption (TDE):
# config.csv
encryption-at-rest,true
encryption-algorithm,aes-256-gcm
encryption-key-source,vault
Enable in PostgreSQL:
-- Enable pgcrypto extension
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- Encrypted columns use pgp_sym_encrypt
ALTER TABLE bot_memories
ADD COLUMN value_encrypted bytea;
UPDATE bot_memories
SET value_encrypted = pgp_sym_encrypt(value, current_setting('app.encryption_key'));
File Storage Encryption
MinIO server-side encryption is enabled using SSE-S3 for automatic encryption or SSE-C for customer-managed keys:
# Enable SSE-S3 encryption
mc encrypt set sse-s3 local/gbo-bucket
# Or use customer-managed keys (SSE-C)
mc encrypt set sse-c local/gbo-bucket
Configuration:
# config.csv
drive-encryption,true
drive-encryption-type,sse-s3
drive-encryption-key,vault:gbo/encryption/drive_key
Redis Encryption
Redis secured with TLS protects cached data in transit; snapshots written to disk rely on the filesystem-level encryption described below:
# redis.conf
tls-port 6379
port 0
tls-cert-file /opt/gbo/conf/certificates/redis/server.crt
tls-key-file /opt/gbo/conf/certificates/redis/server.key
tls-ca-cert-file /opt/gbo/conf/certificates/ca.crt
# Incremental fsync while writing RDB snapshots
rdb-save-incremental-fsync yes
Vector Database Encryption
Qdrant with encrypted storage uses TLS for transport and filesystem-level encryption for data at rest:
# qdrant/config.yaml
storage:
  storage_path: /opt/gbo/data/qdrant
  on_disk_payload: true
service:
  enable_tls: true
# Disk encryption handled at filesystem level
Filesystem-Level Encryption
For comprehensive encryption, use LUKS on the data partition:
# Create encrypted partition for /opt/gbo/data
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 gbo-data
mkfs.ext4 /dev/mapper/gbo-data
mount /dev/mapper/gbo-data /opt/gbo/data
Media Processing: LiveKit
LiveKit handles all media processing needs for General Bots. WebRTC is native to LiveKit. Recording is built-in via the Egress service. Transcoding uses the Egress service. Streaming and AI integration are built into LiveKit.
LiveKit’s Egress service handles room recording, participant recording, livestreaming to YouTube and Twitch, and track composition.
LiveKit Configuration
# config.csv
meet-provider,livekit
meet-server-url,wss://localhost:7880
meet-api-key,vault:gbo/meet/api_key
meet-api-secret,vault:gbo/meet/api_secret
meet-recording-enabled,true
meet-transcription-enabled,true
Messaging: Redis
General Bots uses Redis for all messaging needs including session state, PubSub for real-time communication, and Streams for persistence:
// Session state
redis::cmd("SET").arg("session:123").arg(state_json);
// PubSub for real-time
redis::cmd("PUBLISH").arg("channel:bot-1").arg(message);
// Streams for persistence (optional)
redis::cmd("XADD").arg("stream:events").arg("*").arg("event").arg(data);
Configuration:
# config.csv
messaging-provider,redis
messaging-persistence,streams
messaging-retention-hours,24
Sharding Strategies
Option 1: Tenant-Based Sharding (Recommended)
Each tenant or organization gets isolated databases.
Multi-Tenant Architecture
Each tenant gets isolated resources with dedicated database schemas, cache namespaces, and vector collections. The router maps tenant IDs to their respective data stores automatically.
Key isolation features include database-per-tenant or schema-per-tenant options, namespace isolation in Valkey cache, collection isolation in Qdrant vectors, and bucket isolation in SeaweedFS storage.
Configuration:
# config.csv
shard-strategy,tenant
shard-auto-provision,true
shard-isolation-level,database
Advantages include complete data isolation (compliance friendly), easy per-tenant backup and restore, simplicity, and no cross-tenant queries. Disadvantages include higher resource usage per tenant, complex tenant migration, and connection pool overhead.
Option 2: Hash-Based Sharding
Distribute by user or session ID hash. For example, hashing a user_id of 12345 and taking the result modulo the shard count might yield 2, routing that user to shard-2.
Configuration:
# config.csv
shard-strategy,hash
shard-count,4
shard-key,user_id
shard-algorithm,consistent-hash
Advantages include even distribution, predictable routing, and good performance for high-volume single-tenant deployments. Disadvantages include complex resharding, difficult cross-shard queries, and no tenant isolation.
Option 3: Time-Based Sharding
For time-series data like logs and analytics:
# config.csv
shard-strategy,time
shard-interval,monthly
shard-retention-months,12
shard-auto-archive,true
This automatically creates partitions named messages_2024_01, messages_2024_02, messages_2024_03, and so on.
Option 4: Geographic Sharding
Route by user location:
# config.csv
shard-strategy,geo
shard-regions,us-east,eu-west,ap-south
shard-default,us-east
shard-detection,ip
Geographic Distribution
The global router uses GeoIP to direct users to the nearest regional cluster. US-East in Virginia runs a full cluster, EU-West in Frankfurt runs a full cluster, and AP-South in Singapore runs a full cluster. Each regional cluster runs independently with data replication between regions for disaster recovery.
Auto-Scaling with LXC
Configuration
# config.csv - Auto-scaling settings
scale-enabled,true
scale-min-instances,1
scale-max-instances,10
scale-cpu-threshold,70
scale-memory-threshold,80
scale-request-threshold,1000
scale-cooldown-seconds,300
scale-check-interval,30
Scaling Rules
| Metric | Scale Up | Scale Down |
|---|---|---|
| CPU | > 70% for 2 min | < 30% for 5 min |
| Memory | > 80% for 2 min | < 40% for 5 min |
| Requests/sec | > 1000 | < 200 |
| Response time | > 2000ms | < 500ms |
| Queue depth | > 100 | < 10 |
Auto-Scale Service
The auto-scaler runs as a systemd service:
# /etc/systemd/system/gbo-autoscale.service
[Unit]
Description=General Bots Auto-Scaler
After=network.target
[Service]
Type=simple
ExecStart=/opt/gbo/scripts/autoscale.sh
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
Container Lifecycle
The startup flow begins with creating the LXC container from a template, then configuring resources for CPU, memory, and storage, then starting the BotServer binary, and finally marking the container as ready and adding it to the load balancer pool.
The shutdown flow begins with an active container serving requests, then draining to stop accepting new connections, then stopping with a graceful BotServer shutdown, and finally deleting or returning the container to the pool.
Load Balancing
Caddy Configuration
{
admin off
auto_https on
}
bot.example.com {
# Rate limiting
rate_limit {
zone api {
key {remote_host}
events 100
window 1m
}
}
# WebSocket (sticky sessions)
handle /ws* {
reverse_proxy botserver-1:8080 botserver-2:8080 {
lb_policy cookie
health_uri /api/health
health_interval 10s
}
}
# API (round robin)
handle /api/* {
reverse_proxy botserver-1:8080 botserver-2:8080 {
lb_policy round_robin
fail_duration 30s
}
}
}
Rate Limiting Configuration
# config.csv - Rate limiting
rate-limit-enabled,true
rate-limit-requests,100
rate-limit-window,60
rate-limit-burst,20
rate-limit-by,ip
# Per-endpoint limits
rate-limit-api-chat,30
rate-limit-api-files,50
rate-limit-api-auth,10
rate-limit-api-llm,20
Failover Systems
Health Checks
Every service exposes /health:
{
  "status": "healthy",
  "version": "6.1.0",
  "checks": {
    "database": {"status": "ok", "latency_ms": 5},
    "cache": {"status": "ok", "latency_ms": 2},
    "vectordb": {"status": "ok", "latency_ms": 10},
    "llm": {"status": "ok", "latency_ms": 50}
  }
}
Circuit Breaker
# config.csv
circuit-breaker-enabled,true
circuit-breaker-threshold,5
circuit-breaker-timeout,30
circuit-breaker-half-open-requests,3
The circuit breaker has three states. Closed represents normal operation while counting failures. Open means failing fast and returning errors immediately. Half-Open tests with limited requests before deciding to close or reopen.
Database Failover
PostgreSQL with streaming replication provides high availability.
Database Replication
PostgreSQL replication is managed by Patroni for automatic failover. The Primary serves as the write leader handling all write operations. The Replica provides synchronous replication from the primary for read scaling. Patroni acts as the failover manager performing automatic leader election on failure.
Failover happens automatically within seconds, with clients redirected via the connection pooler.
Graceful Degradation
# config.csv - Fallbacks
fallback-llm-enabled,true
fallback-llm-provider,local
fallback-llm-model,DeepSeek-R1-Distill-Qwen-1.5B
fallback-cache-enabled,true
fallback-cache-mode,memory
fallback-vectordb-enabled,true
fallback-vectordb-mode,keyword-search
Secrets Management (Vault)
Architecture
The minimal .env file contains only Vault connection details. All other secrets are stored in Vault and fetched at runtime. The Vault server stores secrets organized by path including gbo/drive for access keys, gbo/tables for database credentials, gbo/cache for passwords, gbo/directory for client credentials, gbo/email for mail credentials, gbo/llm for provider API keys, gbo/encryption for master and data keys, and gbo/meet for API credentials.
Zitadel vs Vault
Zitadel handles user authentication, OAuth/OIDC, and MFA. Vault handles service credentials, API keys, and encryption keys. Use both together where Zitadel manages user identity and SSO while Vault manages service secrets and encryption keys.
Minimal .env with Vault
# .env - Only Vault and Directory needed
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=hvs.your-token-here
# Directory for user auth (Zitadel)
DIRECTORY_URL=https://localhost:8080
DIRECTORY_CLIENT_ID=your-client-id
DIRECTORY_CLIENT_SECRET=your-client-secret
# All other secrets fetched from Vault at runtime
Observability
Option 1: InfluxDB + Grafana (Current)
For time-series metrics:
# config.csv
observability-provider,influxdb
observability-url,http://localhost:8086
observability-org,pragmatismo
observability-bucket,metrics
Option 2: Vector + InfluxDB (Recommended)
Vector serves as a log and metric aggregator. BotServer logs flow to Vector which pipelines them to InfluxDB for metrics storage and Grafana for dashboards.
Vector configuration:
# vector.toml
[sources.botserver_logs]
type = "file"
include = ["/opt/gbo/logs/*.log"]
[transforms.parse_logs]
type = "remap"
inputs = ["botserver_logs"]
source = '''
. = parse_json!(.message)
'''
[sinks.influxdb]
type = "influxdb_metrics"
inputs = ["parse_logs"]
endpoint = "http://localhost:8086"
org = "pragmatismo"
bucket = "metrics"
Replacing log.* Calls with Vector
Instead of replacing all log calls, configure Vector to collect logs from files, parse and enrich them, and route to appropriate sinks:
# Route errors to alerts
[transforms.filter_errors]
type = "filter"
inputs = ["parse_logs"]
condition = '.level == "error"'
[sinks.alertmanager]
type = "http"
inputs = ["filter_errors"]
uri = "http://alertmanager:9093/api/v1/alerts"
Search: Qdrant
Qdrant handles all search needs in General Bots, providing both vector similarity search for semantic queries and payload filtering for keyword-like queries.
Hybrid Search with Qdrant
Qdrant supports hybrid search combining vector similarity with keyword filters:
// Combine vector similarity + keyword filter
let search_request = SearchPoints {
    collection_name: "kb".to_string(),
    vector: query_embedding,
    limit: 10,
    filter: Some(Filter {
        must: vec![
            Condition::Field(FieldCondition {
                key: "content".to_string(),
                r#match: Some(Match::Text("keyword".to_string())),
            }),
        ],
        ..Default::default()
    }),
    ..Default::default()
};
Workflow Scheduling: SET SCHEDULE
General Bots uses the SET SCHEDULE keyword for all scheduling needs:
REM Run every day at 9 AM
SET SCHEDULE "daily-report" TO "0 9 * * *"
TALK "Running daily report..."
result = GET "/api/reports/daily"
SEND MAIL "admin@example.com", "Daily Report", result
END SCHEDULE
MFA with Zitadel
Configuration
MFA is handled transparently by Zitadel:
# config.csv
auth-mfa-enabled,true
auth-mfa-methods,totp,sms,email,whatsapp
auth-mfa-required-for,admin,sensitive-operations
auth-mfa-grace-period-days,7
Zitadel MFA Settings
In the Zitadel console, navigate to Settings then Login Behavior. Enable Multi-Factor Authentication and select allowed methods including TOTP for authenticator apps, SMS, Email, and WebAuthn/FIDO2.
WhatsApp MFA Channel
# config.csv
auth-mfa-whatsapp-enabled,true
auth-mfa-whatsapp-provider,twilio
auth-mfa-whatsapp-template,mfa_code
The flow proceeds as follows: the user logs in with password, Zitadel triggers MFA, a code is sent via WhatsApp, the user enters the code, and the session is established.
Summary: What You Need
PostgreSQL, Redis, Qdrant, MinIO, and Zitadel are required components. Vault, InfluxDB, and LiveKit are recommended for production deployments. Vector is optional for log aggregation.
Next Steps
The Scaling and Load Balancing chapter provides a detailed scaling guide. The Container Deployment chapter covers LXC setup. The Security Features chapter offers a security deep dive. The LLM Providers appendix helps with model selection.
Observability
This chapter describes the observability infrastructure that General Bots provides for monitoring system health, collecting logs, and tracking metrics. The observability system operates automatically without requiring code changes, giving administrators visibility into platform behavior and helping identify issues before they impact users.
Understanding the Observability System
General Bots implements observability through an integrated pipeline that collects, parses, routes, and stores operational data from all system components. The pipeline reads log files from the centralized logs directory within the botserver-stack folder, extracts structured information including log levels, timestamps, and messages, routes different types of data to appropriate destinations such as alerts for errors and storage for metrics, and enriches entries with contextual information like hostnames and service names.
This automated approach means administrators don’t need to instrument code or configure complex logging frameworks. The system captures operational data from all components using consistent formats and routes it to useful destinations without manual intervention.
Log Directory Organization
The logging system organizes output by component within the ./botserver-stack/logs/ directory. System logs from the main BotServer application appear in the system subdirectory. Storage service operations are captured in the drive subdirectory. Database activity from PostgreSQL goes to the tables subdirectory. The cache subdirectory contains logs from the caching layer. LLM server interactions are recorded in the llm subdirectory.
Additional services have their own logging locations. Email service logs appear in the email subdirectory. Identity and authentication events are captured in the directory subdirectory. Vector database operations go to the vectordb subdirectory. Video meeting activities are logged in the meet subdirectory.
This organization makes it straightforward to investigate issues in specific components without wading through unrelated log entries.
Installation and Configuration
The observability component installs automatically during the bootstrap process, ensuring that monitoring begins from the first system start. Administrators who need to install it separately can use the botserver install command with the observability parameter.
Configuration for the observability pipeline resides in the monitoring configuration file within the botserver-stack conf directory. This Vector configuration file controls how logs are collected, parsed, transformed, and routed to their destinations.
Log Format Conventions
BotServer generates logs in a standard format that includes the timestamp in ISO 8601 format with millisecond precision, the log level indicating severity, the module path identifying the code location, and the message describing what occurred. This structured format enables automated parsing while remaining human-readable for direct inspection.
The pipeline parses these logs automatically, extracting fields for indexing and routing. Errors are identified by level and routed to alerting systems while informational messages flow to long-term storage for historical analysis.
Metrics Collection
The platform exposes operational metrics through a Prometheus-compatible endpoint at /api/metrics, enabling integration with standard monitoring infrastructure. Available metrics track log event counts by severity level, error totals broken down by service, currently active session counts, total messages processed since startup, and LLM response latency measurements.
These metrics enable administrators to understand system behavior over time, identify trends that might indicate developing problems, and verify that the platform operates within expected parameters. The Prometheus format ensures compatibility with common visualization and alerting tools.
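For a sense of what sits behind such an endpoint, here is a minimal Rust sketch using the prometheus crate; the metric name is hypothetical and the real exporter inside General Bots may be wired differently.
use prometheus::{Encoder, IntCounter, Registry, TextEncoder};

fn main() {
    let registry = Registry::new();
    // Hypothetical counter; the platform's actual metric names may differ.
    let errors = IntCounter::new("gb_errors_total", "Total errors by service").unwrap();
    registry.register(Box::new(errors.clone())).unwrap();
    errors.inc();

    // Render the Prometheus text format a scrape of /api/metrics would return.
    let mut buf = Vec::new();
    TextEncoder::new().encode(&registry.gather(), &mut buf).unwrap();
    print!("{}", String::from_utf8(buf).unwrap());
}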
Alerting Configuration
The observability system can send alerts automatically when error conditions occur. Webhook alerts POST event data to the admin alerts API endpoint, enabling integration with custom alerting systems. Slack integration sends notifications to configured channels when properly configured. Email alerts reach administrators directly when SMTP settings are provided.
Alert thresholds are configurable through the bot’s config.csv file. The CPU threshold setting triggers alerts when processor utilization exceeds the specified percentage. Memory threshold configuration works similarly for RAM usage. Response time thresholds flag slow operations that might indicate performance degradation.
Tuning these thresholds for your environment prevents alert fatigue from false positives while ensuring genuine issues receive attention.
Dashboard Visualization
A pre-built Grafana dashboard template is available in the templates directory, providing immediate visualization of key metrics. The dashboard includes panels for active sessions showing current load, messages per minute indicating throughput, error rates highlighting problems, and LLM latency percentiles revealing AI response performance.
Importing this dashboard into a Grafana instance connected to your metrics storage creates an operational overview suitable for operations teams and helps during incident investigation.
Log Level Configuration
The logging system supports four severity levels that control which messages are captured and the volume of output generated.
Error level captures failures that require attention, such as database connection losses or file permission problems. Warning level records unexpected conditions that were handled but might indicate developing issues. Info level logs normal operations and key events, providing a record of system activity without excessive detail. Debug level includes detailed flow information useful during development and troubleshooting but too verbose for normal production operation.
The log level setting in config.csv controls the minimum severity that produces output. Setting it to info captures everything except debug messages, providing operational visibility without overwhelming log storage.
Troubleshooting Common Issues
When logs aren’t being collected as expected, several common causes should be investigated. First, verify that the observability service is running and hasn’t crashed or been stopped. Second, check that the log directory permissions allow the collection process to read the files. Third, review the observability service’s own logs for errors that might explain the collection failure.
High log volume can overwhelm storage and make analysis difficult. Raising the log level from debug to info significantly reduces volume by eliminating detailed trace messages. Configuring retention policies in the metrics storage prevents unbounded growth. Filtering debug-level logs before they reach long-term storage reduces costs while preserving important operational data.
Operational Guidelines
Effective observability requires attention to both technical configuration and operational practices. Log content should never include sensitive data like passwords, tokens, or personally identifiable information, as logs often flow to systems with broader access than the application itself.
Using appropriate log levels keeps signal-to-noise ratios manageable. Reserve error level for actual failures requiring investigation. Use info level for normal operations that help understand system behavior. Avoid overusing warning level, which loses meaning when applied too broadly.
Monitoring should focus on trends rather than just instantaneous values. Gradual increases in error rates or response times often indicate developing problems before they become critical failures. Alert configuration should consider baseline behavior and flag deviations rather than simple threshold crossings.
Establishing observability early in deployment ensures that baseline data exists when problems occur. Trying to instrument a system during an incident rarely produces useful results.
Related Documentation
For additional context on operating General Bots at scale, the Scaling and Load Balancing chapter explains how observability integrates with clustered deployments. The Infrastructure Design chapter provides the full architectural overview showing how observability fits into the complete system. The Monitoring Dashboard section describes the built-in monitoring interface available through the administrative UI.
The gbapp Philosophy: Let Machines Do Machine Work
Core Principle: Automation First
In 2025, the gbapp philosophy is simple and powerful: if a machine can do the work, let it do the work. This principle guides every decision about how to build and extend General Bots. Rather than writing code manually, you describe what you need and let AI handle the implementation details.
The Hierarchy of Development
The development approach in General Bots follows a clear hierarchy based on what percentage of work falls into each category.
LLM First (90% of cases)
The vast majority of work should be handled by letting AI write the code for you. Instead of implementing complex logic yourself, describe what you want in natural language and let the LLM generate the solution.
' Don't write complex logic - describe what you want
result = LLM "Generate a function that validates email addresses and returns true/false: " + email
BASIC for Flow Control (9% of cases)
BASIC serves as the orchestration layer that connects AI calls together. Think of it as glue code that manages the flow between different operations. The logic itself lives in LLM calls while BASIC handles sequencing and data flow.
' BASIC is just glue between AI calls
data = GET "api/data"
processed = LLM "Process this: " + data
SET "results", processed
Rust for Core Only (1% of cases)
Writing Rust code should be reserved for rare situations where you are contributing new keywords to the core platform, building fundamental infrastructure that many bots will use, or optimizing critical performance paths where every millisecond matters. Most developers will never need to write Rust because BASIC and LLM calls handle nearly every use case.
What gbapp Really Is
Understanding what gbapp is and is not helps clarify the development model.
The gbapp concept is not about external plugin packages that you download separately. It is not about separate npm modules or package managers. It is not a way to bypass BASIC and write custom code. It is not about runtime extensions that modify behavior dynamically.
Instead, gbapp refers to virtual crates inside the src/ directory: Rust modules that compile together into a single binary. The concept serves as a bridge between older plugin-based thinking and the modern integrated approach. It provides a familiar mental model for developers who want to contribute to the platform. Most importantly, gbapp embodies a mindset of coding through automation rather than manual implementation.
Real-World Examples
The contrast between traditional development and the General Bots approach becomes clear through examples.
Traditional Approach
In the old way of thinking, you might write hundreds of lines of custom Node.js, Python, or C# code for data validation. A function like validateComplexBusinessRules would contain extensive logic handling edge cases, format checking, and business rule verification. This code requires maintenance, testing, and documentation.
The General Bots Approach
With the automation-first philosophy, the same task takes three lines. You fetch your business rules from a file, ask the LLM to validate data against those rules, and handle the result. The AI understands the rules and applies them correctly without you implementing the validation logic.
' 3 lines - let AI handle complexity
rules = GET "business-rules.txt"
validation = LLM "Validate this data against these rules: " + data + " Rules: " + rules
IF validation CONTAINS "valid" THEN TALK "Approved" ELSE TALK "Rejected: " + validation
The Multi-SDK Reality
You do not need separate SDKs or plugins for different services. Everything integrates through BASIC combined with LLM calls.
Integrating Any API
When you need to work with an external API, you do not need to find and install an SDK. Just fetch the data and let the LLM interpret and format it.
' No SDK needed - just describe what you want
data = GET "https://server/data"
answer = LLM "Do a good report from this json: " + data
TALK answer
Working with Any Database
Database operations do not require an ORM or query builder. The AI understands SQL and can generate queries from natural language descriptions.
' No ORM needed - AI understands SQL
results = FIND "users", "all users who logged in today"
Processing Any Format
You do not need parser libraries for different file formats. The LLM can transform data between formats based on your description.
' No parser library needed
xml_data = GET "complex.xml"
json = LLM "Convert this XML to JSON: " + xml_data
SET BOT MEMORY "processed_data", json
When to Write Code
Understanding when each approach applies helps you work efficiently.
Use LLM When
LLM calls are appropriate for processing unstructured data, implementing business logic, transforming between formats, making decisions, generating content, and analyzing patterns. This covers roughly ninety percent of everything you might want to do.
Use BASIC When
BASIC code handles orchestrating AI calls in sequence, simple flow control with conditionals and loops, managing state and variables, and connecting different systems together. Think of BASIC as the glue that holds everything together.
Use Rust When
Rust development is only necessary when building new keywords that will become part of the core platform, creating a new gbapp module in the src/ directory, performing system-level optimization for critical paths, or contributing new features that will benefit all users. Almost no one needs to write Rust for their bots.
The gbapp Mindset
Shifting your thinking is the most important part of adopting this philosophy.
Stop thinking about how to code a solution, what library you need to import, or how to extend the system with plugins. Start thinking about how to describe what you want to AI, what the simplest BASIC flow looks like, and how your patterns could help everyone using the platform.
Data Enrichment Example
Consider a data enrichment task that pulls information about companies from their websites.
The traditional approach requires over a thousand lines of code spread across multiple npm packages. You need complex error handling for network requests, HTML parsing for different website structures, and a maintenance nightmare as websites change their formats.
The General Bots approach handles the same task in a few lines. You find companies that need enrichment, loop through them, fetch each website, ask the LLM to extract company information, and save the results. The AI handles all the complexity of parsing different website formats.
items = FIND "companies", "needs_enrichment=true"
FOR EACH item IN items
website = WEBSITE OF item.company
page = GET website
enriched = LLM "Extract company info from: " + page
SET "companies", "id=" + item.id, "data=" + enriched
NEXT
Report Generation Example
Generating reports traditionally requires a custom reporting engine, template systems, complex formatting logic, and PDF libraries. That infrastructure takes significant development and ongoing maintenance.
With General Bots, you find the relevant data, ask the LLM to create an executive summary, and generate a site with the results. Three lines replace an entire reporting infrastructure.
data = FIND "sales", "month=current"
report = LLM "Create executive summary from: " + data
CREATE SITE "report", "template", report
The Ultimate Test
Before writing any code, ask yourself three questions in order. First, can the LLM do this? The answer is usually yes. Second, can BASIC orchestrate it? Almost always yes. Third, do you really need Rust? Almost never.
Only proceed to writing custom code if you have genuinely exhausted the first two options. The LLM and BASIC combination handles far more than most developers initially expect.
Benefits of This Approach
For Developers
This approach enables development that is roughly one hundred times faster than traditional coding. You have no dependency management headaches and no version conflicts between packages. The maintenance burden drops dramatically because there is no custom code to maintain. You can focus on business logic and what you want to accomplish rather than implementation details.
For Organizations
Organizations benefit from reduced complexity in their bot deployments. Maintenance costs drop because there is less custom code to support. Iterations happen faster since changes involve modifying descriptions rather than rewriting code. There is no vendor lock-in to specific libraries or frameworks. Anyone in the organization can contribute because they do not need traditional programming skills.
For the Community
Shared improvements benefit everyone using the platform. There is no fragmentation into incompatible plugin ecosystems. Users experience consistency across different bots and deployments. The community advances collectively rather than each organization maintaining separate extensions.
The Future is Already Here
In 2025, this approach is not aspirational but reality. Applications built entirely with BASIC and LLM calls run in production today. Most use cases require zero custom code. AI handles complexity better than hand-written algorithms in many domains. Machines do machine work while humans focus on human work like understanding requirements and making decisions.
Migration Path
From Extensions to Virtual Crates
If you have existing plugin-style extensions, the migration path consolidates them into the main source tree. An old extension might have been a separate folder with hundreds of lines of JavaScript, a package.json, and complex logic. The new approach places a small Rust module in src/ that registers BASIC keywords, while the actual logic moves to a few lines of BASIC in your .gbdialog folder that leverage LLM calls.
From Code to Descriptions
Migration from traditional code involves converting algorithms into natural language descriptions. Instead of writing the logic to process data, you describe what processing you need and let the LLM implement it.
From Libraries to LLM
Instead of importing twenty npm packages for various functionality, you make single LLM calls with descriptions of what you need. The AI has knowledge of countless libraries and formats built into its training.
Development Guidelines
Follow these practices to work effectively with the automation-first philosophy. Describe problems to the LLM in clear, specific terms. Use BASIC as minimal glue between AI operations. Contribute keywords to the core when you discover patterns that would benefit everyone. Share your patterns with the community so others can learn. Think automation-first for every task you encounter.
Avoid common mistakes that fight against this philosophy. Do not write complex algorithms when a description would suffice. Do not build separate plugins that fragment the ecosystem. Do not create custom frameworks that add unnecessary complexity. Do not maintain separate codebases when everything should be in one place. Do not fight the machine by insisting on manual implementation.
The Virtual Crate Architecture
Each gbapp is now a module in the src/ directory. The structure maps conceptually familiar package names to Rust modules. The core gbapp lives in src/core/. The BASIC interpreter is src/basic/. Channel adapters are in src/channels/. Your contribution would go in src/your_feature/. This elegant mapping preserves the conceptual model of separate packages while leveraging Rust’s module system and compiling everything into a single optimized binary.
Conclusion
The gbapp concept in 2025 has evolved from external packages to virtual crates. These Rust modules inside src/ compile into a single, optimized binary while preserving the familiar mental model of separate functional packages.
The philosophy remains constant: machines are better at machine work. Your job is to describe what you want, not implement how to do it. The combination of BASIC for orchestration and LLM for logic eliminates the need for traditional programming in almost all cases.
Examples Repository
The /templates/ directory contains real-world examples of applications built entirely with BASIC and LLM calls. A CRM system requires about fifty lines of BASIC. Email automation needs around thirty lines. Data pipelines work in twenty lines. Report generators take about fifteen lines. Each of these would have required thousands of lines of traditional code, demonstrating the dramatic productivity improvement this philosophy enables.
Example: Creating a New gbapp Virtual Crate
This guide walks through creating a new gbapp virtual crate called analytics that adds analytics capabilities to BotServer.
Step 1: Create the Module Structure
Create your gbapp directory in src/:
src/analytics/ # analytics.gbapp virtual crate
├── mod.rs # Module definition
├── keywords.rs # BASIC keywords
├── services.rs # Core functionality
├── models.rs # Data structures
└── tests.rs # Unit tests
Step 2: Define the Module
src/analytics/mod.rs
//! Analytics gbapp - Provides analytics and reporting functionality
//!
//! This virtual crate adds analytics keywords to BASIC and provides
//! services for tracking and reporting bot interactions.

pub mod keywords;
pub mod services;
pub mod models;

#[cfg(test)]
mod tests;

use crate::shared::state::AppState;
use std::sync::Arc;

/// Initialize the analytics gbapp
pub fn init(state: Arc<AppState>) -> Result<(), Box<dyn std::error::Error>> {
    log::info!("Initializing analytics.gbapp virtual crate");

    // Initialize analytics services
    services::init_analytics_service(&state)?;

    Ok(())
}
Step 3: Add BASIC Keywords
src/analytics/keywords.rs
use crate::shared::state::AppState;
use rhai::{Engine, Dynamic};
use std::sync::Arc;

/// Register analytics keywords with the BASIC interpreter
pub fn register_keywords(engine: &mut Engine, state: Arc<AppState>) {
    let state_clone = state.clone();

    // TRACK EVENT keyword
    engine.register_fn("TRACK EVENT", move |event_name: String, properties: String| -> String {
        let result = tokio::task::block_in_place(|| {
            tokio::runtime::Handle::current().block_on(async {
                crate::analytics::services::track_event(&state_clone, &event_name, &properties).await
            })
        });
        match result {
            Ok(_) => format!("Event '{}' tracked", event_name),
            Err(e) => format!("Failed to track event: {}", e),
        }
    });

    // GET ANALYTICS keyword
    engine.register_fn("GET ANALYTICS", move |metric: String, timeframe: String| -> Dynamic {
        let result = tokio::task::block_in_place(|| {
            tokio::runtime::Handle::current().block_on(async {
                crate::analytics::services::get_analytics(&metric, &timeframe).await
            })
        });
        match result {
            Ok(data) => Dynamic::from(data),
            Err(_) => Dynamic::UNIT,
        }
    });

    // GENERATE REPORT keyword
    engine.register_fn("GENERATE REPORT", move |report_type: String| -> String {
        // Use LLM to generate natural language report
        let data = crate::analytics::services::get_report_data(&report_type);
        let prompt = format!(
            "Generate a {} report from this data: {}",
            report_type, data
        );
        // This would call the LLM service
        format!("Report generated for: {}", report_type)
    });
}
Step 4: Implement Services
src/analytics/services.rs
use crate::shared::state::AppState;
use crate::shared::models::AnalyticsEvent;
use std::sync::Arc;
use anyhow::Result;

/// Initialize analytics service
pub fn init_analytics_service(state: &Arc<AppState>) -> Result<()> {
    // Set up database tables, connections, etc.
    log::debug!("Analytics service initialized");
    Ok(())
}

/// Track an analytics event
pub async fn track_event(
    state: &Arc<AppState>,
    event_name: &str,
    properties: &str,
) -> Result<()> {
    // Store event in database
    let conn = state.conn.get()?;
    // Implementation details...
    log::debug!("Tracked event: {}", event_name);
    Ok(())
}

/// Get analytics data
pub async fn get_analytics(metric: &str, timeframe: &str) -> Result<String> {
    // Query analytics data
    let results = match metric {
        "user_count" => get_user_count(timeframe).await?,
        "message_volume" => get_message_volume(timeframe).await?,
        "engagement_rate" => get_engagement_rate(timeframe).await?,
        _ => return Err(anyhow::anyhow!("Unknown metric: {}", metric)),
    };
    Ok(results)
}

/// Get data for report generation
pub fn get_report_data(report_type: &str) -> String {
    // Gather data based on report type
    match report_type {
        "daily" => get_daily_report_data(),
        "weekly" => get_weekly_report_data(),
        "monthly" => get_monthly_report_data(),
        _ => "{}".to_string(),
    }
}

// Helper functions

async fn get_user_count(timeframe: &str) -> Result<String> {
    // Implementation
    Ok("100".to_string())
}

async fn get_message_volume(timeframe: &str) -> Result<String> {
    // Implementation
    Ok("5000".to_string())
}

async fn get_engagement_rate(timeframe: &str) -> Result<String> {
    // Implementation
    Ok("75%".to_string())
}

fn get_daily_report_data() -> String {
    // Gather daily metrics
    r#"{"users": 100, "messages": 1500, "sessions": 50}"#.to_string()
}

fn get_weekly_report_data() -> String {
    // Gather weekly metrics
    r#"{"users": 500, "messages": 8000, "sessions": 300}"#.to_string()
}

fn get_monthly_report_data() -> String {
    // Gather monthly metrics
    r#"{"users": 2000, "messages": 35000, "sessions": 1200}"#.to_string()
}
Step 5: Define Data Models
src/analytics/models.rs
use serde::{Deserialize, Serialize};
use chrono::{DateTime, Utc};

#[derive(Debug, Serialize, Deserialize)]
pub struct AnalyticsEvent {
    pub id: uuid::Uuid,
    pub event_name: String,
    pub properties: serde_json::Value,
    pub user_id: Option<String>,
    pub session_id: String,
    pub timestamp: DateTime<Utc>,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct MetricSnapshot {
    pub metric_name: String,
    pub value: f64,
    pub timestamp: DateTime<Utc>,
    pub dimensions: serde_json::Value,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct Report {
    pub report_type: String,
    pub generated_at: DateTime<Utc>,
    pub data: serde_json::Value,
    pub summary: String,
}
Step 6: Register with Core
Update src/basic/keywords/mod.rs to include your gbapp:
use crate::analytics;

pub fn register_all_keywords(engine: &mut Engine, state: Arc<AppState>) {
    // ... existing keywords

    // Register analytics.gbapp keywords
    analytics::keywords::register_keywords(engine, state.clone());
}
Update src/main.rs or initialization code:
// Initialize analytics gbapp
analytics::init(state.clone())?;
Step 7: Add Tests
src/analytics/tests.rs
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_track_event() {
        // Test event tracking
        let event_name = "user_login";
        let properties = r#"{"user_id": "123"}"#;
        // Test implementation
        assert!(true);
    }

    #[tokio::test]
    async fn test_get_analytics() {
        // Test analytics retrieval
        let metric = "user_count";
        let timeframe = "daily";
        // Test implementation
        assert!(true);
    }
}
Step 8: Use in BASIC Scripts
Now your gbapp keywords are available in BASIC:
' Track user actions
TRACK EVENT "button_clicked", "button=submit"
' Get metrics
daily_users = GET ANALYTICS "user_count", "daily"
TALK "Daily active users: " + daily_users
' Generate AI-powered report
report = GENERATE REPORT "weekly"
TALK report
' Combine with LLM for insights
metrics = GET ANALYTICS "all", "monthly"
insights = LLM "Analyze these metrics and provide insights: " + metrics
TALK insights
Step 9: Add Feature Flag (Optional)
If your gbapp should be optional, add it to Cargo.toml:
[features]
analytics = []
# Include in default features if always needed
default = ["ui-server", "chat", "analytics"]
Then conditionally compile:
#[cfg(feature = "analytics")]
pub mod analytics;

#[cfg(feature = "analytics")]
analytics::keywords::register_keywords(engine, state.clone());
Benefits of This Approach
- Clean Separation: Your gbapp is self-contained
- Easy Discovery: Visible in src/analytics/
- Type Safety: Rust compiler checks everything
- Native Performance: Compiles into the main binary
- Familiar Structure: Like the old .gbapp packages
Best Practices
✅ DO:
- Keep your gbapp focused on one domain
- Provide clear BASIC keywords
- Use LLM for complex logic
- Write comprehensive tests
- Document your keywords
❌ DON’T:
- Create overly complex implementations
- Duplicate existing functionality
- Skip error handling
- Forget about async/await
- Ignore the BASIC-first philosophy
Summary
Creating a gbapp virtual crate is straightforward:
- Create a module in src/
- Define keywords for BASIC
- Implement services
- Register with core
- Use in BASIC scripts
Your gbapp becomes part of BotServer’s compiled binary, providing native performance while maintaining the conceptual clarity of the package system. Most importantly, remember that the implementation should be minimal - let BASIC + LLM handle the complexity!
Module Structure
BotServer is a single Rust crate (not a workspace) with multiple modules. The application is defined in Cargo.toml as the botserver crate, version 6.0.8.
Main Entry Points
The primary entry point is src/main.rs, which starts the Axum web server and initializes all components. The public library interface in src/lib.rs exports all major modules for external use.
Core Modules
The following modules are exported in src/lib.rs and comprise the core functionality:
User & Bot Management
The auth module handles user authentication, password hashing using Argon2, and session token management. The bot module manages bot lifecycle, configuration, and runtime operations. The session module provides user session handling and state management across conversations.
Conversation & Scripting
The basic module implements the BASIC-like scripting language interpreter for .gbdialog files. The context module manages conversation context and memory throughout user interactions. The channels module provides multi-channel support for web, voice, and various messaging platforms.
Knowledge & AI
The llm module provides LLM provider integration for OpenAI and local models. The llm_models module contains model-specific implementations and configurations. The nvidia module offers NVIDIA GPU acceleration support for local inference.
Infrastructure
The bootstrap module handles system initialization and the auto-bootstrap process. The package_manager module manages component installation and lifecycle. The config module provides application configuration and environment management. The shared module contains shared utilities, database models, and common types used throughout the codebase. The web_server module implements the Axum-based HTTP server and API endpoints.
Features & Integration
The automation module provides scheduled tasks and event-driven triggers. The drive_monitor module handles file system monitoring and change detection. The email module provides email integration via IMAP and SMTP as a conditional feature. The file module handles file processing and operations. The meet module integrates video meeting functionality through LiveKit.
Testing & Development
The tests module contains test utilities and test suites for validating functionality across the codebase.
Internal Modules
Several directories exist in src/ that are either internal implementations or not fully integrated into the public API.
The api/ directory contains the api/drive subdirectory with drive-related API code. The drive/ directory provides drive (S3-compatible) integration and vector database functionality through vectordb.rs. The ui/ directory contains UI-related modules including drive.rs, stream.rs, sync.rs, and local-sync.rs. The ui_tree/ directory provides UI tree structure functionality used in main.rs but not exported in lib.rs. The prompt_manager/ directory is not a Rust module; it stores the prompt library as prompts.csv. The riot_compiler/ directory contains a Riot.js component compiler that is currently unused. The web_automation/ directory is an empty placeholder for future functionality.
Dependency Management
All dependencies are managed through a single Cargo.toml at the project root.
The web framework layer uses axum, tower, and tower-http for HTTP handling. The async runtime is tokio for concurrent operations. Database access uses diesel for PostgreSQL and redis for cache component connectivity. AI and ML functionality relies on qdrant-client for vector database operations as an optional feature. Storage operations use aws-sdk-s3 for drive and S3-compatible storage backends. Scripting uses rhai as the BASIC-like language runtime. Security features include argon2 for password hashing and aes-gcm for encryption. Desktop support uses tauri as an optional feature.
Feature Flags
The crate supports optional features for customizing builds:
[features]
default = ["desktop"]
vectordb = ["qdrant-client"]
email = ["imap"]
desktop = ["dep:tauri", "dep:tauri-plugin-dialog", "dep:tauri-plugin-opener"]
Building
To build the project with different configurations:
# Standard build
cargo build --release
# Build without desktop features
cargo build --release --no-default-features
# Build with vector database support
cargo build --release --features vectordb
# Build with all features
cargo build --release --all-features
Module Organization Pattern
Most modules follow a consistent structure with a mod.rs file containing the main module implementation and a module_name.test.rs file for module-specific tests. Some modules have additional submodules or specialized files such as drive/vectordb.rs and ui/drive.rs for feature-specific functionality.
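As an illustration, a hypothetical module named example following this pattern would look like:

src/example/
├── mod.rs             # Main module implementation
└── example.test.rs    # Module-specific tests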
Service Layer
BotServer’s service layer is organized into functional modules that handle specific aspects of the platform. Each module encapsulates related functionality and provides a clear API for interaction with other parts of the system. This chapter describes each service module and its responsibilities within the overall architecture.
Core Service Modules
Authentication and Security
The auth module provides secure user authentication and session management throughout the platform. Password hashing uses the Argon2 algorithm for secure password storage that resists both CPU and GPU-based attacks. Session token generation creates and validates unique tokens for maintaining authenticated state. User verification authenticates users against the database using stored credentials. Bot authentication manages bot-level authentication for API access, allowing bots to make authenticated requests to external services.
The module’s key responsibilities include hashing passwords with Argon2 before storage, generating cryptographically secure session tokens, validating user credentials during login, and managing the complete session lifecycle from creation through expiration.
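As a rough sketch of what that hashing flow looks like with the argon2 crate (the helper names here are illustrative, not the module's actual API):

use argon2::password_hash::{rand_core::OsRng, PasswordHash, SaltString};
use argon2::{Argon2, PasswordHasher, PasswordVerifier};

// Hash a password with a freshly generated random salt.
fn hash_password(plain: &str) -> Result<String, argon2::password_hash::Error> {
    let salt = SaltString::generate(&mut OsRng);
    Ok(Argon2::default()
        .hash_password(plain.as_bytes(), &salt)?
        .to_string())
}

// Verify a login attempt against the stored PHC-format hash.
fn verify_password(plain: &str, stored: &str) -> bool {
    PasswordHash::new(stored)
        .map(|parsed| {
            Argon2::default()
                .verify_password(plain.as_bytes(), &parsed)
                .is_ok()
        })
        .unwrap_or(false)
}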
Bot Management
The bot module handles bot lifecycle and configuration throughout the system. Bot creation initializes new bot instances with their required components. Configuration management loads and applies bot settings from config.csv files. Bot state tracking monitors bot status and health for operational awareness. Multi-tenant support isolates bots by tenant to prevent data leakage between organizations.
This module creates and deletes bot instances, loads bot configuration from the database, manages bot lifecycle including start, stop, and restart operations, and associates bots with users and sessions for proper isolation.
Session Management
The session module maintains user conversation state across interactions. Session storage persists conversation context to both cache and database. State management tracks user progress through dialogs and remembers variable values. Session cleanup removes expired sessions to free resources. Multi-user support isolates sessions by user to ensure privacy.
The module creates new sessions when users connect, stores and retrieves session variables, maintains conversation history for context, and cleans up abandoned sessions after timeout periods.
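An expiry check along these lines (illustrative only; the struct and field names are assumptions) could drive that cleanup pass:

use chrono::{DateTime, Duration, Utc};

// Hypothetical session record; only the timestamp matters here.
struct Session {
    last_activity: DateTime<Utc>,
}

// A session counts as abandoned once it exceeds the timeout.
fn is_expired(session: &Session, timeout_minutes: i64) -> bool {
    Utc::now() - session.last_activity > Duration::minutes(timeout_minutes)
}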
Conversation and Scripting Services
BASIC Interpreter
The basic module implements the BASIC-like scripting language for .gbdialog files. Script parsing reads BASIC dialog scripts and converts them to executable form. The execution engine powered by the Rhai scripting engine runs the parsed scripts. Keyword implementation provides custom keywords like TALK, HEAR, and LLM for bot functionality. Variable management handles script variables and maintains execution context across statements.
This module loads and parses .gbdialog scripts from bot packages, executes BASIC commands in sequence, provides custom keywords that extend the language for bot functionality, and manages script execution context including variables and flow control state.
Context Management
The context module manages conversation context and memory for LLM interactions. Conversation history storage maintains the message history for each session. Context retrieval loads relevant context for LLM calls based on the current query. Memory management limits context size to fit within model token limits. Context compaction summarizes old conversations to preserve meaning while reducing tokens.
The module appends messages to conversation history as they occur, retrieves appropriate context for LLM queries, implements context window management to stay within limits, and provides context to knowledge base queries for improved relevance.
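A minimal sketch of that window trimming, assuming a pluggable token estimator (all names here are illustrative):

// Keep the most recent messages that fit within the token budget;
// anything older would be summarized or dropped.
fn trim_to_budget(
    history: &[String],
    budget: usize,
    estimate_tokens: impl Fn(&str) -> usize,
) -> Vec<String> {
    let mut kept: Vec<String> = Vec::new();
    let mut used = 0;
    for message in history.iter().rev() {
        let cost = estimate_tokens(message);
        if used + cost > budget {
            break;
        }
        used += cost;
        kept.push(message.clone());
    }
    kept.reverse(); // restore chronological order
    kept
}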
Channel Abstraction
The channels module provides a unified interface for multiple communication channels. The web interface enables browser-based chat through the default UI. WebSocket support provides real-time bidirectional communication for responsive interactions. Voice integration handles audio input and output for voice-enabled bots. Platform adapters provide an extensible channel system for adding new platforms.
This module abstracts channel-specific implementations behind a common interface, routes messages to appropriate handlers based on channel type, formats responses appropriately for specific channels, and handles channel-specific features like typing indicators and read receipts.
AI and Knowledge Services
LLM Integration
The llm module integrates with large language models for natural language understanding and generation. Provider abstraction supports multiple LLM providers through a common interface. API communication handles API calls to LLM services including authentication and rate limiting. Streaming responses support token streaming for real-time response display. Error handling provides graceful degradation when API calls fail.
The module sends prompts to LLM providers using appropriate formats, parses and streams responses back to callers, handles API authentication and key management, and manages rate limiting with automatic retries when necessary.
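The retry behavior might resemble this generic backoff wrapper (a sketch; none of these names come from the actual module):

use std::time::Duration;

// Retry an async LLM call with exponential backoff between attempts.
async fn call_with_retry<F, Fut>(mut attempt: F, max_tries: u32) -> anyhow::Result<String>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = anyhow::Result<String>>,
{
    let mut delay = Duration::from_millis(250);
    let mut last_err = anyhow::anyhow!("no attempts made");
    for _ in 0..max_tries {
        match attempt().await {
            Ok(response) => return Ok(response),
            Err(e) => {
                last_err = e;
                tokio::time::sleep(delay).await;
                delay *= 2; // back off before the next attempt
            }
        }
    }
    Err(last_err)
}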
LLM Models
The llm_models module contains model-specific implementations for different providers. Model configurations define parameters and capabilities for different models. Prompt templates handle model-specific prompt formatting requirements. Token counting estimates token usage before making API calls. Model selection chooses the appropriate model for each task based on requirements.
This module defines model capabilities and limits for each supported model, formats prompts according to each model’s expectations, calculates token costs for usage tracking, and selects optimal models for specific query types.
NVIDIA Integration
The nvidia module provides GPU acceleration support for local model inference. GPU detection identifies available NVIDIA GPUs in the system. Accelerated inference offloads local model execution to CUDA-capable hardware. Resource management allocates GPU resources among concurrent requests.
Infrastructure Services
Bootstrap
The bootstrap module handles system initialization and first-time setup. Component installation downloads and installs required components including PostgreSQL, cache, and drive storage. Database setup creates schemas and applies migrations to prepare the database. Credential generation creates secure passwords for all services. Environment configuration writes .env files with generated settings. Template upload deploys bot templates to storage for immediate use.
The module detects installation mode to determine whether it is running locally or in containers, installs and starts all system components in the correct order, initializes the database with migrations and seed data, configures drive storage with appropriate buckets, and creates default bots from included templates.
Package Manager
The package_manager module manages component installation and lifecycle. The component registry tracks available components and their versions. Installation downloads and installs components from configured sources. Lifecycle management starts, stops, and restarts components as needed. Dependency resolution ensures components start in the correct order based on their dependencies.
Managed components include tables for PostgreSQL database, cache for Valkey caching, drive for S3-compatible object storage, llm for local LLM server, email for email server integration, proxy for reverse proxy functionality, directory for LDAP directory services, alm for application lifecycle management, dns for DNS server operations, meeting for LiveKit video conferencing, and vector_db for Qdrant vector database functionality.
Configuration
The config module loads and validates application configuration. Environment variables load from .env files and system environment. Validation ensures all required configuration is present before startup. Defaults provide sensible values for optional settings. Type safety parses configuration into strongly-typed structs for compile-time checking.
The module loads DATABASE_URL, DRIVE_SERVER, API keys, and other settings, validates configuration completeness at startup, provides configuration access to other modules through a shared struct, and handles configuration errors with helpful messages.
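Conceptually, the typed loading reduces to something like this (the struct and field set are illustrative, not the real configuration type):

// Illustrative typed configuration; the real field set differs.
struct Config {
    database_url: String,
    drive_server: String,
}

fn load() -> Result<Config, String> {
    Ok(Config {
        database_url: std::env::var("DATABASE_URL")
            .map_err(|_| "DATABASE_URL is required".to_string())?,
        drive_server: std::env::var("DRIVE_SERVER")
            .map_err(|_| "DRIVE_SERVER is required".to_string())?,
    })
}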
Shared Utilities
The shared module contains common functionality used across the system. Database models define the Diesel schema and models for all tables. Connection pooling manages R2D2 connection pools for efficient database access. Utilities provide common helper functions for repeated tasks. Types define shared type definitions used throughout the codebase.
This module defines the database schema with Diesel macros, provides database connection helpers for consistent access patterns, implements common utility functions for string manipulation and data transformation, and shares types across modules to ensure consistency.
Web Server
The web_server module implements the HTTP API using Axum. API routes define RESTful endpoints for bot interaction and management. The WebSocket handler manages real-time communication channels. Static files serve web UI assets for the browser interface. CORS configuration enables cross-origin resource sharing for embedded deployments. Middleware handles logging, authentication, and error handling for all requests.
This module defines API routes and their handlers, processes HTTP requests and generates responses, manages WebSocket connections for real-time chat, and serves static web interface files for the UI.
Feature Services
Automation
The automation module provides scheduled and event-driven task execution. Cron scheduling runs tasks on defined schedules using standard cron syntax. Event triggers react to system events by executing associated handlers. Background jobs execute long-running tasks without blocking the main thread. Job management tracks running jobs and allows cancellation when needed.
Drive Monitor
The drive_monitor module watches for file system changes in bot packages. File watching detects file creation, modification, and deletion events. Event processing handles file change events by triggering appropriate actions. Automatic indexing adds new documents to the knowledge base when they appear in monitored directories.
Email Integration
The email module handles email communication as an optional feature. IMAP support reads emails from configured inbox folders. SMTP support sends emails via the Lettre library. Email parsing extracts text content and attachments from received messages. Template rendering generates HTML emails from templates with variable substitution.
File Handling
The file module processes various file types for knowledge base ingestion. PDF extraction pulls text from PDF documents using pdf-extract. Document parsing handles various document formats including Word and plain text. File upload processes multipart file uploads from users. Storage integration saves processed files to drive storage for persistence.
Meeting Integration
The meet module integrates with LiveKit for video conferencing capabilities. Room creation establishes meeting rooms with appropriate settings. Token generation creates access tokens for meeting participants. Participant management tracks who is in each meeting. Recording captures meeting sessions for later review.
Storage Services
Drive
The drive module provides S3-compatible object storage integration. Drive integration uses the AWS SDK S3 client for compatibility with various providers. Bucket management creates and manages storage buckets for different bots. Object operations handle upload, download, and delete operations for files. Vector database integration connects to Qdrant for semantic search functionality.
UI Components
The ui module contains UI-related functionality for the web interface. Drive UI provides a file browser interface for managing documents. Stream handling implements server-sent events for real-time updates. Sync logic manages synchronization between local and remote files. Local sync enables desktop app file synchronization for offline access.
Testing
The tests module provides test utilities and integration tests for the platform. Test fixtures provide common test data and setup procedures. Integration tests validate end-to-end functionality across modules. Mock services substitute for external dependencies during testing. Test helpers provide utilities for writing consistent, readable tests.
Service Interaction Patterns
Layered Architecture
Services are organized into layers with clear dependencies. The infrastructure layer contains bootstrap, package_manager, config, shared, and web_server modules that provide foundational capabilities. The data layer contains drive, file, and session modules that handle persistence. The domain layer contains bot, auth, context, and basic modules that implement core business logic. The AI layer contains llm, llm_models, and nvidia modules for machine learning integration. The feature layer contains automation, email, meet, and drive_monitor modules that add optional capabilities. The presentation layer contains channels and ui modules that handle user interaction.
Dependency Injection
Services use Rust’s module system and trait-based design for dependency injection. Database connections are shared via connection pools managed by R2D2. Configuration is passed through the AppConfig struct which is initialized at startup and shared immutably. Services access their dependencies through function parameters rather than global state.
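In practice the pattern is as simple as passing shared state explicitly (a sketch; the handler and struct contents are hypothetical):

use std::sync::Arc;

struct AppState { /* pools, config, shared clients */ }

// Dependencies arrive as parameters, never through globals.
async fn handle_request(state: Arc<AppState>, input: String) -> String {
    // Cloning an Arc only bumps a reference count, so every
    // handler can cheaply hold its own handle to the state.
    let _local = Arc::clone(&state);
    format!("handled: {}", input)
}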
Error Handling
All services use anyhow::Result<T> for error handling, allowing errors to propagate up the call stack with context. Each layer adds relevant context to errors before propagating them. Critical services log errors using the log crate with appropriate severity levels. User-facing errors are translated to helpful messages without exposing internal details.
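A short sketch of the pattern (the file path and messages are made up for illustration):

use anyhow::{Context, Result};

// Each layer attaches context before the error propagates upward.
fn read_bot_config(path: &str) -> Result<String> {
    std::fs::read_to_string(path)
        .with_context(|| format!("failed to read bot config at {}", path))
}

fn start_bot(path: &str) -> Result<()> {
    let _config = read_bot_config(path).context("bot startup aborted")?;
    Ok(())
}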
Async Operations
Most services are async and use Tokio as the runtime. This design allows concurrent handling of multiple user sessions without blocking. External API calls run concurrently to minimize latency. Background tasks use Tokio’s task spawning for parallel execution. The async design enables efficient resource utilization even under high load.
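For example, two independent external calls can run concurrently instead of sequentially (a sketch; the URLs are placeholders):

use anyhow::Result;

async fn fetch_text(url: &str) -> Result<String> {
    Ok(reqwest::get(url).await?.text().await?)
}

// tokio::join! drives both futures at once, so total latency is
// roughly the slower call rather than the sum of both.
async fn fetch_both() -> Result<(String, String)> {
    let (a, b) = tokio::join!(
        fetch_text("https://example.com/a"),
        fetch_text("https://example.com/b"),
    );
    Ok((a?, b?))
}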
Creating Custom Keywords
BotServer’s BASIC scripting language can be extended with custom keywords. All keywords are implemented as Rust functions in the src/basic/keywords/ directory.
Overview
Keywords in BotServer are Rust functions that get registered with the Rhai scripting engine. They provide the core functionality that BASIC scripts can use to interact with the system.
Keyword Implementation Structure
File Organization
Each keyword is typically implemented in its own module file:
src/basic/keywords/
├── mod.rs # Module registration
├── hear_talk.rs # HEAR and TALK keywords
├── llm_keyword.rs # LLM keyword
├── bot_memory.rs # GET BOT MEMORY, SET BOT MEMORY
├── use_kb.rs # USE KB keyword
├── clear_kb.rs # CLEAR KB keyword
├── get.rs # GET keyword
├── format.rs # FORMAT keyword
└── [other keywords].rs
Creating a New Keyword
Step 1: Create the Module File
Create a new file in src/basic/keywords/ for your keyword:
src/basic/keywords/my_keyword.rs
Step 2: Implement the Keyword Function
Keywords are implemented using one of two Rhai registration methods:
Method 1: Simple Function Registration
For basic keywords that return values:
use rhai::Engine;
use std::sync::Arc;
use crate::core::shared::state::AppState;
use crate::core::session::UserSession;

pub fn my_keyword(
    state: Arc<AppState>,
    user_session: UserSession,
    engine: &mut Engine
) {
    let state_clone = Arc::clone(&state);
    let user_clone = user_session.clone();

    engine.register_fn("MY_KEYWORD", move |param: String| -> String {
        // Your keyword logic here
        format!("Processed: {}", param)
    });
}
Method 2: Custom Syntax Registration
For keywords with special syntax or side effects:
use log::info;
use rhai::{Engine, EvalAltResult};
use std::sync::Arc;
use crate::core::shared::state::AppState;
use crate::core::session::BotSession;

pub fn register_my_keyword(
    state: Arc<AppState>,
    session: Arc<BotSession>,
    engine: &mut Engine
) -> Result<(), Box<EvalAltResult>> {
    let state_clone = Arc::clone(&state);
    let session_clone = Arc::clone(&session);

    engine.register_custom_syntax(
        &["MY_KEYWORD", "$expr$"], // Syntax pattern
        true,                      // Is statement (not expression)
        move |context, inputs| {
            let param = context.eval_expression_tree(&inputs[0])?.to_string();

            // Your keyword logic here
            info!("MY_KEYWORD executed with: {}", param);

            Ok(().into())
        }
    )?;
    Ok(())
}
Step 3: Register in mod.rs
Add your module to src/basic/keywords/mod.rs:
pub mod my_keyword;
Step 4: Add to Keyword Registry
Keywords are registered in the BASIC interpreter initialization. The registration happens in the main interpreter setup where all keywords are added to the Rhai engine.
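Assuming the my_keyword module from the previous steps, that wiring might look like this as it would appear in src/basic/keywords/mod.rs (a sketch; the setup function name is illustrative):

use crate::core::shared::state::AppState;
use crate::core::session::UserSession;
use rhai::Engine;
use std::sync::Arc;

// Called once during interpreter setup.
pub fn register_all_keywords(
    engine: &mut Engine,
    state: Arc<AppState>,
    user_session: UserSession,
) {
    // ... existing keyword registrations ...
    my_keyword::my_keyword(state.clone(), user_session.clone(), engine);
}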
Keyword Patterns
Pattern 1: Database Operations
Keywords that interact with the database (like GET BOT MEMORY):
pub fn database_keyword(state: Arc<AppState>, user: UserSession, engine: &mut Engine) {
    let state_clone = Arc::clone(&state);
    let user_clone = user.clone();

    engine.register_fn("DB_KEYWORD", move |key: String| -> String {
        let state = Arc::clone(&state_clone);
        let conn_result = state.conn.get();

        if let Ok(mut conn) = conn_result {
            // Database query using Diesel would go here;
            // return the query result as a string
            String::new()
        } else {
            String::new()
        }
    });
}
Pattern 2: Async Operations
Keywords that need async operations (like WEATHER):
use std::time::Duration;

pub fn async_keyword(state: Arc<AppState>, user: UserSession, engine: &mut Engine) {
    engine.register_custom_syntax(&["ASYNC_OP", "$expr$"], false, move |context, inputs| {
        let param = context.eval_expression_tree(&inputs[0])?;

        // Create channel for async result
        let (tx, rx) = std::sync::mpsc::channel();

        // Spawn blocking task
        std::thread::spawn(move || {
            let rt = tokio::runtime::Runtime::new().unwrap();
            let result = rt.block_on(async {
                // Async operation here
                "result".to_string()
            });
            let _ = tx.send(result);
        });

        // Wait for result
        match rx.recv_timeout(Duration::from_secs(30)) {
            Ok(result) => Ok(result.into()),
            Err(_) => Ok("Timeout".into()),
        }
    });
}
Pattern 3: Session Management
Keywords that modify session state (like USE KB, CLEAR KB):
pub fn register_session_keyword(
    state: Arc<AppState>,
    session: Arc<BotSession>,
    engine: &mut Engine
) -> Result<(), Box<EvalAltResult>> {
    let session_clone = Arc::clone(&session);

    engine.register_custom_syntax(&["SESSION_OP", "$expr$"], true, move |context, inputs| {
        let param = context.eval_expression_tree(&inputs[0])?.to_string();

        // Modify session state
        let mut session_lock = session_clone.blocking_write();
        // Update session fields

        Ok(().into())
    })?;
    Ok(())
}
Available Dependencies
Keywords have access to:
- AppState: Application-wide state including:
  - Database connection pool (state.conn)
  - Drive client for S3-compatible storage (state.drive)
  - Cache client (state.cache)
  - Configuration (state.config)
  - LLM provider (state.llm_provider)
- UserSession: Current user's session data:
  - User ID (user_session.user_id)
  - Bot ID (user_session.bot_id)
  - Session ID (user_session.session_id)
- BotSession: Bot conversation state:
  - Context collections
  - Tool definitions
  - Conversation history
  - Session variables
Error Handling
Keywords should handle errors gracefully:
engine.register_fn("SAFE_KEYWORD", move |param: String| -> String {
    match risky_operation(&param) {
        Ok(result) => result,
        Err(e) => {
            error!("Keyword error: {}", e);
            format!("Error: {}", e)
        }
    }
});
Testing Keywords
Keywords can be tested with unit tests:
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_my_keyword() {
        // Create test engine
        let mut engine = Engine::new();

        // Register keyword
        // Test keyword execution
        // Assert results
    }
}
Best Practices
- Clone Arc References: Always clone Arc-wrapped state before moving into closures
- Use Logging: Add info/debug logging for keyword execution
- Handle Errors: Don’t panic, return error messages as strings
- Timeout Async Ops: Use timeouts for network operations
- Document Parameters: Use clear parameter names and add comments
- Keep It Simple: Each keyword should do one thing well
- Thread Safety: Ensure all operations are thread-safe
Example: Complete Keyword Implementation
Here’s a complete example of a custom keyword that saves data:
// src/basic/keywords/save_data.rs
use rhai::Engine;
use std::sync::Arc;
use log::{info, error};
use crate::core::shared::state::AppState;
use crate::core::session::UserSession;

pub fn save_data_keyword(
    state: Arc<AppState>,
    user_session: UserSession,
    engine: &mut Engine
) {
    let state_clone = Arc::clone(&state);
    let user_clone = user_session.clone();

    engine.register_fn("SAVE_DATA", move |key: String, value: String| -> String {
        info!("SAVE_DATA called: key={}, value={}", key, value);

        let state = Arc::clone(&state_clone);
        let conn_result = state.conn.get();

        match conn_result {
            Ok(mut conn) => {
                // Save to database using Diesel
                // (actual implementation would use proper Diesel queries)
                info!("Data saved successfully");
                "OK".to_string()
            }
            Err(e) => {
                error!("Database error: {}", e);
                format!("Error: {}", e)
            }
        }
    });
}
Limitations
- Keywords must be synchronous or use blocking operations
- Direct async/await is not supported (use channels for async)
- Keywords are registered globally for all scripts
- Cannot dynamically add keywords at runtime
- All keywords must be compiled into the binary
Summary
Creating custom keywords extends BotServer’s BASIC language capabilities. Keywords are Rust functions registered with the Rhai engine that provide access to system features, databases, external APIs, and more. Follow the patterns shown above to create robust, thread-safe keywords that integrate seamlessly with the BotServer ecosystem.
Adding Dependencies
BotServer is a single-crate Rust application, so all dependencies are managed through the root Cargo.toml file. This guide covers how to add, update, and manage dependencies effectively.
Adding a Dependency
Basic Dependency
To add a new crate, edit Cargo.toml and add it to the [dependencies] section:
[dependencies]
serde = "1.0"
Then update your dependencies:
cargo build
Dependency with Features
Many crates offer optional features that you can enable selectively. The syntax uses curly braces to specify both the version and the features array:
[dependencies]
tokio = { version = "1.41", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
Version-Specific Dependencies
Cargo supports several version constraint formats to control which versions are acceptable. An exact version uses the equals sign prefix, while minimum versions use the greater-than-or-equal operator. The caret symbol indicates compatible versions according to semantic versioning, and wildcards allow any version within a major release:
[dependencies]
# Exact version
diesel = "=2.1.0"
# Minimum version
anyhow = ">=1.0.0"
# Compatible version (caret)
regex = "^1.11"
# Wildcard
uuid = "1.*"
Git Dependencies
You can add dependencies directly from Git repositories when you need unreleased features or custom forks. Specify the repository URL along with an optional branch name:
[dependencies]
rhai = { git = "https://github.com/therealprof/rhai.git", branch = "features/use-web-time" }
For reproducible builds, pin to a specific commit using the rev field:
[dependencies]
my-crate = { git = "https://github.com/user/repo", rev = "abc123" }
You can also reference a tagged release:
[dependencies]
my-crate = { git = "https://github.com/user/repo", tag = "v1.0.0" }
Optional Dependencies
Some dependencies aren’t always needed and can be marked as optional. These won’t be compiled unless explicitly enabled through feature flags:
[dependencies]
qdrant-client = { version = "1.12", optional = true }
imap = { version = "3.0.0-alpha.15", optional = true }
Then define features that enable them:
[features]
vectordb = ["qdrant-client"]
email = ["imap"]
Platform-Specific Dependencies
Certain dependencies are only needed on specific operating systems. Cargo’s target configuration syntax lets you conditionally include dependencies based on the compilation target:
[target.'cfg(unix)'.dependencies]
libc = "0.2"
[target.'cfg(windows)'.dependencies]
winapi = "0.3"
[target.'cfg(target_os = "macos")'.dependencies]
core-foundation = "0.9"
Existing Dependencies
BotServer relies on a comprehensive set of dependencies organized by functionality.
Web Framework
The HTTP layer is built on axum as the primary web framework, with tower providing middleware and service abstractions. The tower-http crate adds HTTP-specific middleware for CORS, static file serving, and tracing. At the lowest level, hyper handles the HTTP protocol implementation.
Async Runtime
Asynchronous execution is powered by tokio with its full feature set enabled. Supporting crates include tokio-stream for stream utilities, async-trait for async trait definitions, async-stream for async stream macros, and async-lock for asynchronous locking primitives.
Database
Database operations use diesel as the ORM for PostgreSQL, with diesel_migrations handling schema migrations. Connection pooling is managed by r2d2, and the redis crate provides cache client functionality compatible with both Valkey and Redis.
Storage
Cloud storage integration relies on the AWS SDK, with aws-config for configuration and aws-sdk-s3 for S3-compatible storage operations through the drive component. The optional qdrant-client crate enables vector database functionality.
Security
Cryptographic operations use several specialized crates. Password hashing is handled by argon2, encryption by aes-gcm, HMAC authentication by hmac, and SHA-256 hashing by sha2.
Scripting
The BASIC interpreter is powered by rhai, which provides a safe and fast embedded scripting engine.
Data Formats
Serialization and deserialization use serde as the core framework, with serde_json for JSON support. Additional format support comes from csv for CSV parsing and base64 for Base64 encoding.
Document Processing
Document handling includes pdf-extract for PDF text extraction, mailparse for email parsing, and zip for ZIP archive handling.
Communication
Network communication uses reqwest as the HTTP client. Email functionality is split between lettre for SMTP sending and the optional imap crate for reading emails. Video conferencing is provided by the livekit crate.
Desktop (Optional)
Desktop application builds use tauri as the framework, along with tauri-plugin-dialog for native file dialogs and tauri-plugin-opener for opening files and URLs.
Utilities
Common utilities include anyhow for error handling, log and env_logger for logging, tracing for structured logging, chrono for date and time handling, uuid for UUID generation, regex for regular expressions, and rand for random number generation.
Testing
Test support comes from mockito for HTTP mocking and tempfile for temporary file handling.
Adding a New Dependency: Example
This walkthrough demonstrates adding JSON Web Token (JWT) support to the project.
1. Choose a Crate
Search on crates.io to find suitable crates:
cargo search jsonwebtoken
2. Add to Cargo.toml
[dependencies]
jsonwebtoken = "9.2"
3. Update Dependencies
cargo build
4. Import in Code
In your Rust file (e.g., src/auth/mod.rs):
use jsonwebtoken::{encode, decode, Header, Validation, EncodingKey, DecodingKey};
5. Use the Dependency
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct Claims {
    sub: String,
    exp: usize,
}

pub fn create_jwt(user_id: &str) -> Result<String, jsonwebtoken::errors::Error> {
    let expiration = chrono::Utc::now()
        .checked_add_signed(chrono::Duration::hours(24))
        .unwrap()
        .timestamp() as usize;

    let claims = Claims {
        sub: user_id.to_owned(),
        exp: expiration,
    };

    let secret = std::env::var("JWT_SECRET").unwrap_or_else(|_| "secret".to_string());
    let token = encode(
        &Header::default(),
        &claims,
        &EncodingKey::from_secret(secret.as_ref()),
    )?;

    Ok(token)
}
Managing Dependencies
Updating Dependencies
To update all dependencies to their latest compatible versions, run cargo update. For updating a specific dependency, use cargo update -p serde with the package name.
Checking for Outdated Dependencies
The cargo-outdated tool helps identify dependencies that have newer versions available:
cargo install cargo-outdated
cargo outdated
Upgrading to Latest Compatible Versions
The cargo-edit tool provides convenient commands for managing dependencies:
cargo install cargo-edit
cargo upgrade
Auditing for Security Vulnerabilities
Regular security audits are essential for production applications:
cargo install cargo-audit
cargo audit
Viewing the Dependency Tree
Understanding your dependency graph helps identify bloat and conflicts:
cargo tree
To view dependencies for a specific package:
cargo tree -p diesel
Finding Duplicate Dependencies
Different versions of the same crate increase binary size and compile time:
cargo tree --duplicates
Feature Management
BotServer uses feature flags to enable optional functionality, allowing users to compile only what they need.
Current Features
[features]
default = ["desktop"]
vectordb = ["qdrant-client"]
email = ["imap"]
desktop = ["dep:tauri", "dep:tauri-plugin-dialog", "dep:tauri-plugin-opener"]
Adding a New Feature
Start by adding the dependency as optional:
[dependencies]
elasticsearch = { version = "8.5", optional = true }
Then create a feature that enables it:
[features]
search = ["elasticsearch"]
Use conditional compilation in your code to only include the functionality when the feature is enabled:
#[cfg(feature = "search")]
pub mod search {
    use elasticsearch::Elasticsearch;

    pub fn create_client() -> Elasticsearch {
        // Default transport targets a local node on localhost:9200
        Elasticsearch::default()
    }
}
Build with the feature enabled:
cargo build --features search
Build Dependencies
Dependencies needed only at build time (used in build.rs) go in a separate section:
[build-dependencies]
tauri-build = { version = "2", features = [] }
Development Dependencies
Dependencies needed only during testing should be placed in the dev-dependencies section. These are not included in release builds:
[dev-dependencies]
mockito = "1.7.0"
tempfile = "3"
Dependency Best Practices
Version Pinning
For production builds, prefer precise version requirements over loose ranges to keep builds reproducible. A requirement like serde = "=1.0.193" pins an exact version, whereas serde = "1" could pull in any 1.x release, potentially introducing unexpected changes.
Minimize Dependencies
Every dependency you add increases build time, binary size, and maintenance burden while introducing potential security risks. Only add dependencies that provide significant value and aren’t easily implemented inline.
Check License Compatibility
All dependencies must have licenses compatible with AGPL-3.0. The cargo-license tool helps audit your dependency licenses:
cargo install cargo-license
cargo license
Prefer Maintained Crates
When choosing between crates that provide similar functionality, evaluate them based on recent release activity, GitHub repository engagement, maintainer responsiveness, and documentation quality.
Review Security Advisories
Make dependency auditing part of your regular development workflow. Running cargo audit regularly helps catch known vulnerabilities before they become problems.
Use Features to Reduce Size
Many crates include features you don’t need. Instead of enabling everything with tokio = "1.41", specify only the features you actually use:
tokio = { version = "1.41", features = ["rt-multi-thread", "net", "sync"] }
Common Issues
Conflicting Versions
When multiple crates require different versions of the same dependency, Cargo will fail to resolve the dependency graph. Use cargo tree to identify which crates are causing the conflict, then update dependencies or look for alternative crates.
Missing System Libraries
Some crates require system libraries to be installed. If you see linker errors mentioning cc, check the crate’s documentation for required system packages and refer to the Building from Source guide.
Feature Not Found
Referencing a non-existent feature will cause a build error. Double-check feature names in the crate’s Cargo.toml on crates.io or in its repository.
Removing Dependencies
To remove a dependency, first delete it from Cargo.toml. Then find and remove all import statements using grep or ripgrep:
rg "use dependency_name" src/
After removing the imports, clean and rebuild:
cargo clean
cargo build
Verify the dependency is completely removed:
cargo tree | grep dependency_name
Alternative Registries
Using a Custom Registry
For private crates or custom registries, configure the registry in your Cargo.toml:
[dependencies]
my-crate = { version = "1.0", registry = "my-registry" }
[registries.my-registry]
index = "https://my-registry.example.com/index"
For private company crates, consider Git dependencies or a private registry like Artifactory or CloudSmith.
Dependency Documentation
Good documentation makes dependencies easier to maintain. Add comments in Cargo.toml explaining why each dependency exists:
[dependencies]
# JWT token generation and validation
jsonwebtoken = "9.2"
Document usage in your code with doc comments that explain the dependency’s role:
/// Creates a JWT token for user authentication.
///
/// Uses the `jsonwebtoken` crate to encode user claims
/// with an expiration time.
pub fn create_jwt(user_id: &str) -> Result<String, Error> {
    // Implementation
}
Next Steps
Review the Module Structure documentation to understand where to use new dependencies within the codebase. The Service Layer guide shows how dependencies integrate into the application architecture. For extending BASIC with new functionality that leverages dependencies, see Creating Custom Keywords.
Bot Configuration
This chapter covers bot configuration through the config.csv file system. Each bot’s behavior is controlled by a simple CSV configuration file in its .gbot package.
Configuration System
BotServer uses a straightforward name-value CSV format for configuration:
name,value
setting_name,setting_value
another_setting,another_value
File Location
mybot.gbai/
└── mybot.gbot/
└── config.csv
Configuration Categories
Server Settings
- Web server binding and ports
- Site generation paths
- Service endpoints
LLM Configuration
- Model paths (local GGUF files)
- Service URLs
- Cache settings
- Server parameters (when embedded)
Prompt Management
- Context compaction levels
- History retention
- Token management
Email Integration
- SMTP server settings
- Authentication credentials
- Sender configuration
Theme Customization
- Color schemes
- Logo URLs
- Bot titles
Custom Database
- External database connections
- Authentication details
Key Features
Simple Format
- Plain CSV with name-value pairs
- No complex syntax
- Human-readable
Flexible Structure
- Empty rows for visual grouping
- Optional settings with defaults
- Extensible for custom needs
Local-First
- Designed for local LLM models
- Self-hosted services
- No cloud dependency by default
Example Configurations
Minimal Setup
Just the essentials to run a bot:
name,value
llm-url,http://localhost:8081
llm-model,../../../../data/llm/model.gguf
Production Setup
Full configuration with all services:
name,value
,
server_host,0.0.0.0
server_port,8080
,
llm-url,http://localhost:8081
llm-model,../../../../data/llm/production-model.gguf
llm-cache,true
,
email-server,smtp.company.com
email-from,bot@company.com
,
theme-title,Company Assistant
Configuration Philosophy
- Defaults Work: Most settings have sensible defaults
- Local First: Assumes local services, not cloud APIs
- Simple Values: All values are strings, parsed as needed
- No Magic: What you see is what you get
See Also
- config.csv Format - Complete reference
- LLM Configuration - Language model settings
- Parameters - All available parameters
config.csv Format
The config.csv file is the heart of bot configuration in General Bots. Located in each bot’s .gbot package, it uses a simple, human-readable format that anyone can edit.
Why CSV?
We chose CSV because:
- No syntax errors - Just name,value pairs
- Spreadsheet compatible - Edit in Excel, Google Sheets, or any text editor
- Human readable - No brackets, no indentation wars
- Git friendly - Clean diffs, easy merges
Basic Format
name,value
server-port,8080
llm-model,../../../../data/llm/model.gguf
That’s it. No quotes, no special characters, just names and values.
Visual Organization
Use empty rows to group related settings:
name,value
# Server settings
server-host,0.0.0.0
server-port,8080
# LLM settings (see Configuration Management for details)
llm-url,http://localhost:8081
llm-model,model.gguf
# Email settings
email-from,bot@example.com
email-server,smtp.example.com
Key Points
- Case matters: server-port, not Server-Port
- No spaces: around commas or in names
- Paths: can be relative or absolute
- Booleans: use true or false
- Numbers: just write them directly
Quick Example
A complete working configuration:
name,value
server-port,8080
llm-url,http://localhost:8081
llm-model,../../../../data/llm/DeepSeek-R3-Distill-Qwen-1.5B-Q3_K_M.gguf
episodic-memory-threshold,4
Four lines. Bot configured. That’s the General Bots way.
LLM Configuration
Basic LLM settings in config.csv:
- llm-url - Where your LLM server is (local or cloud)
- llm-model - Which model to use
- llm-key - API key if using cloud services like Groq
For detailed LLM configuration including GPU settings, cache, performance tuning, and hardware-specific recommendations, see Configuration Management.
Where to Find Settings
For the complete list of available settings and detailed explanations, see Configuration Management.
Philosophy
Configuration should be boring. You should spend time on your bot’s personality and capabilities, not fighting with config files. CSV keeps it simple so you can focus on what matters.
Configuration Parameters
Complete reference of all available parameters in config.csv.
Server Parameters
Web Server
| Parameter | Description | Default | Type |
|---|---|---|---|
server-host | Server bind address | 0.0.0.0 | IP address |
server-port | Server listen port | 8080 | Number (1-65535) |
sites-root | Generated sites directory | /tmp | Path |
MCP Server
| Parameter | Description | Default | Type |
|---|---|---|---|
mcp-server | Enable MCP protocol server | false | Boolean |
LLM Parameters
Core LLM Settings
| Parameter | Description | Default | Type |
|---|---|---|---|
llm-key | API key for LLM service | none | String |
llm-url | LLM service endpoint | http://localhost:8081 | URL |
llm-model | Model path or identifier | Required | Path/String |
llm-models | Available model aliases for routing | default | Semicolon-separated |
LLM Cache
| Parameter | Description | Default | Type |
|---|---|---|---|
llm-cache | Enable response caching | false | Boolean |
llm-cache-ttl | Cache time-to-live | 3600 | Seconds |
llm-cache-semantic | Semantic similarity cache | true | Boolean |
llm-cache-threshold | Similarity threshold | 0.95 | Float (0-1) |
Embedded LLM Server
| Parameter | Description | Default | Type |
|---|---|---|---|
llm-server | Run embedded server | false | Boolean |
llm-server-path | Server binary path | botserver-stack/bin/llm/build/bin | Path |
llm-server-host | Server bind address | 0.0.0.0 | IP address |
llm-server-port | Server port | 8081 | Number |
llm-server-gpu-layers | GPU offload layers | 0 | Number |
llm-server-n-moe | MoE experts count | 0 | Number |
llm-server-ctx-size | Context size | 4096 | Tokens |
llm-server-n-predict | Max predictions | 1024 | Tokens |
llm-server-parallel | Parallel requests | 6 | Number |
llm-server-cont-batching | Continuous batching | true | Boolean |
llm-server-mlock | Lock in memory | false | Boolean |
llm-server-no-mmap | Disable mmap | false | Boolean |
llm-server-reasoning-format | Reasoning output format for llama.cpp | none | String |
Hardware-Specific LLM Tuning
For RTX 3090 (24GB VRAM)
You can run impressive models with proper configuration:
- DeepSeek-R3-Distill-Qwen-7B: Set llm-server-gpu-layers to 35-40
- Qwen2.5-32B-Instruct (Q4_K_M): Fits with llm-server-gpu-layers at 40-45
- DeepSeek-V3 (with MoE): Set llm-server-n-moe to 2-4 to run even 120B models! MoE only loads the active experts
- Optimization: Use llm-server-ctx-size of 8192 for longer contexts
For RTX 4070/4070Ti (12-16GB VRAM)
Mid-range cards work great with quantized models:
- Qwen2.5-14B (Q4_K_M): Set llm-server-gpu-layers to 25-30
- DeepSeek-R3-Distill-Llama-8B: Fully fits with layers at 32
- Tips: Keep llm-server-ctx-size at 4096 to save VRAM
For CPU-Only (No GPU)
Modern CPUs can still run capable models:
- DeepSeek-R3-Distill-Qwen-1.5B: Fast on CPU, great for testing
- Phi-3-mini (3.8B): Excellent CPU performance
- Settings: Set llm-server-mlock to true to prevent swapping
- Parallel: Increase llm-server-parallel to CPU cores - 2
Recommended Models (GGUF Format)
- Best Overall: DeepSeek-R3-Distill series (1.5B to 70B)
- Best Small: Qwen2.5-3B-Instruct-Q5_K_M
- Best Medium: DeepSeek-R3-Distill-Qwen-14B-Q4_K_M
- Best Large: DeepSeek-V3, Qwen2.5-32B, or GPT2-120B-GGUF (with MoE enabled)
Pro Tip: The llm-server-n-moe parameter is magic for large models - it enables Mixture of Experts, letting you run 120B+ models on consumer hardware by only loading the experts needed for each token!
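An illustrative config.csv fragment for a large MoE model (values are examples, not tuned recommendations):
llm-server-n-moe,4
llm-server-gpu-layers,40
llm-server-ctx-size,8192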
Local vs Cloud: A Practical Note
General Bots excels at local deployment - you own your hardware, your data stays private, and there are no recurring costs. However, if you need cloud inference:
Groq is the speed champion - They use custom LPU (Language Processing Unit) chips instead of GPUs, delivering 10x faster inference than traditional cloud providers. Their hardware is purpose-built for transformers, avoiding the general-purpose overhead of NVIDIA GPUs.
This isn’t about market competition - it’s about architecture. NVIDIA GPUs are designed for many tasks, while Groq’s chips do one thing incredibly well: transformer inference. If speed matters and you’re using cloud, Groq is currently the fastest option available.
For local deployment, stick with General Bots and the configurations above. For cloud bursts or when you need extreme speed, consider Groq’s API with these settings:
llm-url,https://api.groq.com/openai/v1
llm-key,your-groq-api-key
llm-model,mixtral-8x7b-32768
Embedding Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
embedding-url | Embedding service endpoint | http://localhost:8082 | URL |
embedding-model | Embedding model path | Required for KB | Path |
Email Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
email-from | Sender address | Required for email | Email |
email-server | SMTP hostname | Required for email | Hostname |
email-port | SMTP port | 587 | Number |
email-user | SMTP username | Required for email | String |
email-pass | SMTP password | Required for email | String |
email-read-pixel | Enable read tracking pixel in HTML emails | false | Boolean |
Email Read Tracking
When email-read-pixel is enabled, a 1x1 transparent tracking pixel is automatically injected into HTML emails sent via the API. This allows you to:
- Track when emails are opened
- See how many times an email was opened
- Get the approximate location (IP) and device (user agent) of the reader
API Endpoints for tracking:
| Endpoint | Method | Description |
|---|---|---|
/api/email/tracking/pixel/{tracking_id} | GET | Serves the tracking pixel (called by email client) |
/api/email/tracking/status/{tracking_id} | GET | Get read status for a specific email |
/api/email/tracking/list | GET | List all sent emails with tracking status |
/api/email/tracking/stats | GET | Get overall tracking statistics |
Example configuration:
email-read-pixel,true
server-url,https://yourdomain.com
Note: The server-url parameter is used to generate the tracking pixel URL. Make sure it’s accessible from the recipient’s email client.
Privacy considerations: Email tracking should be used responsibly. Consider disclosing tracking in your email footer for transparency.
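For example, assuming server-url is https://yourdomain.com and you have a tracking ID from a sent email, you can query the tracking endpoints with curl:
curl https://yourdomain.com/api/email/tracking/status/your-tracking-id
curl https://yourdomain.com/api/email/tracking/stats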
Theme Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
theme-color1 | Primary color | Not set | Hex color |
theme-color2 | Secondary color | Not set | Hex color |
theme-logo | Logo URL | Not set | URL |
theme-title | Bot display title | Not set | String |
bot-name | Bot display name | Not set | String |
welcome-message | Initial greeting message | Not set | String |
Custom Database Parameters
These parameters configure external database connections for use with BASIC keywords like MariaDB/MySQL connections.
| Parameter | Description | Default | Type |
|---|---|---|---|
custom-server | Database server hostname | localhost | Hostname |
custom-port | Database port | 5432 | Number |
custom-database | Database name | Not set | String |
custom-username | Database user | Not set | String |
custom-password | Database password | Not set | String |
Website Crawling Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
website-expires | Cache expiration for crawled content | 1d | Duration |
website-max-depth | Maximum crawl depth | 3 | Number |
website-max-pages | Maximum pages to crawl | 100 | Number |
Image Generator Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
image-generator-model | Diffusion model path | Not set | Path |
image-generator-steps | Inference steps | 4 | Number |
image-generator-width | Output width | 512 | Pixels |
image-generator-height | Output height | 512 | Pixels |
image-generator-gpu-layers | GPU offload layers | 20 | Number |
image-generator-batch-size | Batch size | 1 | Number |
Video Generator Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
video-generator-model | Video model path | Not set | Path |
video-generator-frames | Frames to generate | 24 | Number |
video-generator-fps | Frames per second | 8 | Number |
video-generator-width | Output width | 320 | Pixels |
video-generator-height | Output height | 576 | Pixels |
video-generator-gpu-layers | GPU offload layers | 15 | Number |
video-generator-batch-size | Batch size | 1 | Number |
BotModels Service Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
botmodels-enabled | Enable BotModels service | true | Boolean |
botmodels-host | BotModels bind address | 0.0.0.0 | IP address |
botmodels-port | BotModels port | 8085 | Number |
Generator Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
default-generator | Default content generator | all | String |
Teams Channel Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
teams-app-id | Microsoft Teams App ID | Not set | String |
teams-app-password | Microsoft Teams App Password | Not set | String |
teams-tenant-id | Microsoft Teams Tenant ID | Not set | String |
teams-bot-id | Microsoft Teams Bot ID | Not set | String |
SMS Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
sms-provider | SMS provider (twilio, aws, vonage, messagebird, custom) | Not set | String |
sms-fallback-provider | Fallback provider if primary fails | Not set | String |
Twilio Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
twilio-account-sid | Twilio Account SID | Not set | String |
twilio-auth-token | Twilio Auth Token | Not set | String |
twilio-phone-number | Twilio phone number (E.164 format) | Not set | String |
twilio-messaging-service-sid | Messaging Service SID for routing | Not set | String |
twilio-status-callback | Webhook URL for delivery status | Not set | URL |
AWS SNS Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
aws-access-key-id | AWS Access Key ID | Not set | String |
aws-secret-access-key | AWS Secret Access Key | Not set | String |
aws-region | AWS Region (e.g., us-east-1) | Not set | String |
aws-sns-sender-id | Sender ID (alphanumeric) | Not set | String |
aws-sns-message-type | Promotional or Transactional | Transactional | String |
Vonage (Nexmo) Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
vonage-api-key | Vonage API Key | Not set | String |
vonage-api-secret | Vonage API Secret | Not set | String |
vonage-from | Sender number or alphanumeric ID | Not set | String |
vonage-callback-url | Delivery receipt webhook | Not set | URL |
MessageBird Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
messagebird-access-key | MessageBird Access Key | Not set | String |
messagebird-originator | Sender number or name | Not set | String |
messagebird-report-url | Status report webhook | Not set | URL |
Custom Provider Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
sms-custom-url | API endpoint URL | Not set | URL |
sms-custom-method | HTTP method (POST, GET) | POST | String |
sms-custom-auth-header | Authorization header value | Not set | String |
sms-custom-body-template | JSON body with {{to}}, {{message}} placeholders | Not set | String |
sms-custom-from | Sender number for custom provider | Not set | String |
Example: Twilio Configuration
sms-provider,twilio
twilio-account-sid,ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
twilio-auth-token,your_auth_token
twilio-phone-number,+15551234567
Example: AWS SNS Configuration
sms-provider,aws
aws-access-key-id,AKIAIOSFODNN7EXAMPLE
aws-secret-access-key,wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws-region,us-east-1
aws-sns-message-type,Transactional
See SMS Provider Configuration for detailed setup instructions.
WhatsApp Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
whatsapp-api-key | Access token from Meta Business | Not set | String |
whatsapp-phone-number-id | Phone number ID from WhatsApp Business | Not set | String |
whatsapp-verify-token | Token for webhook verification | Not set | String |
whatsapp-business-account-id | WhatsApp Business Account ID | Not set | String |
whatsapp-api-version | Graph API version | v17.0 | String |
Example: WhatsApp Configuration
whatsapp-api-key,EAABs...your_access_token
whatsapp-phone-number-id,123456789012345
whatsapp-verify-token,my-secret-verify-token
whatsapp-business-account-id,987654321098765
See WhatsApp Channel Configuration for detailed setup instructions.
Multi-Agent Parameters
Agent-to-Agent (A2A) Communication
| Parameter | Description | Default | Type |
|---|---|---|---|
a2a-enabled | Enable agent-to-agent communication | true | Boolean |
a2a-timeout | Default delegation timeout | 30 | Seconds |
a2a-max-hops | Maximum delegation chain depth | 5 | Number |
a2a-retry-count | Retry attempts on failure | 3 | Number |
a2a-queue-size | Maximum pending messages | 100 | Number |
a2a-protocol-version | A2A protocol version | 1.0 | String |
a2a-persist-messages | Persist A2A messages to database | false | Boolean |
Bot Reflection
| Parameter | Description | Default | Type |
|---|---|---|---|
bot-reflection-enabled | Enable bot self-analysis | true | Boolean |
bot-reflection-interval | Messages between reflections | 10 | Number |
bot-reflection-prompt | Custom reflection prompt | (none) | String |
bot-reflection-types | Reflection types to perform | ConversationQuality | Semicolon-separated |
bot-improvement-auto-apply | Auto-apply suggested improvements | false | Boolean |
bot-improvement-threshold | Score threshold for improvements (0-10) | 6.0 | Float |
Reflection Types
Available values for bot-reflection-types:
- ConversationQuality - Analyze conversation quality and user satisfaction
- ResponseAccuracy - Analyze response accuracy and relevance
- ToolUsage - Analyze tool usage effectiveness
- KnowledgeRetrieval - Analyze knowledge retrieval performance
- Performance - Analyze overall bot performance
Example:
bot-reflection-enabled,true
bot-reflection-interval,10
bot-reflection-types,ConversationQuality;ResponseAccuracy;ToolUsage
bot-improvement-auto-apply,false
bot-improvement-threshold,7.0
Memory Parameters
User Memory (Cross-Bot)
| Parameter | Description | Default | Type |
|---|---|---|---|
user-memory-enabled | Enable user-level memory | true | Boolean |
user-memory-max-keys | Maximum keys per user | 1000 | Number |
user-memory-default-ttl | Default time-to-live (0=no expiry) | 0 | Seconds |
Episodic Memory (Context Compaction)
| Parameter | Description | Default | Type |
|---|---|---|---|
episodic-memory-enabled | Enable episodic memory system | true | Boolean |
episodic-memory-threshold | Exchanges before compaction triggers | 4 | Number |
episodic-memory-history | Recent exchanges to keep in full | 2 | Number |
episodic-memory-model | Model for summarization | fast | String |
episodic-memory-max-episodes | Maximum episodes per user | 100 | Number |
episodic-memory-retention-days | Days to retain episodes | 365 | Number |
episodic-memory-auto-summarize | Enable automatic summarization | true | Boolean |
Episodic memory automatically manages conversation context to stay within LLM token limits. When conversation exchanges exceed episodic-memory-threshold, older messages are summarized and only the last episodic-memory-history exchanges are kept in full. See Chapter 03 - Episodic Memory for details.
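For example, to compact later and keep three recent exchanges verbatim (illustrative values):
episodic-memory-threshold,6
episodic-memory-history,3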
Model Routing Parameters
These parameters configure multi-model routing for different task types. Requires multiple llama.cpp server instances.
| Parameter | Description | Default | Type |
|---|---|---|---|
llm-models | Available model aliases | default | Semicolon-separated |
model-routing-strategy | Routing strategy (manual/auto/load-balanced/fallback) | auto | String |
model-default | Default model alias | default | String |
model-fast | Model for fast/simple tasks | (configured) | Path/String |
model-quality | Model for quality/complex tasks | (configured) | Path/String |
model-code | Model for code generation | (configured) | Path/String |
model-fallback-enabled | Enable automatic fallback | true | Boolean |
model-fallback-order | Order to try on failure | quality,fast,local | Comma-separated |
Multi-Model Example
llm-models,default;fast;quality;code
llm-url,http://localhost:8081
model-routing-strategy,auto
model-default,fast
model-fallback-enabled,true
model-fallback-order,quality,fast
Hybrid RAG Search Parameters
General Bots uses hybrid search combining dense (embedding) and sparse (BM25 keyword) search for optimal retrieval. The BM25 implementation is powered by Tantivy, a full-text search engine library similar to Apache Lucene.
| Parameter | Description | Default | Type |
|---|---|---|---|
rag-hybrid-enabled | Enable hybrid dense+sparse search | true | Boolean |
rag-dense-weight | Weight for semantic results | 0.7 | Float (0-1) |
rag-sparse-weight | Weight for keyword results | 0.3 | Float (0-1) |
rag-reranker-enabled | Enable LLM reranking | false | Boolean |
rag-reranker-model | Model for reranking | cross-encoder/ms-marco-MiniLM-L-6-v2 | String |
rag-reranker-top-n | Candidates for reranking | 20 | Number |
rag-max-results | Maximum results to return | 10 | Number |
rag-min-score | Minimum relevance score threshold | 0.0 | Float (0-1) |
rag-rrf-k | RRF smoothing constant | 60 | Number |
rag-cache-enabled | Enable search result caching | true | Boolean |
rag-cache-ttl | Cache time-to-live | 3600 | Seconds |
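The rag-rrf-k constant and the two weights come together in Reciprocal Rank Fusion, which merges the dense and sparse result lists by rank. A minimal Rust sketch of the weighted-RRF idea (illustrative only, not the actual BotServer implementation):
fn rrf_score(
    dense_rank: Option<usize>,  // 0-based rank in semantic results, if present
    sparse_rank: Option<usize>, // 0-based rank in BM25 results, if present
    dense_weight: f64,          // rag-dense-weight
    sparse_weight: f64,         // rag-sparse-weight
    k: f64,                     // rag-rrf-k smoothing constant
) -> f64 {
    let term = |rank: Option<usize>, w: f64| rank.map_or(0.0, |r| w / (k + r as f64 + 1.0));
    term(dense_rank, dense_weight) + term(sparse_rank, sparse_weight)
}

fn main() {
    // A document ranked 1st in dense search and 4th in sparse search, k = 60.
    println!("fused score: {:.5}", rrf_score(Some(0), Some(3), 0.7, 0.3, 60.0));
}
Documents that rank high in both lists accumulate both terms, which is why the defaults favor semantic rank (0.7) while still rewarding exact keyword hits.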
BM25 Sparse Search (Tantivy)
BM25 is a keyword-based ranking algorithm that excels at finding exact term matches. It’s powered by Tantivy when the vectordb feature is enabled.
| Parameter | Description | Default | Type |
|---|---|---|---|
bm25-enabled | Enable/disable BM25 sparse search | true | Boolean |
bm25-k1 | Term frequency saturation (0.5-3.0 typical) | 1.2 | Float |
bm25-b | Document length normalization (0.0-1.0) | 0.75 | Float |
bm25-stemming | Apply word stemming (running→run) | true | Boolean |
bm25-stopwords | Filter common words (the, a, is) | true | Boolean |
Switching Search Modes
Hybrid Search (Default - Best for most use cases)
bm25-enabled,true
rag-dense-weight,0.7
rag-sparse-weight,0.3
Uses both semantic understanding AND keyword matching. Best for general queries.
Dense Only (Semantic Search)
bm25-enabled,false
rag-dense-weight,1.0
rag-sparse-weight,0.0
Uses only embedding-based search. Faster, good for conceptual/semantic queries where exact words don’t matter.
Sparse Only (Keyword Search)
bm25-enabled,true
rag-dense-weight,0.0
rag-sparse-weight,1.0
Uses only BM25 keyword matching. Good for exact term searches, technical documentation, or when embeddings aren’t available.
BM25 Parameter Tuning
The k1 and b parameters control BM25 behavior:
- bm25-k1 (Term Saturation): Controls how much additional term occurrences contribute to the score
  - Lower values (0.5-1.0): Diminishing returns for repeated terms
  - Higher values (1.5-2.0): More weight to documents with many term occurrences
  - Default 1.2 works well for most content
- bm25-b (Length Normalization): Controls the document length penalty
  - 0.0: No length penalty (long documents scored equally)
  - 1.0: Full length normalization (strongly penalizes long documents)
  - Default 0.75 balances length fairness
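To make their roles concrete, here is the standard BM25 term-score formula as a small Rust sketch (a reference formula, not BotServer's Tantivy internals):
fn bm25_term_score(tf: f64, idf: f64, doc_len: f64, avg_doc_len: f64, k1: f64, b: f64) -> f64 {
    // k1 caps how quickly repeated terms saturate; b scales the length penalty.
    let length_norm = 1.0 - b + b * (doc_len / avg_doc_len);
    idf * (tf * (k1 + 1.0)) / (tf + k1 * length_norm)
}

fn main() {
    // Same term frequency and idf, but the longer document scores lower when b > 0.
    println!("{:.3}", bm25_term_score(3.0, 2.0, 100.0, 150.0, 1.2, 0.75));
    println!("{:.3}", bm25_term_score(3.0, 2.0, 400.0, 150.0, 1.2, 0.75));
}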
Tuning for specific content:
# For short documents (tweets, titles)
bm25-b,0.3
# For long documents (articles, manuals)
bm25-b,0.9
# For code search (exact matches important)
bm25-k1,1.5
bm25-stemming,false
Code Sandbox Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
sandbox-enabled | Enable code sandbox | true | Boolean |
sandbox-runtime | Isolation backend (lxc/docker/firecracker/process) | lxc | String |
sandbox-timeout | Maximum execution time | 30 | Seconds |
sandbox-memory-mb | Memory limit in megabytes | 256 | MB |
sandbox-cpu-percent | CPU usage limit | 50 | Percent |
sandbox-network | Allow network access | false | Boolean |
sandbox-python-packages | Pre-installed Python packages | (none) | Comma-separated |
sandbox-allowed-paths | Accessible filesystem paths | /data,/tmp | Comma-separated |
Example: Python Sandbox
sandbox-enabled,true
sandbox-runtime,lxc
sandbox-timeout,60
sandbox-memory-mb,512
sandbox-cpu-percent,75
sandbox-network,false
sandbox-python-packages,numpy,pandas,requests,matplotlib
sandbox-allowed-paths,/data,/tmp,/uploads
SSE Streaming Parameters
| Parameter | Description | Default | Type |
|---|---|---|---|
sse-enabled | Enable Server-Sent Events | true | Boolean |
sse-heartbeat | Heartbeat interval | 30 | Seconds |
sse-max-connections | Maximum concurrent connections | 1000 | Number |
Parameter Types
Boolean
Values: true or false (case-sensitive)
Number
Integer values, must be within valid ranges:
- Ports: 1-65535
- Tokens: Positive integers
- Percentages: 0-100
Float
Decimal values:
- Thresholds: 0.0 to 1.0
- Weights: 0.0 to 1.0
Path
File system paths:
- Relative: ../../../../data/model.gguf
- Absolute: /opt/models/model.gguf
URL
Valid URLs:
- HTTP: http://localhost:8081
- HTTPS: https://api.example.com
String
Any text value (no quotes needed in CSV)
Email
Valid email format: user@domain.com
Hex Color
HTML color codes: #RRGGBB format
Semicolon-separated
Multiple values separated by semicolons: value1;value2;value3
Comma-separated
Multiple values separated by commas: value1,value2,value3
Required vs Optional
Always Required
- None - all parameters have defaults or are optional
Required for Features
- LLM: llm-model must be set
- Email: email-from, email-server, email-user
- Embeddings: embedding-model for knowledge base
- Custom DB: custom-database if using an external database
Configuration Precedence
- Built-in defaults (hardcoded)
- config.csv values (override defaults)
- Environment variables (if implemented, override config)
Special Values
- none - Explicitly no value (for llm-key)
- Empty string - Unset/use default
- false - Feature disabled
- true - Feature enabled
Performance Tuning
For Local Models
llm-server-ctx-size,8192
llm-server-n-predict,2048
llm-server-parallel,4
llm-cache,true
llm-cache-ttl,7200
For Production
llm-server-cont-batching,true
llm-cache-semantic,true
llm-cache-threshold,0.90
llm-server-parallel,8
sse-max-connections,5000
For Low Memory
llm-server-ctx-size,2048
llm-server-n-predict,512
llm-server-mlock,false
llm-server-no-mmap,false
llm-cache,false
sandbox-memory-mb,128
For Multi-Agent Systems
a2a-enabled,true
a2a-timeout,30
a2a-max-hops,5
a2a-retry-count,3
a2a-persist-messages,true
bot-reflection-enabled,true
bot-reflection-interval,10
user-memory-enabled,true
For Hybrid RAG
rag-hybrid-enabled,true
rag-dense-weight,0.7
rag-sparse-weight,0.3
rag-reranker-enabled,true
rag-max-results,10
rag-min-score,0.3
rag-cache-enabled,true
bm25-enabled,true
bm25-k1,1.2
bm25-b,0.75
For Dense-Only Search (Faster)
bm25-enabled,false
rag-dense-weight,1.0
rag-sparse-weight,0.0
rag-max-results,10
For Code Execution
sandbox-enabled,true
sandbox-runtime,lxc
sandbox-timeout,30
sandbox-memory-mb,512
sandbox-network,false
sandbox-python-packages,numpy,pandas,requests
Validation Rules
- Paths: Model files must exist
- URLs: Must be valid format
- Ports: Must be 1-65535
- Emails: Must contain @ and domain
- Colors: Must be valid hex format
- Booleans: Exactly true or false
- Weights: Must sum to 1.0 (e.g., rag-dense-weight + rag-sparse-weight)
LLM Configuration
Configuration for Language Model integration in BotServer, supporting both local GGUF models and external API services.
Local Model Configuration
BotServer is designed to work with local GGUF models by default. The minimal configuration requires only a few settings in your config.csv:
llm-key,none
llm-url,http://localhost:8081
llm-model,../../../../data/llm/DeepSeek-R3-Distill-Qwen-1.5B-Q3_K_M.gguf
Model Path
The llm-model parameter accepts relative paths like ../../../../data/llm/model.gguf, absolute paths like /opt/models/model.gguf, or model names when using external APIs like gpt-5.
Supported Model Formats
BotServer supports GGUF quantized models for CPU and GPU inference. Quantization levels include Q3_K_M, Q4_K_M, and Q5_K_M for reduced memory usage with acceptable quality trade-offs, while F16 and F32 provide full precision for maximum quality.
LLM Server Configuration
Running Embedded Server
BotServer can run its own LLM server for local inference:
llm-server,true
llm-server-path,botserver-stack/bin/llm/build/bin
llm-server-host,0.0.0.0
llm-server-port,8081
Server Performance Parameters
Fine-tune server performance based on your hardware capabilities:
llm-server-gpu-layers,0
llm-server-ctx-size,4096
llm-server-n-predict,1024
llm-server-parallel,6
llm-server-cont-batching,true
| Parameter | Description | Impact |
|---|---|---|
llm-server-gpu-layers | Layers to offload to GPU | 0 = CPU only, higher = more GPU |
llm-server-ctx-size | Context window size | More context = more memory |
llm-server-n-predict | Max tokens to generate | Limits response length |
llm-server-parallel | Concurrent requests | Higher = more throughput |
llm-server-cont-batching | Continuous batching | Improves multi-user performance |
Memory Management
Memory settings control how the model interacts with system RAM:
llm-server-mlock,false
llm-server-no-mmap,false
The mlock option locks the model in RAM to prevent swapping, which improves performance but requires sufficient memory. The no-mmap option disables memory mapping and loads the entire model into RAM, using more memory but potentially improving access patterns.
Cache Configuration
Basic Cache Settings
Caching reduces repeated LLM calls for identical inputs, significantly improving response times and reducing API costs:
llm-cache,false
llm-cache-ttl,3600
Semantic Cache
Semantic caching matches similar queries, not just identical ones, providing cache hits even when users phrase questions differently:
llm-cache-semantic,true
llm-cache-threshold,0.95
The threshold parameter controls how similar queries must be to trigger a cache hit. A value of 0.95 requires 95% similarity. Lower thresholds produce more cache hits but may return less accurate cached responses.
External API Configuration
Groq and OpenAI-Compatible APIs
For cloud inference, Groq offers the fastest performance among major providers:
llm-key,gsk-your-groq-api-key
llm-url,https://api.groq.com/openai/v1
llm-model,mixtral-8x7b-32768
Local API Servers
When running your own inference server or using another local service:
llm-key,none
llm-url,http://localhost:8081
llm-model,local-model-name
Configuration Examples
Minimal Local Setup
The simplest configuration for getting started with local models:
name,value
llm-url,http://localhost:8081
llm-model,../../../../data/llm/model.gguf
High-Performance Local
Optimized for maximum throughput on capable hardware:
name,value
llm-server,true
llm-server-gpu-layers,32
llm-server-ctx-size,8192
llm-server-parallel,8
llm-server-cont-batching,true
llm-cache,true
llm-cache-semantic,true
Low-Resource Setup
Configured for systems with limited RAM or CPU:
name,value
llm-server-ctx-size,2048
llm-server-n-predict,512
llm-server-parallel,2
llm-cache,false
llm-server-mlock,false
External API
Using a cloud provider for inference:
name,value
llm-key,sk-...
llm-url,https://api.anthropic.com
llm-model,claude-sonnet-4.5
llm-cache,true
llm-cache-ttl,7200
Performance Tuning
For Responsiveness
When response speed is the priority, decrease llm-server-ctx-size and llm-server-n-predict to reduce processing time. Enable both llm-cache and llm-cache-semantic to serve repeated queries instantly.
For Quality
When output quality matters most, increase llm-server-ctx-size and llm-server-n-predict to give the model more context and generation headroom. Use higher quantization models like Q5_K_M or F16 for better accuracy. Either disable semantic cache entirely or raise the threshold to avoid returning imprecise cached responses.
For Multiple Users
Supporting concurrent users requires enabling llm-server-cont-batching and increasing llm-server-parallel to handle multiple requests simultaneously. Enable caching to reduce redundant inference calls. If available, GPU offloading significantly improves throughput under load.
Model Selection Guidelines
Small Models (1-3B parameters)
Small models like DeepSeek-R3-Distill-Qwen-1.5B deliver fast responses with low memory usage. They work well for simple tasks, quick interactions, and resource-constrained environments.
Medium Models (7-13B parameters)
Medium-sized models such as Llama-2-7B and Mistral-7B provide balanced performance suitable for general-purpose applications. They require moderate memory but handle a wide range of tasks competently.
Large Models (30B+ parameters)
Large models like Llama-2-70B and Mixtral-8x7B offer the best quality for complex reasoning tasks. They require substantial memory and compute resources but excel at nuanced understanding and generation.
Troubleshooting
Model Won’t Load
If the model fails to load, first verify the file path exists and is accessible. Check that your system has sufficient RAM for the model size. Ensure the GGUF file version is compatible with your llama.cpp build.
Slow Responses
Slow generation typically indicates resource constraints. Reduce context size, enable caching to avoid redundant inference, use GPU offloading if hardware permits, or switch to a smaller quantized model.
Out of Memory
Memory errors require reducing resource consumption. Lower llm-server-ctx-size and llm-server-parallel values. Switch to more aggressively quantized models (Q3 instead of Q5). Disable llm-server-mlock to allow the OS to manage memory more flexibly.
Connection Refused
Connection errors usually indicate server configuration issues. Verify llm-server is set to true if expecting BotServer to run the server. Check that the configured port is not already in use by another process. Ensure firewall rules allow connections on the specified port.
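A quick check with standard tools (llama.cpp's server exposes a /health endpoint; the port here assumes the default llm-server-port):
curl http://localhost:8081/health
ss -tlnp | grep 8081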
Best Practices
Start with smaller models and scale up only as needed, since larger models consume more resources without always providing proportionally better results. Enable caching for any production deployment to reduce costs and improve response times. Monitor RAM usage during operation to catch memory pressure before it causes problems. Test model responses thoroughly before deploying to production to ensure quality meets requirements. Document which models you’re using and their performance characteristics. Track changes to your config.csv in version control to maintain a history of configuration adjustments.
Configuration Management
Configuration in General Bots is designed to be simple and transparent. Each bot uses a config.csv file for settings, with additional environment variables for system-level configuration.
The config.csv File
Located in your bot’s .gbot package, this file controls all bot-specific settings using simple name-value pairs.
File Format
name,value
setting_name,setting_value
another_setting,another_value
- Empty rows are used for visual grouping
- No quotes needed for string values
- Case-sensitive names
- Comments not supported (keep it simple)
Core Configuration Sections
Server Configuration
server-host,0.0.0.0
server-port,8080
sites-root,/tmp
| Name | Description | Default | Example |
|---|---|---|---|
server-host | Bind address for the web server | 0.0.0.0 | 0.0.0.0 |
server-port | Port for the web interface | 8080 | 8080 |
sites-root | Directory for generated sites | /tmp | /tmp |
LLM Configuration - Overview
For detailed LLM configuration, see the tables below. The basic settings are:
llm-key,none
llm-url,http://localhost:8081
llm-model,../../../../data/llm/DeepSeek-R3-Distill-Qwen-1.5B-Q3_K_M.gguf
Core LLM Settings
| Name | Description | Default | Example |
|---|---|---|---|
llm-key | API key for LLM service | none | gsk-... for Groq |
llm-url | LLM service endpoint | http://localhost:8081 | https://api.groq.com/openai/v1 |
llm-model | Model path or name | Required | mixtral-8x7b-32768 |
Model Path Options
- Local GGUF: ../../../../data/llm/model.gguf
- Absolute path: /opt/models/model.gguf
- Cloud model name: mixtral-8x7b-32768 (for Groq)
Supported Formats
- GGUF: Quantized models (Q3_K_M, Q4_K_M, Q5_K_M, Q8_0)
- API Models: Any Groq or OpenAI-compatible model
LLM Cache Settings
llm-cache,false
llm-cache-ttl,3600
llm-cache-semantic,true
llm-cache-threshold,0.95
| Name | Description | Default | Type |
|---|---|---|---|
llm-cache | Enable response caching | false | Boolean |
llm-cache-ttl | Cache time-to-live in seconds | 3600 | Number |
llm-cache-semantic | Use semantic similarity | true | Boolean |
llm-cache-threshold | Similarity threshold for cache hits | 0.95 | Float |
Cache Strategy Tips:
- Enable for production to reduce API costs
- Semantic cache finds similar (not just identical) queries
- Lower threshold (0.90) = more hits but less precision
- Higher threshold (0.98) = fewer hits but exact matches
Context Management
episodic-memory-threshold,4
episodic-memory-history,2
| Name | Description | Default | Range |
|---|---|---|---|
episodic-memory-threshold | Messages before compaction | 4 | 1-10 |
episodic-memory-history | Messages to keep in history | Not set | 1-20 |
Embedding Configuration
embedding-url,http://localhost:8082
embedding-model,../../../../data/llm/bge-small-en-v1.5-f32.gguf
| Name | Description | Default | Type |
|---|---|---|---|
embedding-url | Embedding service endpoint | http://localhost:8082 | URL |
embedding-model | Path to embedding model | Required for KB | Path |
LLM Server Settings (When Self-Hosting)
llm-server,true
llm-server-path,botserver-stack/bin/llm/build/bin
llm-server-host,0.0.0.0
llm-server-port,8081
llm-server-gpu-layers,0
llm-server-n-moe,0
llm-server-ctx-size,4096
llm-server-n-predict,1024
llm-server-parallel,6
llm-server-cont-batching,true
llm-server-mlock,false
llm-server-no-mmap,false
Performance Parameters
| Parameter | Description | Default | Impact |
|---|---|---|---|
llm-server-gpu-layers | Layers to offload to GPU | 0 | 0=CPU only, higher=more GPU usage |
llm-server-n-moe | MoE experts count | 0 | Enables 120B+ models on consumer GPUs |
llm-server-ctx-size | Context window (tokens) | 4096 | More context = more memory |
llm-server-n-predict | Max output tokens | 1024 | Limits response length |
llm-server-parallel | Concurrent requests | 6 | Higher = more throughput |
llm-server-cont-batching | Continuous batching | true | Better multi-user performance |
llm-server-mlock | Lock model in RAM | false | Prevents swapping to disk |
llm-server-no-mmap | Disable memory mapping | false | Uses more RAM but may be faster |
Hardware-Specific Settings
RTX 3090 (24GB VRAM)
- Set llm-server-gpu-layers to 35-45 for 7B-32B models
- Set llm-server-n-moe to 2-4 for 120B+ models
- Can run DeepSeek-V3 with proper MoE settings
RTX 4070/Ti (12-16GB)
- Set llm-server-gpu-layers to 25-30 for 7B-14B models
- Keep llm-server-ctx-size at 4096 to save VRAM
CPU-Only Setup
- Keep llm-server-gpu-layers at 0
- Enable llm-server-mlock to prevent swapping
- Set llm-server-parallel to CPU cores - 2
Email Configuration
email-from,from@domain.com
email-server,mail.domain.com
email-port,587
email-user,user@domain.com
email-pass,password
All email parameters are required if you want to send emails from your bot.
Custom Database (Optional)
custom-server,localhost
custom-port,5432
custom-database,mycustomdb
custom-username,dbuser
custom-password,dbpass
Configuration Examples
Minimal Configuration
name,value
server-port,8080
llm-url,http://localhost:8081
llm-model,../../../../data/llm/model.gguf
Production Configuration (Groq Cloud)
name,value
,
server-host,0.0.0.0
server-port,443
sites-root,/var/www/sites
,
# Groq is 10x faster than traditional cloud providers
llm-key,gsk-your-groq-api-key
llm-url,https://api.groq.com/openai/v1
llm-model,mixtral-8x7b-32768
,
llm-cache,true
llm-cache-ttl,7200
llm-cache-semantic,true
llm-cache-threshold,0.95
,
episodic-memory-threshold,6
,
email-from,bot@company.com
email-server,smtp.company.com
email-port,587
email-user,bot@company.com
email-pass,secure-password
Local Development (Self-Hosted)
name,value
,
server-port,3000
,
# Run your own LLM server
llm-server,true
llm-server-gpu-layers,35
llm-server-ctx-size,8192
llm-server-n-predict,2048
llm-model,../../../../data/llm/DeepSeek-R3-Distill-Qwen-7B-Q4_K_M.gguf
,
# Disable cache for development
llm-cache,false
episodic-memory-threshold,2
Configuration Priority
Settings are applied in this order (later overrides earlier):
- Default values in code
- config.csv settings
Best Practices
- Keep it Simple: Only configure what you need to change
- Use Groups: Empty rows make the file readable
- Test Locally: Verify settings before production
- Secure Secrets: Use environment variables for passwords in production
- Document Changes: Comment significant changes in version control
Validation
The system validates configuration on startup:
- Missing required values cause clear error messages
- Invalid URLs or paths are detected early
- Port conflicts are reported
- Model file existence is verified
Hot Reload
Some settings support hot reload without restart:
- Cache settings
- Context parameters
- Email configuration
Others require restart:
- Server ports
- LLM model changes
- Database connections
Troubleshooting
Common Issues
Port Already in Use
- Change server-port to an available port
- Check for other services on the same port
Model Not Found
- Verify the path in llm-model is correct
- Ensure the GGUF file exists
- Use absolute paths if relative paths fail
LLM Server Won’t Start
- Check llm-server-gpu-layers doesn't exceed your GPU capability
- Reduce llm-server-ctx-size if out of memory
- Set llm-server-gpu-layers to 0 for CPU-only
- Verify the model file exists at the specified path
- Check available VRAM with nvidia-smi (if using GPU)
Cache Not Working
- Ensure llm-cache is set to true
- Check llm-cache-threshold isn't too high (0.95 is usually good)
- Verify Valkey/Redis is running
Quick Model Recommendations
Best Models by Hardware
24GB+ VRAM (RTX 3090, 4090)
- DeepSeek-V3 (with MoE enabled)
- Qwen2.5-32B-Instruct-Q4_K_M
- DeepSeek-R3-Distill-Qwen-14B (runs fast with room to spare)
12-16GB VRAM (RTX 4070, 4070Ti)
- DeepSeek-R3-Distill-Llama-8B
- Qwen2.5-14B-Q4_K_M
- Mistral-7B-Instruct-Q5_K_M
8GB VRAM or CPU-Only
- DeepSeek-R3-Distill-Qwen-1.5B
- Phi-3-mini-4k-instruct
- Qwen2.5-3B-Instruct-Q5_K_M
Cloud API (Fastest)
- Groq: mixtral-8x7b-32768
- Groq: llama-3.1-70b-versatile
Summary
General Bots configuration is intentionally simple - a CSV file with name-value pairs. No complex YAML, no nested JSON, just straightforward settings that anyone can edit. Start with minimal configuration and add settings as needed.
For LLM configuration, the key decision is local vs cloud:
- Local: Full control, no recurring costs, complete privacy
- Cloud (Groq): 10x faster inference, pay-per-use, no hardware needed
Drive Integration
Multimodal Configuration
General Bots integrates with botmodels—a Python service for multimodal AI tasks—to enable image generation, video creation, audio synthesis, and vision capabilities directly from BASIC scripts.
Architecture
┌─────────────┐ HTTPS ┌─────────────┐
│ botserver │ ────────────▶ │ botmodels │
│ (Rust) │ │ (Python) │
└─────────────┘ └─────────────┘
│ │
│ BASIC Keywords │ AI Models
│ - IMAGE │ - Stable Diffusion
│ - VIDEO │ - Zeroscope
│ - AUDIO │ - TTS/Whisper
│ - SEE │ - BLIP2
When a BASIC script calls a multimodal keyword, botserver forwards the request to botmodels, which runs the appropriate AI model and returns the generated content.
Configuration
Add these settings to your bot’s config.csv file to enable multimodal capabilities.
BotModels Service
| Key | Default | Description |
|---|---|---|
botmodels-enabled | false | Enable botmodels integration |
botmodels-host | 0.0.0.0 | Host address for botmodels service |
botmodels-port | 8085 | Port for botmodels service |
botmodels-api-key | — | API key for authentication |
botmodels-https | false | Use HTTPS for connection |
Image Generation
| Key | Default | Description |
|---|---|---|
image-generator-model | — | Path to image generation model |
image-generator-steps | 4 | Inference steps (more = higher quality, slower) |
image-generator-width | 512 | Output image width in pixels |
image-generator-height | 512 | Output image height in pixels |
image-generator-gpu-layers | 20 | Layers to offload to GPU |
image-generator-batch-size | 1 | Batch size for generation |
Video Generation
| Key | Default | Description |
|---|---|---|
video-generator-model | — | Path to video generation model |
video-generator-frames | 24 | Number of frames to generate |
video-generator-fps | 8 | Output frames per second |
video-generator-width | 320 | Output video width in pixels |
video-generator-height | 576 | Output video height in pixels |
video-generator-gpu-layers | 15 | Layers to offload to GPU |
video-generator-batch-size | 1 | Batch size for generation |
Example Configuration
key,value
botmodels-enabled,true
botmodels-host,0.0.0.0
botmodels-port,8085
botmodels-api-key,your-secret-key
botmodels-https,false
image-generator-model,../../../../data/diffusion/sd_turbo_f16.gguf
image-generator-steps,4
image-generator-width,512
image-generator-height,512
image-generator-gpu-layers,20
video-generator-model,../../../../data/diffusion/zeroscope_v2_576w
video-generator-frames,24
video-generator-fps,8
BASIC Keywords
Once configured, these keywords become available in your scripts.
IMAGE
Generate an image from a text prompt:
file = IMAGE "a sunset over mountains with purple clouds"
SEND FILE TO user, file
The keyword returns a path to the generated image file.
VIDEO
Generate a video from a text prompt:
file = VIDEO "a rocket launching into space"
SEND FILE TO user, file
Video generation is more resource-intensive than image generation. Expect longer processing times.
AUDIO
Generate speech audio from text:
file = AUDIO "Hello, welcome to our service!"
SEND FILE TO user, file
SEE
Analyze an image or video and get a description:
' Describe an image
caption = SEE "/path/to/image.jpg"
TALK caption
' Describe a video
description = SEE "/path/to/video.mp4"
TALK description
The SEE keyword uses vision models to understand visual content and return natural language descriptions.
Starting BotModels
Before using multimodal features, start the botmodels service:
cd botmodels
python -m uvicorn src.main:app --host 0.0.0.0 --port 8085
For production with HTTPS:
python -m uvicorn src.main:app \
--host 0.0.0.0 \
--port 8085 \
--ssl-keyfile key.pem \
--ssl-certfile cert.pem
BotModels API Endpoints
The botmodels service exposes these REST endpoints:
| Endpoint | Method | Description |
|---|---|---|
/api/image/generate | POST | Generate image from prompt |
/api/video/generate | POST | Generate video from prompt |
/api/speech/generate | POST | Generate speech from text |
/api/speech/totext | POST | Transcribe audio to text |
/api/vision/describe | POST | Describe an image |
/api/vision/describe_video | POST | Describe a video |
/api/vision/vqa | POST | Visual question answering |
/api/health | GET | Health check |
All endpoints except /api/health require the X-API-Key header for authentication.
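For example, a curl sketch calling the image endpoint (the JSON body shape is an assumption; check the botmodels service for its exact request schema):
curl -X POST http://localhost:8085/api/image/generate \
  -H "X-API-Key: your-secret-key" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "a sunset over mountains"}'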
Model Paths
Configure model paths relative to the botmodels service directory. Typical layout:
data/
├── diffusion/
│ ├── sd_turbo_f16.gguf # Stable Diffusion
│ └── zeroscope_v2_576w/ # Zeroscope video
├── tts/
│ └── model.onnx # Text-to-speech
├── whisper/
│ └── model.bin # Speech-to-text
└── vision/
└── blip2/ # Vision model
GPU Acceleration
Both image and video generation benefit significantly from GPU acceleration. Configure GPU layers based on your hardware:
| GPU VRAM | Recommended GPU Layers |
|---|---|
| 4GB | 8-12 |
| 8GB | 15-20 |
| 12GB+ | 25-35 |
Lower GPU layers if you experience out-of-memory errors.
Troubleshooting
“BotModels is not enabled”
Set botmodels-enabled to true in your config.csv.
Connection refused
Verify botmodels service is running and check host/port configuration. Test connectivity:
curl http://localhost:8085/api/health
Authentication failed
Ensure botmodels-api-key in config.csv matches the API_KEY environment variable in botmodels.
Model not found
Verify model paths are correct and models are downloaded to the expected locations.
Out of memory
Reduce gpu-layers or batch-size. Video generation is particularly memory-intensive.
Security Considerations
Use HTTPS in production. Set botmodels-https to true and configure SSL certificates on the botmodels service.
Use strong API keys. Generate cryptographically random keys for the botmodels-api-key setting.
Restrict network access. Limit botmodels service access to trusted hosts only.
Consider GPU isolation. Run botmodels on a dedicated GPU server if sharing resources with other services.
Performance Tips
Image generation runs fastest with SD Turbo models and 4-8 inference steps. More steps improve quality but increase generation time linearly.
Video generation is the most resource-intensive operation. Keep frame counts low (24-48) for reasonable response times.
Batch processing improves throughput when generating multiple items. Increase batch-size if you have sufficient GPU memory.
Caching generated content when appropriate. If multiple users request similar content, consider storing results.
See Also
- LLM Configuration - Language model settings
- Bot Parameters - All configuration options
- IMAGE Keyword - Image generation reference
- SEE Keyword - Vision capabilities
Secrets Management
General Bots uses a layered approach to configuration and secrets management. The goal is to keep .env minimal - containing only Vault connection info - while all sensitive data is stored securely in Vault.
Configuration Layers
┌─────────────────────────────────────────────────────────────────────────────┐
│ Configuration Hierarchy │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌───────────┐ │
│ │ .env │ │ Zitadel │ │ Vault │ │config.csv │ │
│ │(Vault ONLY) │ │ (Identity) │ │ (Secrets) │ │(Bot Config)│ │
│ └──────┬──────┘ └──────┬──────┘ └──────┬──────┘ └─────┬─────┘ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ • VAULT_ADDR • User accounts • Directory URL • Bot params │
│ • VAULT_TOKEN • Organizations • Database creds • LLM config │
│ • Projects • API keys • Features │
│ • Applications • Drive credentials • Behavior │
│ • MFA settings • Encryption keys │
│ • SSO/OAuth • ALL service secrets │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
What Goes Where?
.env (Vault Connection ONLY)
The .env file should contain ONLY Vault connection info:
# .env - ONLY Vault connection
# Everything else comes from Vault!
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=hvs.your-root-token
That’s it. Two variables only.
Why so minimal?
- .env files can be accidentally committed to git
- Environment variables may appear in logs
- Reduces attack surface if server is compromised
- Single point of secret management (Vault)
- Easy rotation - change in Vault, not in files
Zitadel (Identity & Access)
Zitadel manages user-facing identity:
| What | Example |
|---|---|
| User accounts | john@example.com |
| Organizations | Acme Corp |
| Projects | Production Bot |
| Applications | Web UI, Mobile App |
| MFA settings | TOTP, SMS, WebAuthn |
| SSO providers | Google, Microsoft |
| User metadata | Department, Role |
Not stored in Zitadel:
- Service passwords
- API keys
- Encryption keys
Vault (Service Secrets)
Vault manages machine-to-machine secrets:
| Path | Contents |
|---|---|
gbo/drive | MinIO access key and secret |
gbo/tables | PostgreSQL username and password |
gbo/cache | Redis password |
gbo/llm | OpenAI, Anthropic, Groq API keys |
gbo/encryption | Master encryption key, data keys |
gbo/email | SMTP credentials |
gbo/meet | LiveKit API key and secret |
gbo/alm | Forgejo admin password, runner token |
config.csv (Bot Configuration)
The bot’s config.csv contains non-sensitive configuration:
# Bot behavior - NOT secrets
llm-provider,anthropic
llm-model,claude-sonnet-4.5
llm-temperature,0.7
llm-max-tokens,4096
# Feature flags
feature-voice-enabled,true
feature-file-upload,true
# Vault references for sensitive values
llm-api-key,vault:gbo/llm/openai_key
Note: Most service credentials (database, drive, cache) are fetched automatically from Vault at startup. You only need vault: references in config.csv for bot-specific secrets like LLM API keys.
How Secrets Flow
At Startup
1. BotServer starts
2. Reads .env for VAULT_ADDR and VAULT_TOKEN (only 2 variables)
3. Connects to Vault
4. Fetches ALL service credentials:
- gbo/directory → Zitadel URL, client_id, client_secret
- gbo/tables → Database host, port, username, password
- gbo/drive → MinIO endpoint, accesskey, secret
- gbo/cache → Redis host, port, password
- gbo/llm → API keys for all providers
- gbo/encryption → Master encryption keys
5. Connects to all services using Vault credentials
6. Reads config.csv for bot configuration
7. For keys referencing Vault (vault:path/key):
- Fetches from Vault automatically
8. System ready
At Runtime
1. User sends message
2. Bot processes, needs LLM
3. Reads config.csv: llm-api-key = vault:gbo/llm/openai_key
4. Fetches from Vault (cached for performance)
5. Calls OpenAI API
6. Returns response
Setting Up Vault
Initial Setup
When you run ./botserver install secrets, it:
- Downloads and installs Vault
- Initializes with a single unseal key
- Creates initial secret paths
- Outputs the root token to conf/vault/init.json
# Check Vault status
./botserver status secrets
# View init credentials (protect this file!)
cat botserver-stack/conf/vault/init.json
Storing Secrets
Use the Vault CLI or API:
# Directory (Zitadel) - includes URL, no longer in .env
vault kv put gbo/directory \
url=https://localhost:8080 \
project_id=your-project-id \
client_id=your-client-id \
client_secret=your-client-secret
# Database - includes host/port, no longer in .env
vault kv put gbo/tables \
host=localhost \
port=5432 \
database=botserver \
username=gbuser \
password=secure-password
# Drive (MinIO)
vault kv put gbo/drive \
endpoint=https://localhost:9000 \
accesskey=minioadmin \
secret=minioadmin123
# Cache (Redis)
vault kv put gbo/cache \
host=localhost \
port=6379 \
password=redis-secret
# LLM API keys
vault kv put gbo/llm \
openai_key=sk-xxxxx \
anthropic_key=sk-ant-xxxxx \
groq_key=gsk_xxxxx \
deepseek_key=sk-xxxxx
# Encryption keys
vault kv put gbo/encryption \
master_key=your-32-byte-key
# Vector database (Qdrant)
vault kv put gbo/vectordb \
url=https://localhost:6334 \
api_key=optional-api-key
# Observability (InfluxDB)
vault kv put gbo/observability \
url=http://localhost:8086 \
org=pragmatismo \
bucket=metrics \
token=your-influx-token
Automatic Management
Secrets are managed automatically - you don’t need a UI for day-to-day operations:
| Action | How It Works |
|---|---|
| Service startup | Fetches credentials from Vault |
| Key rotation | Update in Vault, services reload |
| New bot deployment | Inherits organization secrets |
| LLM provider change | Update config.csv, key fetched automatically |
Emergency Access
For emergency situations (lost credentials, key rotation), admins can:
- Access the Vault UI: https://localhost:8200/ui
- Use the Vault CLI: vault kv get gbo/llm
- Check init.json: contains the unseal key and root token
# Emergency: unseal Vault after restart
UNSEAL_KEY=$(cat botserver-stack/conf/vault/init.json | jq -r '.unseal_keys_b64[0]')
vault operator unseal $UNSEAL_KEY
Migrating from Environment Variables
If you’re currently using environment variables:
Before (Old Way)
# .env - TOO MANY SECRETS!
DATABASE_URL=postgres://user:password@localhost/db
DIRECTORY_URL=https://localhost:8080
DIRECTORY_CLIENT_ID=your-client-id
DIRECTORY_CLIENT_SECRET=your-client-secret
REDIS_PASSWORD=redis-secret
OPENAI_API_KEY=sk-xxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxx
DRIVE_ACCESSKEY=minio
DRIVE_SECRET=minio123
ENCRYPTION_KEY=super-secret-key
After (With Vault)
# .env - ONLY VAULT CONNECTION
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=hvs.xxxxx
# EVERYTHING in Vault
vault kv put gbo/directory \
url=https://localhost:8080 \
project_id=12345 \
client_id=xxx \
client_secret=xxx
vault kv put gbo/tables \
host=localhost \
port=5432 \
database=botserver \
username=user \
password=password
vault kv put gbo/cache \
host=localhost \
port=6379 \
password=redis-secret
vault kv put gbo/llm \
openai_key=sk-xxxxx \
anthropic_key=sk-ant-xxxxx
vault kv put gbo/drive \
endpoint=https://localhost:9000 \
accesskey=minio \
secret=minio123
vault kv put gbo/encryption \
master_key=super-secret-key
Migration Script
#!/bin/bash
# migrate-to-vault.sh
# Read existing .env
source .env
# Parse DATABASE_URL if present
if [ -n "$DATABASE_URL" ]; then
# postgres://user:pass@host:port/db
DB_USER=$(echo $DATABASE_URL | sed -n 's|postgres://\([^:]*\):.*|\1|p')
DB_PASS=$(echo $DATABASE_URL | sed -n 's|postgres://[^:]*:\([^@]*\)@.*|\1|p')
DB_HOST=$(echo $DATABASE_URL | sed -n 's|.*@\([^:]*\):.*|\1|p')
DB_PORT=$(echo $DATABASE_URL | sed -n 's|.*:\([0-9]*\)/.*|\1|p')
DB_NAME=$(echo $DATABASE_URL | sed -n 's|.*/\(.*\)|\1|p')
fi
# Store everything in Vault
vault kv put gbo/directory \
url="${DIRECTORY_URL:-https://localhost:8080}" \
project_id="${DIRECTORY_PROJECT_ID:-}" \
client_id="${ZITADEL_CLIENT_ID:-}" \
client_secret="${ZITADEL_CLIENT_SECRET:-}"
vault kv put gbo/tables \
host="${DB_HOST:-localhost}" \
port="${DB_PORT:-5432}" \
database="${DB_NAME:-botserver}" \
username="${DB_USER:-gbuser}" \
password="${DB_PASS:-}"
vault kv put gbo/cache \
host="${REDIS_HOST:-localhost}" \
port="${REDIS_PORT:-6379}" \
password="${REDIS_PASSWORD:-}"
vault kv put gbo/llm \
openai_key="${OPENAI_API_KEY:-}" \
anthropic_key="${ANTHROPIC_API_KEY:-}" \
groq_key="${GROQ_API_KEY:-}" \
deepseek_key="${DEEPSEEK_API_KEY:-}"
vault kv put gbo/drive \
endpoint="${DRIVE_ENDPOINT:-https://localhost:9000}" \
accesskey="${DRIVE_ACCESSKEY:-}" \
secret="${DRIVE_SECRET:-}"
vault kv put gbo/encryption \
master_key="${ENCRYPTION_KEY:-}"
# Clean up .env - ONLY Vault connection
cat > .env << EOF
# General Bots - Vault Connection Only
# All other secrets are stored in Vault
VAULT_ADDR=https://localhost:8200
VAULT_TOKEN=$VAULT_TOKEN
EOF
echo "Migration complete!"
echo ".env now contains only Vault connection."
echo "All secrets moved to Vault."
Using Vault References in config.csv
Reference Vault secrets in your bot’s config.csv:
# Direct value (non-sensitive)
llm-provider,anthropic
llm-model,claude-sonnet-4.5
llm-temperature,0.7
# Vault reference (sensitive)
llm-api-key,vault:gbo/llm/anthropic_key
# Multiple keys from same path
drive-accesskey,vault:gbo/drive/accesskey
drive-secret,vault:gbo/drive/secret
# Per-bot secrets (for multi-tenant)
custom-api-key,vault:gbo/bots/mybot/api_key
Syntax
vault:<path>/<key>
- path: Vault KV path (e.g., gbo/llm)
- key: Specific key within the secret (e.g., openai_key)
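As a quick check, a vault: reference maps one-to-one onto a Vault CLI lookup, so you can confirm a value exists before the bot reads it:
# vault:gbo/llm/openai_key resolves to this lookup
vault kv get -field=openai_key gbo/llm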
Security Best Practices
1. Protect init.json
# Set restrictive permissions
chmod 600 botserver-stack/conf/vault/init.json
# Consider encrypting or moving off-server
gpg -c init.json
scp init.json.gpg secure-backup-server:
rm init.json
2. Use Token Policies
Create limited tokens for applications:
# gbo-readonly.hcl
path "gbo/*" {
capabilities = ["read", "list"]
}
vault policy write gbo-readonly gbo-readonly.hcl
vault token create -policy=gbo-readonly -ttl=24h
3. Enable Audit Logging
vault audit enable file file_path=/opt/gbo/logs/vault-audit.log
4. Rotate Secrets Regularly
# Rotate LLM keys
vault kv put gbo/llm \
openai_key=sk-new-key \
anthropic_key=sk-ant-new-key
# BotServer will pick up new keys automatically (cache TTL)
5. Backup Vault Data
# Snapshot Vault data
vault operator raft snapshot save backup.snap
# Or backup the data directory
tar -czf vault-backup.tar.gz botserver-stack/data/vault/
No UI Needed
You don’t need to expose a UI for secrets management because:
- Automatic at runtime: Secrets are fetched automatically
- config.csv for changes: Update bot config, not secrets
- Vault UI for emergencies: Available at https://localhost:8200/ui
- CLI for automation: Scripts can manage secrets
When Admins Need Access
| Situation | Solution |
|---|---|
| Add new LLM provider | vault kv put gbo/llm new_key=xxx |
| Rotate compromised key | Update in Vault, services auto-reload |
| Check what’s stored | vault kv get gbo/llm or Vault UI |
| Debug connection issues | Check Vault logs and service logs |
| Disaster recovery | Use init.json to unseal and recover |
Relationship Summary
┌─────────────────────────────────────────────────────────────────┐
│ .env │
│ VAULT_ADDR + VAULT_TOKEN (only!) │
└─────────────────────────────┬───────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Vault │
│ "Give me all service credentials and connection info" │
│ │
│ gbo/directory → Zitadel URL, credentials │
│ gbo/tables → Database connection + credentials │
│ gbo/drive → MinIO endpoint + credentials │
│ gbo/cache → Redis connection + password │
│ gbo/llm → All LLM API keys │
└─────────────────────────────┬───────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ BotServer │
│ Connects to all services using Vault secrets │
└─────────────────────────────┬───────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ User Request │
└─────────────────────────────┬───────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Zitadel │
│ "Who is this user? Are they allowed?" │
│ (Credentials from Vault at startup) │
└─────────────────────────────┬───────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ config.csv │
│ "What LLM should I use? What model?" │
│ (Non-sensitive bot configuration) │
└─────────────────────────────┬───────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ LLM Provider │
│ (API key from Vault at startup) │
└─────────────────────────────────────────────────────────────────┘
Vault Paths Reference
| Path | Contents |
|---|---|
| gbo/directory | url, project_id, client_id, client_secret |
| gbo/tables | host, port, database, username, password |
| gbo/drive | endpoint, accesskey, secret |
| gbo/cache | host, port, password |
| gbo/llm | openai_key, anthropic_key, groq_key, deepseek_key, mistral_key |
| gbo/encryption | master_key, data_key |
| gbo/email | host, username, password |
| gbo/meet | url, api_key, api_secret |
| gbo/alm | url, admin_password, runner_token |
| gbo/vectordb | url, api_key |
| gbo/observability | url, org, bucket, token |
Next Steps
- config.csv Format - Bot configuration reference
- LLM Configuration - LLM-specific settings
- Infrastructure Design - Full architecture
Chapter 09: LLM Tools
Define tools that LLMs can call from your BASIC scripts.
Overview
Tools are BASIC scripts with PARAM declarations that become callable functions for the LLM. This enables AI-driven automation with structured inputs.
Tool Structure
' weather.bas - A tool the LLM can invoke
PARAM city AS STRING LIKE "London" DESCRIPTION "City name"
PARAM units AS STRING LIKE "celsius" DESCRIPTION "Temperature units"
DESCRIPTION "Gets current weather for a city"
data = GET "api.weather.com/current?city=" + city
TALK "Weather in " + city + ": " + data.temperature + "°"
How It Works
- PARAM declarations define inputs
- DESCRIPTION explains the tool’s purpose
- LLM decides when to call the tool
- Parameters collected through conversation
- Tool executes with validated inputs
PARAM Declaration
PARAM name AS type LIKE "example" DESCRIPTION "explanation"
| Component | Purpose |
|---|---|
| name | Variable name |
| type | STRING, INTEGER, DATE, etc. |
| LIKE | Example value for LLM |
| DESCRIPTION | What this parameter is for |
Tool Formats
Tools compile to multiple formats:
| Format | Use Case |
|---|---|
| MCP | Model Context Protocol |
| OpenAI | Function calling |
| Internal | BASIC runtime |
Chapter Contents
- Tool Definition - Creating tools
- PARAM Declaration - Parameter syntax
- Tool Compilation - Build process
- MCP Format - MCP integration
- OpenAI Format - Function calling
- GET Integration - API calls
- External APIs - Third-party services
- LLM REST Server - Hosting models
- NVIDIA GPU Setup - GPU acceleration
See Also
- BASIC Dialogs - Scripting reference
- REST Endpoints - HTTP endpoints
Tool Definition
In BotServer, a tool is simply a .bas file. That’s it!
How It Works
- Create a .bas file in your .gbdialog/ folder
- The LLM automatically discovers it and can call it when needed
- No manual registration required - it just works!
Tool Discovery and Execution Flow
Tools are discovered automatically from your .gbdialog/ folder, converted into tool definitions, and executed when the LLM decides a user request calls for them.
Simple Example
Create get-weather.bas:
' This tool gets weather information
' The LLM will call this when users ask about weather
TALK "Let me check the weather for you..."
weather = GET "/api/weather/San Francisco"
TALK "The weather is: " + weather
That’s a tool! The LLM now knows it can call this when users ask about weather.
Tool with Parameters
Create send-email.bas:
' Send an email to someone
PARAM to AS STRING
PARAM subject AS STRING
PARAM body AS STRING
GET "/email/send" WITH to, subject, body
TALK "Email sent to " + to
The PARAM declarations tell the LLM what parameters this tool accepts.
Making Tools Available
Method 1: Automatic Discovery (Default)
All .bas files in your .gbdialog/ folder are automatically available.
mybot.gbai/
mybot.gbdialog/
start.bas ← Entry point
get-weather.bas ← Tool (auto-discovered)
send-email.bas ← Tool (auto-discovered)
create-task.bas ← Tool (auto-discovered)
Method 2: Manual Registration
In your start.bas, explicitly add tools:
' Register tools for this conversation
USE TOOL "get-weather"
USE TOOL "send-email"
USE TOOL "create-task"
TALK "Hello! I can help with weather, email, and tasks."
Method 3: LLM-Driven Tool Selection
Let the LLM decide which tools to use naturally:
' In start.bas
' Load all available tools - LLM decides when to use them
USE TOOL "weather"
USE TOOL "email"
USE TOOL "enrollment"
TALK "I can help with various tasks. What do you need?"
' The LLM will automatically call the right tool based on user intent
Tool Format Conversion
BotServer automatically converts your .bas tools to:
- MCP (Model Context Protocol) format
- Groq/OpenAI-compatible function calling format
- Other LLM provider formats
You never write these formats manually - just write .bas files!
Conversion Pipeline
The pipeline reads each .bas file, extracts PARAM and DESCRIPTION metadata, and emits MCP, OpenAI-compatible, and internal runtime definitions.
Complete Example
Here’s a real tool from the codebase - enrollment.bas:
PARAM name AS string LIKE "Abreu Silva" DESCRIPTION "Required full name of the individual."
PARAM birthday AS date LIKE "23/09/2001" DESCRIPTION "Required birth date of the individual in DD/MM/YYYY format."
PARAM email AS string LIKE "abreu.silva@example.com" DESCRIPTION "Required email address for contact purposes."
PARAM personalid AS integer LIKE "12345678900" DESCRIPTION "Required Personal ID number of the individual (only numbers)."
PARAM address AS string LIKE "Rua das Flores, 123 - SP" DESCRIPTION "Required full address of the individual."
DESCRIPTION "This is the enrollment process, called when the user wants to enroll. Once all information is collected, confirm the details and inform them that their enrollment request has been successfully submitted."
' The actual tool logic is simple
SAVE "enrollments.csv", id, name, birthday, email, personalid, address
TALK "Successfully enrolled " + name + "!"
' That's it! The LLM handles:
' - Natural conversation to collect parameters
' - Validation and error handling
' - Confirming details with the user
' - All the complex interaction flow
That’s It!
To create a tool:
- ✅ Create a .bas file
- ✅ Add PARAM declarations if you need parameters
- ✅ Write your logic using TALK, HEAR, CALL, etc.
- ✅ Done!
The LLM will automatically:
- Discover your tool
- Understand what it does (from comments and code)
- Know when to call it
- Pass the right parameters
No JSON schemas, no manual registration, no complex configuration. Just write BASIC!
Best Practices
1. Add Comments
The LLM reads your comments to understand the tool:
' This tool books a meeting room
' It checks availability and sends calendar invites
PARAM room_name AS STRING
PARAM date AS STRING
PARAM attendees AS ARRAY
2. Validate Parameters
Always validate input:
IF room_name IS NULL THEN
TALK "Please specify which room you want to book."
RETURN
ENDIF
3. Provide Feedback
Let users know what’s happening:
TALK "Checking room availability..."
available = GET "/calendar/check" WITH room_name, date
IF available THEN
TALK "Great! Booking the room now..."
GET "/calendar/book" WITH room_name, date, attendees
TALK "Meeting room booked successfully!"
ELSE
TALK "Sorry, that room is not available on " + date
ENDIF
Tool Discovery
The LLM discovers tools by:
- Reading .bas files in your .gbdialog/ folder
- Extracting comments to understand purpose
- Parsing PARAM declarations to understand parameters
- Building a function signature automatically
Example tool discovery from send-email.bas:
Function: send-email
Description: Send an email to someone
Parameters:
- to: STRING (required)
- subject: STRING (required)
- body: STRING (required)
This is generated automatically from your .bas file!
Removing Tools
Dynamic Tool Management
' Remove a specific tool
REMOVE TOOL "send-email"
' Clear all tools
CLEAR TOOLS
' List active tools
tools = LIST TOOLS
TALK "Available tools: " + tools
Next Steps
- PARAM Declaration - Parameter types and validation
- GET Keyword Integration - Using GET to call tools
- External APIs - Calling external services
PARAM Declaration
The PARAM keyword defines input parameters for tools, enabling type checking, validation, and documentation.
Syntax
PARAM parameter_name AS type LIKE "example" DESCRIPTION "description text"
Components
- parameter_name: The name used to reference the parameter in the script
- AS type: The data type (string, integer, number, boolean, date, etc.)
- LIKE "example": An example value showing expected format
- DESCRIPTION "text": Explanation of what the parameter represents
Supported Types
- string: Text values (default if no type specified)
- integer: Whole numbers
- number: Decimal numbers
- boolean: True/false values
- date: Date values
- datetime: Date and time values
- array: Lists of values
- object: Structured data
Examples
Basic Parameter
PARAM username AS string LIKE "john_doe" DESCRIPTION "User's unique identifier"
Multiple Parameters
PARAM first_name AS string LIKE "John" DESCRIPTION "User's first name"
PARAM last_name AS string LIKE "Doe" DESCRIPTION "User's last name"
PARAM age AS integer LIKE "25" DESCRIPTION "User's age in years"
PARAM email AS string LIKE "john@example.com" DESCRIPTION "User's email address"
Complex Types
PARAM preferences AS object LIKE "{""theme"": ""dark"", ""notifications"": true}" DESCRIPTION "User preference settings"
PARAM tags AS array LIKE "[""urgent"", ""follow-up""]" DESCRIPTION "Item categorization tags"
Type Validation
Parameters are validated when tools are called:
- string: Any text value accepted
- integer: Must be a whole number
- number: Must be a valid number
- boolean: Converted from “true”/“false” or 1/0
- date: Parsed according to locale format
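The sketch below illustrates what validation buys you in practice (an illustrative tool, not from the codebase):
' Illustrative: type checking guards the script body
PARAM quantity AS integer LIKE "10" DESCRIPTION "Units to order"
DESCRIPTION "Places a stock order"
' If the user says "ten", validation fails and the parameter is
' re-collected in conversation; by this line, quantity is a whole number.
TALK "Ordering " + quantity + " units."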
Usage in Tools
Parameters become available as variables in the tool script:
PARAM product_id AS integer LIKE "12345" DESCRIPTION "Product identifier"
REM product_id variable is now available
TALK "Fetching details for product " + product_id
Documentation Generation
Parameter declarations are used to automatically generate:
- Tool documentation
- API schemas (OpenAI tools format)
- MCP (Model Context Protocol) definitions
- User interface forms
Required vs Optional
All parameters are required by default. For optional parameters, check for empty values:
PARAM phone AS string LIKE "+1-555-0123" DESCRIPTION "Optional phone number"
IF phone != "" THEN
TALK "We'll contact you at " + phone
ELSE
TALK "No phone number provided"
END IF
Parameter declarations make tools self-documenting and enable rich integration with AI systems that can understand and use the defined interfaces.
Tool Compilation
BotServer compiles BASIC scripts (.bas files) into tool definitions that can be called by the LLM. The compilation process extracts parameters, descriptions, and generates metadata for tool discovery.
Overview
The compilation process reads .bas files from .gbdialog directories and parses parameter declarations along with descriptions. It then generates tool definitions in both MCP and OpenAI formats, stores the compiled tools in the database, and makes them available for LLM invocation.
The Compilation Pipeline
File Detection
The DriveMonitor service watches for changes in .gbdialog directories. It monitors .bas files in drive storage, detects new or modified scripts, and triggers compilation automatically when changes occur.
Source Processing
When a .bas file changes, the compiler downloads the file from drive and creates a local working directory. It then invokes the BasicCompiler to process the script and extract the necessary metadata.
Parameter Extraction
The compiler parses BASIC script headers for PARAM declarations with types and examples, DESCRIPTION statements for tool documentation, and variable names with default values.
Example script header:
PARAM name AS string LIKE "John Smith" DESCRIPTION "User's full name"
PARAM age AS number LIKE 25 DESCRIPTION "User's age"
DESCRIPTION "Processes user registration"
Tool Definition Generation
The compiler creates structured tool definitions from the parsed script. The tool name is derived from the filename without the .bas extension. Parameters are extracted from PARAM declarations, the description comes from the DESCRIPTION statement, and the script path provides a reference to the source file.
Database Storage
Compiled tools are stored in the basic_tools table, which contains tool metadata including name, description, and parameters. The table also stores source script content, bot association, and compilation timestamp for tracking when tools were last updated.
Compilation Output Formats
MCP (Model Context Protocol) Format
The compiler generates MCP-compatible tool definitions:
{
"name": "user_registration",
"description": "Processes user registration",
"input_schema": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "User's full name"
},
"age": {
"type": "number",
"description": "User's age"
}
},
"required": ["name", "age"]
}
}
OpenAI Function Format
The compiler also generates OpenAI-compatible function definitions for API compatibility:
{
"name": "user_registration",
"description": "Processes user registration",
"parameters": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "User's full name"
},
"age": {
"type": "number",
"description": "User's age"
}
},
"required": ["name", "age"]
}
}
Automatic Recompilation
Tools are recompiled automatically when the source .bas file is modified, when the file’s ETag changes in drive storage, or when a manual recompilation is triggered through the system.
Working Directory Structure
The compiler maintains a local working directory structured as ./work/bot-name.gbai/bot-name.gbdialog/ containing the individual tool files like tool1.bas, tool2.bas, and so on. This directory is used for caching compiled scripts, temporary processing during compilation, and debug inspection when troubleshooting issues.
Error Handling
Compilation errors are handled gracefully to ensure the system remains stable. Syntax errors are logged with line numbers for easy debugging. Missing parameters are reported clearly, invalid types are highlighted in error messages, and compilation continues for other tools even when one fails. Common compilation errors include missing DESCRIPTION statements, invalid PARAM syntax, unsupported parameter types, and general script parsing failures.
Tool Activation
After successful compilation, the tool is stored in the database and becomes available for the USE TOOL keyword. The LLM can discover the tool through its metadata and invoke it during conversations with users.
Performance Considerations
Compilation is triggered asynchronously to avoid blocking other operations. Multiple tools can be compiled in parallel for efficiency, and results are cached in the database to avoid redundant processing. Only changed files are recompiled, minimizing unnecessary work.
Debugging Compilation
To debug compilation issues, check the logs for compilation errors that include file names and line numbers. Inspect the working directory files to see the raw script content. Verify that parameter syntax follows the expected format, and test the tool manually with USE TOOL to confirm it functions correctly.
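A typical debugging pass might look like this (the log path is an assumption and may differ per install; the work directory follows the structure described above):
# Watch for compilation errors as files change (log path may differ)
tail -f botserver-stack/logs/botserver.log | grep -i compil
# Inspect the cached script the compiler actually processed
cat work/mybot.gbai/mybot.gbdialog/get-weather.bas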
Best Practices
Always include a DESCRIPTION statement to help the LLM understand the tool’s purpose. Use clear parameter names that make the code self-documenting. Provide LIKE examples with realistic values to improve LLM parameter filling accuracy. Test tools after making changes to verify compilation succeeded, and check logs regularly to monitor for compilation errors.
Limitations
Parameters must be declared at the start of the script before any executable code. The compiler only supports basic types including string, number, and boolean. All parameters are required since optional parameters are not currently supported. Nested object parameters and array parameters are also not available in the current implementation.
Summary
The compilation process transforms BASIC scripts into callable tools that the LLM can discover and invoke. This automatic compilation ensures that changes to scripts are immediately available for use in conversations, making development iteration fast and seamless.
MCP Format
Model Context Protocol (MCP) is a standardized format for defining tools that language models can discover and invoke. BotServer generates MCP-compatible tool definitions from BASIC scripts.
Overview
MCP provides a structured way to describe:
- Tool name and purpose
- Input parameters and types
- Parameter descriptions and examples
- Output format expectations
MCP Tool Structure
A compiled MCP tool definition contains:
{
"name": "tool_name",
"description": "Tool description from DESCRIPTION statement",
"input_schema": {
"type": "object",
"properties": {
"param1": {
"type": "string",
"description": "Parameter description"
},
"param2": {
"type": "number",
"description": "Another parameter"
}
},
"required": ["param1", "param2"]
}
}
From BASIC to MCP
Source BASIC Script
PARAM customer_name AS string LIKE "John Doe" DESCRIPTION "Customer's full name"
PARAM order_amount AS number LIKE 99.99 DESCRIPTION "Total order amount"
PARAM shipping_address AS string LIKE "123 Main St" DESCRIPTION "Delivery address"
DESCRIPTION "Process a new customer order"
# Script logic here
TALK "Processing order for " + customer_name
# ...
Generated MCP Definition
{
"name": "process_order",
"description": "Process a new customer order",
"input_schema": {
"type": "object",
"properties": {
"customer_name": {
"type": "string",
"description": "Customer's full name",
"example": "John Doe"
},
"order_amount": {
"type": "number",
"description": "Total order amount",
"example": 99.99
},
"shipping_address": {
"type": "string",
"description": "Delivery address",
"example": "123 Main St"
}
},
"required": ["customer_name", "order_amount", "shipping_address"]
}
}
Parameter Types
MCP supports these parameter types in BotServer:
| BASIC Type | MCP Type | JSON Schema Type |
|---|---|---|
| string | string | "type": "string" |
| number | number | "type": "number" |
| boolean | boolean | "type": "boolean" |
Input Schema
The input_schema follows JSON Schema specification:
Required Fields
- type: Always "object" for tool parameters
- properties: Object containing parameter definitions
- required: Array of required parameter names
Parameter Properties
- type: Data type of the parameter
- description: Human-readable description
- example: Example value from the LIKE clause
Tool Discovery
MCP tools are discoverable through:
- Tool Listing: LLM can query available tools
- Parameter Inspection: LLM examines input schema
- Description Matching: LLM matches user intent to tool description
Integration with LLM
When the LLM invokes an MCP tool:
- Parameter Collection: LLM extracts values from context
- Schema Validation: Parameters validated against input_schema
- Tool Execution: BASIC script runs with provided parameters
- Result Return: Output returned to LLM for processing
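For example, an invocation of the process_order tool defined above might arrive as a payload like this (a hypothetical Anthropic-style tool_use message; the exact envelope varies by provider):
{
  "type": "tool_use",
  "name": "process_order",
  "input": {
    "customer_name": "John Doe",
    "order_amount": 99.99,
    "shipping_address": "123 Main St"
  }
}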
Example Use Cases
Form Processing Tool
{
"name": "submit_application",
"description": "Submit a job application form",
"input_schema": {
"type": "object",
"properties": {
"applicant_name": {
"type": "string",
"description": "Full name of applicant"
},
"position": {
"type": "string",
"description": "Position applying for"
},
"experience_years": {
"type": "number",
"description": "Years of relevant experience"
}
},
"required": ["applicant_name", "position", "experience_years"]
}
}
Data Query Tool
{
"name": "search_inventory",
"description": "Search product inventory",
"input_schema": {
"type": "object",
"properties": {
"product_name": {
"type": "string",
"description": "Product to search for"
},
"min_quantity": {
"type": "number",
"description": "Minimum quantity available"
}
},
"required": ["product_name"]
}
}
Storage and Retrieval
MCP definitions are stored in the basic_tools table:
- Tool metadata serialized as JSON
- Indexed for fast retrieval
- Associated with bot ID
- Cached for performance
Advantages of MCP Format
- Standardized: Compatible with multiple LLM providers
- Self-Documenting: Contains all necessary metadata
- Type-Safe: Schema validation prevents errors
- Discoverable: LLMs can understand tool capabilities
- Extensible: Can add custom properties as needed
Limitations in BotServer
Current MCP implementation limitations:
- No nested object parameters
- No array parameters
- All parameters are required (no optional)
- No enum/choice constraints
- No pattern validation
Best Practices
- Clear Descriptions: Make tool purpose obvious
- Meaningful Names: Use descriptive parameter names
- Provide Examples: LIKE values help LLM understand expected input
- Type Accuracy: Use correct types (string vs number)
- Complete Documentation: Every parameter needs description
Validation
MCP tools are validated during compilation:
- Parameter names must be valid identifiers
- Types must be supported
- Descriptions cannot be empty
- Tool name must be unique per bot
Summary
The MCP format provides a structured way to expose BASIC scripts as callable tools for LLMs. By generating MCP-compatible definitions, BotServer enables seamless tool discovery and invocation within conversational flows.
OpenAI Format
BotServer generates OpenAI-compatible function definitions from BASIC scripts, enabling integration with OpenAI’s function calling API.
Overview
OpenAI’s function calling format allows GPT models to:
- Discover available functions
- Understand parameter requirements
- Generate structured function calls
- Process function results
Function Structure
An OpenAI-compatible function definition contains:
{
"name": "function_name",
"description": "Function description",
"parameters": {
"type": "object",
"properties": {
"param1": {
"type": "string",
"description": "Parameter description"
},
"param2": {
"type": "number",
"description": "Another parameter"
}
},
"required": ["param1", "param2"]
}
}
Conversion from BASIC
Source BASIC Script
PARAM product_id AS string LIKE "SKU-12345" DESCRIPTION "Product identifier"
PARAM quantity AS number LIKE 10 DESCRIPTION "Quantity to order"
PARAM urgent AS boolean LIKE true DESCRIPTION "Rush delivery needed"
DESCRIPTION "Create a purchase order for inventory"
# Script implementation
let order_id = CREATE_ORDER(product_id, quantity, urgent)
TALK "Order created: " + order_id
Generated Function
{
"name": "create_purchase_order",
"description": "Create a purchase order for inventory",
"parameters": {
"type": "object",
"properties": {
"product_id": {
"type": "string",
"description": "Product identifier"
},
"quantity": {
"type": "number",
"description": "Quantity to order"
},
"urgent": {
"type": "boolean",
"description": "Rush delivery needed"
}
},
"required": ["product_id", "quantity", "urgent"]
}
}
Integration with OpenAI API
When using OpenAI’s API, the functions are passed in the request:
{
"model": "gpt-4o",
"messages": [...],
"functions": [
{
"name": "create_purchase_order",
"description": "Create a purchase order for inventory",
"parameters": {...}
}
],
"function_call": "auto"
}
Parameter Type Mapping
| BASIC Type | OpenAI Type | Description |
|---|---|---|
| string | "string" | Text values |
| number | "number" | Numeric values (integer or float) |
| boolean | "boolean" | True/false values |
Function Calling Flow
- User Query: User asks to perform an action
- Function Discovery: GPT identifies relevant function
- Parameter Extraction: GPT extracts parameters from context
- Function Call: GPT generates structured function call
- Execution: BotServer executes the BASIC script
- Result Processing: Output returned to GPT for response
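Concretely, step 4 yields an assistant message in the standard OpenAI function-calling shape, with arguments delivered as a JSON-encoded string:
{
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "create_purchase_order",
    "arguments": "{\"product_id\": \"SKU-12345\", \"quantity\": 10, \"urgent\": true}"
  }
}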
Example Function Calls
Customer Service Function
{
"name": "check_order_status",
"description": "Check the status of a customer order",
"parameters": {
"type": "object",
"properties": {
"order_id": {
"type": "string",
"description": "Order reference number"
},
"customer_email": {
"type": "string",
"description": "Customer email for verification"
}
},
"required": ["order_id", "customer_email"]
}
}
Data Analysis Function
{
"name": "generate_sales_report",
"description": "Generate sales report for specified period",
"parameters": {
"type": "object",
"properties": {
"start_date": {
"type": "string",
"description": "Report start date (YYYY-MM-DD)"
},
"end_date": {
"type": "string",
"description": "Report end date (YYYY-MM-DD)"
},
"region": {
"type": "string",
"description": "Sales region to analyze"
}
},
"required": ["start_date", "end_date"]
}
}
Function Response Handling
When a function is executed:
- Script Execution: BASIC script runs with provided parameters
- Output Collection: TALK statements and return values collected
- Response Format: Results formatted for OpenAI API
- Context Update: Function result added to conversation
Differences from MCP Format
| Aspect | OpenAI Format | MCP Format |
|---|---|---|
| Schema Location | parameters | input_schema |
| Example Values | Not included | Included in schema |
| Metadata | Minimal | Extended metadata |
| Compatibility | OpenAI models only | Multiple providers |
Error Handling
Function errors are handled gracefully:
- Missing parameters return error message
- Type mismatches caught before execution
- Script errors logged and returned
- Timeout protection for long-running scripts
Best Practices
- Descriptive Names: Use clear function names
- Comprehensive Descriptions: Explain what the function does
- Parameter Clarity: Each parameter needs clear description
- Error Messages: Provide helpful error feedback
- Idempotency: Design functions to be safely retryable
Limitations
Current OpenAI format limitations in BotServer:
- No nested objects in parameters
- No array parameters
- No enum constraints
- All parameters marked as required
- No custom validation rules
Storage
OpenAI function definitions are stored alongside MCP definitions:
- Stored in the basic_tools table
- Generated during compilation
- Cached for performance
- Updated when script changes
Usage in Conversations
When a user message triggers function calling:
User: "Order 50 units of SKU-12345 urgently"
System: [Identifies create_purchase_order function]
[Extracts: product_id="SKU-12345", quantity=50, urgent=true]
[Executes function]
Bot: "Order created: ORD-2024-001. Rush delivery confirmed for 50 units of SKU-12345."
Performance Considerations
- Functions cached after compilation
- Parallel function execution supported
- Rate limiting applied per session
- Timeout protection (30 seconds default)
Debugging
To debug OpenAI function calls:
- Enable debug logging
- Check function registration
- Verify parameter extraction
- Review execution logs
- Test with manual invocation
Summary
The OpenAI function format enables seamless integration between BASIC scripts and OpenAI’s GPT models. By automatically generating compatible function definitions, BotServer allows natural language interactions to trigger complex business logic implementations.
GET Keyword Integration
The GET keyword in BotServer provides file retrieval capabilities from both local filesystem and drive (S3-compatible) storage, enabling tools to access documents, data files, and other resources.
Overview
The GET keyword is a fundamental BASIC command that retrieves file contents as strings. It supports local file system access with safety checks, drive (S3-compatible) bucket retrieval, URL fetching via HTTP and HTTPS, and integration with knowledge base documents.
Basic Usage
# Get a file from the bot's bucket
let content = GET "documents/policy.pdf"
# Get a file with full path
let data = GET "announcements.gbkb/news/news.pdf"
# Get from URL
let webpage = GET "https://example.com/data.json"
Implementation Details
File Path Resolution
The GET keyword determines the source based on the path format. URL detection occurs for paths starting with http:// or https://, which triggers HTTP fetching. All other paths are retrieved from drive storage in the bot’s dedicated bucket. Safety validation checks all paths for directory traversal attempts before processing.
Drive (S3-compatible) Integration
When retrieving from drive storage, the system connects to drive using configured credentials and retrieves files from the bot’s dedicated bucket. File contents are returned as strings, with binary files converted to text automatically.
# Retrieves from: {bot-name}.gbai bucket
let doc = GET "knowledge/document.txt"
# Full path within bucket
let report = GET "reports/2024/quarterly.pdf"
URL Fetching
For external resources, the GET keyword supports both HTTP and HTTPS protocols with automatic redirect following. A 30-second timeout protects against hanging requests, and comprehensive error handling manages failed requests gracefully.
let api_data = GET "https://api.example.com/data"
let webpage = GET "http://example.com/page.html"
Safety Features
Path Validation
The is_safe_path function prevents directory traversal attacks by blocking paths containing .. sequences and rejecting absolute paths. Character sets are validated to ensure only safe characters appear in paths, and sandbox isolation ensures scripts cannot escape their designated storage areas.
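In script terms, the rules above play out like this (an illustration of the stated behavior, not output from the codebase):
' Allowed: relative path inside the bot's own bucket
let report = GET "reports/summary.txt"
' Blocked by is_safe_path: traversal outside the sandbox returns a security error
let blocked = GET "../other-bot/secrets.txt"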
Access Control
Files are limited to the bot’s own bucket, preventing access to other bots’ data. System directories receive protection from all access attempts, and credentials are never exposed through the GET interface regardless of the path requested.
Error Handling
GET operations handle various error conditions gracefully. When a file is not found, the operation returns an empty string rather than throwing an error. Access denied conditions return an error message, network timeouts return a timeout error, and invalid paths return a security error.
let content = GET "missing-file.txt"
# Returns empty string if file not found
if (content == "") {
TALK "File not found or empty"
}
Use Cases
Loading Knowledge Base Documents
# In update-summary.bas - background processing script
let text = GET "announcements.gbkb/news/news.pdf"
let summary = LLM "Summarize this: " + text # LLM for background processing only
SET BOT MEMORY "news_summary", summary # Stored for all users
Reading Configuration Files
let config = GET "settings.json"
# Parse and use configuration
Fetching External Data
let weather_data = GET "https://api.weather.com/current"
# Process weather information
Loading Templates
let template = GET "templates/email-template.html"
let filled = REPLACE(template, "{{name}}", customer_name)
Performance Considerations
Caching
GET results are not cached by default, so frequent reads should use BOT_MEMORY for caching to improve performance. Large files impact memory usage significantly since the entire file is loaded into memory at once.
Timeouts
URL fetches enforce a 30-second timeout to prevent indefinite hanging. Drive operations depend on network conditions and may vary in response time. Local files are accessed immediately when accessible.
File Size Limits
No hard limit is enforced on file sizes, but large files consume substantial memory. Binary files converted to text may result in particularly large string representations.
Integration with Tools
Tool Parameters from Files
PARAM config_file AS string LIKE "config.json" DESCRIPTION "Configuration file path"
let config = GET config_file
# Use configuration in tool logic
Dynamic Resource Loading
DESCRIPTION "Process documents from a folder"
let file_list = GET "documents/index.txt"
let files = SPLIT(file_list, "\n")
FOR EACH file IN files {
let content = GET "documents/" + file
# Process each document
}
Best Practices
Always check for empty results to verify GET returned content successfully. Use relative paths rather than hardcoding absolute paths to maintain portability. Handle binary files carefully since text conversion may be lossy for non-text content. Cache frequently used files in BOT_MEMORY to avoid repeated retrieval operations. Validate external URLs and ensure HTTPS is used for sensitive data transfers. Log access failures to track missing or inaccessible files for debugging purposes.
Limitations
The GET keyword is a read-only operation and cannot write files. Binary files are converted to text which may corrupt data that isn’t text-based. No streaming support exists, meaning the entire file loads into memory at once. Path traversal is blocked for security, and system directories cannot be accessed under any circumstances.
Examples
Document Summarization Tool
PARAM doc_path AS string LIKE "reports/annual.pdf" DESCRIPTION "Document to summarize"
DESCRIPTION "Summarizes a document"
let content = GET doc_path
if (content == "") {
TALK "Document not found: " + doc_path
} else {
# Set document as context for system AI
SET CONTEXT "document", content
TALK "I've loaded the document. What would you like to know about it?"
}
Data Processing Tool
PARAM data_file AS string LIKE "data/sales.csv" DESCRIPTION "Data file to process"
DESCRIPTION "Analyzes sales data"
let csv_data = GET data_file
# Set data as context for system AI
SET CONTEXT "sales_data", csv_data
TALK "I've loaded the sales data. What analysis would you like me to perform?"
Security Considerations
Never GET files with user-controlled paths directly without validation. Always validate and sanitize path inputs before passing them to GET. Use allowlists for acceptable file paths when possible. Log all file access attempts for security auditing, and monitor for unusual access patterns that might indicate attempted exploitation.
Summary
The GET keyword provides essential file retrieval capabilities for BASIC tools, enabling access to documents, configuration, and external resources while maintaining security through path validation and sandboxing.
External APIs
BotServer enables integration with external APIs through BASIC scripts, allowing bots to connect with third-party services and extend functionality beyond built-in capabilities.
Overview
External API integration in BotServer is achieved through:
- The GET keyword for HTTP/HTTPS requests
- LLM function calling for API interactions
- BASIC script logic for response processing
- Bot memory for storing API credentials and state
HTTP Requests with GET
The primary method for calling external APIs is the GET keyword:
# Fetch data from an external API
let response = GET "https://api.example.com/data"
# Process the response
let parsed = LLM "Extract the key information from this JSON: " + response
TALK parsed
Supported Protocols
- HTTP: Basic unencrypted requests
- HTTPS: Secure encrypted requests (recommended)
API Response Handling
JSON Responses
Most modern APIs return JSON data:
let weather = GET "https://api.weather.com/current?city=Seattle"
# Response: {"temp": 65, "conditions": "cloudy"}
let report = LLM "Create a weather report from: " + weather
TALK report
Text Responses
Plain text responses are used directly:
let quote = GET "https://api.quotes.com/daily"
TALK "Quote of the day: " + quote
Authentication Patterns
API Key in URL
let api_key = GET BOT MEMORY "weather_api_key"
let url = "https://api.weather.com/data?key=" + api_key
let data = GET url
Bearer Token (via Headers)
Currently, BotServer’s GET keyword doesn’t support custom headers directly. For APIs requiring Bearer tokens or custom headers, you need to:
- Use proxy endpoints that add authentication
- Or use APIs that support key-in-URL authentication
Common Integration Patterns
Weather Service
PARAM city AS string LIKE "Seattle" DESCRIPTION "City for weather"
DESCRIPTION "Gets current weather for a city"
let api_key = GET BOT MEMORY "openweather_key"
let url = "https://api.openweathermap.org/data/2.5/weather?q=" + city + "&appid=" + api_key
let response = GET url
let weather = LLM "Describe the weather based on: " + response
TALK weather
News API
DESCRIPTION "Fetches latest news headlines"
let api_key = GET BOT MEMORY "newsapi_key"
let url = "https://newsapi.org/v2/top-headlines?country=us&apiKey=" + api_key
let news = GET url
let summary = LLM "Summarize the top 3 news stories from: " + news
TALK summary
Currency Exchange
PARAM amount AS number LIKE 100 DESCRIPTION "Amount to convert"
PARAM from_currency AS string LIKE "USD" DESCRIPTION "Source currency"
PARAM to_currency AS string LIKE "EUR" DESCRIPTION "Target currency"
DESCRIPTION "Converts currency using exchange rates"
let url = "https://api.exchangerate-api.com/v4/latest/" + from_currency
let rates = GET url
' Parse rates and calculate conversion
let rate = PARSE_JSON(rates, "rates." + to_currency)
let converted = amount * rate
TALK amount + " " + from_currency + " = " + converted + " " + to_currency
Error Handling
Network Failures
let response = GET "https://api.example.com/data"
if (response == "") {
TALK "Unable to reach the service. Please try again later."
} else {
# Process successful response
TALK response
}
API Errors
let data = GET "https://api.service.com/endpoint"
if (data CONTAINS "error") {
TALK "The service returned an error. Please check your request."
} else {
# Process valid data
}
Rate Limiting Considerations
When integrating with external APIs:
- Respect Rate Limits: Most APIs have usage limits
- Cache Responses: Use BOT_MEMORY to store frequently accessed data
- Batch Requests: Combine multiple data needs into single calls
- Handle 429 Errors: Too Many Requests responses
Caching Pattern
# Check cache first
let cached = GET BOT MEMORY "weather_cache"
let cache_time = GET BOT MEMORY "weather_cache_time"
let current_time = NOW()
let age = current_time - cache_time
if (cached != "" && age < 3600) {
# Use cached data (less than 1 hour old)
TALK cached
} else {
# Fetch fresh data
let fresh = GET "https://api.weather.com/current"
SET BOT MEMORY "weather_cache", fresh
SET BOT MEMORY "weather_cache_time", current_time
TALK fresh
}
Security Best Practices
Credential Storage
# Store API keys in bot memory, not in scripts
let api_key = GET BOT MEMORY "api_key"
# Never hardcode credentials
# BAD: let key = "sk-1234567890abcdef"
# GOOD: let key = GET BOT MEMORY "api_key"
Input Validation
PARAM user_input AS string LIKE "Seattle" DESCRIPTION "User provided input"
# Sanitize before using in URLs
let safe_input = REPLACE(user_input, " ", "%20")
let url = "https://api.example.com/search?q=" + safe_input
Limitations
Current limitations for external API integration:
- No POST/PUT/DELETE: Only GET requests supported
- No Custom Headers: Cannot set Authorization headers directly
- No Request Body: Cannot send JSON payloads
- Timeout Fixed: 30-second timeout cannot be configured
- No Streaming: Responses fully buffered before processing
Workarounds
For POST Requests
Create a proxy service that:
- Accepts GET requests
- Converts to POST internally
- Returns the response
For Complex APIs
Use the LLM to:
- Interpret API responses
- Extract relevant data
- Format for user consumption
Example: Complete API Integration
PARAM location AS string LIKE "New York" DESCRIPTION "Location to check"
DESCRIPTION "Provides weather and news for a location"
# Weather API
let weather_key = GET BOT MEMORY "weather_api_key"
let weather_url = "https://api.openweathermap.org/data/2.5/weather?q=" + location + "&appid=" + weather_key
let weather = GET weather_url
# News API
let news_key = GET BOT MEMORY "news_api_key"
let news_url = "https://newsapi.org/v2/everything?q=" + location + "&apiKey=" + news_key
let news = GET news_url
# Present the information
TALK "Here's your local update for " + location + ":"
TALK "Weather: " + weather
TALK "Latest news: " + news
Best Practices
- Store Keys Securely: Use BOT_MEMORY for API credentials
- Handle Failures Gracefully: Always check for empty responses
- Cache When Possible: Reduce API calls and improve response time
- Document API Usage: Comment which APIs your tools depend on
- Monitor Usage: Track API calls to avoid exceeding limits
- Use HTTPS: Always prefer secure connections
- Validate Inputs: Sanitize user inputs before including in URLs
Summary
While BotServer’s external API capabilities are currently limited to GET requests, creative use of response processing and bot memory for state management enables integration with many third-party services. For more complex API interactions, consider using proxy services or custom integrations.
Creating an LLM REST Server
General Bots offers an incredibly simple way to transform a Large Language Model (LLM) into a fully functional REST API server. With just a few lines of BASIC-like syntax, you can create sophisticated AI-powered applications.
Overview
By defining PARAM declarations and a DESCRIPTION in your .bas file, General Bots automatically:
- Creates REST API endpoints callable by the LLM as tools
- Generates OpenAI-compatible function calling schemas
- Generates MCP (Model Context Protocol) tool definitions
- Handles conversation state and context management
Basic Structure
Every LLM-callable tool follows this structure:
PARAM parameter_name AS type LIKE "example" DESCRIPTION "What this parameter is for"
DESCRIPTION "What this tool does. Called when user wants to [action]."
' Your business logic here
Example: Store Chatbot
Here’s how easy it is to create a chatbot for a store:
PARAM operator AS number LIKE 12312312
DESCRIPTION "Operator code."
DESCRIPTION "It is a WebService of GB."
products = FIND "products.csv"
BEGIN SYSTEM PROMPT
You must act as a chatbot that will assist a store attendant by
following these rules: Whenever the attendant places an order, it must
include the table and the customer's name. Example: A 400ml Pineapple
Caipirinha for Rafael at table 10. Orders are based on the products and
sides from this product menu: ${JSON.stringify(products)}.
For each order placed, return a JSON containing the product name, the
table, and a list of sides with their respective ids.
END SYSTEM PROMPT
That’s it! With just this simple BASIC code, you’ve created a fully functional LLM-powered chatbot that can handle complex order processing.
REST API Endpoints
The system automatically generates REST API endpoints for your dialogs.
Starting a Conversation
GET http://localhost:1111/llm-server/dialogs/start?operator=123&userSystemId=999
This returns a Process ID (PID), a number like 24795078551392. This PID should be passed within the call chain for maintaining conversation context.
Talking to the Bot
Once you have the PID, you can interact with the LLM:
GET http://localhost:1111/llm-server/dk/talk?pid=4893749837&text=add%20soda
This call acts like talking to the LLM, but it can be used for anything that General Bots can do in a robotic conversation between systems mediated by LLM. The return will be JSON (or any format specified in your BEGIN SYSTEM PROMPT).
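Putting both calls together with the store example above (the response fields follow the rules set in BEGIN SYSTEM PROMPT and are illustrative):
# Start a session and note the returned PID
curl "http://localhost:1111/llm-server/dialogs/start?operator=123&userSystemId=999"
# → 24795078551392
# Place an order in the same conversation
curl "http://localhost:1111/llm-server/dk/talk?pid=24795078551392&text=A%20400ml%20Pineapple%20Caipirinha%20for%20Rafael%20at%20table%2010"
# → {"product": "Pineapple Caipirinha 400ml", "table": 10, "sides": []}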
Example: Enrollment Process API
Creating a REST API server for any business process is equally straightforward:
PARAM name AS string LIKE "João Silva"
DESCRIPTION "Required full name of the individual."
PARAM birthday AS date LIKE "23/09/2001"
DESCRIPTION "Required birth date of the individual in DD/MM/YYYY format."
PARAM email AS string LIKE "joao.silva@example.com"
DESCRIPTION "Required email address for contact purposes."
PARAM personalid AS integer LIKE "12345678900"
DESCRIPTION "Required Personal ID number of the individual (only numbers)."
PARAM address AS string LIKE "Rua das Flores, 123, São Paulo, SP"
DESCRIPTION "Required full address of the individual."
DESCRIPTION "This is the enrollment process, called when the user wants to enroll. Once all information is collected, confirm the details and inform them that their enrollment request has been successfully submitted. Provide a polite and professional tone throughout the interaction."
SAVE "enrollments.csv", id, name, birthday, email, cpf, rg, address
This creates a full-fledged enrollment system with:
- Data validation
- User interaction
- Data storage
- Automatic REST API endpoint
The system automatically generates a REST API endpoint that is called by LLM as a tool:
GET http://api.pragmatismo.cloud/llm-server/dialogs/enrollment?birthday=...&name=...
Generated Tool Schemas
MCP Format
For each tool, General Bots generates MCP-compatible schemas:
{
"name": "enrollment",
"description": "This is the enrollment process...",
"input_schema": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Required full name of the individual.",
"example": "João Silva"
},
"birthday": {
"type": "string",
"description": "Required birth date...",
"example": "23/09/2001"
}
},
"required": ["name", "birthday", "email", "personalid", "address"]
}
}
OpenAI Format
Also generates OpenAI function calling format:
{
"type": "function",
"function": {
"name": "enrollment",
"description": "This is the enrollment process...",
"parameters": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "Required full name of the individual."
}
},
"required": ["name", "birthday", "email", "personalid", "address"]
}
}
}
Parameter Types
| Type | Description | Example |
|---|---|---|
| string | Text values | "John Smith" |
| number | Numeric values | 42, 3.14 |
| integer | Whole numbers | 100 |
| date | Date values | "2024-01-15" |
| boolean | True/false | true |
Advanced: External API Integration
You can combine LLM tools with external API calls:
PARAM location AS string LIKE "Seattle"
DESCRIPTION "City for weather lookup"
DESCRIPTION "Gets current weather for a city"
let api_key = GET BOT MEMORY "openweather_key"
let url = "https://api.openweathermap.org/data/2.5/weather?q=" + location + "&appid=" + api_key
let response = GET url
let weather = LLM "Describe the weather based on: " + response
TALK weather
Best Practices
- Clear Descriptions: Write detailed DESCRIPTION text - this is what the LLM uses to decide when to call your tool.
- Good Examples: The LIKE clause provides examples that help both the LLM and API consumers understand expected values.
- Validation: Add validation logic to handle edge cases:
PARAM email AS string LIKE "user@example.com"
DESCRIPTION "Email address"
IF INSTR(email, "@") = 0 THEN
TALK "Please provide a valid email address."
RETURN
END IF
- Error Handling: Always handle potential errors gracefully:
result = GET "https://api.example.com/data"
IF result.error THEN
TALK "Unable to fetch data. Please try again."
RETURN
END IF
- Secure Credentials: Use BOT MEMORY for API keys:
api_key = GET BOT MEMORY "my_api_key"
Deployment
Once your .bas file is saved in the .gbdialog folder, General Bots automatically:
- Compiles the tool definition
- Generates the REST endpoints
- Makes it available to the LLM as a callable tool
- Updates when you modify the file
No additional configuration or deployment steps are required!
See Also
- PARAM Declaration - Detailed PARAM syntax
- Tool Definition - Complete tool definition reference
- MCP Format - MCP schema details
- OpenAI Format - OpenAI function calling format
- External APIs - Integrating external services
NVIDIA GPU Setup for LXC Containers
This guide covers setting up NVIDIA GPU passthrough for BotServer running in LXC containers, enabling hardware acceleration for local LLM inference.
Prerequisites
- NVIDIA GPU (RTX 3060 or better with 12GB+ VRAM recommended)
- NVIDIA drivers installed on the host system
- LXD/LXC installed
- CUDA-capable GPU
LXD Configuration (Interactive Setup)
When initializing LXD, use these settings:
sudo lxd init
Answer the prompts as follows:
- Would you like to use LXD clustering? → no
- Do you want to configure a new storage pool? → no (will create /generalbots later)
- Would you like to connect to a MAAS server? → no
- Would you like to create a new local network bridge? → yes
- What should the new bridge be called? → lxdbr0
- What IPv4 address should be used? → auto
- What IPv6 address should be used? → auto
- Would you like the LXD server to be available over the network? → no
- Would you like stale cached images to be updated automatically? → no
- Would you like a YAML "lxd init" preseed to be printed? → no
Storage Configuration
- Storage backend name: → default
- Storage backend driver: → zfs
- Create a new ZFS pool? → yes
NVIDIA GPU Configuration
On the Host System
Create a GPU profile and attach it to your container:
# Create GPU profile
lxc profile create gpu
# Add GPU device to profile
lxc profile device add gpu gpu gpu gputype=physical
# Apply GPU profile to your container
lxc profile add gb-system gpu
Inside the Container
Configure NVIDIA driver version pinning and install drivers:
- Pin NVIDIA driver versions to ensure stability:
cat > /etc/apt/preferences.d/nvidia-drivers << 'EOF'
Package: *nvidia*
Pin: version 560.35.05-1
Pin-Priority: 1001
Package: cuda-drivers*
Pin: version 560.35.05-1
Pin-Priority: 1001
Package: libcuda*
Pin: version 560.35.05-1
Pin-Priority: 1001
Package: libxnvctrl*
Pin: version 560.35.05-1
Pin-Priority: 1001
Package: libnv*
Pin: version 560.35.05-1
Pin-Priority: 1001
EOF
- Install NVIDIA drivers and CUDA toolkit:
# Update package lists
apt update
# Install NVIDIA driver and nvidia-smi
apt install -y nvidia-driver nvidia-smi
# Add CUDA repository
wget https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/cuda-keyring_1.1-1_all.deb
dpkg -i cuda-keyring_1.1-1_all.deb
# Install CUDA toolkit
apt-get update
apt-get -y install cuda-toolkit-12-8
apt-get install -y cuda-drivers
Verify GPU Access
After installation, verify GPU is accessible:
# Check GPU is visible
nvidia-smi
# Should show your GPU with driver version 560.35.05
Configure BotServer for GPU
Update your bot’s config.csv to use GPU acceleration:
name,value
llm-server-gpu-layers,35
The number of layers depends on your GPU memory:
- RTX 3060 (12GB): 20-35 layers
- RTX 3070 (8GB): 15-25 layers
- RTX 4070 (12GB): 30-40 layers
- RTX 4090 (24GB): 50-99 layers
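While tuning llm-server-gpu-layers, watch VRAM headroom and raise the layer count until memory is nearly full:
# Observe VRAM usage while the model is loaded
nvidia-smi --query-gpu=memory.used,memory.total --format=csv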
Troubleshooting
GPU Not Detected
If nvidia-smi doesn’t show the GPU:
- Check host GPU drivers:
# On host
nvidia-smi
lxc config device list gb-system
- Verify GPU passthrough:
# Inside container
ls -la /dev/nvidia*
- Check kernel modules:
lsmod | grep nvidia
Driver Version Mismatch
If you encounter driver version conflicts:
- Ensure host and container use the same driver version
- Remove the version pinning file and install matching drivers:
rm /etc/apt/preferences.d/nvidia-drivers
apt update
apt install nvidia-driver-560
CUDA Library Issues
If CUDA libraries aren’t found:
# Add CUDA to library path
echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf.d/cuda.conf
ldconfig
# Add to PATH
echo 'export PATH=/usr/local/cuda/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
Custom llama.cpp Compilation
If you need custom CPU/GPU optimizations or specific hardware support, compile llama.cpp from source:
Prerequisites
sudo apt update
sudo apt install build-essential cmake git
Compilation Steps
# Clone llama.cpp repository
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Create build directory
mkdir build
cd build
# Configure with CUDA support
cmake .. -DLLAMA_CUDA=ON -DLLAMA_CURL=OFF
# Compile using all available cores
make -j$(nproc)
Compilation Options
For different hardware configurations:
# CPU-only build (no GPU)
cmake .. -DLLAMA_CURL=OFF
# CUDA with specific compute capability
cmake .. -DLLAMA_CUDA=ON -DLLAMA_CUDA_FORCE_COMPUTE=75
# ROCm for AMD GPUs
cmake .. -DLLAMA_HIPBLAS=ON
# Metal for Apple Silicon
cmake .. -DLLAMA_METAL=ON
# AVX2 optimizations for modern CPUs
cmake .. -DLLAMA_AVX2=ON
# F16C for half-precision support
cmake .. -DLLAMA_F16C=ON
After Compilation
# Copy compiled binary to BotServer
cp bin/llama-server /path/to/botserver-stack/bin/llm/
# Update config.csv to use custom build
llm-server-path,/path/to/botserver-stack/bin/llm/
Benefits of Custom Compilation
- Hardware-specific optimizations for your exact CPU/GPU
- Custom CUDA compute capabilities for newer GPUs
- AVX/AVX2/AVX512 instructions for faster CPU inference
- Reduced binary size by excluding unused features
- Support for experimental features not in releases
Performance Optimization
Memory Settings
For optimal LLM performance with GPU:
name,value
llm-server-gpu-layers,35
llm-server-mlock,true
llm-server-no-mmap,false
llm-server-ctx-size,4096
Multiple GPUs
For systems with multiple GPUs, specify which GPU to use:
# List available GPUs on the host
nvidia-smi -L
# Attach specific GPUs to the profile by index
lxc profile device add gpu gpu0 gpu gputype=physical id=0
lxc profile device add gpu gpu1 gpu gputype=physical id=1
Benefits of GPU Acceleration
With GPU acceleration enabled:
- 5-10x faster inference compared to CPU
- Higher context sizes possible (8K-32K tokens)
- Real-time responses even with large models
- Lower CPU usage for other tasks
- Support for larger models (13B, 30B parameters)
Next Steps
- Installation Guide - Complete BotServer setup
- Quick Start - Create your first bot
- Configuration Reference - All GPU-related parameters
Chapter 10: REST Endpoints
HTTP API endpoints for integrating with BotServer.
Overview
BotServer exposes REST endpoints organized by functional area. All endpoints follow consistent patterns for authentication, pagination, and error handling.
Base URL
http://localhost:8000/api/v1
Authentication
Authorization: Bearer <token>
API Categories
| Category | Prefix | Description |
|---|---|---|
| User APIs | /api/user/* | Personal settings, profile, preferences |
| Admin APIs | /api/admin/* | Organization management (requires admin role) |
| Files | /files/* | Drive operations |
| Chat | /chat/* | Conversations and messages |
User vs Admin Endpoints
The API separates user-level and admin-level operations:
User Endpoints (/api/user/*):
- Personal profile and settings
- User’s own files and data
- Individual preferences
- Accessible by all authenticated users
Admin Endpoints (/api/admin/*):
- Organization-wide settings
- User management
- Group management
- DNS, billing, audit logs
- Requires admin role
Quick Example
curl -X POST http://localhost:8000/api/v1/chat \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
-d '{"message": "Hello", "session_id": "abc123"}'
Response Format
{
"success": true,
"data": { ... },
"error": null
}
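A minimal Python helper for working with this envelope (sketch only; the base URL and token are placeholders, and the uniform success/data/error shape is assumed from the example above):
import requests

BASE = "http://localhost:8000/api/v1"
TOKEN = "your-session-token"  # placeholder

def api_post(path, payload):
    # POST to the API and unwrap the {success, data, error} envelope
    r = requests.post(f"{BASE}{path}", json=payload,
                      headers={"Authorization": f"Bearer {TOKEN}"})
    r.raise_for_status()
    body = r.json()
    if not body.get("success"):
        raise RuntimeError(body.get("error") or "unknown API error")
    return body.get("data")

data = api_post("/chat", {"message": "Hello", "session_id": "abc123"})
print(data)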
Chapter Contents
- Files API - Upload/download
- Document Processing - Text extraction
- Users API - User management
- User Security API - 2FA, sessions
- Groups API - Group management
- Conversations API - Chat sessions
- Calendar API - Scheduling
- Tasks API - Task management
- Storage API - Object storage
- Analytics API - Metrics
- Admin API - Administration
- AI API - LLM endpoints
- Example Integrations - Code samples
See Also
- LLM Tools - Tool definitions
- Authentication - Security
- Permissions Matrix - Access control
Files API Reference
Complete file and document management operations including upload, download, copy, move, search, sharing, and synchronization.
Overview
The Files API provides comprehensive file management capabilities built on top of S3-compatible storage. All file operations support both single files and folders with recursive operations.
Base Path: /files
Authentication
All endpoints require authentication. Include session token in headers:
Authorization: Bearer <token>
File Operations
List Files
List files and folders in a bucket or path.
Endpoint: GET /files/list
Query Parameters:
- bucket (optional) - Bucket name
- path (optional) - Folder path
Response:
{
"success": true,
"data": [
{
"name": "document.pdf",
"path": "/documents/document.pdf",
"is_dir": false,
"size": 1048576,
"modified": "2024-01-15T10:30:00Z",
"icon": "📄"
},
{
"name": "images",
"path": "/images",
"is_dir": true,
"size": null,
"modified": "2024-01-15T09:00:00Z",
"icon": "📁"
}
]
}
Example:
curl -X GET "http://localhost:3000/files/list?bucket=my-bucket&path=/documents" \
-H "Authorization: Bearer <token>"
Read File
Read file content from storage.
Endpoint: POST /files/read
Request Body:
{
"bucket": "my-bucket",
"path": "/documents/file.txt"
}
Response:
{
"content": "File content here..."
}
Example:
curl -X POST "http://localhost:3000/files/read" \
-H "Authorization: Bearer <token>" \
-H "Content-Type: application/json" \
-d '{"bucket":"my-bucket","path":"/file.txt"}'
Get File Contents
Alias for read file with alternative naming.
Endpoint: POST /files/getContents
Same parameters and response as /files/read.
Write File
Write or update file content.
Endpoint: POST /files/write
Request Body:
{
"bucket": "my-bucket",
"path": "/documents/file.txt",
"content": "New file content"
}
Response:
{
"success": true,
"message": "File written successfully"
}
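Example (Python; assumes the same local host and placeholder token as the curl examples):
import requests

resp = requests.post(
    "http://localhost:3000/files/write",
    headers={"Authorization": "Bearer <token>"},
    json={
        "bucket": "my-bucket",
        "path": "/documents/file.txt",
        "content": "New file content",
    },
)
print(resp.json())  # {'success': True, 'message': 'File written successfully'}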
Save File
Alias for write file.
Endpoint: POST /files/save
Same parameters and response as /files/write.
Upload File
Upload file to storage.
Endpoint: POST /files/upload
Request Body:
{
"bucket": "my-bucket",
"path": "/documents/upload.pdf",
"content": "base64_encoded_content_or_text"
}
Response:
{
"success": true,
"message": "File uploaded successfully"
}
Download File
Download file from storage.
Endpoint: POST /files/download
Request Body:
{
"bucket": "my-bucket",
"path": "/documents/file.pdf"
}
Response:
{
"content": "file_content"
}
Copy File
Copy file or folder to another location.
Endpoint: POST /files/copy
Request Body:
{
"source_bucket": "my-bucket",
"source_path": "/documents/original.pdf",
"dest_bucket": "my-bucket",
"dest_path": "/backup/copy.pdf"
}
Response:
{
"success": true,
"message": "File copied successfully"
}
Move File
Move file or folder to another location.
Endpoint: POST /files/move
Request Body:
{
"source_bucket": "my-bucket",
"source_path": "/documents/file.pdf",
"dest_bucket": "archive-bucket",
"dest_path": "/archived/file.pdf"
}
Response:
{
"success": true,
"message": "File moved successfully"
}
Note: Move operation copies the file and then deletes the source.
Delete File
Delete file or folder.
Endpoint: POST /files/delete
Request Body:
{
"bucket": "my-bucket",
"path": "/documents/file.pdf"
}
Response:
{
"success": true,
"message": "Deleted successfully"
}
Note: If path ends with /, all objects with that prefix are deleted (recursive folder deletion).
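Example (Python; host and token are placeholders, and the trailing slash triggers the recursive prefix deletion described in the note above):
import requests

resp = requests.post(
    "http://localhost:3000/files/delete",
    headers={"Authorization": "Bearer <token>"},
    json={"bucket": "my-bucket", "path": "/documents/old-reports/"},
)
print(resp.json())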
Create Folder
Create a new folder.
Endpoint: POST /files/createFolder
Request Body:
{
"bucket": "my-bucket",
"path": "/documents",
"name": "new-folder"
}
Response:
{
"success": true,
"message": "Folder created successfully"
}
Alternative Endpoint: POST /files/create-folder (dash notation)
List Folder Contents
List contents of a specific folder.
Endpoint: POST /files/dirFolder
Request Body:
{
"bucket": "my-bucket",
"path": "/documents"
}
Response:
[
{
"name": "file1.pdf",
"path": "/documents/file1.pdf",
"is_dir": false,
"size": 1024,
"modified": "2024-01-15T10:30:00Z",
"icon": "📄"
}
]
Search and Discovery
Search Files
Search for files across buckets.
Endpoint: GET /files/search
Query Parameters:
- bucket (optional) - Limit search to specific bucket
- query (required) - Search term
- file_type (optional) - File extension filter (e.g., “.pdf”)
Response:
[
{
"name": "matching-file.pdf",
"path": "/documents/matching-file.pdf",
"is_dir": false,
"size": 2048576,
"modified": "2024-01-15T10:30:00Z",
"icon": "📄"
}
]
Example:
curl -X GET "http://localhost:3000/files/search?query=report&file_type=.pdf" \
-H "Authorization: Bearer <token>"
Recent Files
Get recently modified files.
Endpoint: GET /files/recent
Query Parameters:
- bucket (optional) - Filter by bucket
Response:
[
{
"name": "recent-file.txt",
"path": "/documents/recent-file.txt",
"is_dir": false,
"size": 1024,
"modified": "2024-01-15T14:30:00Z",
"icon": "📃"
}
]
Note: Returns up to 50 most recently modified files, sorted by modification date descending.
Favorite Files
List user’s favorite files.
Endpoint: GET /files/favorite
Response:
[]
Note: Currently returns empty array. Favorite functionality to be implemented.
Sharing and Permissions
Share Folder
Share folder with other users.
Endpoint: POST /files/shareFolder
Request Body:
{
"bucket": "my-bucket",
"path": "/documents/shared",
"users": ["user1@example.com", "user2@example.com"],
"permissions": "read-write"
}
Response:
{
"share_id": "550e8400-e29b-41d4-a716-446655440000",
"url": "https://share.example.com/550e8400-e29b-41d4-a716-446655440000",
"expires_at": "2024-01-22T10:30:00Z"
}
List Shared Files
Get files and folders shared with user.
Endpoint: GET /files/shared
Response:
[]
Get Permissions
Get permissions for file or folder.
Endpoint: GET /files/permissions
Query Parameters:
- bucket (required) - Bucket name
- path (required) - File/folder path
Response:
{
"bucket": "my-bucket",
"path": "/documents/file.pdf",
"permissions": {
"read": true,
"write": true,
"delete": true,
"share": true
},
"shared_with": []
}
Storage Management
Get Quota
Check storage quota information.
Endpoint: GET /files/quota
Response:
{
"total_bytes": 100000000000,
"used_bytes": 45678901234,
"available_bytes": 54321098766,
"percentage_used": 45.68
}
Example:
curl -X GET "http://localhost:3000/files/quota" \
-H "Authorization: Bearer <token>"
Synchronization
Sync Status
Get current synchronization status.
Endpoint: GET /files/sync/status
Response:
{
"status": "idle",
"last_sync": "2024-01-15T10:30:00Z",
"files_synced": 0,
"bytes_synced": 0
}
Status values:
- idle - No sync in progress
- syncing - Sync in progress
- error - Sync error occurred
- paused - Sync paused
Start Sync
Start file synchronization.
Endpoint: POST /files/sync/start
Response:
{
"success": true,
"message": "Sync started"
}
Stop Sync
Stop file synchronization.
Endpoint: POST /files/sync/stop
Response:
{
"success": true,
"message": "Sync stopped"
}
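Example (Python): start a sync and poll the status endpoint until the server reports a terminal state (host and token are placeholders; the 5-second polling interval is an arbitrary choice):
import time
import requests

BASE = "http://localhost:3000/files"
HEADERS = {"Authorization": "Bearer <token>"}

requests.post(f"{BASE}/sync/start", headers=HEADERS)
while True:
    status = requests.get(f"{BASE}/sync/status", headers=HEADERS).json()
    if status["status"] in ("idle", "error"):
        break
    time.sleep(5)
print(status)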
File Icons
Files are automatically assigned icons based on extension:
| Extension | Icon | Type |
|---|---|---|
| .bas | ⚙️ | BASIC script |
| .ast | 🔧 | AST file |
| .csv | 📊 | Spreadsheet |
| .gbkb | 📚 | Knowledge base |
| .json | 🔖 | JSON data |
| .txt, .md | 📃 | Text |
| .pdf | 📕 | PDF document |
| .zip, .tar, .gz | 📦 | Archive |
| .jpg, .png, .gif | 🖼️ | Image |
| folder | 📁 | Directory |
| .gbai | 🤖 | Bot package |
| default | 📄 | Generic file |
Error Handling
Common error responses:
Service Unavailable:
{
"error": "S3 service not available"
}
Status: 503
File Not Found:
{
"error": "Failed to read file: NoSuchKey"
}
Status: 500
Invalid UTF-8:
{
"error": "File is not valid UTF-8"
}
Status: 500
Best Practices
- Large Files: For files > 5MB, consider chunked uploads
- Batch Operations: Use batch endpoints when operating on multiple files
- Path Naming: Use forward slashes, avoid special characters
- Permissions: Always check permissions before operations
- Error Handling: Implement retry logic for transient failures
- Quotas: Monitor quota usage to prevent storage exhaustion
Examples
Upload and Share Workflow
// 1. Upload file
const uploadResponse = await fetch('/files/upload', {
method: 'POST',
headers: {
'Authorization': 'Bearer token',
'Content-Type': 'application/json'
},
body: JSON.stringify({
bucket: 'my-bucket',
path: '/documents/report.pdf',
content: fileContent
})
});
// 2. Share with team
const shareResponse = await fetch('/files/shareFolder', {
method: 'POST',
headers: {
'Authorization': 'Bearer token',
'Content-Type': 'application/json'
},
body: JSON.stringify({
bucket: 'my-bucket',
path: '/documents',
users: ['team@example.com'],
permissions: 'read-write'
})
});
const { url } = await shareResponse.json();
console.log('Share URL:', url);
Search and Download
import requests

# Search for files
response = requests.get(
    'http://localhost:3000/files/search',
    params={'query': 'report', 'file_type': '.pdf'},
    headers={'Authorization': 'Bearer token'}
)
files = response.json()

# Download first result
if files:
    download_response = requests.post(
        'http://localhost:3000/files/download',
        json={
            'bucket': 'my-bucket',
            'path': files[0]['path']
        },
        headers={'Authorization': 'Bearer token'}
    )
    content = download_response.json()['content']
    # Content arrives as a string; binary formats such as PDF may be
    # base64-encoded and need decoding before writing to disk
    with open('downloaded.pdf', 'w') as f:
        f.write(content)
Next Steps
- Document Processing API - Convert and merge documents
- Storage API - Advanced storage operations
- Backup API - Backup and restore
Document Processing API
BotServer provides RESTful endpoints for processing, extracting, and analyzing various document formats including PDFs, Office documents, and images.
Overview
The Document Processing API enables:
- Text extraction from documents
- OCR for scanned documents
- Metadata extraction
- Document conversion
- Content analysis and summarization
Base URL
http://localhost:8080/api/v1/documents
Authentication
All Document Processing API requests require authentication:
Authorization: Bearer <token>
Endpoints
Upload Document
POST /upload
Upload a document for processing.
Request:
- Method: POST
- Content-Type: multipart/form-data
Form Data:
- file - The document file
- process_options - JSON string of processing options
Example Request:
curl -X POST \
-H "Authorization: Bearer token123" \
-F "file=@document.pdf" \
-F 'process_options={"extract_text":true,"extract_metadata":true}' \
http://localhost:8080/api/v1/documents/upload
Response:
{
"document_id": "doc_abc123",
"filename": "document.pdf",
"size_bytes": 2048576,
"mime_type": "application/pdf",
"status": "processing",
"uploaded_at": "2024-01-15T10:00:00Z"
}
Process Document
POST /process
Process an already uploaded document.
Request Body:
{
"document_id": "doc_abc123",
"operations": [
"extract_text",
"extract_metadata",
"generate_summary",
"extract_entities"
],
"options": {
"language": "en",
"ocr_enabled": true,
"chunk_size": 1000
}
}
Response:
{
"document_id": "doc_abc123",
"process_id": "prc_xyz789",
"status": "processing",
"estimated_completion": "2024-01-15T10:02:00Z"
}
Get Processing Status
GET /process/{process_id}/status
Check the status of document processing.
Response:
{
"process_id": "prc_xyz789",
"document_id": "doc_abc123",
"status": "completed",
"progress": 100,
"completed_at": "2024-01-15T10:01:30Z",
"results_available": true
}
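Example (Python): a small helper that polls this endpoint until processing finishes (host and token are placeholders; completed and failed are assumed here to be the terminal states, although only completed is documented above):
import time
import requests

BASE = "http://localhost:8080/api/v1/documents"
HEADERS = {"Authorization": "Bearer token123"}

def wait_for_process(process_id, poll_seconds=2):
    # Poll the status endpoint until processing reaches a terminal state
    while True:
        r = requests.get(f"{BASE}/process/{process_id}/status",
                         headers=HEADERS)
        status = r.json()
        if status["status"] in ("completed", "failed"):
            return status
        time.sleep(poll_seconds)

print(wait_for_process("prc_xyz789"))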
Get Extracted Text
GET /documents/{document_id}/text
Retrieve extracted text from a processed document.
Query Parameters:
- page - Specific page number (optional)
- format - Output format: plain, markdown, html
Response:
{
"document_id": "doc_abc123",
"text": "This is the extracted text from the document...",
"pages": 10,
"word_count": 5420,
"language": "en"
}
Get Document Metadata
GET /documents/{document_id}/metadata
Retrieve metadata from a document.
Response:
{
"document_id": "doc_abc123",
"metadata": {
"title": "Annual Report 2024",
"author": "John Doe",
"created_date": "2024-01-10T08:00:00Z",
"modified_date": "2024-01-14T16:30:00Z",
"pages": 10,
"producer": "Microsoft Word",
"keywords": ["annual", "report", "finance"],
"custom_properties": {
"department": "Finance",
"confidentiality": "Internal"
}
}
}
Generate Summary
POST /documents/{document_id}/summarize
Generate an AI summary of the document.
Request Body:
{
"type": "abstractive",
"length": "medium",
"focus_areas": ["key_points", "conclusions"],
"language": "en"
}
Response:
{
"document_id": "doc_abc123",
"summary": "This document discusses the annual financial performance...",
"key_points": [
"Revenue increased by 15%",
"New market expansion successful",
"Operating costs reduced"
],
"summary_length": 250
}
Extract Entities
POST /documents/{document_id}/entities
Extract named entities from the document.
Request Body:
{
"entity_types": ["person", "organization", "location", "date", "money"],
"confidence_threshold": 0.7
}
Response:
{
"document_id": "doc_abc123",
"entities": [
{
"text": "John Smith",
"type": "person",
"confidence": 0.95,
"occurrences": 5
},
{
"text": "New York",
"type": "location",
"confidence": 0.88,
"occurrences": 3
},
{
"text": "$1.5 million",
"type": "money",
"confidence": 0.92,
"occurrences": 2
}
]
}
Convert Document
POST /documents/{document_id}/convert
Convert document to another format.
Request Body:
{
"target_format": "pdf",
"options": {
"compress": true,
"quality": "high",
"page_size": "A4"
}
}
Response:
{
"document_id": "doc_abc123",
"converted_id": "doc_def456",
"original_format": "docx",
"target_format": "pdf",
"download_url": "/api/v1/documents/doc_def456/download"
}
Search Within Document
POST /documents/{document_id}/search
Search for text within a document.
Request Body:
{
"query": "revenue growth",
"case_sensitive": false,
"whole_words": false,
"regex": false
}
Response:
{
"document_id": "doc_abc123",
"matches": [
{
"page": 3,
"line": 15,
"context": "...the company achieved significant revenue growth in Q4...",
"position": 1247
},
{
"page": 7,
"line": 8,
"context": "...projecting continued revenue growth for next year...",
"position": 3892
}
],
"total_matches": 2
}
Split Document
POST /documents/{document_id}/split
Split a document into multiple parts.
Request Body:
{
"method": "by_pages",
"pages_per_split": 5
}
Response:
{
"document_id": "doc_abc123",
"parts": [
{
"part_id": "part_001",
"pages": "1-5",
"download_url": "/api/v1/documents/part_001/download"
},
{
"part_id": "part_002",
"pages": "6-10",
"download_url": "/api/v1/documents/part_002/download"
}
],
"total_parts": 2
}
Merge Documents
POST /documents/merge
Merge multiple documents into one.
Request Body:
{
"document_ids": ["doc_abc123", "doc_def456", "doc_ghi789"],
"output_format": "pdf",
"preserve_metadata": true
}
Response:
{
"merged_document_id": "doc_merged_xyz",
"source_count": 3,
"total_pages": 30,
"download_url": "/api/v1/documents/doc_merged_xyz/download"
}
Supported Formats
Input Formats
- Documents: PDF, DOCX, DOC, ODT, RTF, TXT
- Spreadsheets: XLSX, XLS, ODS, CSV
- Presentations: PPTX, PPT, ODP
- Images: PNG, JPG, JPEG, GIF, BMP, TIFF
- Web: HTML, XML, MARKDOWN
Output Formats
- Plain Text
- Markdown
- HTML
- JSON
- CSV (for tabular data)
Processing Options
OCR Options
{
"ocr_enabled": true,
"ocr_language": "eng",
"ocr_engine": "tesseract",
"preprocessing": {
"deskew": true,
"remove_noise": true,
"enhance_contrast": true
}
}
Text Extraction Options
{
"preserve_formatting": false,
"extract_tables": true,
"extract_images": false,
"chunk_text": true,
"chunk_size": 1000,
"chunk_overlap": 100
}
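To make chunk_size and chunk_overlap concrete, the sketch below reproduces the windowing they describe: fixed-size chunks whose boundaries overlap so text near a cut appears in both neighbors. This is an illustration of the options, not the server's actual implementation.
def chunk_text(text, chunk_size=1000, chunk_overlap=100):
    # Fixed-size windows that advance by chunk_size - chunk_overlap
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = chunk_text("x" * 2500)
print([len(c) for c in chunks])  # [1000, 1000, 700]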
Summary Options
{
"summary_type": "extractive",
"summary_length": "medium",
"bullet_points": true,
"include_keywords": true,
"max_sentences": 5
}
Batch Processing
Submit Batch
POST /batch/process
Process multiple documents in batch.
Request Body:
{
"documents": [
{
"document_id": "doc_001",
"operations": ["extract_text", "summarize"]
},
{
"document_id": "doc_002",
"operations": ["extract_entities"]
}
],
"notify_on_completion": true,
"webhook_url": "https://example.com/webhook"
}
Get Batch Status
GET /batch/{batch_id}/status
Check batch processing status.
Response:
{
"batch_id": "batch_abc123",
"total_documents": 10,
"processed": 7,
"failed": 1,
"pending": 2,
"completion_percentage": 70
}
Error Responses
400 Bad Request
{
"error": "unsupported_format",
"message": "File format .xyz is not supported",
"supported_formats": ["pdf", "docx", "txt"]
}
413 Payload Too Large
{
"error": "file_too_large",
"message": "File size exceeds maximum limit",
"max_size_bytes": 52428800,
"provided_size_bytes": 104857600
}
422 Unprocessable Entity
{
"error": "corrupted_file",
"message": "The document appears to be corrupted and cannot be processed"
}
Webhooks
Configure webhooks to receive processing notifications:
{
"event": "document.processed",
"document_id": "doc_abc123",
"status": "completed",
"results": {
"text_extracted": true,
"summary_generated": true,
"entities_extracted": true
}
}
Rate Limits
| Operation | Limit | Window |
|---|---|---|
| Upload Document | 50/hour | Per user |
| Process Document | 100/hour | Per user |
| Generate Summary | 20/hour | Per user |
| Batch Processing | 5/hour | Per user |
Best Practices
- Preprocess Documents: Clean scanned documents before OCR
- Use Appropriate Formats: Choose the right output format for your use case
- Batch Similar Documents: Process similar documents together for efficiency
- Handle Large Files: Use chunking for large documents
- Cache Results: Store processed results to avoid reprocessing
- Monitor Processing: Use webhooks for long-running operations
Integration Examples
Python Example
import requests

# Upload and process document
with open('document.pdf', 'rb') as f:
    response = requests.post(
        'http://localhost:8080/api/v1/documents/upload',
        headers={'Authorization': 'Bearer token123'},
        files={'file': f},
        data={'process_options': '{"extract_text": true}'}
    )
document_id = response.json()['document_id']

# Get extracted text
text_response = requests.get(
    f'http://localhost:8080/api/v1/documents/{document_id}/text',
    headers={'Authorization': 'Bearer token123'}
)
print(text_response.json()['text'])
Related APIs
- Storage API - Document storage
- ML API - Advanced text analysis
- Knowledge Base API - Document indexing
Users API
The Users API provides endpoints for user management operations. User authentication is handled through Zitadel, with BotServer maintaining session associations and user preferences.
Overview
User management in General Bots follows a federated model:
- Zitadel: Primary identity provider (authentication, SSO, user creation)
- BotServer: Session management, preferences, bot-specific user data
Endpoints
Get Current User
GET /api/users/me
Returns current authenticated user information.
Headers:
Authorization: Bearer {session_token}
Response:
{
"user_id": "user-123",
"username": "john_doe",
"email": "john@example.com",
"display_name": "John Doe",
"avatar_url": "/api/users/user-123/avatar",
"roles": ["user", "manager"],
"created_at": "2024-01-01T00:00:00Z",
"last_login": "2024-01-15T10:30:00Z"
}
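Example (Python; the host is an assumption, adjust to your deployment):
import requests

resp = requests.get(
    "http://localhost:8000/api/users/me",
    headers={"Authorization": "Bearer <session_token>"},
)
me = resp.json()
print(me["display_name"], me["roles"])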
Get User by ID
GET /api/users/:id
Retrieve specific user details.
Required Permission: admin:users or same user
Response:
{
"user_id": "user-123",
"username": "john_doe",
"email": "john@example.com",
"display_name": "John Doe",
"status": "active",
"created_at": "2024-01-01T00:00:00Z"
}
List Users
GET /api/users
List users in the organization.
Required Permission: admin:users
Query Parameters:
- limit - Number of results (default: 50, max: 100)
- offset - Pagination offset
- status - Filter by status (active/suspended/inactive)
- role - Filter by role
- search - Search by name or email
Response:
{
"users": [
{
"user_id": "user-123",
"username": "john_doe",
"email": "john@example.com",
"display_name": "John Doe",
"status": "active",
"roles": ["user", "manager"]
},
{
"user_id": "user-456",
"username": "jane_smith",
"email": "jane@example.com",
"display_name": "Jane Smith",
"status": "active",
"roles": ["user"]
}
],
"total": 47,
"limit": 50,
"offset": 0
}
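Example (Python): paging through all users with limit/offset (host and token are placeholders):
import requests

URL = "http://localhost:8000/api/users"
HEADERS = {"Authorization": "Bearer <session_token>"}

users, offset = [], 0
while True:
    page = requests.get(URL, headers=HEADERS,
                        params={"limit": 50, "offset": offset}).json()
    users.extend(page["users"])
    offset += page["limit"]
    if offset >= page["total"]:
        break
print(f"Fetched {len(users)} users")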
Update User
PUT /api/users/:id
Update user information.
Required Permission: admin:users or same user (limited fields)
Request:
{
"display_name": "John D. Doe",
"avatar_url": "https://example.com/avatar.jpg"
}
Admin-only fields:
{
"status": "suspended",
"roles": ["user"]
}
Response:
{
"user_id": "user-123",
"status": "updated",
"updated_fields": ["display_name"]
}
Update User Settings
PUT /api/users/:id/settings
Update user preferences.
Request:
{
"theme": "dark",
"language": "en",
"notifications": {
"email": true,
"push": false,
"digest": "daily"
},
"default_bot": "support-bot"
}
Response:
{
"status": "updated",
"settings": {
"theme": "dark",
"language": "en"
}
}
Get User Settings
GET /api/users/:id/settings
Retrieve user preferences.
Response:
{
"theme": "dark",
"language": "en",
"timezone": "America/New_York",
"notifications": {
"email": true,
"push": false,
"digest": "daily"
},
"default_bot": "support-bot"
}
Suspend User
POST /api/users/:id/suspend
Suspend a user account.
Required Permission: admin:users
Request:
{
"reason": "Policy violation"
}
Response:
{
"user_id": "user-123",
"status": "suspended",
"suspended_at": "2024-01-15T10:30:00Z"
}
Activate User
POST /api/users/:id/activate
Reactivate a suspended user.
Required Permission: admin:users
Response:
{
"user_id": "user-123",
"status": "active",
"activated_at": "2024-01-15T10:30:00Z"
}
Delete User
DELETE /api/users/:id
Deactivate/delete user account.
Required Permission: admin:users
Response:
{
"user_id": "user-123",
"status": "deleted",
"deleted_at": "2024-01-15T10:30:00Z"
}
User Sessions
List User Sessions
GET /api/users/:id/sessions
List active sessions for a user.
Response:
{
"sessions": [
{
"session_id": "sess-001",
"bot_id": "support-bot",
"started_at": "2024-01-15T09:00:00Z",
"last_activity": "2024-01-15T10:30:00Z",
"device": "Chrome on Windows"
}
]
}
Terminate Session
DELETE /api/users/:id/sessions/:session_id
End a specific user session.
Response:
{
"session_id": "sess-001",
"status": "terminated"
}
Terminate All Sessions
DELETE /api/users/:id/sessions
End all user sessions (logout everywhere).
Response:
{
"terminated_count": 3,
"status": "all_sessions_terminated"
}
User Authentication Flow
Login
POST /api/users/login
Authenticate user (redirects to Zitadel).
Request:
{
"email": "user@example.com",
"password": "password",
"remember_me": true
}
Response:
{
"redirect_url": "https://auth.yourdomain.com/oauth/authorize?..."
}
Logout
POST /api/users/logout
End current session.
Response:
{
"status": "logged_out",
"redirect_url": "/"
}
Register
POST /api/users/register
Register new user (if self-registration enabled).
Request:
{
"email": "newuser@example.com",
"username": "newuser",
"password": "SecurePassword123!",
"display_name": "New User"
}
Response:
{
"user_id": "user-789",
"status": "pending_verification",
"message": "Check your email to verify your account"
}
User Management via Zitadel
For full user management, access Zitadel admin console:
- Access Console: http://localhost:8080 (or your Zitadel URL)
- Create Users: Organization → Users → Add
- Manage Roles: Users → Select User → Authorizations
- Reset Passwords: Users → Select User → Actions → Reset Password
- Configure SSO: Settings → Identity Providers
Database Schema
BotServer maintains minimal user data:
-- users table (synced from Zitadel)
CREATE TABLE users (
id UUID PRIMARY KEY,
zitadel_id TEXT UNIQUE,
username TEXT,
email TEXT,
display_name TEXT,
avatar_url TEXT,
status TEXT DEFAULT 'active',
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
);
-- user_settings table
CREATE TABLE user_settings (
id UUID PRIMARY KEY,
user_id UUID REFERENCES users(id),
setting_key TEXT NOT NULL,
setting_value TEXT,
UNIQUE(user_id, setting_key)
);
-- user_sessions table
CREATE TABLE sessions (
id UUID PRIMARY KEY,
user_id UUID REFERENCES users(id),
bot_id UUID,
status TEXT DEFAULT 'active',
device_info TEXT,
created_at TIMESTAMPTZ DEFAULT NOW(),
last_activity TIMESTAMPTZ DEFAULT NOW()
);
Error Handling
| Status Code | Error | Description |
|---|---|---|
| 400 | invalid_request | Malformed request |
| 401 | unauthorized | Not authenticated |
| 403 | forbidden | Insufficient permissions |
| 404 | user_not_found | User doesn’t exist |
| 409 | conflict | Username/email already exists |
| 422 | validation_error | Invalid field values |
Rate Limits
| Endpoint | Limit |
|---|---|
| Login | 10/minute per IP |
| Register | 5/hour per IP |
| User List | 60/minute per user |
| User Update | 30/minute per user |
BASIC Integration
Access user information in scripts:
' Get current user info
user_name = GET user_name
user_email = GET user_email
' Greet by name
TALK "Hello, " + user_name + "!"
' Check user role
role = GET role
IF role = "admin" THEN
TALK "Welcome, administrator!"
END IF
See Also
- User Authentication - Auth details
- Permissions Matrix - Access control
- Groups API - Group management
- SET USER Keyword - BASIC user context
User Security API
BotServer provides RESTful endpoints for user management, authentication, authorization, and security features.
Overview
The User Security API enables:
- User authentication and sessions
- Role-based access control
- Security settings management
- Audit logging
- Password policies
- Two-factor authentication
Base URL
http://localhost:8080/api/v1/security
Authentication
Most security endpoints require authentication:
Authorization: Bearer <token>
User Management
Create User
POST /users
Create a new user account.
Request Body:
{
"username": "johndoe",
"email": "john@example.com",
"full_name": "John Doe",
"role": "user",
"groups": ["support_team"],
"metadata": {
"department": "Customer Service",
"employee_id": "EMP001"
}
}
Response:
{
"user_id": "usr_abc123",
"username": "johndoe",
"email": "john@example.com",
"created_at": "2024-01-15T10:00:00Z",
"status": "pending_activation"
}
Get User
GET /users/{user_id}
Retrieve user information.
Response:
{
"user_id": "usr_abc123",
"username": "johndoe",
"email": "john@example.com",
"full_name": "John Doe",
"role": "user",
"groups": ["support_team"],
"status": "active",
"created_at": "2024-01-15T10:00:00Z",
"last_login": "2024-01-15T14:30:00Z",
"email_verified": true,
"two_factor_enabled": false
}
Update User
PATCH /users/{user_id}
Update user information.
Request Body:
{
"full_name": "John Smith",
"role": "admin",
"groups": ["support_team", "admin_team"]
}
Delete User
DELETE /users/{user_id}
Delete or deactivate a user account.
Response:
{
"user_id": "usr_abc123",
"status": "deactivated",
"deactivated_at": "2024-01-15T15:00:00Z"
}
List Users
GET /users
List all users with pagination and filters.
Query Parameters:
- page - Page number (default: 1)
- limit - Items per page (default: 20)
- role - Filter by role
- group - Filter by group
- status - Filter by status: active, inactive, pending
- search - Search in username, email, full name
Response:
{
"users": [
{
"user_id": "usr_abc123",
"username": "johndoe",
"email": "john@example.com",
"full_name": "John Doe",
"role": "user",
"status": "active"
}
],
"total": 150,
"page": 1,
"limit": 20
}
Authentication
Login
POST /auth/login
Authenticate user via directory service.
Request Body:
{
"username": "johndoe",
"password": "secure_password",
"two_factor_code": "123456"
}
Response:
{
"access_token": "eyJhbGciOiJIUzI1NiIs...",
"refresh_token": "eyJhbGciOiJIUzI1NiIs...",
"token_type": "Bearer",
"expires_in": 3600,
"user": {
"user_id": "usr_abc123",
"username": "johndoe",
"role": "user"
}
}
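Example (Python): log in, then later exchange the refresh token using the endpoint described next (host is a placeholder; two_factor_code is only needed when 2FA is enabled for the account):
import requests

BASE = "http://localhost:8080/api/v1/security"

tokens = requests.post(f"{BASE}/auth/login", json={
    "username": "johndoe",
    "password": "secure_password",
}).json()
access_token = tokens["access_token"]
refresh_token = tokens["refresh_token"]

# When the access token expires, trade the refresh token for a new pair
tokens = requests.post(f"{BASE}/auth/refresh",
                       json={"refresh_token": refresh_token}).json()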
Refresh Token
POST /auth/refresh
Refresh an expired access token.
Request Body:
{
"refresh_token": "eyJhbGciOiJIUzI1NiIs..."
}
Logout
POST /auth/logout
Invalidate current session.
Request Body:
{
"refresh_token": "eyJhbGciOiJIUzI1NiIs..."
}
Verify Token
GET /auth/verify
Verify if a token is valid.
Headers:
Authorization: Bearer <token>
Response:
{
"valid": true,
"user_id": "usr_abc123",
"expires_at": "2024-01-15T15:00:00Z"
}
Roles and Permissions
List Roles
GET /roles
Get all available roles.
Response:
{
"roles": [
{
"role_id": "admin",
"name": "Administrator",
"permissions": ["users.manage", "bots.manage", "system.configure"]
},
{
"role_id": "user",
"name": "User",
"permissions": ["bots.use", "profile.edit"]
}
]
}
Assign Role
POST /users/{user_id}/roles
Assign a role to a user.
Request Body:
{
"role_id": "admin"
}
Check Permission
GET /users/{user_id}/permissions/{permission}
Check if user has a specific permission.
Response:
{
"user_id": "usr_abc123",
"permission": "bots.manage",
"granted": true,
"source": "role:admin"
}
Groups
Create Group
POST /groups
Create a user group.
Request Body:
{
"name": "support_team",
"description": "Customer support team",
"permissions": ["tickets.manage", "kb.edit"]
}
Add User to Group
POST /groups/{group_id}/members
Add a user to a group.
Request Body:
{
"user_id": "usr_abc123"
}
Remove User from Group
DELETE /groups/{group_id}/members/{user_id}
Remove a user from a group.
Security Settings
Get Security Settings
GET /settings/security
Get current security configuration.
Response:
{
"password_policy": {
"min_length": 12,
"require_uppercase": true,
"require_lowercase": true,
"require_numbers": true,
"require_special": true,
"expiry_days": 90,
"history_count": 5
},
"session_policy": {
"timeout_minutes": 30,
"max_sessions": 5,
"remember_me_days": 30
},
"two_factor": {
"enabled": false,
"required_for_roles": ["admin"],
"methods": ["totp", "sms"]
},
"lockout_policy": {
"max_attempts": 5,
"lockout_duration_minutes": 30,
"reset_window_minutes": 15
}
}
Update Security Settings
PATCH /settings/security
Update security configuration.
Request Body:
{
"password_policy": {
"min_length": 14,
"expiry_days": 60
},
"two_factor": {
"enabled": true
}
}
Two-Factor Authentication
Enable 2FA
POST /users/{user_id}/2fa/enable
Enable two-factor authentication.
Response:
{
"secret": "JBSWY3DPEHPK3PXP",
"qr_code": "data:image/png;base64,iVBORw0KGgoAAAA...",
"backup_codes": [
"12345678",
"87654321",
"11223344"
]
}
Verify 2FA
POST /users/{user_id}/2fa/verify
Verify 2FA setup.
Request Body:
{
"code": "123456"
}
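Example (Python): deriving the verification code from the enrollment secret with the pyotp library, one common TOTP implementation (host, token, and user ID are placeholders):
import pyotp  # pip install pyotp
import requests

secret = "JBSWY3DPEHPK3PXP"  # returned by the enable endpoint above
code = pyotp.TOTP(secret).now()

requests.post(
    "http://localhost:8080/api/v1/security/users/usr_abc123/2fa/verify",
    headers={"Authorization": "Bearer <token>"},
    json={"code": code},
)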
Disable 2FA
POST /users/{user_id}/2fa/disable
Disable two-factor authentication.
Audit Logs
Get Audit Logs
GET /audit/logs
Retrieve security audit logs.
Query Parameters:
- user_id - Filter by user
- action - Filter by action type
- start_date - Start of date range
- end_date - End of date range
- page - Page number
- limit - Items per page
Response:
{
"logs": [
{
"log_id": "log_123",
"timestamp": "2024-01-15T14:30:00Z",
"user_id": "usr_abc123",
"action": "login",
"ip_address": "192.168.1.100",
"user_agent": "Mozilla/5.0...",
"success": true,
"details": {
"method": "password",
"session_id": "sess_xyz"
}
}
],
"total": 500,
"page": 1
}
Export Audit Logs
POST /audit/export
Export audit logs for compliance.
Request Body:
{
"format": "csv",
"date_range": {
"start": "2024-01-01",
"end": "2024-01-31"
},
"filters": {
"actions": ["login", "permission_change", "data_access"]
}
}
Session Management
List Sessions
GET /users/{user_id}/sessions
List active sessions for a user.
Response:
{
"sessions": [
{
"session_id": "sess_xyz",
"created_at": "2024-01-15T10:00:00Z",
"last_activity": "2024-01-15T14:30:00Z",
"ip_address": "192.168.1.100",
"user_agent": "Mozilla/5.0...",
"location": "New York, US"
}
],
"total": 2
}
Revoke Session
DELETE /sessions/{session_id}
Revoke a specific session.
Revoke All Sessions
POST /users/{user_id}/sessions/revoke-all
Revoke all sessions for a user.
Password Management
Change Password
POST /users/{user_id}/password
Change user password (handled by directory service).
Request Body:
{
"current_password": "old_password",
"new_password": "new_secure_password"
}
Reset Password Request
POST /auth/password/reset-request
Request password reset.
Request Body:
{
"email": "john@example.com"
}
Reset Password
POST /auth/password/reset
Reset password with token.
Request Body:
{
"token": "reset_token_123",
"new_password": "new_secure_password"
}
API Keys
Generate API Key
POST /users/{user_id}/api-keys
Generate an API key for programmatic access.
Request Body:
{
"name": "Integration Key",
"permissions": ["bots.use"],
"expires_at": "2024-12-31T23:59:59Z"
}
Response:
{
"key_id": "key_123",
"api_key": "sk_live_abcdef123456...",
"created_at": "2024-01-15T10:00:00Z",
"expires_at": "2024-12-31T23:59:59Z"
}
List API Keys
GET /users/{user_id}/api-keys
List user’s API keys.
Revoke API Key
DELETE /api-keys/{key_id}
Revoke an API key.
Error Responses
401 Unauthorized
{
"error": "unauthorized",
"message": "Invalid credentials"
}
403 Forbidden
{
"error": "forbidden",
"message": "Insufficient permissions"
}
423 Locked
{
"error": "account_locked",
"message": "Account locked due to too many failed attempts",
"locked_until": "2024-01-15T15:00:00Z"
}
Security Best Practices
- Use Strong Passwords: Enforce complex password requirements
- Enable 2FA: Require for administrative accounts
- Regular Audits: Review audit logs regularly
- Session Limits: Limit concurrent sessions
- API Key Rotation: Rotate keys periodically
- Least Privilege: Grant minimal necessary permissions
- Monitor Failed Logins: Track and alert on suspicious activity
Rate Limits
| Operation | Limit | Window |
|---|---|---|
| Login | 5/minute | Per IP |
| Password Reset | 3/hour | Per email |
| API Key Generation | 10/day | Per user |
Related APIs
- Authentication - Auth details
- Audit Logs - System monitoring
- Notifications - Security alerts
Groups API
The Groups API provides endpoints for managing groups and organizations through Zitadel integration.
Overview
Groups in BotServer represent organizations in Zitadel. They provide multi-tenant support and user grouping capabilities.
Endpoints
Create Group
POST /groups/create
Creates a new group/organization.
Request:
{
"name": "Engineering Team",
"description": "Software engineering department",
"domain": "engineering.example.com"
}
Response:
{
"id": "org-123",
"name": "Engineering Team",
"created_at": "2024-01-20T10:00:00Z"
}
Update Group
PUT /groups/:id/update
Updates group information.
Request:
{
"name": "Updated Name",
"description": "Updated description"
}
Response:
{
"id": "org-123",
"name": "Updated Name",
"updated_at": "2024-01-20T11:00:00Z"
}
Delete Group
DELETE /groups/:id/delete
Deletes a group/organization.
Response:
{
"success": true,
"message": "Group deleted successfully"
}
List Groups
GET /groups/list
Lists all groups accessible to the user.
Query Parameters:
- limit - Maximum number of results (default: 20)
- offset - Pagination offset
Response:
{
"groups": [
{
"id": "org-123",
"name": "Engineering Team",
"member_count": 25,
"created_at": "2024-01-20T10:00:00Z"
}
],
"total": 1
}
Get Group Members
GET /groups/:id/members
Retrieves members of a specific group.
Response:
{
"members": [
{
"user_id": "user-456",
"username": "john_doe",
"email": "john@example.com",
"role": "member",
"joined_at": "2024-01-15T09:00:00Z"
}
],
"total": 1
}
Add Group Member
POST /groups/:id/members/add
Adds a user to a group.
Request:
{
"user_id": "user-789",
"role": "member"
}
Response:
{
"success": true,
"message": "Member added successfully"
}
Remove Group Member
DELETE /groups/:id/members/remove
Removes a user from a group.
Request:
{
"user_id": "user-789"
}
Response:
{
"success": true,
"message": "Member removed successfully"
}
Implementation Details
Zitadel Integration
All group operations are proxied to Zitadel:
- Groups map to Zitadel organizations
- Members are managed through Zitadel’s org API
- Permissions inherited from Zitadel roles
Data Model
Groups are not stored in BotServer’s database. All data comes from Zitadel:
- Group metadata from Zitadel orgs
- Membership from Zitadel org members
- Permissions from Zitadel policies
Error Responses
All endpoints may return standard error responses:
{
"error": "Group not found",
"code": "GROUP_NOT_FOUND",
"status": 404
}
Common error codes:
- GROUP_NOT_FOUND - Group doesn’t exist
- UNAUTHORIZED - User lacks permission
- MEMBER_EXISTS - User already in group
- MEMBER_NOT_FOUND - User not in group
- ZITADEL_ERROR - Upstream service error
Permissions
Group operations require appropriate Zitadel permissions:
- Create: Organization admin
- Update: Organization owner or admin
- Delete: Organization owner
- List: Authenticated user
- View Members: Group member
- Add/Remove Members: Group admin
Rate Limiting
Group endpoints are rate-limited:
- 100 requests per minute for read operations
- 20 requests per minute for write operations
Best Practices
- Cache Group Data: Groups change infrequently
- Batch Operations: Use bulk endpoints when available
- Handle Zitadel Errors: Gracefully handle upstream failures
- Validate Permissions: Check user has required role
- Audit Changes: Log all group modifications
Group Membership API
BotServer provides RESTful endpoints for managing user groups, team memberships, and collaborative workspaces.
Overview
The Group Membership API enables:
- Group creation and management
- Member addition and removal
- Role assignments within groups
- Permission inheritance
- Team collaboration features
- Workspace organization
Base URL
http://localhost:8080/api/v1/groups
Authentication
All Group Membership API requests require authentication:
Authorization: Bearer <token>
Endpoints
Create Group
POST /groups
Create a new group or team.
Request Body:
{
"name": "Engineering Team",
"description": "Product engineering team",
"type": "team",
"visibility": "private",
"settings": {
"allow_join_requests": true,
"require_approval": true,
"max_members": 50
},
"permissions": ["bot.use", "kb.read", "reports.view"]
}
Response:
{
"group_id": "grp_abc123",
"name": "Engineering Team",
"type": "team",
"created_at": "2024-01-15T10:00:00Z",
"created_by": "usr_admin",
"member_count": 0
}
Get Group
GET /groups/{group_id}
Retrieve group information.
Response:
{
"group_id": "grp_abc123",
"name": "Engineering Team",
"description": "Product engineering team",
"type": "team",
"visibility": "private",
"member_count": 12,
"created_at": "2024-01-15T10:00:00Z",
"settings": {
"allow_join_requests": true,
"require_approval": true,
"max_members": 50
},
"permissions": ["bot.use", "kb.read", "reports.view"]
}
Update Group
PATCH /groups/{group_id}
Update group information.
Request Body:
{
"name": "Engineering & DevOps Team",
"description": "Combined engineering and operations team",
"settings": {
"max_members": 75
}
}
Delete Group
DELETE /groups/{group_id}
Delete a group (requires admin permissions).
Response:
{
"deleted": true,
"group_id": "grp_abc123",
"members_removed": 12
}
List Groups
GET /groups
List all groups with filtering.
Query Parameters:
- type - Filter by group type: team, department, project
- visibility - Filter by visibility: public, private
- member - Filter groups containing specific user
- search - Search in name and description
- page - Page number (default: 1)
- limit - Items per page (default: 20)
Response:
{
"groups": [
{
"group_id": "grp_abc123",
"name": "Engineering Team",
"type": "team",
"member_count": 12,
"visibility": "private"
}
],
"total": 8,
"page": 1,
"limit": 20
}
Member Management
Add Member
POST /groups/{group_id}/members
Add a member to a group.
Request Body:
{
"user_id": "usr_xyz789",
"role": "member",
"permissions": ["read", "write"],
"notify": true
}
Response:
{
"membership_id": "mem_123",
"group_id": "grp_abc123",
"user_id": "usr_xyz789",
"role": "member",
"joined_at": "2024-01-15T10:30:00Z"
}
Bulk Add Members
POST /groups/{group_id}/members/bulk
Add multiple members at once.
Request Body:
{
"members": [
{"user_id": "usr_001", "role": "admin"},
{"user_id": "usr_002", "role": "member"},
{"user_id": "usr_003", "role": "member"}
],
"notify_all": true
}
Response:
{
"added": 3,
"failed": 0,
"memberships": [
{"user_id": "usr_001", "status": "added"},
{"user_id": "usr_002", "status": "added"},
{"user_id": "usr_003", "status": "added"}
]
}
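Example (Python; the path follows the endpoint listing above, the exact prefix may differ in your deployment, and host and token are placeholders):
import requests

resp = requests.post(
    "http://localhost:8080/api/v1/groups/grp_abc123/members/bulk",
    headers={"Authorization": "Bearer <token>"},
    json={
        "members": [
            {"user_id": "usr_001", "role": "admin"},
            {"user_id": "usr_002", "role": "member"},
        ],
        "notify_all": True,
    },
).json()
failed = [m for m in resp["memberships"] if m["status"] != "added"]
print(f"added={resp['added']} failed={len(failed)}")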
List Members
GET /groups/{group_id}/members
List group members.
Query Parameters:
- role - Filter by role
- status - Filter by status: active, pending, suspended
- search - Search in member names
- page - Page number
- limit - Items per page
Response:
{
"members": [
{
"membership_id": "mem_123",
"user": {
"user_id": "usr_xyz789",
"username": "johndoe",
"full_name": "John Doe",
"avatar_url": "https://example.com/avatar.jpg"
},
"role": "admin",
"status": "active",
"joined_at": "2024-01-15T10:30:00Z",
"last_active": "2024-01-15T14:00:00Z"
}
],
"total": 12,
"page": 1,
"limit": 20
}
Update Member Role
PATCH /groups/{group_id}/members/{user_id}
Update a member’s role or permissions.
Request Body:
{
"role": "admin",
"permissions": ["read", "write", "delete"]
}
Remove Member
DELETE /groups/{group_id}/members/{user_id}
Remove a member from a group.
Response:
{
"removed": true,
"group_id": "grp_abc123",
"user_id": "usr_xyz789",
"removed_at": "2024-01-15T15:00:00Z"
}
Group Roles
List Roles
GET /groups/{group_id}/roles
List available roles in a group.
Response:
{
"roles": [
{
"role_id": "owner",
"name": "Owner",
"permissions": ["all"],
"member_count": 1
},
{
"role_id": "admin",
"name": "Administrator",
"permissions": ["manage_members", "manage_settings", "read", "write"],
"member_count": 2
},
{
"role_id": "member",
"name": "Member",
"permissions": ["read", "write"],
"member_count": 9
}
]
}
Create Custom Role
POST /groups/{group_id}/roles
Create a custom role for a group.
Request Body:
{
"name": "Moderator",
"permissions": ["read", "write", "moderate"],
"description": "Can moderate content and manage posts"
}
Join Requests
Request to Join
POST /groups/{group_id}/join-requests
Request to join a private group.
Request Body:
{
"message": "I would like to join the engineering team",
"referred_by": "usr_admin"
}
Response:
{
"request_id": "req_456",
"group_id": "grp_abc123",
"user_id": "usr_xyz789",
"status": "pending",
"submitted_at": "2024-01-15T10:00:00Z"
}
List Join Requests
GET /groups/{group_id}/join-requests
List pending join requests (admin only).
Response:
{
"requests": [
{
"request_id": "req_456",
"user": {
"user_id": "usr_xyz789",
"username": "newuser",
"full_name": "New User"
},
"message": "I would like to join the engineering team",
"status": "pending",
"submitted_at": "2024-01-15T10:00:00Z"
}
],
"total": 3
}
Approve/Reject Request
PATCH /groups/{group_id}/join-requests/{request_id}
Process a join request.
Request Body:
{
"action": "approve",
"role": "member",
"note": "Welcome to the team!"
}
Group Invitations
Send Invitation
POST /groups/{group_id}/invitations
Invite users to join a group.
Request Body:
{
"emails": ["user1@example.com", "user2@example.com"],
"role": "member",
"message": "You're invited to join our team!",
"expires_in_days": 7
}
Response:
{
"invitations": [
{
"invitation_id": "inv_789",
"email": "user1@example.com",
"status": "sent",
"expires_at": "2024-01-22T10:00:00Z"
}
],
"sent": 2,
"failed": 0
}
Accept Invitation
POST /invitations/{invitation_id}/accept
Accept a group invitation.
Response:
{
"membership_id": "mem_999",
"group_id": "grp_abc123",
"joined_at": "2024-01-15T11:00:00Z"
}
Group Permissions
Get Group Permissions
GET /groups/{group_id}/permissions
List group permissions.
Response:
{
"permissions": [
{
"permission": "bot.use",
"description": "Use bots",
"inherited_from": null
},
{
"permission": "kb.read",
"description": "Read knowledge base",
"inherited_from": "parent_group"
}
]
}
Update Permissions
PATCH /groups/{group_id}/permissions
Update group permissions.
Request Body:
{
"add": ["reports.create", "analytics.view"],
"remove": ["kb.write"]
}
Hierarchical Groups
Create Subgroup
POST /groups/{parent_id}/subgroups
Create a subgroup under a parent group.
Request Body:
{
"name": "Frontend Team",
"inherit_permissions": true,
"inherit_members": false
}
List Subgroups
GET /groups/{group_id}/subgroups
List all subgroups.
Response:
{
"subgroups": [
{
"group_id": "grp_sub123",
"name": "Frontend Team",
"member_count": 5,
"created_at": "2024-01-15T10:00:00Z"
}
],
"total": 2
}
Group Analytics
Get Group Analytics
GET /groups/{group_id}/analytics
Get group activity analytics.
Response:
{
"group_id": "grp_abc123",
"analytics": {
"member_growth": {
"current": 12,
"last_month": 10,
"growth_rate": 0.20
},
"activity": {
"messages_sent": 456,
"tasks_completed": 23,
"avg_response_time": 3600
},
"engagement": {
"active_members": 10,
"engagement_rate": 0.83
}
},
"period": "30d"
}
Error Responses
403 Forbidden
{
"error": "permission_denied",
"message": "You don't have permission to manage this group"
}
409 Conflict
{
"error": "member_exists",
"message": "User is already a member of this group"
}
422 Unprocessable Entity
{
"error": "group_full",
"message": "Group has reached maximum member limit",
"max_members": 50,
"current_members": 50
}
Best Practices
- Use Descriptive Names: Group names should clearly indicate purpose
- Set Member Limits: Prevent groups from becoming too large
- Regular Cleanup: Remove inactive members periodically
- Permission Inheritance: Use hierarchy for easier management
- Document Purpose: Always include group descriptions
- Review Requests: Don’t auto-approve join requests for sensitive groups
Related APIs
- User Security API - User management
- Notifications API - Group notifications
- Tasks API - Group task management
Conversations API
The Conversations API provides endpoints for managing chat conversations, message history, and real-time communication.
Overview
Conversations in General Bots are handled primarily through WebSocket connections for real-time messaging, with REST endpoints for history retrieval and session management.
Endpoints
Start Conversation
POST /api/conversations/start
Initiates a new conversation with a bot.
Request:
{
"bot_id": "bot-123",
"initial_message": "Hello"
}
Response:
{
"conversation_id": "conv-456",
"session_id": "session-789",
"status": "active"
}
Send Message
POST /api/conversations/:id/messages
Sends a message in an existing conversation.
Request:
{
"content": "User message",
"attachments": []
}
Response:
{
"message_id": "msg-123",
"timestamp": "2024-01-15T10:30:00Z",
"status": "delivered"
}
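Example (Python): starting a conversation and sending a follow-up message (host and token are placeholders):
import requests

BASE = "http://localhost:8000/api/conversations"
HEADERS = {"Authorization": "Bearer <session_token>"}

conv = requests.post(f"{BASE}/start", headers=HEADERS,
                     json={"bot_id": "bot-123",
                           "initial_message": "Hello"}).json()

msg = requests.post(f"{BASE}/{conv['conversation_id']}/messages",
                    headers=HEADERS,
                    json={"content": "User message",
                          "attachments": []}).json()
print(msg["status"])  # "delivered"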
Get Conversation History
GET /api/conversations/:id/history
Retrieves message history for a conversation.
Query Parameters:
- limit - Number of messages (default: 50, max: 100)
- before - Messages before timestamp
- after - Messages after timestamp
Response:
{
"messages": [
{
"id": "msg-001",
"sender": "user",
"content": "Hello",
"timestamp": "2024-01-15T10:00:00Z"
},
{
"id": "msg-002",
"sender": "bot",
"content": "Hi! How can I help you?",
"timestamp": "2024-01-15T10:00:01Z"
}
],
"has_more": false
}
List Conversations
GET /api/conversations
Lists user’s conversations.
Query Parameters:
- bot_id - Filter by bot
- status - Filter by status (active/archived)
- limit - Number of results
- offset - Pagination offset
Response:
{
"conversations": [
{
"id": "conv-456",
"bot_id": "bot-123",
"bot_name": "Support Bot",
"last_message": "Thank you!",
"last_activity": "2024-01-15T10:30:00Z",
"status": "active"
}
],
"total": 1
}
WebSocket Protocol
Real-time messaging uses WebSocket connections at /ws.
Message Types
| Type | Direction | Description |
|---|---|---|
| message | Both | Chat message |
| typing | Server→Client | Bot is typing |
| suggestion | Server→Client | Quick reply suggestions |
| status | Server→Client | Connection status |
| error | Server→Client | Error notification |
Send Message Format
{
"type": "message",
"content": "Hello",
"session_id": "session-123"
}
Receive Message Format
{
"type": "message",
"sender": "bot",
"content": "Hi! How can I help you?",
"timestamp": "2024-01-15T10:00:01Z"
}
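A minimal Python client for this protocol, using the third-party websockets package (host and port are placeholders; the /ws path and message shapes follow the definitions above):
import asyncio
import json
import websockets  # pip install websockets

async def chat():
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        await ws.send(json.dumps({"type": "message",
                                  "content": "Hello",
                                  "session_id": "session-123"}))
        # Read events until the bot's reply arrives
        while True:
            event = json.loads(await ws.recv())
            if event["type"] == "message":
                print("bot:", event["content"])
                break

asyncio.run(chat())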
Anonymous Conversations
Anonymous users can chat without authentication:
- Session created automatically on WebSocket connect
- Limited to default bot only
- No history persistence
- Session expires after inactivity
Authenticated Conversations
Logged-in users get additional features:
- Full conversation history
- Multiple bot access
- Cross-device sync
- Persistent sessions
Database Schema
Conversations are stored in:
-- sessions table
CREATE TABLE sessions (
id UUID PRIMARY KEY,
user_id UUID,
bot_id UUID,
status TEXT,
created_at TIMESTAMPTZ,
updated_at TIMESTAMPTZ
);
-- message_history table
CREATE TABLE message_history (
id UUID PRIMARY KEY,
session_id UUID REFERENCES sessions(id),
sender TEXT,
content TEXT,
metadata JSONB,
created_at TIMESTAMPTZ
);
Error Handling
| Status Code | Error | Description |
|---|---|---|
| 400 | invalid_message | Malformed message content |
| 401 | unauthorized | Authentication required |
| 403 | forbidden | No access to conversation |
| 404 | not_found | Conversation doesn’t exist |
| 429 | rate_limited | Too many messages |
Rate Limits
| Endpoint | Limit |
|---|---|
| Messages | 60/minute per user |
| History | 100/minute per user |
| List | 30/minute per user |
See Also
- Sessions and Channels - Session management
- TALK Keyword - Sending messages from BASIC
- HEAR Keyword - Receiving user input
Calls API
The Calls API provides endpoints for managing voice and video calls, conference rooms, and real-time communication within BotServer.
Status
⚠️ NOT IMPLEMENTED
This API is planned for future development but is not currently available in BotServer.
Planned Features
The Calls API will enable voice call initiation and management, video conferencing, screen sharing, call recording, call transcription, conference room management, and WebRTC integration.
Planned Endpoints
Call Management
The call management endpoints will handle the lifecycle of individual calls. Use POST /api/v1/calls/initiate to start a call, GET /api/v1/calls/{call_id} to retrieve call details, POST /api/v1/calls/{call_id}/end to terminate a call, and GET /api/v1/calls/history to access call history.
Conference Rooms
Conference room endpoints manage persistent meeting spaces. Create rooms with POST /api/v1/calls/rooms, retrieve room details with GET /api/v1/calls/rooms/{room_id}, and manage participation through POST /api/v1/calls/rooms/{room_id}/join, POST /api/v1/calls/rooms/{room_id}/leave, and GET /api/v1/calls/rooms/{room_id}/participants.
Recording
Recording endpoints control call archival. Start recording with POST /api/v1/calls/{call_id}/record/start, stop with POST /api/v1/calls/{call_id}/record/stop, and retrieve recordings via GET /api/v1/calls/{call_id}/recordings.
Transcription
Transcription endpoints provide speech-to-text capabilities. Enable transcription with POST /api/v1/calls/{call_id}/transcribe and retrieve the transcript using GET /api/v1/calls/{call_id}/transcript.
Planned Integration with BASIC
When implemented, call features will be accessible via BASIC keywords:
' Initiate call (not yet available)
call_id = START CALL "user123"
WAIT FOR CALL ANSWER call_id
' Conference room (not yet available)
room_id = CREATE ROOM "Team Meeting"
INVITE TO ROOM room_id, ["user1", "user2", "user3"]
' Call with bot (not yet available)
ON INCOMING CALL
ANSWER CALL
TALK "Hello, how can I help you?"
response = HEAR
' Process voice response
END ON
Planned Data Models
Call
{
"call_id": "call_123",
"type": "video",
"status": "active",
"participants": [
{
"user_id": "user123",
"role": "host",
"audio": true,
"video": true,
"joined_at": "2024-01-15T10:00:00Z"
},
{
"user_id": "user456",
"role": "participant",
"audio": true,
"video": false,
"joined_at": "2024-01-15T10:01:00Z"
}
],
"started_at": "2024-01-15T10:00:00Z",
"duration_seconds": 300,
"recording": false,
"transcription": true
}
Conference Room
{
"room_id": "room_456",
"name": "Daily Standup",
"type": "persistent",
"max_participants": 10,
"settings": {
"allow_recording": true,
"auto_transcribe": true,
"waiting_room": false,
"require_password": false
},
"current_participants": 3,
"created_at": "2024-01-01T08:00:00Z"
}
Planned Features Detail
Call Types
The API will support several call types to accommodate different communication needs. One-to-one calls enable direct communication between two users. Group calls allow multi-party conversations with several participants. Conference calls provide scheduled meetings with dedicated rooms. Bot calls enable voice interaction directly with the bot for automated customer service scenarios.
Media Features
Media capabilities will include audio-only calls, video with audio, and screen sharing for presentations and collaboration. File sharing during calls will allow participants to exchange documents in real-time. Virtual backgrounds will provide privacy and professionalism, while noise suppression will ensure clear audio quality.
Recording Options
Recording functionality will offer flexibility in how calls are archived. Audio-only recording will minimize storage requirements when video isn’t needed. Full video recording will capture the complete visual experience. Selective recording will allow capturing specific participants only. Cloud storage integration will enable automatic upload to configured storage providers. Automatic transcription will convert recorded speech to searchable text.
Quality Management
Quality features will ensure reliable communication across varying network conditions. Adaptive bitrate will automatically adjust video quality based on available bandwidth. Network quality indicators will inform participants of connection status. Bandwidth optimization will minimize data usage while maintaining quality. Echo cancellation and automatic gain control will ensure clear audio.
Implementation Considerations
When implemented, the Calls API will use WebRTC for peer-to-peer communication, providing low-latency audio and video. Integration with an SFU (Selective Forwarding Unit) will enable scalable group calls without requiring each participant to send their stream to every other participant. Support for TURN/STUN servers will handle NAT traversal, ensuring connections work across different network configurations. End-to-end encryption will provide security for sensitive conversations. Call analytics and quality metrics will help administrators monitor system health. Dial-in via PSTN integration will allow traditional phone participation. Virtual phone numbers will enable bots to make and receive external calls.
Alternative Solutions
Until the Calls API is implemented, consider these alternatives for voice and video functionality.
External Services Integration
You can integrate with established communication platforms through their APIs. Twilio Voice API provides comprehensive telephony features. Zoom SDK enables embedding video meetings. Microsoft Teams integration connects to enterprise communication. Jitsi Meet offers an open-source video conferencing option that can be self-hosted.
WebRTC Libraries
For custom implementations, you can use existing WebRTC libraries in your frontend:
// Use existing WebRTC libraries in frontend
const peer = new RTCPeerConnection(config);
// Handle signaling through WebSocket
Voice Bot Integration
For voice-enabled bots specifically, consider using external telephony providers, connecting via SIP trunk to existing phone systems, or integrating with cloud PBX systems that handle the voice infrastructure.
Future Technology Stack
The planned implementation will use WebRTC for real-time communication, providing the foundation for peer-to-peer audio and video. MediaSoup or Janus will serve as the SFU server for scalable multi-party calls. Coturn will provide TURN/STUN server functionality for NAT traversal. FFmpeg will handle media processing tasks like transcoding and recording. Whisper will power speech-to-text transcription. PostgreSQL will store call metadata and history. S3-compatible storage will house call recordings.
Workaround Example
Until the Calls API is available, you can implement basic voice interaction using external services:
' Simple voice bot using external service
FUNCTION HandlePhoneCall(phone_number)
    ' Use external telephony API
    response = CALL EXTERNAL API "twilio", {
        "action": "answer",
        "from": phone_number
    }
    ' Convert speech to text
    text = SPEECH TO TEXT response.audio
    ' Set the transcribed text as context
    SET CONTEXT "user_question", text
    ' System AI responds naturally
    bot_response = "Let me help you with that question."
    TALK bot_response
    ' Convert the bot's reply to speech
    audio = TEXT TO SPEECH bot_response
    ' Send response
    CALL EXTERNAL API "twilio", {
        "action": "play",
        "audio": audio
    }
END FUNCTION
Integration Points
When available, the Calls API will integrate with the Calendar API for scheduling calls, the Notifications API for call alerts, the User API for user presence information, the Storage API for recording storage, and the ML API for transcription and analysis.
Use Cases
Customer Support
Voice-enabled bot support can handle common customer inquiries automatically. Call center integration allows seamless handoff to human agents. Screen sharing enables technical support representatives to guide customers visually. Call recording provides quality assurance data for training and compliance.
Team Collaboration
Video meetings bring distributed teams together for face-to-face communication. Stand-up calls facilitate daily team synchronization. Screen sharing supports presentations and collaborative work sessions. Persistent team rooms provide always-available meeting spaces.
Education
Virtual classrooms enable remote learning at scale. One-on-one tutoring provides personalized instruction. Recorded lectures allow students to review material at their own pace. Interactive sessions engage students through real-time participation.
Status Updates
Check the GitHub repository for updates on Calls API implementation status.
For immediate voice and video needs, consider integrating with established providers like Twilio, Zoom, or Teams rather than waiting for the native implementation.
Whiteboard API
The Whiteboard API provides endpoints for collaborative drawing, diagramming, and visual collaboration within BotServer.
Status
⚠️ NOT IMPLEMENTED
This API is planned for future development but is not currently available in BotServer.
Planned Features
The Whiteboard API will enable collaborative real-time drawing, shape and diagram creation, text annotations, image uploads, multi-user cursors, version history, and export capabilities. These features will provide teams with a complete visual collaboration environment integrated directly into the BotServer platform.
Planned Endpoints
Whiteboard Management
The whiteboard management endpoints handle the lifecycle of whiteboard instances. Creating a whiteboard uses POST /api/v1/whiteboards, while retrieving whiteboard details uses GET /api/v1/whiteboards/{board_id}. Updates are handled through PATCH /api/v1/whiteboards/{board_id}, deletion through DELETE /api/v1/whiteboards/{board_id}, and listing all whiteboards through GET /api/v1/whiteboards.
Collaboration
Real-time collaboration is managed through several endpoints. Users join sessions via POST /api/v1/whiteboards/{board_id}/join and leave via POST /api/v1/whiteboards/{board_id}/leave. The current participant list is available at GET /api/v1/whiteboards/{board_id}/participants. For real-time updates, a WebSocket connection is established at WebSocket /api/v1/whiteboards/{board_id}/ws.
Content Operations
Content manipulation endpoints allow adding elements with POST /api/v1/whiteboards/{board_id}/elements, updating them with PATCH /api/v1/whiteboards/{board_id}/elements/{element_id}, and removing them with DELETE /api/v1/whiteboards/{board_id}/elements/{element_id}. The entire board can be cleared using POST /api/v1/whiteboards/{board_id}/clear.
Export
Export functionality supports multiple formats. PNG export is available at GET /api/v1/whiteboards/{board_id}/export/png, SVG at GET /api/v1/whiteboards/{board_id}/export/svg, and PDF at GET /api/v1/whiteboards/{board_id}/export/pdf.
Planned Integration with BASIC
When implemented, whiteboard features will be accessible via BASIC keywords:
' Create whiteboard (not yet available)
board_id = CREATE WHITEBOARD "Architecture Diagram"
SHARE WHITEBOARD board_id, ["user123", "user456"]
' Add content (not yet available)
ADD TO WHITEBOARD board_id, "rectangle", {x: 100, y: 100, width: 200, height: 100}
ADD TO WHITEBOARD board_id, "text", {x: 150, y: 150, text: "Component A"}
' Export whiteboard (not yet available)
image_url = EXPORT WHITEBOARD board_id, "png"
SEND FILE image_url
Planned Data Models
Whiteboard
{
"board_id": "wb_123",
"name": "Architecture Diagram",
"owner": "user123",
"created_at": "2024-01-15T10:00:00Z",
"updated_at": "2024-01-15T14:30:00Z",
"settings": {
"background": "grid",
"canvas_width": 1920,
"canvas_height": 1080,
"allow_anonymous": false,
"max_participants": 50
},
"participants": [
{
"user_id": "user123",
"role": "owner",
"cursor_position": {"x": 500, "y": 300}
}
],
"element_count": 42
}
Drawing Element
{
"element_id": "elem_456",
"board_id": "wb_123",
"type": "rectangle",
"properties": {
"x": 100,
"y": 100,
"width": 200,
"height": 100,
"fill": "#ffffff",
"stroke": "#000000",
"stroke_width": 2
},
"created_by": "user123",
"created_at": "2024-01-15T10:05:00Z",
"z_index": 1
}
Planned Features Detail
Drawing Tools
The drawing tools will include basic shapes such as rectangles, circles, triangles, lines, and arrows. Freehand drawing will support pen, pencil, and highlighter modes. Text tools will provide labels, sticky notes, and comments. Smart connectors will automatically route between shapes, and templates will offer pre-built layouts for flowcharts, mind maps, and wireframes.
Collaboration Features
Real-time collaboration will include cursor tracking so users can see where others are working, presence indicators showing who is currently viewing the board, and change notifications for updates made by collaborators. A commenting system will enable discussions on specific elements. Version control will track the history of changes, and conflict resolution will handle simultaneous edits gracefully.
Advanced Features
Advanced functionality will support layers for organizing complex diagrams, grouping to manipulate multiple elements together, and alignment and distribution tools for precise positioning. Copy and paste will work between boards, undo and redo history will allow reverting changes, and keyboard shortcuts will speed up common operations.
Implementation Considerations
When implemented, the Whiteboard API will use WebSocket for real-time collaboration and implement CRDT (Conflict-free Replicated Data Types) for conflict-free editing. Data will be stored in PostgreSQL with JSON columns for flexibility. The cache component will improve performance for frequently accessed boards. SVG will serve as the primary format for rendering, and the system will support touch devices and stylus input. Access controls and permissions will ensure proper security.
Alternative Solutions
Until the Whiteboard API is implemented, several alternatives are available.
External whiteboard services can be integrated, including Miro API, embedded Excalidraw, draw.io (diagrams.net), or Microsoft Whiteboard.
For simple drawing storage, you can store drawing data as JSON in bot memory:
' Store drawing as JSON
drawing = {
"shapes": [
{"type": "rect", "x": 10, "y": 10, "w": 100, "h": 50}
]
}
SET BOT MEMORY "drawing_001", JSON_STRINGIFY(drawing)
Image-based collaboration offers another approach, allowing you to upload and annotate images, use existing image editing APIs, or share screenshots with markup.
Future Technology Stack
The planned implementation will use the Canvas API or SVG for rendering, WebSocket for real-time synchronization, Y.js or OT.js for collaborative editing, Fabric.js for canvas manipulation, PostgreSQL for data persistence, cache for real-time state management, and Sharp for image processing.
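To experiment with the collaborative-editing model before the native API ships, you can prototype directly with Y.js. The following is a minimal sketch, assuming the yjs npm package; the element schema is illustrative and the broadcastToPeers transport function is hypothetical:

```javascript
import * as Y from 'yjs';

// One shared document per whiteboard; elements live in a shared array
const doc = new Y.Doc();
const elements = doc.getArray('elements');

// Hypothetical transport: send an update to peers (e.g. over WebSocket)
function broadcastToPeers(update) {
  // wire this to your own signaling/transport layer
}

// Local edit: push a rectangle (element schema is illustrative)
elements.push([{ type: 'rectangle', x: 100, y: 100, width: 200, height: 100 }]);

// Broadcast incremental updates as they happen
doc.on('update', (update) => broadcastToPeers(update));

// Apply updates received from peers; Y.js merges them conflict-free
function onPeerUpdate(update) {
  Y.applyUpdate(doc, update);
}
```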
Workaround Example
Until the Whiteboard API is available, you can implement basic diagram storage:
' Simple diagram system using text
FUNCTION CreateDiagram(name)
    diagram = {
        "name": name,
        "elements": [],
        "connections": []
    }
    SET BOT MEMORY "diagram_" + name, JSON_STRINGIFY(diagram)
    RETURN name
END FUNCTION

FUNCTION AddElement(diagram_name, element_type, label)
    diagram_key = "diagram_" + diagram_name
    diagram_json = GET BOT MEMORY diagram_key
    diagram = JSON_PARSE(diagram_json)
    element = {
        "id": GENERATE_ID(),
        "type": element_type,
        "label": label
    }
    diagram.elements = APPEND(diagram.elements, element)
    SET BOT MEMORY diagram_key, JSON_STRINGIFY(diagram)
    RETURN element.id
END FUNCTION

FUNCTION GenerateAsciiDiagram(diagram_name)
    diagram_json = GET BOT MEMORY "diagram_" + diagram_name
    diagram = JSON_PARSE(diagram_json)
    output = "Diagram: " + diagram.name + "\n\n"
    FOR EACH element IN diagram.elements
        IF element.type = "box" THEN
            output = output + "[" + element.label + "]\n"
        ELSE IF element.type = "circle" THEN
            output = output + "(" + element.label + ")\n"
        END IF
    NEXT
    RETURN output
END FUNCTION
Use Cases
Technical Planning
Technical teams can use the Whiteboard API for architecture diagrams, database schemas, network topology visualization, UML diagrams, and flowcharts that document system design and processes.
Business Collaboration
Business users will benefit from mind mapping for brainstorming, process flow documentation, organizational charts, collaborative brainstorming sessions, and project planning visualizations.
Education
Educational applications include teaching illustrations, student collaboration on group projects, visual problem solving, and graphical explanations of complex concepts.
Integration Points
When available, the Whiteboard API will integrate with the Storage API for saving whiteboard data, the Calls API for sharing during video calls, Document Processing for import and export capabilities, and the Notifications API for collaboration alerts.
Status Updates
Check the GitHub repository for updates on Whiteboard API implementation status.
For immediate visual collaboration needs, consider embedding existing solutions like Excalidraw or Miro rather than waiting for the native implementation.
Email API
The Email API provides endpoints for email operations including sending, receiving, and managing email accounts through the Stalwart mail server integration.
Overview
Email functionality in General Bots is available through:
- REST API - Documented in this chapter
- BASIC Keywords - SEND MAIL for scripts
- Email Module - Background processing and IMAP/SMTP integration
Endpoints
Send Email
POST /api/email/send
Send an email message.
Request:
{
"to": ["recipient@example.com"],
"cc": ["cc@example.com"],
"bcc": [],
"subject": "Meeting Tomorrow",
"body": "Hi, just a reminder about our meeting.",
"body_type": "text",
"attachments": []
}
Response:
{
"message_id": "msg-abc123",
"status": "sent",
"timestamp": "2024-01-15T10:30:00Z"
}
Body Types:
- text - Plain text
- html - HTML formatted
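For example, a minimal fetch call against the send endpoint might look like this sketch (host and token are placeholders):

```javascript
// Minimal sketch: send a plain-text email via the REST API
async function sendEmail(token) {
  const response = await fetch('http://localhost:8080/api/email/send', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      to: ['recipient@example.com'],
      subject: 'Meeting Tomorrow',
      body: 'Hi, just a reminder about our meeting.',
      body_type: 'text'
    })
  });
  return response.json(); // { message_id, status, timestamp }
}
```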
List Emails
GET /api/email/inbox
Retrieve inbox messages.
Query Parameters:
- folder - Folder name (default: INBOX)
- limit - Number of messages (default: 50)
- offset - Pagination offset
- unread - Filter unread only (boolean)
- since - Messages since date (ISO 8601)
Response:
{
"messages": [
{
"id": "email-001",
"from": "sender@example.com",
"subject": "Hello",
"preview": "Just wanted to say hi...",
"date": "2024-01-15T09:00:00Z",
"read": false,
"has_attachments": false
}
],
"total": 142,
"unread_count": 5
}
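A quick sketch combining the query parameters above (host and token are placeholders):

```javascript
// Minimal sketch: fetch the five most recent unread messages
async function listUnread(token) {
  const params = new URLSearchParams({ unread: 'true', limit: '5' });
  const response = await fetch(`http://localhost:8080/api/email/inbox?${params}`, {
    headers: { 'Authorization': `Bearer ${token}` }
  });
  const { messages, unread_count } = await response.json();
  return { messages, unread_count };
}
```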
Get Email
GET /api/email/:id
Get specific email details.
Response:
{
"id": "email-001",
"from": {
"name": "John Doe",
"email": "john@example.com"
},
"to": [
{
"name": "You",
"email": "you@example.com"
}
],
"cc": [],
"subject": "Meeting Notes",
"body": "Here are the notes from today's meeting...",
"body_html": "<p>Here are the notes from today's meeting...</p>",
"date": "2024-01-15T09:00:00Z",
"read": true,
"attachments": [
{
"id": "att-001",
"filename": "notes.pdf",
"size": 102400,
"content_type": "application/pdf"
}
]
}
Delete Email
DELETE /api/email/:id
Delete an email message.
Response:
{
"status": "deleted",
"message_id": "email-001"
}
Get Attachment
GET /api/email/:id/attachments/:attachment_id
Download an email attachment.
Response: Binary file with appropriate Content-Type header.
Mark as Read
PUT /api/email/:id/read
Mark email as read.
Request:
{
"read": true
}
Move Email
PUT /api/email/:id/move
Move email to a different folder.
Request:
{
"folder": "Archive"
}
List Folders
GET /api/email/folders
List available email folders.
Response:
{
"folders": [
{
"name": "INBOX",
"path": "INBOX",
"unread_count": 5,
"total_count": 142
},
{
"name": "Sent",
"path": "Sent",
"unread_count": 0,
"total_count": 89
},
{
"name": "Drafts",
"path": "Drafts",
"unread_count": 0,
"total_count": 3
}
]
}
Create Draft
POST /api/email/drafts
Create an email draft.
Request:
{
"to": ["recipient@example.com"],
"subject": "Draft subject",
"body": "Draft content..."
}
Response:
{
"draft_id": "draft-001",
"status": "saved"
}
Send Draft
POST /api/email/drafts/:id/send
Send a previously saved draft.
Response:
{
"message_id": "msg-abc123",
"status": "sent"
}
Email Accounts
List Accounts
GET /api/email/accounts
List configured email accounts.
Response:
{
"accounts": [
{
"id": "account-001",
"email": "user@example.com",
"provider": "stalwart",
"status": "connected"
}
]
}
Add Account
POST /api/email/accounts
Add a new email account.
Request:
{
"email": "user@example.com",
"imap_server": "imap.example.com",
"imap_port": 993,
"smtp_server": "smtp.example.com",
"smtp_port": 587,
"username": "user@example.com",
"password": "app-specific-password"
}
Response:
{
"account_id": "account-002",
"status": "connected",
"message": "Account added successfully"
}
BASIC Integration
Use email in your BASIC scripts:
' Simple email
SEND MAIL "recipient@example.com", "Subject", "Body"
' With variables
TALK "Who should I email?"
recipient = HEAR
TALK "What's the subject?"
subject = HEAR
TALK "What's the message?"
body = HEAR
SEND MAIL recipient, subject, body
TALK "Email sent!"
Configuration
Configure email in config.csv:
key,value
smtp-server,smtp.gmail.com
smtp-port,587
imap-server,imap.gmail.com
imap-port,993
email-username,your-email@gmail.com
email-password,your-app-password
email-from,Your Name <your-email@gmail.com>
Gmail Configuration:
- Use App Passwords (not your main password)
- Enable IMAP in Gmail settings
- Allow less secure apps or use OAuth
Stalwart Mail Server
When using the built-in Stalwart mail server:
Automatic Configuration:
- Server runs on standard ports (25, 993, 587)
- Accounts created through Zitadel integration
- TLS certificates auto-managed
Manual Configuration:
key,value
stalwart-enabled,true
stalwart-domain,mail.yourdomain.com
stalwart-admin-password,secure-password
Error Handling
| Status Code | Error | Description |
|---|---|---|
| 400 | invalid_recipient | Invalid email address |
| 401 | unauthorized | Authentication required |
| 403 | forbidden | No access to mailbox |
| 404 | not_found | Email not found |
| 422 | send_failed | SMTP delivery failed |
| 503 | service_unavailable | Mail server offline |
Rate Limits
| Endpoint | Limit |
|---|---|
| Send | 100/hour per user |
| Inbox | 300/hour per user |
| Attachments | 50/hour per user |
Email Read Tracking
General Bots supports email read tracking via an invisible 1x1 pixel embedded in HTML emails. When enabled, you can track when recipients open your emails.
Configuration
Enable tracking in config.csv:
name,value
email-read-pixel,true
server-url,https://yourdomain.com
How It Works
- When sending an HTML email, a tracking pixel is automatically injected
- When the recipient opens the email, their email client loads the pixel
- The server records the open event with timestamp and metadata
- You can query the tracking status via the API (see the sketch below) or view it in the Suite UI
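A minimal sketch of that query step, using the status endpoint documented below (host and token are placeholders):

```javascript
// Check whether a tracked email has been opened
async function isEmailRead(trackingId, token) {
  const response = await fetch(
    `http://localhost:8080/api/email/tracking/status/${trackingId}`,
    { headers: { 'Authorization': `Bearer ${token}` } }
  );
  const { data } = await response.json();
  return data.is_read; // read_at and read_count are also available
}
```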
Tracking Endpoints
Serve Tracking Pixel
GET /api/email/tracking/pixel/:tracking_id
This endpoint is called automatically by email clients when loading the tracking pixel. It returns a 1x1 transparent GIF and records the read event.
Response: Binary GIF image (1x1 pixel)
Headers Set:
- Content-Type: image/gif
- Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Get Tracking Status
GET /api/email/tracking/status/:tracking_id
Get the read status for a specific sent email.
Response:
{
"success": true,
"data": {
"tracking_id": "550e8400-e29b-41d4-a716-446655440000",
"to_email": "recipient@example.com",
"subject": "Meeting Tomorrow",
"sent_at": "2024-01-15T10:30:00Z",
"is_read": true,
"read_at": "2024-01-15T14:22:00Z",
"read_count": 3
}
}
List Tracked Emails
GET /api/email/tracking/list
List all sent emails with their tracking status.
Query Parameters:
- account_id - Filter by email account (optional)
- limit - Number of results (default: 50)
- offset - Pagination offset (default: 0)
- filter - Filter by status: all, read, unread (default: all)
Response:
{
"success": true,
"data": [
{
"tracking_id": "550e8400-e29b-41d4-a716-446655440000",
"to_email": "recipient@example.com",
"subject": "Meeting Tomorrow",
"sent_at": "2024-01-15T10:30:00Z",
"is_read": true,
"read_at": "2024-01-15T14:22:00Z",
"read_count": 3
},
{
"tracking_id": "661e8400-e29b-41d4-a716-446655440001",
"to_email": "another@example.com",
"subject": "Project Update",
"sent_at": "2024-01-15T11:00:00Z",
"is_read": false,
"read_at": null,
"read_count": 0
}
]
}
Get Tracking Statistics
GET /api/email/tracking/stats
Get aggregate statistics for email tracking.
Response:
{
"success": true,
"data": {
"total_sent": 150,
"total_read": 98,
"read_rate": 65.33,
"avg_time_to_read_hours": 4.5
}
}
Tracking Data Stored
For each tracked email, the following data is recorded:
| Field | Description |
|---|---|
| tracking_id | Unique ID embedded in the pixel URL |
| to_email | Recipient email address |
| subject | Email subject line |
| sent_at | Timestamp when email was sent |
| is_read | Whether email has been opened |
| read_at | Timestamp of first open |
| read_count | Number of times opened |
| first_read_ip | IP address of first open |
| last_read_ip | IP address of most recent open |
| user_agent | Browser/client user agent string |
Privacy Considerations
- Email tracking should be used responsibly
- Consider disclosing tracking in your email footer
- Some email clients block tracking pixels by default
- Users may have images disabled, preventing tracking
- GDPR/LGPD may require consent for tracking
Suite UI Integration
The Suite email interface shows tracking status:
- 📊 Tracking folder shows all tracked emails
- Green checkmarks (✓✓) indicate opened emails
- Gray checkmarks indicate sent but unread
- Hover over emails to see open timestamp
- Statistics panel shows open rates
Security Notes
- Never hardcode credentials - Use config.csv
- Use App Passwords - Not main account passwords
- Enable TLS - Always use encrypted connections
- Audit sending - Log all outbound emails
Database Schema
-- user_email_accounts
CREATE TABLE user_email_accounts (
id UUID PRIMARY KEY,
user_id UUID REFERENCES users(id),
email TEXT NOT NULL,
imap_server TEXT,
smtp_server TEXT,
encrypted_password TEXT,
created_at TIMESTAMPTZ
);
-- email_drafts
CREATE TABLE email_drafts (
id UUID PRIMARY KEY,
user_id UUID REFERENCES users(id),
recipients JSONB,
subject TEXT,
body TEXT,
attachments JSONB,
created_at TIMESTAMPTZ,
updated_at TIMESTAMPTZ
);
-- sent_email_tracking (for read receipts)
CREATE TABLE sent_email_tracking (
id UUID PRIMARY KEY,
tracking_id UUID NOT NULL UNIQUE,
bot_id UUID NOT NULL,
account_id UUID NOT NULL,
from_email VARCHAR(255) NOT NULL,
to_email VARCHAR(255) NOT NULL,
cc TEXT,
bcc TEXT,
subject TEXT NOT NULL,
sent_at TIMESTAMPTZ NOT NULL,
is_read BOOLEAN NOT NULL DEFAULT FALSE,
read_at TIMESTAMPTZ,
read_count INTEGER NOT NULL DEFAULT 0,
first_read_ip VARCHAR(45),
last_read_ip VARCHAR(45),
user_agent TEXT,
created_at TIMESTAMPTZ NOT NULL,
updated_at TIMESTAMPTZ NOT NULL
);
See Also
- SEND MAIL Keyword - BASIC email
- CREATE DRAFT Keyword - Draft creation
- External Services - Service configuration
- Configuration Parameters - email-read-pixel setting
Notifications API
BotServer provides RESTful endpoints for managing notifications across multiple channels including push notifications, in-app alerts, and message broadcasting.
Overview
The Notifications API enables:
- Push notifications to users
- Broadcast messages to groups
- Alert management
- Notification preferences
- Delivery tracking
Base URL
http://localhost:8080/api/v1/notifications
Authentication
All Notifications API requests require authentication:
Authorization: Bearer <token>
Endpoints
Send Notification
POST /send
Send a notification to one or more recipients.
Request Body:
{
"recipients": ["user123", "user456"],
"title": "System Update",
"message": "Maintenance scheduled for tonight",
"priority": "normal",
"channels": ["web", "email"],
"data": {
"action": "view_details",
"url": "/maintenance"
}
}
Response:
{
"notification_id": "ntf_abc123",
"recipients_count": 2,
"status": "queued",
"delivery": {
"web": "pending",
"email": "pending"
}
}
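As an illustration, a minimal client helper for this endpoint (host and token are placeholders):

```javascript
// Minimal sketch: send a notification through the documented endpoint
async function sendNotification(token, recipients, title, message) {
  const response = await fetch('http://localhost:8080/api/v1/notifications/send', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      recipients,
      title,
      message,
      priority: 'normal',
      channels: ['web']
    })
  });
  return response.json(); // { notification_id, recipients_count, status, delivery }
}
```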
Broadcast Message
POST /broadcast
Send a message to all users or a specific group.
Request Body:
{
"target": "all",
"filters": {
"channel": "web",
"last_active": "7d"
},
"message": {
"title": "New Feature Available",
"body": "Check out our latest update!",
"image_url": "https://example.com/feature.png"
},
"schedule": "2024-01-15T14:00:00Z"
}
Response:
{
"broadcast_id": "brd_xyz789",
"target_count": 1250,
"scheduled_for": "2024-01-15T14:00:00Z",
"status": "scheduled"
}
Get Notification Status
GET /notifications/{notification_id}
Get the status of a sent notification.
Response:
{
"notification_id": "ntf_abc123",
"created_at": "2024-01-15T10:00:00Z",
"status": "delivered",
"delivery_details": [
{
"recipient": "user123",
"channel": "web",
"status": "delivered",
"delivered_at": "2024-01-15T10:00:05Z"
},
{
"recipient": "user123",
"channel": "email",
"status": "delivered",
"delivered_at": "2024-01-15T10:00:10Z"
}
]
}
List Notifications
GET /notifications
List sent notifications with optional filters.
Query Parameters:
- page - Page number (default: 1)
- limit - Items per page (default: 20)
- status - Filter by status
- channel - Filter by channel
- start_date - Start date filter
- end_date - End date filter
Response:
{
"notifications": [
{
"notification_id": "ntf_abc123",
"title": "System Update",
"status": "delivered",
"created_at": "2024-01-15T10:00:00Z",
"recipients_count": 2
}
],
"total": 150,
"page": 1,
"limit": 20
}
Mark as Read
PATCH /notifications/{notification_id}/read
Mark a notification as read by the current user.
Response:
{
"notification_id": "ntf_abc123",
"marked_as_read": true,
"read_at": "2024-01-15T10:05:00Z"
}
Delete Notification
DELETE /notifications/{notification_id}
Delete a notification from the system.
Response:
{
"deleted": true,
"notification_id": "ntf_abc123"
}
User Preferences
Get Preferences
GET /users/{user_id}/preferences
Get notification preferences for a user.
Response:
{
"user_id": "user123",
"preferences": {
"email": {
"enabled": true,
"frequency": "immediate"
},
"push": {
"enabled": true,
"quiet_hours": {
"enabled": true,
"start": "22:00",
"end": "08:00"
}
},
"sms": {
"enabled": false
},
"categories": {
"system": true,
"marketing": false,
"updates": true
}
}
}
Update Preferences
PATCH /users/{user_id}/preferences
Update notification preferences.
Request Body:
{
"email": {
"enabled": false
},
"push": {
"quiet_hours": {
"enabled": true,
"start": "23:00",
"end": "07:00"
}
}
}
Notification Templates
Create Template
POST /templates
Create a reusable notification template.
Request Body:
{
"name": "welcome_message",
"title": "Welcome to {{app_name}}",
"body": "Hi {{user_name}}, welcome to our platform!",
"channels": ["email", "push"],
"variables": ["app_name", "user_name"]
}
Use Template
POST /send/template
Send notification using a template.
Request Body:
{
"template": "welcome_message",
"recipients": ["user789"],
"variables": {
"app_name": "BotServer",
"user_name": "John"
}
}
Notification Types
System Notifications
Critical system messages and alerts.
{
"type": "system",
"priority": "high",
"persistent": true,
"require_acknowledgment": true
}
User Notifications
Personal messages and updates.
{
"type": "user",
"priority": "normal",
"expires_at": "2024-01-22T10:00:00Z"
}
Broadcast Notifications
Mass communications to multiple users.
{
"type": "broadcast",
"target": "segment",
"segment_id": "active_users"
}
Delivery Channels
Web Push
Browser push notifications.
{
"channel": "web",
"options": {
"icon": "https://example.com/icon.png",
"badge": "https://example.com/badge.png",
"vibrate": [200, 100, 200],
"require_interaction": false
}
}
Email
Email notifications with rich content.
{
"channel": "email",
"options": {
"from": "noreply@example.com",
"reply_to": "support@example.com",
"attachments": [],
"html": true
}
}
SMS
Text message notifications.
{
"channel": "sms",
"options": {
"sender_id": "BOTSERV",
"unicode": true
}
}
In-App
Notifications within the application.
{
"channel": "in_app",
"options": {
"persist": true,
"category": "updates"
}
}
Webhook Events
Delivery Events
Configure webhooks to receive delivery updates.
{
"event": "notification.delivered",
"notification_id": "ntf_abc123",
"recipient": "user123",
"channel": "email",
"delivered_at": "2024-01-15T10:00:10Z"
}
Interaction Events
Track user interactions with notifications.
{
"event": "notification.clicked",
"notification_id": "ntf_abc123",
"user_id": "user123",
"clicked_at": "2024-01-15T10:05:00Z",
"action": "view_details"
}
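On the receiving side, a webhook is just an HTTP endpoint you expose. Below is a minimal sketch using Express; the framework choice and route are assumptions, not something BotServer prescribes:

```javascript
import express from 'express';

const app = express();
app.use(express.json());

// Receive delivery and interaction events pushed by the notification system
app.post('/webhooks/notifications', (req, res) => {
  const event = req.body;
  switch (event.event) {
    case 'notification.delivered':
      console.log(`Delivered ${event.notification_id} via ${event.channel}`);
      break;
    case 'notification.clicked':
      console.log(`User ${event.user_id} clicked ${event.notification_id}`);
      break;
  }
  res.sendStatus(200); // acknowledge quickly; do heavy work asynchronously
});

app.listen(3000);
```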
Error Responses
400 Bad Request
{
"error": "invalid_recipients",
"message": "One or more recipients are invalid",
"invalid_recipients": ["unknown_user"]
}
429 Rate Limit
{
"error": "rate_limit_exceeded",
"message": "Notification send limit exceeded",
"limit": 100,
"window": "1h",
"retry_after": 3600
}
Usage Examples
Send Simple Notification
curl -X POST \
-H "Authorization: Bearer token123" \
-H "Content-Type: application/json" \
-d '{
"recipients": ["user123"],
"title": "Hello",
"message": "This is a test notification"
}' \
http://localhost:8080/api/v1/notifications/send
Schedule Broadcast
curl -X POST \
-H "Authorization: Bearer token123" \
-H "Content-Type: application/json" \
-d '{
"target": "all",
"message": {
"title": "Scheduled Maintenance",
"body": "System will be unavailable from 2 AM to 4 AM"
},
"schedule": "2024-01-20T02:00:00Z"
}' \
http://localhost:8080/api/v1/notifications/broadcast
Best Practices
- Batch Notifications: Send to multiple recipients in one request
- Use Templates: Maintain consistent messaging
- Respect Preferences: Check user settings before sending
- Handle Failures: Implement retry logic
- Track Delivery: Monitor delivery rates
- Avoid Spam: Rate limit and deduplicate messages
Rate Limits
| Operation | Limit | Window |
|---|---|---|
| Send Notification | 100/hour | Per user |
| Broadcast | 10/day | Per account |
| Template Creation | 20/day | Per account |
Related APIs
- User API - User management
- WebSocket API - Real-time notifications
- Email API - Email notifications
Calendar API
The Calendar API provides endpoints for managing events, schedules, and time-based activities within BotServer.
Status
⚠️ NOT IMPLEMENTED
This API is planned for future development but is not currently available in BotServer.
Planned Features
The Calendar API will enable event creation and management, meeting scheduling, availability checking, recurring events, calendar synchronization, and reminders with notifications.
Planned Endpoints
Event Management
Event management endpoints handle the lifecycle of calendar events. Create events with POST /api/v1/calendar/events, list events with GET /api/v1/calendar/events, retrieve specific event details with GET /api/v1/calendar/events/{event_id}, update events with PATCH /api/v1/calendar/events/{event_id}, and delete events with DELETE /api/v1/calendar/events/{event_id}.
Scheduling
Scheduling endpoints help coordinate meetings. Find available time slots with POST /api/v1/calendar/schedule, schedule meetings with POST /api/v1/calendar/meeting, and check availability with GET /api/v1/calendar/availability.
Recurring Events
Recurring event endpoints manage events that repeat on a schedule. Create recurring events with POST /api/v1/calendar/events/recurring and update recurrence patterns with PATCH /api/v1/calendar/events/{event_id}/recurrence.
Reminders
Reminder endpoints manage notifications for upcoming events. Add reminders with POST /api/v1/calendar/events/{event_id}/reminders and list upcoming reminders with GET /api/v1/calendar/reminders.
Planned Integration with BASIC
When implemented, calendar features will be accessible via BASIC keywords:
' Create event (not yet available)
event_id = CREATE EVENT "Team Meeting", "2024-02-01 14:00"
SET EVENT DURATION event_id, 60 ' 60 minutes
' Check availability (not yet available)
available = CHECK AVAILABILITY "user123", "2024-02-01"
IF available THEN
    TALK "User is available"
END IF
' Schedule meeting (not yet available)
meeting_id = SCHEDULE MEETING participants, datetime, duration
SEND INVITES meeting_id
Planned Data Models
Event
{
"event_id": "evt_123",
"title": "Team Meeting",
"description": "Weekly sync",
"start_time": "2024-02-01T14:00:00Z",
"end_time": "2024-02-01T15:00:00Z",
"location": "Conference Room A",
"attendees": ["user123", "user456"],
"recurrence": {
"frequency": "weekly",
"interval": 1,
"days_of_week": ["monday"],
"end_date": "2024-12-31"
},
"reminders": [
{"minutes_before": 15, "method": "notification"},
{"minutes_before": 60, "method": "email"}
]
}
Availability
{
"user_id": "user123",
"date": "2024-02-01",
"time_slots": [
{"start": "09:00", "end": "10:00", "available": true},
{"start": "10:00", "end": "11:00", "available": false},
{"start": "11:00", "end": "12:00", "available": true}
]
}
Planned Features Detail
Event Types
The API will support several event types. Single events are one-time occurrences. Recurring events follow daily, weekly, or monthly patterns. All-day events span the full day without specific start and end times. Multi-day events extend across multiple consecutive days.
Notification Methods
Notifications can be delivered through in-app notifications, email reminders, SMS alerts when configured, and bot messages through the chat interface.
Calendar Views
The API will support multiple calendar views including day view for detailed hourly scheduling, week view for weekly planning, month view for long-term visibility, and agenda view for a list-based perspective.
Time Zone Support
Time zone handling will include user-specific time zones, automatic daylight saving time adjustments, and cross-timezone meeting coordination to ensure events display correctly for all participants.
Integration Points
The calendar system will integrate with external calendar systems like Google Calendar and Outlook, video conferencing platforms, the task management system, and the notification system for reminders.
Implementation Considerations
When implemented, the Calendar API will use PostgreSQL for event storage, support iCal format for import and export, handle time zones properly across all operations, provide conflict detection for scheduling, include role-based access control for event management, support delegation for assistants, and enable calendar sharing between users.
Alternative Solutions
Until the Calendar API is implemented, consider these alternatives.
External Calendar Services
You can integrate with external providers such as Google Calendar API, Microsoft Graph API for Outlook, or CalDAV servers for standards-based calendar access.
Simple Scheduling in BASIC
For basic appointment tracking, you can store appointments in bot memory:
' Store appointments in bot memory
appointment = "Meeting with client at 2 PM"
SET BOT MEMORY "appointment_" + date, appointment
' Retrieve appointments
today_appointment = GET BOT MEMORY "appointment_" + TODAY()
IF today_appointment <> "" THEN
    TALK "Today's appointment: " + today_appointment
END IF
Task-based Scheduling
An alternative approach uses the Tasks API with due dates, creates tasks for time-sensitive items, and sets reminders via scheduled BASIC scripts.
Future Integration
The Calendar API will integrate with the Tasks API to link tasks to calendar events, the Notifications API for event reminders, the User API for user availability, and the Meeting API for video conferencing.
Workaround Example
Until the Calendar API is available, you can implement basic scheduling:
' Simple appointment booking system
FUNCTION BookAppointment(date, time, description)
    key = "appointment_" + date + "_" + time
    existing = GET BOT MEMORY key
    IF existing = "" THEN
        SET BOT MEMORY key, description
        TALK "Appointment booked for " + date + " at " + time
        RETURN TRUE
    ELSE
        TALK "That time slot is already taken"
        RETURN FALSE
    END IF
END FUNCTION

' Check availability
FUNCTION CheckAvailability(date)
    slots = ["09:00", "10:00", "11:00", "14:00", "15:00", "16:00"]
    available = []
    FOR EACH slot IN slots
        key = "appointment_" + date + "_" + slot
        appointment = GET BOT MEMORY key
        IF appointment = "" THEN
            available = APPEND(available, slot)
        END IF
    NEXT
    RETURN available
END FUNCTION
Status Updates
Check the GitHub repository for updates on Calendar API implementation status.
Tasks API
BotServer provides RESTful endpoints for creating, managing, and tracking tasks and workflows within bot conversations.
Overview
The Tasks API enables:
- Task creation and assignment
- Workflow management
- Task tracking and status updates
- Deadline management
- Task prioritization
- Collaboration features
Base URL
http://localhost:8080/api/v1/tasks
Authentication
All Tasks API requests require authentication:
Authorization: Bearer <token>
Endpoints
Create Task
POST /tasks
Create a new task.
Request Body:
{
"title": "Review customer complaint",
"description": "Investigate and respond to customer issue #1234",
"assignee": "user456",
"due_date": "2024-01-20T17:00:00Z",
"priority": "high",
"tags": ["support", "urgent"],
"context": {
"conversation_id": "conv_abc123",
"bot_id": "support_bot"
}
}
Response:
{
"task_id": "tsk_xyz789",
"title": "Review customer complaint",
"status": "pending",
"created_at": "2024-01-15T10:00:00Z",
"created_by": "user123"
}
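For illustration, a minimal client call (host and token are placeholders; this sketch assumes POST /tasks resolves at the Tasks base URL):

```javascript
// Minimal sketch: create a task and read back its ID
async function createTask(token) {
  const response = await fetch('http://localhost:8080/api/v1/tasks', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${token}`
    },
    body: JSON.stringify({
      title: 'Review customer complaint',
      assignee: 'user456',
      due_date: '2024-01-20T17:00:00Z',
      priority: 'high',
      tags: ['support', 'urgent']
    })
  });
  const task = await response.json();
  return task.task_id; // e.g. "tsk_xyz789"
}
```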
Get Task
GET /tasks/{task_id}
Retrieve task details.
Response:
{
"task_id": "tsk_xyz789",
"title": "Review customer complaint",
"description": "Investigate and respond to customer issue #1234",
"status": "in_progress",
"assignee": {
"user_id": "user456",
"name": "Jane Smith",
"avatar_url": "https://example.com/avatar.jpg"
},
"priority": "high",
"due_date": "2024-01-20T17:00:00Z",
"created_at": "2024-01-15T10:00:00Z",
"updated_at": "2024-01-15T14:30:00Z",
"progress": 60,
"time_spent_minutes": 45,
"comments_count": 3,
"attachments_count": 2
}
Update Task
PATCH /tasks/{task_id}
Update task properties.
Request Body:
{
"status": "in_progress",
"progress": 60,
"assignee": "user789"
}
Response:
{
"task_id": "tsk_xyz789",
"updated": true,
"updated_fields": ["status", "progress", "assignee"],
"updated_at": "2024-01-15T14:30:00Z"
}
List Tasks
GET /tasks
List tasks with filtering and pagination.
Query Parameters:
- status - Filter by status: pending, in_progress, completed, cancelled
- assignee - Filter by assignee user ID
- priority - Filter by priority: low, medium, high, critical
- due_before - Tasks due before date
- due_after - Tasks due after date
- tags - Comma-separated tags
- page - Page number (default: 1)
- limit - Items per page (default: 20)
- sort - Sort by: created_at, due_date, priority, updated_at
- order - Sort order: asc, desc
Response:
{
"tasks": [
{
"task_id": "tsk_xyz789",
"title": "Review customer complaint",
"status": "in_progress",
"assignee": "user456",
"priority": "high",
"due_date": "2024-01-20T17:00:00Z",
"progress": 60
}
],
"total": 42,
"page": 1,
"limit": 20
}
Complete Task
POST /tasks/{task_id}/complete
Mark a task as completed.
Request Body:
{
"resolution": "Issue resolved - refund processed",
"time_spent_minutes": 90,
"outcomes": ["customer_satisfied", "refund_issued"]
}
Response:
{
"task_id": "tsk_xyz789",
"status": "completed",
"completed_at": "2024-01-15T16:00:00Z",
"completed_by": "user456"
}
Delete Task
DELETE /tasks/{task_id}
Delete a task.
Response:
{
"deleted": true,
"task_id": "tsk_xyz789"
}
Task Comments
Add Comment
POST /tasks/{task_id}/comments
Add a comment to a task.
Request Body:
{
"text": "Contacted customer via email, waiting for response",
"mentions": ["user123"],
"attachments": ["file_abc123"]
}
Response:
{
"comment_id": "cmt_123",
"task_id": "tsk_xyz789",
"text": "Contacted customer via email, waiting for response",
"author": "user456",
"created_at": "2024-01-15T14:30:00Z"
}
List Comments
GET /tasks/{task_id}/comments
Get task comments.
Response:
{
"comments": [
{
"comment_id": "cmt_123",
"text": "Contacted customer via email",
"author": {
"user_id": "user456",
"name": "Jane Smith"
},
"created_at": "2024-01-15T14:30:00Z"
}
],
"total": 3
}
Task Attachments
Upload Attachment
POST /tasks/{task_id}/attachments
Attach a file to a task.
Request:
- Method: POST
- Content-Type: multipart/form-data
- Form fields: file (binary)
Response:
{
"attachment_id": "att_789",
"task_id": "tsk_xyz789",
"filename": "screenshot.png",
"size_bytes": 102400,
"mime_type": "image/png",
"uploaded_at": "2024-01-15T14:45:00Z"
}
Task Templates
Create Template
POST /templates
Create a reusable task template.
Request Body:
{
"name": "Customer Complaint",
"description_template": "Investigate issue: {{issue_id}}",
"default_priority": "high",
"default_tags": ["support"],
"checklist": [
"Review conversation history",
"Contact customer",
"Provide resolution",
"Follow up"
]
}
Create Task from Template
POST /tasks/from-template
Create a task from a template.
Request Body:
{
"template_id": "tpl_123",
"variables": {
"issue_id": "#1234"
},
"assignee": "user456",
"due_date": "2024-01-20T17:00:00Z"
}
Workflows
Create Workflow
POST /workflows
Create a multi-step workflow.
Request Body:
{
"name": "Customer Onboarding",
"steps": [
{
"name": "Account Setup",
"assignee": "user456",
"duration_hours": 2
},
{
"name": "Training",
"assignee": "user789",
"duration_hours": 4,
"depends_on": ["Account Setup"]
}
]
}
Get Workflow Status
GET /workflows/{workflow_id}/status
Get workflow progress.
Response:
{
"workflow_id": "wf_123",
"name": "Customer Onboarding",
"status": "in_progress",
"progress": 50,
"completed_steps": 1,
"total_steps": 2,
"current_step": "Training",
"estimated_completion": "2024-01-16T12:00:00Z"
}
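A minimal polling sketch against this endpoint (host and token are placeholders; it assumes the workflow routes resolve under the Tasks base URL):

```javascript
// Poll a workflow's status until it completes or we give up
async function waitForWorkflow(token, workflowId, maxAttempts = 60) {
  const url = `http://localhost:8080/api/v1/tasks/workflows/${workflowId}/status`;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(url, { headers: { 'Authorization': `Bearer ${token}` } });
    const workflow = await res.json();
    if (workflow.status === 'completed') return workflow;
    await new Promise((r) => setTimeout(r, 5000)); // wait 5s between polls
  }
  throw new Error(`Workflow ${workflowId} did not complete in time`);
}
```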
Task Automation
Create Automation Rule
POST /automations
Create rules for automatic task creation.
Request Body:
{
"name": "High Priority Support",
"trigger": {
"type": "conversation_tag",
"value": "urgent"
},
"action": {
"type": "create_task",
"template": "tpl_urgent",
"auto_assign": true,
"priority": "critical"
}
}
Notifications
Task Notifications
Configure notifications for task events:
{
"events": [
"task_assigned",
"task_completed",
"task_overdue",
"comment_added"
],
"channels": ["email", "in_app"],
"recipients": ["assignee", "watchers"]
}
Analytics
Task Analytics
GET /tasks/analytics
Get task performance metrics.
Response:
{
"summary": {
"total_tasks": 234,
"completed": 189,
"in_progress": 35,
"overdue": 10,
"completion_rate": 0.81,
"average_completion_time_hours": 4.5
},
"by_priority": {
"critical": {"total": 10, "completed": 8},
"high": {"total": 45, "completed": 40},
"medium": {"total": 120, "completed": 100},
"low": {"total": 59, "completed": 41}
},
"by_assignee": [
{
"user_id": "user456",
"name": "Jane Smith",
"tasks_completed": 45,
"average_time_hours": 3.2
}
]
}
Error Responses
400 Bad Request
{
"error": "invalid_due_date",
"message": "Due date must be in the future"
}
404 Not Found
{
"error": "task_not_found",
"message": "Task tsk_xyz789 not found"
}
403 Forbidden
{
"error": "permission_denied",
"message": "You don't have permission to modify this task"
}
Best Practices
- Clear Titles: Use descriptive, action-oriented task titles
- Set Priorities: Always set appropriate priority levels
- Add Context: Include conversation or bot context
- Use Templates: Create templates for recurring task types
- Track Progress: Update progress regularly
- Set Realistic Deadlines: Allow adequate time for completion
- Use Tags: Categorize tasks with consistent tags
Integration with BASIC
Tasks can be created from BASIC scripts:
' Create task from conversation
task_id = CREATE TASK "Follow up with customer", "user456"
SET TASK PRIORITY task_id, "high"
SET TASK DUE DATE task_id, NOW() + 24 * 3600
' Check task status
status = GET TASK STATUS task_id
IF status = "completed" THEN
    TALK "Task has been completed"
END IF
Rate Limits
| Operation | Limit | Window |
|---|---|---|
| Create Task | 100/hour | Per user |
| Update Task | 200/hour | Per user |
| List Tasks | 60/minute | Per user |
| Add Comment | 50/hour | Per user |
Related APIs
- Notifications API - Task notifications
- Analytics API - Task analytics
- User API - User management
Storage API
⚠️ Note: This API is not yet implemented and is planned for a future release.
BotServer provides a RESTful API for managing file storage and object management through its S3-compatible storage backend.
Overview
The Storage API allows you to:
- Upload and download files
- Manage buckets and objects
- Generate presigned URLs
- Handle binary data and documents
- Organize bot assets
Base URL
http://localhost:8080/api/v1/storage
Authentication
All storage API requests require authentication:
Authorization: Bearer <token>
Endpoints
List Buckets
GET /buckets
List all available storage buckets.
Response:
{
"buckets": [
{
"name": "mybot.gbai",
"created": "2024-01-15T10:00:00Z",
"size": 1048576
}
]
}
Create Bucket
POST /buckets
Create a new storage bucket.
Request Body:
{
"name": "newbot.gbai",
"region": "us-east-1",
"versioning": false
}
Response:
{
"bucket": "newbot.gbai",
"created": true,
"location": "/newbot.gbai"
}
List Objects
GET /buckets/{bucket}/objects
List objects in a bucket.
Query Parameters:
- prefix - Filter objects by prefix
- delimiter - Delimiter for grouping
- max_keys - Maximum number of results (default: 1000)
- continuation_token - Pagination token
Response:
{
"objects": [
{
"key": "documents/manual.pdf",
"size": 2048576,
"last_modified": "2024-01-15T10:30:00Z",
"etag": "d41d8cd98f00b204e9800998ecf8427e"
}
],
"is_truncated": false,
"continuation_token": null
}
Upload Object
PUT /buckets/{bucket}/objects/{key}
Upload a file to storage.
Headers:
- Content-Type - MIME type of the file
- Content-Length - Size of the file
- x-amz-meta-* - Custom metadata
Request Body: Binary file data
Response:
{
"bucket": "mybot.gbai",
"key": "documents/report.pdf",
"etag": "d41d8cd98f00b204e9800998ecf8427e",
"version_id": null
}
Download Object
GET /buckets/{bucket}/objects/{key}
Download a file from storage.
Headers:
- Range - Partial content request (optional)
- If-None-Match - ETag for caching (optional)
Response: Binary file data with appropriate headers
Delete Object
DELETE /buckets/{bucket}/objects/{key}
Delete an object from storage.
Response:
{
"deleted": true,
"key": "documents/old-file.pdf"
}
Copy Object
POST /buckets/{bucket}/objects/{key}/copy
Copy an object to a new location.
Request Body:
{
"source_bucket": "source.gbai",
"source_key": "file.pdf",
"destination_bucket": "dest.gbai",
"destination_key": "copied-file.pdf"
}
Response:
{
"copied": true,
"source": "source.gbai/file.pdf",
"destination": "dest.gbai/copied-file.pdf"
}
Generate Presigned URL
POST /buckets/{bucket}/objects/{key}/presign
Generate a presigned URL for temporary access.
Request Body:
{
"operation": "GET",
"expires_in": 3600,
"content_type": "application/pdf"
}
Response:
{
"url": "http://localhost:9000/mybot.gbai/file.pdf?X-Amz-Algorithm=...",
"expires_at": "2024-01-15T11:30:00Z"
}
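Once implemented, the two-step flow might look like this sketch (host and token are placeholders):

```javascript
// Step 1: ask the API for a temporary URL (documented request body)
async function getPresignedUrl(token, bucket, key) {
  const response = await fetch(
    `http://localhost:8080/api/v1/storage/buckets/${bucket}/objects/${key}/presign`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${token}`
      },
      body: JSON.stringify({ operation: 'GET', expires_in: 3600 })
    }
  );
  const { url } = await response.json();
  return url;
}

// Step 2: the presigned URL can then be fetched directly,
// without the bearer token, until it expires:
// const blob = await (await fetch(url)).blob();
```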
Object Metadata
HEAD /buckets/{bucket}/objects/{key}
Get object metadata without downloading.
Response Headers:
- Content-Type - MIME type
- Content-Length - File size
- Last-Modified - Modification time
- ETag - Entity tag
- x-amz-meta-* - Custom metadata
Multipart Upload
POST /buckets/{bucket}/objects/{key}/multipart
Initiate multipart upload for large files.
Response:
{
"upload_id": "abc123...",
"bucket": "mybot.gbai",
"key": "large-file.zip"
}
Upload Part:
PUT /buckets/{bucket}/objects/{key}/multipart/{uploadId}/{partNumber}
Complete Upload:
POST /buckets/{bucket}/objects/{key}/multipart/{uploadId}/complete
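Putting the three multipart calls together, a client-side sketch might look like the following; the ETag handling and completion payload mirror S3 conventions and are assumptions:

```javascript
const PART_SIZE = 5 * 1024 * 1024; // documented multipart chunk size

// Initiate, upload parts sequentially, then complete
async function multipartUpload(token, bucket, key, file) {
  const base = `http://localhost:8080/api/v1/storage/buckets/${bucket}/objects/${key}/multipart`;
  const auth = { 'Authorization': `Bearer ${token}` };

  // Step 1: initiate and receive an upload_id (documented response)
  const init = await fetch(base, { method: 'POST', headers: auth });
  const { upload_id } = await init.json();

  // Step 2: upload each 5 MB slice as a numbered part
  const parts = [];
  for (let number = 1, offset = 0; offset < file.size; number++, offset += PART_SIZE) {
    const chunk = file.slice(offset, offset + PART_SIZE);
    const resp = await fetch(`${base}/${upload_id}/${number}`, {
      method: 'PUT',
      headers: auth,
      body: chunk
    });
    // ETag exposure on this header is an assumption
    parts.push({ part_number: number, etag: resp.headers.get('ETag') });
  }

  // Step 3: complete the upload (payload shape is an assumption)
  await fetch(`${base}/${upload_id}/complete`, {
    method: 'POST',
    headers: { ...auth, 'Content-Type': 'application/json' },
    body: JSON.stringify({ parts })
  });
}
```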
Error Responses
404 Not Found
{
"error": "not_found",
"message": "Object not found",
"resource": "mybot.gbai/missing.pdf"
}
409 Conflict
{
"error": "conflict",
"message": "Bucket already exists",
"bucket": "existing.gbai"
}
507 Insufficient Storage
{
"error": "insufficient_storage",
"message": "Storage quota exceeded",
"quota": 10737418240,
"used": 10737418240
}
Usage Examples
Upload File with cURL
curl -X PUT \
-H "Authorization: Bearer token123" \
-H "Content-Type: application/pdf" \
--data-binary @document.pdf \
http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects/docs/manual.pdf
Download File
curl -X GET \
-H "Authorization: Bearer token123" \
http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects/docs/manual.pdf \
-o downloaded.pdf
List Objects with Prefix
curl -X GET \
-H "Authorization: Bearer token123" \
"http://localhost:8080/api/v1/storage/buckets/mybot.gbai/objects?prefix=docs/"
Storage Organization
Recommended Structure
bucket/
├── .gbkb/ # Knowledge base files
│ ├── docs/
│ └── data/
├── .gbdialog/ # Dialog scripts
│ ├── scripts/
│ └── tools/
├── .gbtheme/ # Theme assets
│ ├── css/
│ └── images/
└── .gbdrive/ # User uploads
├── attachments/
└── temp/
Quotas and Limits
| Limit | Default Value |
|---|---|
| Max file size | 100 MB |
| Max bucket size | 10 GB |
| Max objects per bucket | 10,000 |
| Presigned URL validity | 7 days |
| Multipart chunk size | 5 MB |
Performance Tips
- Use Multipart Upload for files > 5MB
- Enable Caching with ETags
- Compress Large Files before upload
- Use Presigned URLs for direct client uploads
- Implement Retry Logic for network failures, as sketched below
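A minimal sketch of that retry tip, using exponential backoff (the policy values are illustrative):

```javascript
// Retry a fetch up to `retries` times with exponential backoff
async function fetchWithRetry(url, options, retries = 3) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      const response = await fetch(url, options);
      // Retry only on transient server-side failures (5xx)
      if (response.status < 500) return response;
    } catch (err) {
      if (attempt === retries) throw err; // network error on last attempt
    }
    // Wait 1s, 2s, 4s, ... before the next attempt
    await new Promise((r) => setTimeout(r, 1000 * 2 ** attempt));
  }
  throw new Error(`Request to ${url} failed after ${retries + 1} attempts`);
}
```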
Security Considerations
- All uploads are scanned for malware
- File types are validated
- Presigned URLs expire automatically
- Access control per bucket
- Encryption at rest
- SSL/TLS for transfers
Backup API
⚠️ Note: This API is not yet implemented and is planned for a future release.
The Backup API will provide endpoints for creating, managing, and restoring backups of bot data and configurations.
Planned Features
- Automated backup scheduling
- Point-in-time recovery
- Export/import bot configurations
- Data archival and retention policies
- Incremental and full backup options
Base URL (Planned)
http://localhost:8080/api/v1/backup
Authentication
Will use the standard BotServer authentication mechanism with appropriate role-based permissions.
Endpoints (Planned)
Create Backup
POST /api/v1/backup/create
List Backups
GET /api/v1/backup/list
Restore Backup
POST /api/v1/backup/restore/{backup_id}
Delete Backup
DELETE /api/v1/backup/{backup_id}
Schedule Backup
POST /api/v1/backup/schedule
Implementation Status
This API is currently in the planning phase. Check back in future releases for availability.
Analytics API
⚠️ Note: This API is not yet implemented and is planned for a future release.
The Analytics API will provide endpoints for tracking, analyzing, and reporting on bot usage and performance metrics.
Planned Features
- Usage analytics and statistics
- Conversation metrics and insights
- User engagement tracking
- Performance monitoring
- Custom report generation
- Real-time analytics dashboard
Base URL (Planned)
http://localhost:8080/api/v1/analytics
Authentication
Will use the standard BotServer authentication mechanism with appropriate role-based permissions.
Endpoints (Planned)
Get Usage Statistics
GET /api/v1/analytics/usage
Get Conversation Metrics
GET /api/v1/analytics/conversations
Get User Engagement
GET /api/v1/analytics/engagement
Generate Custom Report
POST /api/v1/analytics/reports
Get Real-time Metrics
GET /api/v1/analytics/realtime
Implementation Status
This API is currently in the planning phase. Check back in future releases for availability.
Reports API
⚠️ Note: This API is not yet implemented and is planned for a future release.
The Reports API will provide endpoints for generating, managing, and exporting various types of reports from bot data and analytics.
Planned Features
- Custom report generation
- Scheduled report delivery
- Multiple export formats (PDF, CSV, Excel)
- Report templates and presets
- Historical data reporting
- Compliance and audit reports
Base URL (Planned)
http://localhost:8080/api/v1/reports
Authentication
Will use the standard BotServer authentication mechanism with appropriate role-based permissions.
Endpoints (Planned)
Generate Report
POST /api/v1/reports/generate
List Reports
GET /api/v1/reports/list
Get Report Status
GET /api/v1/reports/{report_id}/status
Download Report
GET /api/v1/reports/{report_id}/download
Schedule Report
POST /api/v1/reports/schedule
Delete Report
DELETE /api/v1/reports/{report_id}
Implementation Status
This API is currently in the planning phase. Check back in future releases for availability.
Admin API
⚠️ Note: This API is not yet implemented and is planned for a future release.
The Admin API will provide endpoints for system administration, user management, and configuration management.
Planned Features
- System configuration management
- User and role administration
- Bot lifecycle management
- System health monitoring
- Audit logging and compliance
- Backup and restore operations
Base URL (Planned)
http://localhost:8080/api/v1/admin
Authentication
Will use the standard BotServer authentication mechanism with administrator-level permissions required.
Endpoints (Planned)
System Configuration
GET /api/v1/admin/config
PUT /api/v1/admin/config
User Management
GET /api/v1/admin/users
POST /api/v1/admin/users
DELETE /api/v1/admin/users/{user_id}
Bot Management
GET /api/v1/admin/bots
POST /api/v1/admin/bots/{bot_id}/restart
DELETE /api/v1/admin/bots/{bot_id}
System Health
GET /api/v1/admin/health
GET /api/v1/admin/metrics
Audit Logs
GET /api/v1/admin/audit
Implementation Status
This API is currently in the planning phase. Check back in future releases for availability.
Monitoring API
⚠️ Note: This API is not yet implemented and is planned for a future release.
The Monitoring API will provide endpoints for real-time system monitoring, performance metrics, and health checks.
Planned Features
- Real-time system metrics
- Performance monitoring
- Health check endpoints
- Alert configuration
- Log aggregation
- Resource usage tracking
- Service status monitoring
Base URL (Planned)
http://localhost:8080/api/v1/monitoring
Authentication
Will use the standard BotServer authentication mechanism with appropriate role-based permissions.
Endpoints (Planned)
System Health
GET /api/v1/monitoring/health
Performance Metrics
GET /api/v1/monitoring/metrics
Service Status
GET /api/v1/monitoring/services
Resource Usage
GET /api/v1/monitoring/resources
Alert Configuration
POST /api/v1/monitoring/alerts
GET /api/v1/monitoring/alerts
Log Stream
GET /api/v1/monitoring/logs
Implementation Status
This API is currently in the planning phase. Check back in future releases for availability.
AI API
⚠️ Note: This API is not yet implemented and is planned for a future release.
The AI API will provide endpoints for managing AI models, inference, training, and advanced AI operations.
Planned Features
- Model management and deployment
- Inference endpoints for various AI tasks
- Fine-tuning and training capabilities
- Model versioning and rollback
- Performance optimization settings
- Custom AI pipeline configuration
Base URL (Planned)
http://localhost:8080/api/v1/ai
Authentication
Will use the standard BotServer authentication mechanism with appropriate role-based permissions.
Endpoints (Planned)
Model Management
GET /api/v1/ai/models
POST /api/v1/ai/models/deploy
DELETE /api/v1/ai/models/{model_id}
Inference
POST /api/v1/ai/inference
POST /api/v1/ai/chat/completions
Training
POST /api/v1/ai/training/start
GET /api/v1/ai/training/{job_id}/status
Model Configuration
GET /api/v1/ai/models/{model_id}/config
PUT /api/v1/ai/models/{model_id}/config
Implementation Status
This API is currently in the planning phase. Check back in future releases for availability.
ML API
⚠️ Note: This API is not yet implemented and is planned for a future release.
The ML API will provide endpoints for machine learning operations, model training, and predictive analytics.
Planned Features
- Dataset management and preprocessing
- Model training and evaluation
- Hyperparameter tuning
- Batch predictions
- Model performance monitoring
- A/B testing for models
- Feature engineering tools
Base URL (Planned)
http://localhost:8080/api/v1/ml
Authentication
Will use the standard BotServer authentication mechanism with appropriate role-based permissions.
Endpoints (Planned)
Dataset Management
POST /api/v1/ml/datasets
GET /api/v1/ml/datasets
DELETE /api/v1/ml/datasets/{dataset_id}
Model Training
POST /api/v1/ml/train
GET /api/v1/ml/jobs/{job_id}
POST /api/v1/ml/jobs/{job_id}/stop
Predictions
POST /api/v1/ml/predict
POST /api/v1/ml/batch-predict
Model Evaluation
GET /api/v1/ml/models/{model_id}/metrics
POST /api/v1/ml/models/{model_id}/evaluate
Feature Engineering
POST /api/v1/ml/features/extract
GET /api/v1/ml/features/importance
Implementation Status
This API is currently in the planning phase. Check back in future releases for availability.
Security API
⚠️ Note: This API is not yet implemented and is planned for a future release.
The Security API will provide endpoints for security management, access control, and threat monitoring.
Planned Features
- Authentication and authorization management
- API key generation and management
- Role-based access control (RBAC)
- Security audit logging
- Threat detection and prevention
- Encryption key management
- Session management
- OAuth integration
Base URL (Planned)
http://localhost:8080/api/v1/security
Authentication
Will use the standard BotServer authentication mechanism with elevated security permissions required.
Endpoints (Planned)
Authentication
POST /api/v1/security/auth/login
POST /api/v1/security/auth/logout
POST /api/v1/security/auth/refresh
API Keys
POST /api/v1/security/keys/generate
GET /api/v1/security/keys
DELETE /api/v1/security/keys/{key_id}
Access Control
GET /api/v1/security/roles
POST /api/v1/security/roles
PUT /api/v1/security/permissions
Audit Logs
GET /api/v1/security/audit
GET /api/v1/security/audit/export
Session Management
GET /api/v1/security/sessions
DELETE /api/v1/security/sessions/{session_id}
Security Monitoring
GET /api/v1/security/threats
GET /api/v1/security/vulnerabilities
Implementation Status
This API is currently in the planning phase. Check back in future releases for availability.
Compliance API
⚠️ Note: This API is not yet implemented and is planned for a future release.
The Compliance API will provide endpoints for regulatory compliance management, audit trails, and policy enforcement.
Planned Features
- Regulatory compliance tracking
- Audit trail management
- Policy enforcement and validation
- Compliance reporting
- Data governance controls
- Privacy management (GDPR, CCPA)
- Retention policy management
- Compliance dashboards
Base URL (Planned)
http://localhost:8080/api/v1/compliance
Authentication
Will use the standard BotServer authentication mechanism with appropriate role-based permissions.
Endpoints (Planned)
Compliance Status
GET /api/v1/compliance/status
Audit Trails
GET /api/v1/compliance/audit-trails
POST /api/v1/compliance/audit-trails/export
Policy Management
GET /api/v1/compliance/policies
POST /api/v1/compliance/policies
PUT /api/v1/compliance/policies/{policy_id}
Compliance Reports
POST /api/v1/compliance/reports/generate
GET /api/v1/compliance/reports/{report_id}
Data Governance
GET /api/v1/compliance/data-governance
POST /api/v1/compliance/data-governance/scan
Privacy Management
POST /api/v1/compliance/privacy/request
GET /api/v1/compliance/privacy/status
Retention Policies
GET /api/v1/compliance/retention
PUT /api/v1/compliance/retention
Implementation Status
This API is currently in the planning phase. Check back in future releases for availability.
API Examples
This section provides practical examples of using the BotServer REST API in various programming languages and scenarios.
Authentication Examples
Getting a Session Token
JavaScript/TypeScript:
// Note: Authentication is handled through Zitadel OAuth flow
// This is a simplified example
async function authenticate() {
  // Redirect to Zitadel login
  window.location.href = '/auth/login';
  // After the OAuth callback, the session token is set;
  // use it for subsequent requests
}
cURL:
# Session validation
curl -X GET http://localhost:8080/auth/validate \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
Group Management Examples
Creating a Group
JavaScript:
async function createGroup() {
  const response = await fetch('/api/groups/create', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_TOKEN'
    },
    body: JSON.stringify({
      name: 'Engineering Team',
      description: 'Software developers'
    })
  });
  const group = await response.json();
  console.log('Created group:', group);
}
Python:
import requests
def create_group():
    url = "http://localhost:8080/api/groups/create"
    headers = {
        "Authorization": "Bearer YOUR_TOKEN",
        "Content-Type": "application/json"
    }
    data = {
        "name": "Engineering Team",
        "description": "Software developers"
    }
    response = requests.post(url, json=data, headers=headers)
    return response.json()
Adding Group Members
JavaScript:
async function addMember(groupId, userId) {
  const response = await fetch(`/api/groups/${groupId}/members/add`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_TOKEN'
    },
    body: JSON.stringify({
      user_id: userId,
      role: 'member'
    })
  });
  return response.json();
}
Admin API Examples
Getting System Status
cURL:
curl -X GET http://localhost:8080/api/admin/system/status \
-H "Authorization: Bearer ADMIN_TOKEN"
Go:
package main

import (
    "fmt"
    "io"
    "net/http"
)

func getSystemStatus(token string) {
    client := &http.Client{}
    req, _ := http.NewRequest("GET",
        "http://localhost:8080/api/admin/system/status", nil)
    req.Header.Add("Authorization", "Bearer "+token)
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}
Creating a Backup
JavaScript:
async function createBackup() {
  const response = await fetch('/api/admin/backup/create', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer ADMIN_TOKEN'
    },
    body: JSON.stringify({
      backup_type: 'full',
      include_data: true,
      include_config: true
    })
  });
  const backup = await response.json();
  console.log('Backup created:', backup.id);
  console.log('Download URL:', backup.download_url);
}
WebSocket Communication
Real-Time Chat
JavaScript:
class BotChat {
  constructor(sessionId) {
    this.sessionId = sessionId;
    this.ws = null;
  }

  connect() {
    this.ws = new WebSocket('ws://localhost:8080/ws');
    this.ws.onopen = () => {
      console.log('Connected to bot');
    };
    this.ws.onmessage = (event) => {
      const message = JSON.parse(event.data);
      this.handleMessage(message);
    };
    this.ws.onerror = (error) => {
      console.error('WebSocket error:', error);
    };
  }

  sendMessage(content) {
    if (this.ws && this.ws.readyState === WebSocket.OPEN) {
      this.ws.send(JSON.stringify({
        type: 'message',
        content: content,
        session_id: this.sessionId
      }));
    }
  }

  handleMessage(message) {
    console.log('Bot response:', message.content);
    if (message.suggestions) {
      console.log('Suggestions:', message.suggestions);
    }
  }
}

// Usage
const chat = new BotChat('session-123');
chat.connect();
chat.sendMessage('Hello bot!');
Error Handling
Handling API Errors
JavaScript:
async function apiCall(url, options = {}) {
  try {
    const response = await fetch(url, {
      ...options,
      headers: {
        'Authorization': 'Bearer YOUR_TOKEN',
        'Content-Type': 'application/json',
        ...options.headers
      }
    });
    if (!response.ok) {
      const body = await response.json();
      const error = new Error(body.message || `HTTP ${response.status}`);
      error.code = body.code; // preserve the API's error code for handling below
      throw error;
    }
    return await response.json();
  } catch (error) {
    console.error('API Error:', error);
    // Handle specific error codes
    if (error.code === 'RATE_LIMITED') {
      // Wait and retry
      await new Promise(resolve => setTimeout(resolve, 1000));
      return apiCall(url, options);
    }
    throw error;
  }
}
Rate Limit Handling
Python:
import time
import requests

class APIClient:
    def __init__(self, base_url, token):
        self.base_url = base_url
        self.headers = {
            'Authorization': f'Bearer {token}',
            'Content-Type': 'application/json'
        }

    def request(self, method, endpoint, **kwargs):
        url = f"{self.base_url}{endpoint}"
        response = requests.request(
            method, url,
            headers=self.headers,
            **kwargs
        )
        # Check rate limit headers
        remaining = response.headers.get('X-RateLimit-Remaining')
        if remaining and int(remaining) < 10:
            reset_time = int(response.headers.get('X-RateLimit-Reset', 0))
            sleep_time = max(reset_time - time.time(), 0)
            print(f"Rate limit approaching, sleeping {sleep_time}s")
            time.sleep(sleep_time)
        response.raise_for_status()
        return response.json()
Pagination Examples
Iterating Through Paginated Results
JavaScript:
async function* getAllGroups(token) {
  let offset = 0;
  const limit = 20;
  while (true) {
    const response = await fetch(
      `/api/groups/list?limit=${limit}&offset=${offset}`,
      {
        headers: { 'Authorization': `Bearer ${token}` }
      }
    );
    const data = await response.json();
    for (const group of data.groups) {
      yield group;
    }
    if (data.groups.length < limit) {
      break; // No more pages
    }
    offset += limit;
  }
}

// Usage
for await (const group of getAllGroups(token)) {
  console.log(group.name);
}
Integration Patterns
Webhook Handler
Node.js/Express:
const express = require('express');
const app = express();

app.post('/webhook/botserver', express.json(), (req, res) => {
  const event = req.body;
  switch (event.type) {
    case 'user.created':
      handleUserCreated(event.data);
      break;
    case 'conversation.completed':
      handleConversationCompleted(event.data);
      break;
    default:
      console.log('Unknown event type:', event.type);
  }
  res.status(200).send('OK');
});

function handleUserCreated(userData) {
  console.log('New user:', userData);
  // Process new user
}

function handleConversationCompleted(conversationData) {
  console.log('Conversation completed:', conversationData);
  // Process completed conversation
}
Best Practices
- Always handle errors gracefully - Network failures happen
- Respect rate limits - Implement exponential backoff
- Use environment variables for API tokens
- Log API interactions for debugging
- Cache responses when appropriate
- Use connection pooling for multiple requests
- Implement timeout handling for long operations
Testing API Calls
Using Postman
- Import the API collection (when available)
- Set environment variables for:
  - base_url: http://localhost:8080
  - token: Your session token
- Run requests individually or as a collection
Unit Testing API Calls
JavaScript/Jest:
describe('Groups API', () => {
  test('should create a group', async () => {
    const mockFetch = jest.fn(() =>
      Promise.resolve({
        ok: true,
        json: () => Promise.resolve({
          id: 'group-123',
          name: 'Test Group'
        })
      })
    );
    global.fetch = mockFetch;
    const result = await createGroup('Test Group');
    expect(mockFetch).toHaveBeenCalledWith(
      '/api/groups/create',
      expect.objectContaining({
        method: 'POST'
      })
    );
    expect(result.id).toBe('group-123');
  });
});
Summary
These examples demonstrate common patterns for interacting with the BotServer API. Remember to:
- Handle authentication properly through Zitadel
- Check response status codes
- Parse error responses
- Implement proper error handling
- Use appropriate HTTP methods
- Follow REST conventions
For more specific endpoint documentation, refer to the individual API sections in this chapter.
Chapter 11: Feature Reference
Quick reference for all General Bots capabilities.
Feature Categories
| Category | Key Features |
|---|---|
| Core | Chat, dialogs, sessions, automation |
| AI/LLM | Model integration, knowledge base, context |
| Channels | Web, WhatsApp, Teams, Email, SMS |
| Productivity | Calendar, tasks, drive, mail, meet |
| Data | CRUD, aggregations, file operations |
| HTTP | REST, GraphQL, SOAP, webhooks |
Editions
| Edition | Use Case |
|---|---|
| Minimal | Embedded, IoT |
| Core | General business |
| Standard | Professional teams |
| Enterprise | Large organizations |
Chapter Contents
- Feature Editions - Edition comparison
- Core Features - Platform fundamentals
- Conversation Management - Dialog flows
- AI and LLM - AI integration
- Knowledge Base - RAG patterns
- Automation - Scheduled tasks
- Email Integration - Email features
- Storage and Data - Data persistence
- Multi-Channel Support - Communication channels
- Hybrid Search - RAG 2.0
- Memory Management - Context handling
- Multi-Agent - Agent coordination
See Also
- BASIC Reference - Scripting
- Configuration - Settings
Feature Editions
General Bots offers flexible feature configurations to match different deployment needs. Features can be enabled at compile time using Cargo feature flags or selected through pre-configured edition bundles.
Edition Overview
| Edition | Target Use Case | Key Features |
|---|---|---|
| Minimal | Embedded, IoT, testing | Basic chat only |
| Lightweight | Small teams, startups | Chat + Drive + Tasks |
| Core | General business use | Full productivity suite |
| Standard | Professional teams | + Email + Calendar + Meet |
| Enterprise | Large organizations | + Compliance + Multi-channel + GPU |
| Full | Maximum capability | All features enabled |
Minimal Edition
Use Case: Embedded systems, IoT devices, testing environments
Cargo Feature: minimal
cargo build --features minimal
Included Features
- ✅ UI Server (web interface)
- ✅ Basic chat functionality
Not Included
- ❌ Console TUI
- ❌ File storage
- ❌ Task management
- ❌ LLM integration
- ❌ Vector search
Typical Deployment: Raspberry Pi, edge devices, containerized microservices
Lightweight Edition
Use Case: Small teams, startups, personal use
Cargo Feature: lightweight
cargo build --features lightweight
Included Features
- ✅ UI Server
- ✅ Chat
- ✅ Drive (file storage)
- ✅ Tasks
- ✅ Redis caching
Not Included
- ❌ Email integration
- ❌ Calendar
- ❌ Video meetings
- ❌ Compliance tools
- ❌ Multi-channel messaging
Typical Deployment: Small office server, developer workstation
Core Edition (Default)
Use Case: General business operations, mid-size teams
Cargo Feature: default (or no feature flag)
cargo build
# or explicitly:
cargo build --features default
Included Features
- ✅ UI Server
- ✅ Console TUI
- ✅ Chat
- ✅ Automation (Rhai scripting)
- ✅ Tasks (with cron scheduling)
- ✅ Drive
- ✅ LLM integration
- ✅ Redis caching
- ✅ Progress bars
- ✅ Directory services
Not Included
- ❌ Email (IMAP/SMTP)
- ❌ Calendar management
- ❌ Video meetings
- ❌ Vector database
- ❌ Compliance monitoring
- ❌ Multi-channel (WhatsApp, Teams, etc.)
- ❌ NVIDIA GPU support
- ❌ Desktop application
Typical Deployment: On-premise server, cloud VM, container
Standard Edition
Use Case: Professional teams requiring full productivity features
Cargo Feature: productivity
cargo build --features productivity
Included Features
All Core features plus:
- ✅ Email integration (IMAP/SMTP)
- ✅ Calendar management
- ✅ Video meetings (LiveKit)
- ✅ Mail client interface
- ✅ Redis caching
Additional Dependencies
- imap - Email receiving
- lettre - Email sending
- mailparse - Email parsing
- livekit - Video conferencing
Typical Deployment: Business office, remote teams
Enterprise Edition
Use Case: Large organizations with compliance and integration requirements
Cargo Feature: enterprise
cargo build --features enterprise
Included Features
All Standard features plus:
- ✅ Compliance monitoring (LGPD/GDPR/HIPAA/SOC2)
- ✅ Attendance tracking
- ✅ Directory services (LDAP/AD compatible)
- ✅ Vector database (Qdrant)
- ✅ Advanced monitoring (sysinfo)
- ✅ LLM integration
Compliance Features
| Framework | Status | Implementation |
|---|---|---|
| LGPD | ✅ | Data subject rights dialogs |
| GDPR | ✅ | Consent management, data portability |
| HIPAA | ✅ | PHI handling, audit trails |
| SOC 2 | ✅ | Access controls, logging |
| ISO 27001 | ✅ | Asset management, risk assessment |
| PCI DSS | ✅ | Payment data protection |
Typical Deployment: Enterprise data center, regulated industries
Communications Edition
Use Case: Organizations needing multi-channel customer engagement
Cargo Feature: communications
cargo build --features communications
Included Features
- ✅ Email (IMAP/SMTP)
- ✅ WhatsApp Business
- ✅ Instagram messaging
- ✅ Microsoft Teams
- ✅ Chat
- ✅ Redis caching
Channel Support
| Channel | Protocol | Status |
|---|---|---|
| WhatsApp | Cloud API | ✅ |
| Instagram | Graph API | ✅ |
| MS Teams | Bot Framework | ✅ |
| Telegram | Bot API | Planned |
| Slack | Web API | Planned |
| SMS | Twilio | Planned |
Typical Deployment: Customer service center, marketing teams
Full Edition
Use Case: Maximum capability, all features enabled
Cargo Feature: full
cargo build --features full
All Features Enabled
- ✅ UI Server + Desktop application
- ✅ Console TUI
- ✅ Vector database (Qdrant)
- ✅ LLM integration
- ✅ NVIDIA GPU acceleration
- ✅ All communication channels
- ✅ Full productivity suite
- ✅ Compliance & attendance
- ✅ Directory services
- ✅ Web automation
- ✅ Redis caching
- ✅ System monitoring
- ✅ Automation (Rhai)
- ✅ gRPC support
- ✅ Progress bars
Hardware Recommendations
| Component | Minimum | Recommended |
|---|---|---|
| CPU | 4 cores | 8+ cores |
| RAM | 8 GB | 32 GB |
| Storage | 100 GB SSD | 500 GB NVMe |
| GPU | Optional | NVIDIA RTX 3060+ |
| Network | 100 Mbps | 1 Gbps |
Typical Deployment: Enterprise AI platform, research environments
Feature Matrix
| Feature | Minimal | Light | Core | Standard | Enterprise | Full |
|---|---|---|---|---|---|---|
| UI Server | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Chat | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Console TUI | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
| Drive | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Tasks | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Automation | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
| LLM | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ |
| Email | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ |
| Calendar | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ |
| Meet | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ |
| Vector DB | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
| Compliance | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
| Multi-channel | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Desktop | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| GPU | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
Custom Feature Combinations
You can combine individual features for custom builds:
# Chat + Email + Vector search
cargo build --features "chat,email,vectordb"
# Productivity + Compliance
cargo build --features "productivity,compliance"
# Everything except desktop
cargo build --features "full" --no-default-features
Available Feature Flags
[features]
# UI Features
desktop = ["dep:tauri", ...]
ui-server = []
console = ["dep:crossterm", "dep:ratatui", "monitoring"]
# Core Integrations
vectordb = ["dep:qdrant-client"]
llm = []
nvidia = []
# Communication Channels
email = ["dep:imap", "dep:lettre", ...]
whatsapp = []
instagram = []
msteams = []
# Productivity Features
chat = []
drive = ["dep:aws-config", "dep:aws-sdk-s3", ...]
tasks = ["dep:cron"]
calendar = []
meet = ["dep:livekit"]
mail = ["email"]
# Enterprise Features
compliance = ["dep:csv"]
attendance = []
directory = []
weba = []
# Infrastructure
redis-cache = ["dep:redis"]
monitoring = ["dep:sysinfo"]
automation = ["dep:rhai"]
grpc = ["dep:tonic"]
progress-bars = ["dep:indicatif"]
Deployment Recommendations
By Organization Size
| Size | Employees | Recommended Edition |
|---|---|---|
| Solo | 1 | Lightweight |
| Startup | 2-10 | Core |
| SMB | 11-50 | Standard |
| Mid-market | 51-200 | Enterprise |
| Enterprise | 200+ | Full |
By Industry
| Industry | Recommended Edition | Key Features |
|---|---|---|
| Healthcare | Enterprise | HIPAA compliance |
| Finance | Enterprise | SOC 2, PCI DSS |
| Education | Standard | Calendar, Meet |
| Retail | Communications | Multi-channel |
| Legal | Enterprise | Document management, compliance |
| Manufacturing | Core | Automation, tasks |
| Tech/SaaS | Full | All capabilities |
Upgrading Editions
Editions can be changed by rebuilding with different feature flags:
# From Core to Enterprise
cargo build --release --features enterprise
# From Standard to Full
cargo build --release --features full
Note: Some features may require additional infrastructure components:
- vectordb → Requires Qdrant service
- meet → Requires LiveKit server
- redis-cache → Requires Redis/Valkey
- nvidia → Requires NVIDIA GPU + CUDA
Core Features
Technical overview of BotServer capabilities. For the complete feature matrix, see Feature Reference.
Multi-Channel Communication
| Channel | Protocol | Keywords |
|---|---|---|
| Web Chat | WebSocket | TALK, HEAR |
| WhatsApp | Cloud API | SEND, SEND TEMPLATE |
| Email | SMTP/IMAP | SEND MAIL |
| Teams | Graph API | SEND |
| Voice | WebRTC | PLAY, RECORD |
All channels share the same conversation logic through a unified abstraction.
Authentication & Sessions
- Password Hashing: Argon2 with secure defaults
- Session Tokens: Cryptographically secure generation
- Session Persistence: Survives restarts (database-backed)
- User Isolation: Each user has isolated session state
BASIC Scripting
Built on the Rhai engine with custom keywords:
TALK "Hello!" ' Output
HEAR name AS NAME ' Input with validation
result = LLM "Summarize: " + text ' AI integration
USE KB "docs" ' Knowledge base
Scripts stored as .gbdialog files in bot packages.
LLM Integration
| Provider | Models | Features |
|---|---|---|
| OpenAI | GPT-5, o3 | Streaming, function calling |
| Anthropic | Claude Sonnet 4.5, Opus 4.5 | Analysis, coding, guidelines |
| Local | GGUF models | GPU acceleration, offline |
Features: prompt templates, context injection, token management, cost optimization.
Knowledge Base
- Vector Database: Qdrant for semantic search
- Document Processing: PDF, DOCX, HTML, TXT extraction
- Auto-Indexing: Documents indexed on upload
- Context Retrieval: Automatic injection into LLM prompts
Storage
Object Storage (S3-Compatible)
- Bucket management
- Secure credential-based access
- Template and asset storage
File Monitoring
- Real-time change detection
- Automatic processing triggers
- Event-driven workflows
Database
PostgreSQL with Diesel ORM:
- Connection pooling (R2D2)
- Automatic migrations
- ACID transactions
Key tables: users, bots, sessions, messages, conversations
Automation
SET SCHEDULE "0 9 * * *" ' Daily at 9 AM
SEND MAIL "team@company.com", "Daily Report", report
- Cron scheduling
- Event triggers
- Background jobs
Security
| Feature | Implementation |
|---|---|
| Password Storage | Argon2 |
| Data at Rest | AES-GCM |
| Sessions | Cryptographic tokens |
| API Access | Token-based auth |
| Transport | TLS via proxy |
Optional Components
| Component | Port | Purpose |
|---|---|---|
| Email Server | 25/993 | SMTP/IMAP |
| Video Server | 7880 | LiveKit meetings |
| Vector DB | 6333 | Qdrant search |
| Time-Series | 8086 | InfluxDB metrics |
Extensibility
- Custom Keywords: Add BASIC keywords in Rust
- Tool Integration: Call external APIs from scripts
- Custom Channels: Implement new communication channels
- LLM Providers: Add new AI providers
See Also
- Feature Reference - Complete feature matrix
- AI and LLM - AI integration details
- Automation - Scheduling and triggers
Conversation Management
This chapter explores how BotServer manages conversations through sessions, message history, and context tracking. Understanding these mechanisms helps you build bots that maintain coherent, contextual interactions across multiple turns and sessions.
The Conversation Lifecycle
Every conversation in BotServer follows a well-defined lifecycle that begins when a user first connects and continues until the session expires or ends explicitly. When a user interacts with a bot, the system creates a session that serves as the container for all conversation state, including message history, user preferences, and any variables set during the interaction.
Sessions persist across individual messages, allowing conversations to span multiple interactions. A user might ask a question, receive a response, and return hours later to continue the same conversation thread. The system maintains this continuity by storing session data in PostgreSQL for durability while caching active sessions in the cache layer for fast access.
The session contains a unique identifier, a reference to the associated user (or an anonymous identifier), the bot being interacted with, creation and expiration timestamps, and all accumulated conversation state. This comprehensive tracking enables sophisticated multi-turn interactions where the bot remembers previous exchanges and builds upon them.
Message History and Persistence
Every message exchanged during a conversation is recorded in the message history table, creating a permanent record of the interaction. Each entry captures the session identifier linking it to the conversation, the user and bot involved, the actual message content, an indicator of whether the message came from the user or the bot, and a precise timestamp.
The system distinguishes between several message types that serve different purposes. User messages represent input from the human participant. Bot responses contain the generated replies. System messages convey status updates or notifications. Tool outputs capture results from executed tools. This categorization helps with both display formatting and analysis.
Message history serves multiple purposes beyond simple record-keeping. The conversation context sent to the language model draws from recent history, enabling contextual responses. Analytics systems process history to understand usage patterns and conversation quality. Compliance requirements often mandate retention of interaction records, which the history system satisfies.
Context Assembly and Management
Context management represents one of the most sophisticated aspects of conversation handling. When generating a response, the system must assemble relevant information from multiple sources into a coherent context that guides the language model’s output.
The context assembly process draws from several layers. System context includes the bot’s configuration and base prompts that establish personality and capabilities. Conversation context incorporates recent message history to maintain coherence. Knowledge context adds relevant documents retrieved from active knowledge bases. User context includes preferences and state specific to the current user. Tool context describes available tools the model can invoke.
Because language models have limited context windows, the system must manage what information to include. Automatic truncation removes older messages when the context grows too large, preserving the most recent and relevant exchanges. For very long conversations, summarization compresses earlier history into concise summaries that capture essential information without consuming excessive tokens.
Scripts can manipulate context directly through dedicated keywords. Setting context adds specific information that should influence responses. Clearing context removes information that is no longer relevant. These operations give developers fine-grained control over what the model knows during generation.
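As a sketch of these operations: SET CONTEXT appears throughout this book, while the CLEAR CONTEXT form below is an assumed counterpart, shown only to illustrate the pairing.
' Pin background information for the System AI, then release it
policy = GET BOT MEMORY "returns_policy"
SET CONTEXT "returns", policy
TALK "Ask me anything about returns."
question = HEAR
' System AI answers using the pinned context
CLEAR CONTEXT "returns"    ' assumed counterpart to SET CONTEXT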
Multi-Turn Interaction Patterns
Conversations rarely consist of single isolated exchanges. Users ask follow-up questions, refine requests, and reference earlier parts of the conversation. BotServer’s architecture specifically supports these multi-turn patterns through careful context management and entity tracking.
When a user says “Book a meeting for tomorrow” followed by “Make it at 2 PM,” the system must understand that “it” refers to the meeting mentioned in the previous turn. This reference resolution happens automatically through the included conversation history, which gives the model the context needed to interpret pronouns and implicit references correctly.
Topic persistence allows conversations to maintain focus across multiple exchanges. If a user is discussing product returns, subsequent messages are interpreted in that context even when they don’t explicitly mention returns. The accumulated history provides the framing that makes this natural understanding possible.
Guided conversations implement multi-step flows where the bot collects information progressively. Rather than asking for all information at once, the bot might first ask for a name, then an email, then a preference. Each step builds on previous responses, with validation ensuring data quality before proceeding.
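A minimal sketch of such a progressive flow, using the TALK/HEAR pattern from the BASIC scripting overview; the AS EMAIL validator is assumed by analogy with AS NAME.
TALK "What's your name?"
HEAR name AS NAME
TALK "Thanks, " + name + ". What's your email?"
HEAR email AS EMAIL    ' validator assumed by analogy with AS NAME
TALK "Got it. We'll send the confirmation to " + email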
Session Recovery and Continuity
Network interruptions, browser refreshes, and other disruptions shouldn’t break conversation flow. BotServer implements robust session recovery that allows users to seamlessly continue where they left off.
When a user reconnects, the session identifier validates their return. The system retrieves stored history and reconstructs the conversation context. The user can then continue as if no interruption occurred, with full access to previous exchanges and accumulated state.
Error recovery extends beyond simple disconnections. If a response generation fails, the system preserves the last known good state. Graceful degradation provides meaningful feedback to users rather than cryptic errors. Automatic retry logic handles transient failures that resolve themselves.
Anonymous and Authenticated Conversations
BotServer supports both authenticated users and anonymous visitors, with different handling for each case. Understanding these distinctions helps design appropriate conversation experiences.
Anonymous sessions receive temporary identifiers that exist only for the duration of the session. Permissions are limited compared to authenticated users. Storage is typically short-term, with sessions expiring quickly after inactivity. These constraints reflect the reduced trust level for unidentified users.
When an anonymous user authenticates, their session upgrades to a full user session. Accumulated history transfers to the persistent user record. Permissions expand to match the authenticated role. This seamless upgrade path encourages users to authenticate without losing conversation progress.
Real-Time Communication
WebSocket connections provide the real-time communication channel for conversations. Unlike traditional HTTP request-response patterns, WebSockets maintain persistent bidirectional connections that enable instant message delivery in both directions.
The WebSocket protocol supports several interaction patterns beyond basic message exchange. Streaming responses allow bots to send content progressively, displaying text as it generates rather than waiting for complete responses. Typing indicators let users know the bot is processing their request. Connection status updates inform users of connectivity changes.
Messages follow a structured format with type identifiers, content payloads, and session references. The server processes incoming messages, routes them through the conversation engine, and pushes responses back through the same WebSocket connection.
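For illustration, the shapes below mirror the fields used in the BotChat example earlier in this chapter; treat them as indicative rather than a formal schema.
{ "type": "message", "content": "Hello bot!", "session_id": "session-123" }
{ "type": "response", "content": "Hi! How can I help?", "suggestions": ["Pricing", "Support"] }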
Conversation Analytics
Understanding how conversations perform helps improve bot effectiveness. BotServer tracks numerous metrics that reveal conversation patterns and quality indicators.
Quantitative metrics include message counts, conversation lengths, response times, and tool usage frequency. These numbers identify basic patterns like peak usage times and average conversation depth.
Qualitative analysis examines conversation content for sentiment, topics, intents, and entities. This deeper understanding reveals what users actually want from the bot, what frustrates them, and what succeeds.
Performance metrics specifically track system behavior, including generation latency, error rates, and resource utilization during conversation processing.
Configuration and Tuning
Several configuration parameters affect conversation behavior. Session timeout controls how long inactive sessions persist before expiring. History length limits how many messages the system retains in active memory. Context window size determines how much information reaches the language model.
Retention policies govern long-term storage of conversation data. Message retention duration sets how long history persists before archival. Archive timing determines when conversations move to compressed storage. Anonymous retention specifically addresses the shorter lifetime appropriate for unidentified users.
These settings balance resource usage against conversation quality and compliance requirements. Longer retention supports better context and audit trails but consumes more storage. Larger context windows improve response quality but increase processing costs.
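In the config.csv convention used elsewhere in this book, such tuning might look like the following; the parameter names here are illustrative assumptions, not a definitive list.
name,value
session-timeout-minutes,60
history-max-messages,50
context-window-tokens,8192
message-retention-days,365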
Privacy and Compliance
Conversation data represents sensitive information that requires careful handling. BotServer implements multiple safeguards to protect user privacy while meeting compliance requirements.
Data retention policies ensure information doesn’t persist longer than necessary. Compression and archival reduce storage costs while maintaining accessibility for compliance purposes. Clear deletion procedures support user rights to have their data removed.
Access controls limit who can view conversation history. Users see their own conversations. Administrators may have audit access where compliance requires it. Appropriate logging tracks access to sensitive data.
Summary
Conversation management in BotServer provides the foundation for meaningful bot interactions. Through careful session handling, comprehensive message history, sophisticated context assembly, and robust recovery mechanisms, the system enables conversations that feel natural and maintain coherence across multiple turns, sessions, and circumstances. Understanding these capabilities helps developers build bots that engage users effectively while respecting privacy and compliance requirements.
AI and LLM
BotServer integrates with Large Language Models to provide intelligent conversational capabilities and natural language understanding. This integration forms the core of what makes General Bots conversations feel natural and contextually aware.
Overview
The LLM integration in BotServer enables sophisticated conversational experiences. Natural language conversations flow smoothly without rigid command structures. Responses are context-aware, drawing on conversation history and loaded knowledge bases. The system automatically discovers and invokes tools when they would help answer user questions. Document understanding allows bots to read and reason about uploaded files. Text generation and summarization capabilities support content creation and information distillation.
LLM Providers
OpenAI
OpenAI serves as the primary LLM provider with support for multiple model tiers. GPT-5 provides fast, cost-effective responses for straightforward conversations. GPT-5 mini delivers efficient processing for simpler queries. The o3 series offers superior reasoning for complex tasks. Custom fine-tuned models can be used when you have specialized requirements.
Configuration requires setting your API key and selecting a model:
OPENAI_API_KEY=your-api-key
LLM_MODEL=gpt-5
Local Models
For privacy-sensitive deployments or cost control, BotServer supports self-hosted models. Llama.cpp compatible servers provide open-source model hosting. Custom inference endpoints allow integration with any API-compatible service. Privacy-preserving deployments keep all data on-premises without external API calls.
Configuration for local models specifies the provider type and endpoint:
LLM_PROVIDER=local
LLM_ENDPOINT=http://localhost:8081
The LLM Keyword
The LLM keyword provides direct access to language model capabilities within BASIC scripts. Usage patterns differ between background processing and interactive conversations.
Background Processing
For scheduled tasks and background jobs that do not interact directly with users, the LLM keyword generates content that can be stored for later use.
' For background/scheduled tasks only - not for interactive conversations
summary = LLM "Explain quantum computing in simple terms"
SET BOT MEMORY "quantum_explanation", summary
Document Summarization
Scheduled tasks can process documents and generate summaries available to all users.
' Scheduled task to generate summaries for all users
document = GET "knowledge/policy.pdf"
summary = LLM "Summarize this document: " + document
SET BOT MEMORY "policy_summary", summary
Context-Aware Conversations
For interactive conversations, use SET CONTEXT to provide information that the System AI incorporates automatically when responding. This approach lets the AI generate natural responses rather than scripted outputs.
' For interactive conversations - use SET CONTEXT, not LLM
TALK "What's your question?"
question = HEAR
context = GET BOT MEMORY "knowledge"
SET CONTEXT "background", context
TALK "Based on our knowledge base, here's what I can tell you..."
' System AI automatically uses the context when responding
LLM Provider Implementation
The provider architecture lives in the src/llm/ directory with a modular design. The mod.rs file defines the provider trait and factory for instantiating providers. The openai.rs file implements the OpenAI provider with all API operations. The local.rs file provides support for local model servers.
Provider Trait
All LLM providers implement a common trait ensuring consistent behavior. The generate method produces text completions from prompts. The generate_stream method returns tokens incrementally for real-time display. The get_embedding method creates vector representations for semantic search. The count_tokens method estimates token usage before making API calls.
Context Management
Context Window
Managing the limited context window requires careful attention to what information reaches the model. Automatic truncation removes older content when approaching limits. Context compaction summarizes extensive histories into shorter representations. Relevance filtering prioritizes information most likely to help with the current query. History summarization condenses long conversations into essential points.
Context Sources
The context provided to the LLM comes from multiple sources that combine to create informed responses. Conversation history provides recent messages for continuity. Knowledge base chunks supply relevant document excerpts. Bot memory contributes persistent context that applies across conversations. Tool definitions tell the model what functions it can invoke. User profile information enables personalization based on known preferences.
Prompt Engineering
System Prompts
System prompts establish the bot’s personality and capabilities. These are typically configured in bot memory and loaded into context at the start of conversations.
system_prompt = GET BOT MEMORY "system_prompt"
SET CONTEXT "system", system_prompt
Dynamic Prompts
Building prompts programmatically allows context to reflect current conditions. Variables set in context become available to the System AI for generating responses.
' For interactive conversations - use SET CONTEXT
SET CONTEXT "user_name", user_name
SET CONTEXT "current_date", NOW()
' System AI automatically incorporates this context
Streaming Responses
WebSocket Streaming
Real-time token streaming creates a responsive user experience. As the LLM generates tokens, each token is sent immediately via WebSocket to the connected client. The UI updates progressively as tokens arrive, showing the response as it forms. The complete response is assembled on the client side once generation finishes.
Stream Control
Several controls manage the streaming process. Users can start and stop generation as needed. Long responses can be cancelled if they are not useful. Timeout protection prevents indefinitely hanging connections. Error recovery handles network interruptions gracefully by resuming or restarting generation.
Embeddings
Vector Generation
Creating embeddings transforms text into vectors for semantic search. The embedding process converts natural language into high-dimensional numerical representations that capture meaning.
Embedding Models
Different embedding models serve different needs. OpenAI’s text-embedding-ada-002 provides high-quality embeddings through their API. Local deployments can use sentence transformers for on-premises embedding generation. Custom models can be configured when you have specialized embedding requirements.
Token Management
Token Counting
Estimating token usage before making API calls helps with cost control and context management. Token counting uses the same tokenizer as the target model to produce accurate estimates.
Token Limits
Several factors constrain token usage. Each model has specific limits on total tokens per request. Context window constraints determine how much history and knowledge base content fits. Rate limiting prevents exceeding API quotas. Cost management tracks token usage against budgets.
Error Handling
Common Errors
Several error conditions occur frequently when working with LLMs. Invalid API keys prevent authentication with the provider. Rate limit exceeded errors indicate too many requests in a time window. Context too long errors mean the prompt exceeds the model’s maximum. Model unavailable errors happen during provider outages. Network timeouts occur when connections take too long.
Fallback Strategies
Robust error handling employs multiple fallback strategies. Retry with exponential backoff handles transient failures. Switching to a backup model maintains service when the primary is unavailable. Reducing context size can resolve context length errors. Caching responses reduces API calls and provides fallback content. Returning graceful errors keeps users informed when recovery is not possible.
Performance Optimization
Caching
Response caching dramatically improves performance for repeated queries. Semantic caching identifies similar questions and returns cached responses without API calls. Cache invalidation strategies ensure responses remain fresh as knowledge bases update. Cache warming pre-generates responses for common questions during off-peak times.
Batching
Batching multiple requests improves throughput and reduces per-request overhead. Embedding generation particularly benefits from batching when processing many documents. Rate limit management becomes simpler with controlled batch submission.
Connection Pooling
Connection pooling to LLM providers reduces latency from connection establishment. Keep-alive connections persist across requests. Pool sizing balances resource usage against responsiveness.
Model Selection
Choosing the right model involves balancing several factors. Capability requirements determine the minimum model sophistication needed. Response latency requirements favor faster models for interactive use. Cost constraints may push toward more economical model tiers. Privacy requirements might mandate local models over cloud APIs.
Model Comparison
GPT-5 mini offers the fastest responses at the lowest cost, suitable for straightforward questions. Claude Sonnet 4.5 and GPT-5 provide superior reasoning for complex queries with good balance of cost and capability. Local models like Llama variants offer privacy and cost predictability with varying capability levels. Specialized models may excel at particular domains like code or medical content.
Integration with Tools
LLMs in BotServer work closely with the tool system. The model receives tool definitions describing available functions. When a user request would benefit from tool use, the model generates a tool call. BotServer executes the tool and returns results to the model. The model incorporates tool results into its final response.
This integration enables bots to take actions beyond conversation, such as querying databases, sending emails, or calling external APIs, all orchestrated naturally through conversation.
Best Practices
Effective LLM usage follows several guidelines. Keep system prompts focused and specific rather than trying to cover every scenario. Use SET CONTEXT for interactive conversations rather than generating responses directly with LLM calls. Load relevant knowledge bases before conversations to improve response quality. Monitor token usage to manage costs. Test responses across different query types to ensure consistent quality.
Debugging and Monitoring
Debugging LLM interactions requires visibility into prompts and responses. Enable verbose logging during development to see full API exchanges. Monitor response quality metrics over time. Track token usage and costs per conversation. Review conversation logs to identify improvement opportunities.
See Also
The Context Configuration chapter explains context window management in detail. The LLM Configuration chapter covers all configuration options. The Tool Definition chapter describes creating tools the LLM can invoke. The Knowledge Base chapter explains how documents integrate with LLM context.
Knowledge Base
The Knowledge Base (KB) system enables semantic search and document retrieval for intelligent bot responses.
Quick Overview
| Feature | Description |
|---|---|
| Storage | S3-compatible drive + PostgreSQL metadata + Qdrant vectors |
| Search | Hybrid (semantic + keyword) with optional reranking |
| Formats | PDF, DOCX, TXT, MD, HTML, CSV, JSON |
| Integration | Automatic context injection into LLM responses |
Basic Usage
' Load knowledge base
USE KB "policies"
' Bot now answers questions using that knowledge
' No explicit search needed - it's automatic
Key Capabilities
- Semantic Search - Find content by meaning, not just keywords
- Multi-Collection - Organize documents into focused collections
- Auto-Indexing - Documents indexed automatically when added
- Hybrid Search - Combines dense (semantic) and sparse (BM25) retrieval
- Context Management - Relevant chunks injected into LLM prompts
Document Organization
bot.gbkb/
├── policies/ → USE KB "policies"
├── products/ → USE KB "products"
└── support/ → USE KB "support"
Configuration
Key settings in config.csv:
name,value
rag-hybrid-enabled,true
rag-dense-weight,0.7
rag-sparse-weight,0.3
rag-top-k,10
Performance Tips
- Organize collections by topic for precise activation
- Clear unused KBs to free memory: CLEAR KB "old-docs" (see the sketch after this list)
- Enable caching for repeated queries
- Tune weights based on content type (technical vs conversational)
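As referenced in the tips above, a brief sketch of swapping collections to keep memory bounded, using only keywords documented in this chapter:
USE KB "products"
' ...answer product questions...
CLEAR KB "products"    ' release memory before switching topics
USE KB "support"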
Learn More
- KB System Architecture - Technical deep dive
- Semantic Search - How search works
- Document Indexing - Processing pipeline
- Hybrid Search - RAG 2.0 configuration
- USE KB Keyword - Complete reference
- .gbkb Package - Folder structure
Automation
This chapter explains how BotServer enables bots to perform scheduled and event-driven tasks without requiring direct user interaction. Through automation capabilities, bots can proactively maintain content, process data, and respond to system events, transforming them from reactive assistants into active participants in organizational workflows.
Automation Fundamentals
BotServer implements automation through two complementary mechanisms. The SET SCHEDULE keyword enables cron-based task scheduling, allowing scripts to execute at predetermined times. Event triggers through the ON keyword enable scripts to respond to database changes and system events. Together, these mechanisms support a wide range of automation scenarios from simple periodic tasks to complex event-driven workflows.
Understanding when to use automation is as important as understanding how. Automated tasks run without an active user session, which means they cannot directly communicate with users through TALK statements. Instead, automated scripts typically gather and process information, storing results in bot memory where users can access it during their next interaction.
Scheduling Tasks with SET SCHEDULE
The SET SCHEDULE keyword registers a script for periodic execution using standard cron expressions. When the BASIC compiler encounters this keyword, it extracts the schedule specification and creates an entry in the system_automations table. A background service monitors this table and executes scripts when their scheduled times arrive.
Cron expressions follow the standard Unix format with five fields representing minutes, hours, day of month, month, and day of week. The expression 0 9 * * * means “at minute 0 of hour 9, every day of every month, regardless of day of week”—in other words, daily at 9:00 AM. The expression */30 * * * * means “every 30 minutes” by using the step syntax. More complex patterns like 0 9 * * 1-5 specify “weekdays at 9 AM” by restricting the day of week field to Monday through Friday.
Scheduled scripts execute with full bot context and permissions, but without an associated user session. This means they can access bot memory, call external APIs, read and write files, and perform data processing. However, they cannot use TALK to send messages since there’s no user to receive them. Results should be stored in bot memory for later retrieval or sent through other channels like email.
Practical Scheduling Examples
A daily report generation script illustrates common automation patterns. The script specifies its schedule, retrieves data from the previous day, processes it using LLM analysis, and stores the result in bot memory. When users later ask about the daily report, the bot can retrieve and present this pre-computed summary without delay.
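A sketch of that daily-report pattern using keywords documented in this book; the stored metrics source is illustrative.
SET SCHEDULE "0 6 * * *"    ' daily at 6:00 AM, before business hours
' No user session exists here, so no TALK - gather, analyze, store
metrics = GET BOT MEMORY "yesterday_metrics"    ' illustrative data source
report = LLM "Summarize these metrics as a short daily report: " + metrics
SET BOT MEMORY "daily_report", report
' Users asking about the daily report now receive this pre-computed summary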
Content update automation keeps information fresh without manual intervention. A news aggregation script might run every six hours, fetching latest headlines, summarizing them, and caching the result. Users interacting with the bot receive current information even if nobody has explicitly updated the content.
Maintenance tasks handle housekeeping that shouldn’t require human attention. Cleanup scripts can run during low-activity periods to archive old data, remove temporary files, or perform consistency checks. These tasks keep the system healthy without consuming resources during peak usage times.
Data synchronization scripts bridge external systems with bot knowledge. A script might periodically fetch updates from a CRM, inventory system, or other business application, ensuring the bot’s responses reflect current organizational reality.
Event-Driven Automation
The ON keyword creates triggers that fire when specific events occur rather than at scheduled times. Currently, the system supports database event triggers that respond to table modifications. When the specified event occurs, the associated code block executes.
Event triggers complement scheduled tasks by enabling immediate response to changes rather than waiting for the next scheduled run. While a scheduled task might check for new registrations hourly, an event trigger fires immediately when a registration occurs, enabling real-time automation workflows.
The system stores triggers in the same system_automations table as scheduled tasks, distinguished by their trigger kind. Each trigger specifies its target (the table or resource being monitored), parameters controlling its behavior, and an activation flag allowing temporary disabling without deletion.
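This chapter does not specify the trigger syntax, so the sketch below is hypothetical, shown only to convey the pattern of attaching a code block to a table event.
' Hypothetical syntax - for illustration only
ON "users"
    ' Fires when the monitored table changes
    SET BOT MEMORY "last_registration", NOW()
END ON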
The System Automations Table
The system_automations table serves as the central registry for all automation rules. Each record contains a unique identifier, the bot that owns the automation, the kind of trigger (scheduled or event-driven), the cron expression for scheduled tasks, parameters such as script names, an active flag, and a timestamp tracking the last execution.
This centralized storage allows the background scheduler to efficiently query upcoming tasks across all bots. It also enables administrative monitoring of automation activity and troubleshooting of failed executions.
Automation Lifecycle Management
Understanding how automations are created, executed, modified, and removed helps administrators manage bot deployments effectively.
During script compilation, the preprocessor detects SET SCHEDULE statements and extracts their cron expressions. The system creates or updates corresponding entries in the system_automations table. If a script previously had a schedule that was removed, the old automation entry is deleted.
When execution time arrives, the scheduler loads the bot’s context, executes the BASIC script, updates the last_triggered timestamp, and logs the execution result. Any errors during execution are captured and logged but don’t affect other scheduled tasks.
Modifying a schedule requires only changing the SET SCHEDULE line in the script. The next compilation updates the database entry automatically. This approach keeps schedule definitions with their associated code rather than requiring separate configuration management.
Deleting a bot cascades to remove all its automations, preventing orphaned schedules that would fail at execution time.
Best Practices for Automation
Effective automation requires thoughtful design decisions. Scheduling frequency should match actual needs—running a task every minute when hourly would suffice wastes resources and can mask problems. Consider what would happen if a task takes longer than its scheduling interval, as overlapping executions can cause unexpected behavior.
Error handling in automated scripts is particularly important because no user is present to observe failures. Scripts should catch exceptions, log meaningful error messages, and degrade gracefully when dependencies are unavailable. Consider storing error states in bot memory so users can be informed of issues during their next interaction.
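One way to apply that advice, sketched with documented keywords and the conditional style used in the email examples later in this book; the emptiness check is illustrative.
' Scheduled job: record failures so users can be informed later
data = GET "reports/source.csv"
if (data == "") {
    SET BOT MEMORY "report_status", "Last update failed: source unavailable"
} else {
    SET BOT MEMORY "report_status", "OK"
    SET BOT MEMORY "latest_report", LLM "Summarize: " + data
}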
Scripts should be tested manually before enabling scheduling. Running a script interactively verifies that it works correctly and helps identify issues that might not be apparent from logs alone.
Bot memory serves as the bridge between automated tasks and user interactions. Automated scripts store their results in bot memory, making that information available to all users. This pattern works well for information that benefits from pre-computation, like summarized reports or aggregated statistics.
External credentials should never be hardcoded in scripts. Use bot memory to store API keys and other secrets, retrieving them at runtime. This practice improves security and simplifies credential rotation.
Understanding Limitations
Several constraints affect automation design decisions. The minimum scheduling granularity is one minute, as the cron format doesn’t support sub-minute precision. Tasks requiring more frequent execution need alternative approaches.
Each scheduled execution has timeout limits to prevent runaway tasks from consuming resources indefinitely. Long-running processes should be designed to complete within these limits or broken into smaller pieces.
The system doesn’t provide automatic retry on failure. If a scheduled task fails, it simply waits for the next scheduled time. Scripts needing retry behavior must implement it internally.
Only one instance of a scheduled script runs at a time. If execution takes longer than the scheduling interval, subsequent invocations are skipped rather than queued. This prevents resource exhaustion but means some scheduled times may be missed.
There’s no dependency management between scheduled tasks. If one task must complete before another begins, scripts must coordinate through bot memory or other synchronization mechanisms.
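A sketch of that coordination through bot memory, again using the conditional style from the email examples; the status-flag handshake is an assumption.
' Task B: run only after Task A has published fresh input
status = GET BOT MEMORY "extract_status"
if (status == "done") {
    data = GET BOT MEMORY "extracted_data"
    SET BOT MEMORY "summary", LLM "Summarize: " + data
    SET BOT MEMORY "extract_status", "consumed"    ' assumed handshake flag
}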
Monitoring Automated Tasks
Observing automation behavior helps identify problems and optimize performance. Active schedules can be queried directly from the system_automations table, filtered by bot and trigger kind. The last_triggered timestamp shows when each automation last executed successfully.
Execution logging captures both successful runs and failures at appropriate log levels. Monitoring these logs reveals patterns like consistently slow executions or recurring errors that might not be apparent from individual runs.
Debug logging at lower levels captures schedule changes during compilation, helping trace unexpected automation behavior to its source. Enabling debug logging temporarily can help diagnose why a schedule isn’t executing as expected.
Debugging Automation Issues
When automated tasks don’t behave as expected, systematic investigation identifies the cause. Common issues include invalid cron expressions that never match, scripts that work interactively but fail without a user session, external resources that are unavailable when the script runs, and permission issues that only manifest in the automation context.
Verifying the cron expression syntax ensures the schedule means what you intend. Online cron expression validators can help confirm that expressions match expected execution times.
Testing scripts manually with explicit handling for the missing user session helps identify code that incorrectly assumes user context. Any TALK statements will fail in automated context, and scripts must work correctly without user input.
Checking external resource availability at scheduled times reveals dependencies that might not be available around the clock. Business APIs often have maintenance windows, and network conditions vary throughout the day.
Reviewing permissions ensures the bot has access to all resources the automated script needs. Permissions that work for interactive users might not apply to automated execution contexts.
Security Considerations
Automated tasks execute with the bot’s full permissions, making them powerful but requiring careful design. Scripts can access any data the bot can access, call any API the bot is authorized to use, and store results in any location the bot can write.
This power means automated scripts should be reviewed carefully before deployment. Malicious or buggy automation could exfiltrate data, overwhelm external services, or fill storage with garbage. Limiting automation privileges isn’t possible in the current system, so careful script review is the primary safeguard.
Rate limiting applies to automated tasks just as it does to interactive use. Aggressive scheduling that exceeds API limits will be throttled, potentially causing tasks to fail or take longer than expected.
Monitoring for runaway automation helps catch scripts that behave differently than expected. Unusual resource consumption, excessive API calls, or unexpected storage growth might indicate automation problems requiring intervention.
Summary
BotServer’s automation capabilities transform bots from reactive assistants into proactive system participants. Through SET SCHEDULE and event triggers, bots can maintain fresh content, process data regularly, and respond to system events without user interaction. Understanding the automation lifecycle, limitations, and best practices enables effective use of these powerful capabilities while avoiding common pitfalls. Automation extends bot value by handling routine tasks automatically, freeing users to focus on work that requires human judgment.
Email Integration
BotServer provides email integration capabilities through IMAP/SMTP protocols, allowing bots to read, send, and manage emails.
Overview
Email integration in BotServer enables reading emails via IMAP, sending emails via SMTP, email account management, draft creation and management, folder organization, and email-based automation workflows.
Configuration
Environment Variables
Email configuration requires these environment variables:
# IMAP Settings
EMAIL_IMAP_SERVER=imap.gmail.com
EMAIL_IMAP_PORT=993
EMAIL_USERNAME=your-email@example.com
EMAIL_PASSWORD=your-app-password
# SMTP Settings
EMAIL_SMTP_SERVER=smtp.gmail.com
EMAIL_SMTP_PORT=587
EMAIL_FROM=your-email@example.com
Feature Flag
Email functionality requires the email feature flag during compilation:
cargo build --features email
Email Module Structure
The email module is located in src/email/ and contains mod.rs with the email service implementation, account management functionality, message handling logic, and IMAP/SMTP client implementations.
Database Schema
User Email Accounts
The user_email_accounts table stores email account configurations with encrypted password storage. Users can configure multiple accounts, each with its own IMAP and SMTP server details.
Email Drafts
The email_drafts table provides draft management including To, CC, and BCC addresses, subject and body content, attachment metadata, and auto-save support for work in progress.
Email Folders
The email_folders table handles folder organization with IMAP folder mapping, message counts, unread tracking, and hierarchical structure support for nested folders.
BASIC Keywords for Email
SEND MAIL
Send emails from BASIC scripts:
SEND MAIL "recipient@example.com", "Subject", "Email body content"
# With variables
let to = "user@example.com"
let subject = "Meeting Reminder"
let body = "Don't forget our meeting at 2 PM"
SEND MAIL to, subject, body
Email Automation
# Check for new emails
let new_emails = GET_EMAILS("INBOX", "UNSEEN")
# Process each email
FOR EACH email IN new_emails {
let sender = email.from
let subject = email.subject
let body = email.body
# Auto-reply logic
if (subject CONTAINS "urgent") {
SEND MAIL sender, "Re: " + subject, "I'll get back to you soon."
}
}
Email Operations
Reading Emails
The system can connect to IMAP servers, fetch message headers, download full messages, search by various criteria, mark messages as read or unread, and move messages between folders.
Sending Emails
SMTP operations include authentication with the mail server, sending plain text and HTML emails, reply and forward functionality, and bulk sending with configurable limits. Attachment support is planned for a future release.
Security
Password Storage
Email passwords are encrypted using AES-GCM and never stored in plaintext. Passwords are decrypted only when needed for authentication and memory is cleared after use to prevent credential leakage.
Connection Security
All email connections require TLS/SSL encryption with proper certificate validation. Secure authentication methods are enforced, and plaintext transmission is never permitted.
Use Cases
Support Ticket System
# Monitor support inbox
let support_emails = GET_EMAILS("support", "UNSEEN")
FOR EACH email IN support_emails {
# Create ticket
let ticket_id = CREATE_TICKET(email.from, email.subject, email.body)
# Send confirmation
let response = "Ticket #" + ticket_id + " created. We'll respond within 24 hours."
SEND MAIL email.from, "Ticket Created: " + email.subject, response
}
Newsletter Distribution
# Send newsletter to subscribers
let subscribers = GET "subscribers.csv"
let newsletter = GET "newsletter.html"
FOR EACH subscriber IN subscribers {
SEND MAIL subscriber.email, "Monthly Newsletter", newsletter
WAIT 1 # Rate limiting
}
Email-to-Task Conversion
# Convert emails to tasks
let task_emails = GET_EMAILS("tasks", "UNSEEN")
FOR EACH email IN task_emails {
CREATE TASK email.subject, email.body, email.from
MARK_AS_READ email
}
Integration with Other Features
With Calendar
Email integrates with the calendar system for meeting invitations, event reminders, and schedule updates sent via email notifications.
With Tasks
Task integration enables task creation from emails, status updates delivered via email, and deadline reminders sent to responsible parties.
With Knowledge Base
Knowledge base integration supports email archival for compliance, searchable email history, and providing email context for bot conversations.
Limitations
Current Limitations
The current implementation does not support attachment handling, provides only basic HTML email support, lacks email templates, has limited filtering options, and does not support OAuth2 authentication, requiring app-specific passwords instead.
Rate Limiting
Provider-specific rate limits apply to all email operations. Implement delays between sends to avoid throttling, monitor for rate limit errors, and use batch operations wisely to stay within provider limits.
Email Provider Setup
Gmail Configuration
To configure Gmail, first enable 2-factor authentication on your Google account. Then generate an app-specific password for BotServer to use. Enable IMAP access in Gmail settings. Use imap.gmail.com on port 993 for IMAP and smtp.gmail.com on port 587 for SMTP.
Outlook/Office 365
For Outlook or Office 365, enable IMAP in your account settings. If 2FA is enabled, generate an app password. Use outlook.office365.com on port 993 for IMAP and smtp.office365.com on port 587 for SMTP.
Custom Email Servers
For custom email servers, configure the appropriate server addresses, port numbers, security settings including TLS or SSL requirements, and the authentication method supported by your server.
Error Handling
Connection Errors
# Handle email errors
status = SEND MAIL recipient, subject, body
IF status = "sent" THEN
TALK "Email sent successfully"
ELSE
TALK "Failed to send email: " + status
# Log error for admin
END IF
Common Issues
Common email issues include authentication failures from incorrect credentials, network timeouts when servers are slow to respond, server unavailable errors during outages, quota exceeded errors when hitting send limits, and invalid address errors for malformed recipients.
Best Practices
Use app-specific passwords rather than primary account passwords to limit security exposure. Respect provider rate limits by implementing appropriate delays between operations. Implement retry logic for transient failures to ensure delivery. Validate email addresses before sending to catch format errors early. Monitor usage by tracking sent and received counts. Encrypt sensitive data in storage and transit. Maintain an audit trail by logging all email operations.
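One possible shape for that retry logic is sketched below; it assumes counted FOR…NEXT loops and EXIT FOR are available, which this chapter does not demonstrate, so treat it as illustrative rather than canonical:
sent_ok = "no"
FOR attempt = 1 TO 3
    status = SEND MAIL recipient, subject, body
    IF status = "sent" THEN
        sent_ok = "yes"
        EXIT FOR
    END IF
    WAIT 5 ' back off before retrying
NEXT attempt
IF sent_ok = "no" THEN
    TALK "Email could not be delivered after 3 attempts"
END IF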
Monitoring
Metrics to Track
Key metrics include emails sent and received, failed operations and their causes, connection failures, processing time for email operations, and queue size when batching sends.
Health Checks
Regular health checks should verify IMAP connectivity, SMTP availability, account validity and credential freshness, and folder synchronization status.
Summary
Email integration in BotServer enables powerful email-based automation and communication. Through IMAP/SMTP protocols and BASIC script integration, bots can manage email workflows, automate responses, and integrate email with other bot features for comprehensive communication automation.
Storage and Data
This chapter explains how BotServer organizes and manages data across its multiple storage layers. Understanding this architecture helps you make informed decisions about where to store different types of information and how to optimize storage performance for your deployment.
Understanding the Storage Architecture
BotServer employs a multi-layered storage architecture where each layer serves specific purposes and data types. Rather than forcing all data into a single storage system, this design allows each component to use the storage technology best suited to its access patterns and requirements.
PostgreSQL serves as the primary database for all structured data, including user accounts, session information, bot configurations, and message history. Its relational model excels at maintaining data integrity and supporting complex queries across related entities.
The Drive component provides S3-compatible object storage for files and documents. This includes uploaded files, knowledge base documents, BASIC scripts, and media assets. Object storage handles large files efficiently and integrates well with content delivery networks.
Valkey (the cache layer) maintains session state and temporary data that benefits from extremely fast access. Cached data might be lost during restarts, but the performance benefits for frequently accessed information justify this trade-off.
Qdrant stores vector embeddings that power semantic search. These high-dimensional numerical representations capture the meaning of documents and queries, enabling similarity-based retrieval that goes beyond keyword matching.
Local filesystem storage handles temporary working directories, log files, and operational caches that don’t require persistence across system restarts.
PostgreSQL: Structured Data Storage
PostgreSQL anchors the storage architecture by maintaining all structured information that requires durability and relational integrity. User accounts, their associations with sessions, and the relationships between users and bots all live in PostgreSQL tables protected by transactions and foreign key constraints.
The database schema evolves through managed migrations stored in the migrations directory. Diesel ORM provides type-safe database access from Rust code, catching many potential errors at compile time rather than runtime. When the system bootstraps, it automatically applies pending migrations to bring the schema up to date.
Connection pooling ensures efficient database access under load. The pool maintains a configurable number of connections ready for use, eliminating the overhead of establishing new connections for each query. Automatic retry logic handles transient connection failures, and timeout protection prevents runaway queries from consuming resources indefinitely.
Message history accumulates in the database, creating a permanent record of all conversations. Session data persists across server restarts, allowing users to resume conversations even after maintenance windows. Bot configurations stored in the database take effect immediately across all running instances.
Drive: Object Storage for Files
The Drive component implements S3-compatible object storage, organizing files into buckets that typically correspond to individual bots. Within each bucket, the familiar package structure appears: .gbdialog folders for scripts, .gbkb folders for knowledge base documents, and .gbot folders for configuration.
File operations follow standard patterns. Uploads place files into specified bucket and key combinations. Downloads retrieve files by their bucket and key. Listing operations enumerate bucket contents for browsing or processing. Deletion removes objects when necessary, though this is relatively rare in normal operation.
The storage system supports any S3-compatible backend, including self-hosted solutions like MinIO for development and cloud services like AWS S3 for production. This flexibility allows deployments to choose storage solutions that match their requirements for cost, performance, geographic distribution, and data residency.
Beyond static storage, Drive integrates with the knowledge base system. Documents uploaded to .gbkb folders trigger indexing pipelines that extract text, generate embeddings, and make content searchable. Changes to stored files can trigger reprocessing, keeping knowledge bases current as source documents evolve.
Valkey: High-Speed Caching
The cache layer accelerates access to frequently used data by keeping it in memory. Session tokens validate quickly against cached values. Recently accessed conversation state retrieves without database queries. Rate limiting counters update with minimal latency.
Cached data follows patterns that maximize effectiveness. Session data uses keys combining the session identifier with the data type, enabling targeted retrieval. Rate limiting keys incorporate user identifiers and endpoint paths to track request rates per user per endpoint. Temporary data keys clearly indicate their transient nature.
Cache entries include time-to-live values that automatically expire stale data. Session caches might persist for 24 hours of inactivity. Rate limiting counters reset after their tracking windows. Temporary computation results expire after configurable periods.
The cache operates as a performance optimization rather than a primary data store. If cached data is lost, the system regenerates it from authoritative sources in PostgreSQL or Drive. This approach simplifies operations since cache failures cause performance degradation rather than data loss.
Qdrant: Vector Storage for Semantic Search
Qdrant provides the specialized storage that makes semantic search possible. Each document chunk from knowledge bases generates a high-dimensional vector embedding that captures its semantic content. These vectors live in Qdrant collections organized by bot, enabling fast similarity searches.
The vector storage structure includes collections for different content types. Document embeddings enable knowledge base search. Conversation embeddings support finding similar past interactions. Cached query results accelerate repeated searches.
Vector operations differ from traditional database operations. Insertion adds new embeddings along with their associated metadata. Search finds vectors most similar to a query vector, returning the closest matches based on distance metrics. Updates modify the metadata associated with existing vectors. Deletion removes outdated content when source documents change.
Qdrant’s specialized architecture handles the mathematical operations behind similarity search efficiently. Unlike general-purpose databases that would struggle with high-dimensional vector comparisons, Qdrant implements optimized algorithms that scale to millions of vectors while maintaining sub-second query response times.
Local Storage: Working Directories
The local filesystem serves operational needs that don’t require distributed storage. Working directories hold files during processing, such as documents being indexed or uploads being validated. Log files capture operational events for debugging and monitoring. Local caches store computed values that benefit from filesystem-level caching.
Directory structure follows conventions that keep different content types organized. The work directory contains per-bot working files during processing. Logs accumulate in dedicated directories with rotation policies that prevent unbounded growth. Upload directories receive incoming files temporarily before they move to permanent storage.
Automatic cleanup processes remove files that no longer serve a purpose. Old temporary files are deleted once processing completes. Log rotation archives and eventually removes old log files. Cache invalidation clears stale computed values when source data changes.
Data Persistence and Backup
Reliable data storage requires comprehensive backup strategies that protect against various failure modes. BotServer’s multi-layer architecture requires coordinating backups across storage systems.
PostgreSQL backups capture the authoritative state of all structured data. Daily dumps create recovery points. Point-in-time recovery capabilities protect against accidental data modifications. Backup verification ensures that recovery would actually work when needed.
Drive storage benefits from built-in replication and versioning capabilities. S3-compatible storage systems maintain multiple copies across availability zones. Object versioning preserves previous states even after modifications. Cross-region replication protects against regional failures for critical deployments.
Configuration versioning through source control provides another protection layer. Environment-specific configurations store separately from shared defaults. Secret encryption protects sensitive values in backups.
Retention policies balance storage costs against recovery needs. Message history might be retained for 90 days before archival. Session data expires after 30 days of inactivity. Temporary files are cleaned up within 24 hours. Log retention follows regulatory requirements and debugging needs. Backup retention provides sufficient history for recovery scenarios.
Storage Operations in BASIC Scripts
Scripts interact with storage through dedicated keywords that abstract the underlying complexity. The SAVE keyword writes data to CSV files or other formats, handling the details of file creation and formatting. The GET keyword retrieves content from storage, automatically determining the appropriate storage layer based on the path specified.
These abstractions allow script authors to work with storage without understanding the full architecture. A script saving customer data doesn’t need to know whether that data ultimately resides in PostgreSQL or Drive. The system routes operations appropriately based on data types and configurations.
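A minimal sketch of this abstraction in practice — the chapter names SAVE and GET but not their full signatures, so the argument list here is illustrative:
' Persist a record; the system decides where it physically lives
SAVE "customers.csv", name, email, region
' Later, retrieve the content by path alone
customers = GET "customers.csv"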
Security and Access Control
Data security spans all storage layers with appropriate protections for each. Encryption at rest protects stored data from unauthorized physical access. Database encryption covers PostgreSQL storage. Object storage encryption protects Drive contents. Transport encryption using TLS secures all network communication between components.
Access control ensures users and processes only reach data they’re authorized to access. Role-based permissions govern database operations. Bucket policies control object storage access. Bot isolation prevents cross-bot data leakage. Audit logging creates accountability trails for sensitive operations.
Sensitive data receives additional protection. Passwords are never stored in BotServer systems, since Zitadel handles authentication. API keys and secrets are encrypted with AES-GCM before storage. Personally identifiable information is handled according to the data protection regulations applicable to the deployment jurisdiction.
Monitoring and Maintenance
Storage systems require ongoing attention to maintain performance and reliability. Monitoring tracks resource utilization across all storage layers. Database size growth reveals capacity planning needs. Drive bucket usage indicates document accumulation rates. Cache memory utilization guides sizing decisions. Qdrant index size affects search performance.
Health checks verify that storage systems remain accessible and responsive. Database connectivity tests confirm query capability. Drive availability checks verify object operations work. Cache response time measurements identify performance degradation. Qdrant query tests validate search functionality.
Regular maintenance keeps storage systems performing well. PostgreSQL vacuum operations reclaim space and update statistics. Drive cleanup removes orphaned objects. Cache pruning maintains working set size. Qdrant optimization improves query performance as indexes grow.
Troubleshooting Common Issues
Storage problems manifest in recognizable patterns that guide resolution. Space exhaustion causes write failures across storage layers. Resolution involves cleaning temporary files, archiving old data, or expanding storage allocation.
Performance degradation often traces to storage layer issues. Slow queries might indicate missing indexes or excessive table sizes. Slow file access might reveal network or disk bottlenecks. Cache misses might suggest insufficient cache sizing or inappropriate eviction policies.
Connection failures require systematic investigation. Service status checks confirm components are running. Credential verification ensures authentication succeeds. Network configuration review identifies routing or firewall issues.
Summary
BotServer’s storage architecture distributes data across specialized systems optimized for different access patterns. PostgreSQL handles structured data with transactional integrity. Drive provides scalable object storage for files and documents. Valkey accelerates access to frequently used information. Qdrant enables semantic search through vector storage. Understanding this architecture helps you configure storage appropriately, implement effective backup strategies, and troubleshoot issues when they arise. The result is a storage foundation that supports the diverse requirements of conversational AI applications while maintaining performance and reliability.
Multi-Channel Support
This chapter describes how BotServer enables bots to communicate with users across different platforms through its flexible multi-channel architecture. The design ensures that conversation logic remains consistent regardless of how users choose to interact, while still taking advantage of each channel’s unique capabilities.
Architectural Foundation
BotServer abstracts communication methods through a channel adapter pattern that separates bot logic from platform-specific details. When a user sends a message, it flows from their chosen platform through a channel adapter that converts the platform-specific format into a common message structure. The bot processes this message, generates a response, and the adapter converts it back to the appropriate format for delivery.
This abstraction provides significant benefits for bot development. The same BASIC scripts work across all supported channels without modification. Conversation state persists even when users switch between platforms. New channels can be added without changing existing bot logic.
The primary channel is the web interface, which provides the richest interaction capabilities. WebSocket connections enable real-time messaging with streaming responses. Additional channels extend reach to users on their preferred platforms while maintaining consistent conversation experiences.
The Web Channel
The web channel serves as the reference implementation and primary interaction method for most deployments. It leverages HTTP for initial page loads and WebSocket connections for real-time bidirectional communication.
Users interacting through the web channel benefit from rich formatting through Markdown support, clickable suggestion buttons that simplify common interactions, file upload and attachment capabilities, inline image display, typing indicators that show when the bot is processing, and theme customization that allows organizations to brand the interface.
The implementation handles WebSocket connection management, maintaining long-lived connections with heartbeat mechanisms to detect disconnections. When a connection drops, clients can automatically reconnect and resume their session without losing conversation context.
Voice Interaction
When the voice feature is enabled, BotServer supports spoken interaction through speech-to-text and text-to-speech processing. Voice conversations follow a continuous flow where the system listens for user speech, converts it to text, processes it through the same BASIC scripts used for text channels, and converts the response back to speech for playback.
This channel requires integration with speech services and is optional due to its additional infrastructure requirements. Organizations that enable voice interaction can serve users who prefer speaking to typing or who are in situations where hands-free operation is beneficial.
Unified Session Management
All channels share a common session system, which is essential for maintaining coherent conversations across platform switches. When a user first interacts with a bot, the system creates a session that stores conversation context, user preferences, and any data accumulated during the interaction.
This session persists independently of the channel being used. A user could begin a conversation on the web interface from their desktop, continue it later on a mobile device, and the bot would have full context of previous exchanges. The session stores user identification information linked through authentication, ensuring that cross-channel continuity works correctly for logged-in users.
Session data includes conversation history, variables set during script execution, user preferences such as language settings, and references to any files or documents shared during the conversation.
Message and Response Structures
The common message format bridges platform-specific protocols to the unified bot processing system. Each message carries the text content provided by the user, identifiers linking it to the user and session, the channel type indicating its origin, and a metadata field for channel-specific information that might be relevant to processing.
Responses follow a structured format that channel adapters interpret appropriately. Beyond the main content text, responses can include suggestion arrays that channels supporting quick replies render as buttons, a message type indicator distinguishing text from cards or media, streaming tokens for channels that support progressive response display, and completion flags indicating whether the response is final.
Channel adapters examine these response components and render them appropriately for their platform. A suggestion might become a clickable button on the web, a numbered list in a text-only channel, or be ignored entirely in voice, where such interaction patterns don't apply.
Adaptive Bot Behavior
While the goal is channel-agnostic scripts, situations arise where bots benefit from knowing their communication context. Scripts can query the current channel and adapt their behavior accordingly, offering voice-appropriate prompts when speaking to users or visual elements when they’re available.
Feature detection allows scripts to check whether the current channel supports specific capabilities before attempting to use them. Rather than checking the channel type directly, checking for feature support makes scripts more resilient to future channel additions that might have different capability combinations.
This adaptive capability should be used sparingly. Most bot logic should remain channel-agnostic, with adaptations limited to presentation concerns rather than core functionality.
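The chapter notes that scripts can query the current channel but does not name the mechanism, so the channel variable in this sketch is a hypothetical placeholder used only to illustrate the presentation-level adaptation described above:
' Hypothetical: "channel" is assumed to expose the current channel name
IF channel = "voice" THEN
    TALK "Say the word menu to hear your options again."
ELSE
    TALK "Tap one of the buttons below to continue."
END IF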
WebSocket Communication Protocol
The WebSocket protocol defines how clients and servers exchange messages over persistent connections. Clients initiate connections to the /ws endpoint, where the server creates or retrieves their session and establishes the bidirectional channel.
Messages from clients to the server carry a type field indicating the message kind, the content being sent, and the session identifier linking the message to an existing conversation. The server responds with structured messages including the response content, any suggestions to display, and flags indicating whether the response is complete or if more content will follow for streaming scenarios.
The protocol includes heartbeat messages to maintain connection liveness across network infrastructure that might otherwise terminate idle connections. Both client and server implementations should handle reconnection gracefully, allowing conversations to continue after temporary network interruptions.
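Assuming a JSON encoding — which the chapter implies but does not state — the exchange might look like the following; the exact field names (session_id, suggestions, complete) are illustrative:
Client to server:
{ "type": "message", "content": "What are your hours?", "session_id": "abc-123" }
Server to client (streaming, more content to follow):
{ "content": "We are open weekdays", "suggestions": ["Weekend hours"], "complete": false }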
Expanding Channel Support
The architecture anticipates integration with additional platforms including WhatsApp Business API, Microsoft Teams, Slack, Telegram, Discord, and SMS gateways. While these channels aren’t implemented in the current version, the adapter pattern provides a clear path for adding them.
Implementing a new channel involves creating an adapter that implements the standard interface for sending and receiving messages, handling the platform’s specific authentication and webhook requirements, mapping between the platform’s message format and the common structure, registering supported features accurately so scripts can adapt appropriately, and managing any platform-specific rate limits or constraints.
The separation of concerns in the adapter pattern means that new channels don’t require changes to bot logic, session management, or the BASIC execution environment. They plug into the existing infrastructure at well-defined integration points.
Practical Considerations
Several factors influence channel selection and implementation for production deployments. Feature availability varies significantly between channels, with web providing the richest interaction while text-only channels offer broader reach. Rich formatting and media support depend entirely on the destination platform’s capabilities.
Network reliability affects real-time channels differently than store-and-forward systems like email or SMS. WebSocket connections require stable networks, while messaging platforms handle intermittent connectivity through their own infrastructure.
Authentication requirements differ between channels. The web channel integrates with the platform’s standard OAuth flow, while messaging platforms typically use their own identity systems that must be mapped to General Bots users.
Rate limiting applies per channel and must be respected to maintain good standing with platform providers. Automated messages face stricter limits than user-initiated conversations on most platforms.
Development Guidelines
Effective multi-channel bot development follows several principles. Writing channel-agnostic scripts as the default approach maximizes code reuse and simplifies maintenance. Using universal keywords like TALK and HEAR ensures scripts work everywhere without modification.
Testing across channels validates that the user experience remains coherent despite platform differences. What works well on web might need adjustment for voice or text-only channels. Identifying these differences during development prevents surprises in production.
Preserving session state carefully ensures that cross-channel continuity works correctly. Scripts should store important context in session variables rather than relying on channel-specific features that might not translate.
Monitoring channel metrics helps identify performance issues or user experience problems specific to particular platforms. Response times, error rates, and user satisfaction can vary significantly between channels.
Summary
BotServer’s multi-channel architecture enables bots to reach users wherever they prefer to communicate while maintaining consistent conversation logic and state. The channel adapter pattern isolates platform-specific concerns from bot development, allowing the same scripts to work across current channels and future integrations. This design philosophy prioritizes developer productivity and user experience across an expanding communication landscape.
Drive Monitor
The Drive Monitor is a real-time file synchronization system that watches for changes in bot storage buckets and automatically updates the database and runtime configuration.
Overview
DriveMonitor provides hot-reloading capabilities for bot configurations by continuously monitoring file changes in object storage. When files are modified, added, or removed, the system automatically detects changes through ETags and file comparison, updates the database with new configurations, recompiles scripts and tools, refreshes knowledge bases, and broadcasts theme changes to connected clients.
Architecture
┌─────────────────┐
│ Object Storage │ (S3-compatible)
│ Buckets │
└────────┬────────┘
│ Poll every 30s
▼
┌─────────────────┐
│ Drive Monitor │
│ - Check ETags │
│ - Diff files │
└────────┬────────┘
│ Changes detected
▼
┌─────────────────────────┐
│ Process Updates │
│ - Compile scripts (.bas)│
│ - Update KB (.gbkb) │
│ - Refresh themes │
│ - Update database │
└─────────────────────────┘
Implementation
Core Components
The DriveMonitor is implemented in src/drive/drive_monitor/mod.rs with the following structure:
pub struct DriveMonitor {
    state: Arc<AppState>,
    bucket_name: String,
    file_states: Arc<RwLock<HashMap<String, FileState>>>,
    bot_id: Uuid,
    kb_manager: Arc<KnowledgeBaseManager>,
    work_root: PathBuf,
    is_processing: Arc<AtomicBool>,
}
Monitoring Process
The monitoring process begins with initialization when a bot is mounted, at which point a DriveMonitor instance is created and spawned. Every 30 seconds, the monitor polls for changes in .gbdialog files containing scripts and tools, .gbkb collections containing knowledge base documents, .gbtheme files for UI themes, and .gbot/config.csv for bot configuration.
Change detection uses ETags to efficiently identify file modifications without downloading entire files. When changes are detected, different file types trigger specific handlers. Scripts are compiled to AST, knowledge base files are indexed and embedded, themes are broadcast to WebSocket clients, and config changes trigger bot settings reload.
File Type Handlers
Script Files (.bas)
The script handler compiles BASIC scripts to AST format for efficient execution. It stores the compiled version in the database for persistence and updates the tool registry if the script defines callable tools.
Knowledge Base Files (.gbkb)
The knowledge base handler downloads new and modified documents from storage. It processes text extraction to prepare content for indexing, generates embeddings using the configured embedding model, and updates the vector database for semantic search functionality.
Theme Files (.gbtheme)
The theme handler detects CSS and JavaScript changes in theme packages. It broadcasts updates to all connected WebSocket clients and triggers UI refresh without requiring a full page reload.
Usage
The DriveMonitor is automatically started when a bot is mounted:
// In BotOrchestrator::mount_bot
let drive_monitor = Arc::new(DriveMonitor::new(
    state.clone(),
    bucket_name,
    bot_id,
));
let _handle = drive_monitor.clone().spawn().await;
Configuration
No explicit configuration is needed since the monitor automatically uses the bot’s storage bucket name, creates work directories as needed, and manages its own file state cache internally.
Performance Considerations
The polling interval of 30 seconds balances responsiveness with resource usage to avoid overwhelming the storage backend. Concurrent processing uses atomic flags to prevent overlapping operations that could cause race conditions. The caching system maintains an ETag cache to minimize unnecessary downloads when files haven’t changed. Batching ensures that multiple file changes detected in a single poll cycle are processed together efficiently.
Error Handling
The monitor includes robust error handling that continues operation even if individual file processing fails. Errors are logged for debugging while maintaining overall service availability. Isolated error boundaries prevent cascading failures that could take down the entire monitoring system.
Monitoring and Debugging
Enable debug logging to see monitor activity:
RUST_LOG=botserver::drive::drive_monitor=debug cargo run
Log output includes change detection events showing which files were modified, file processing status as each file is handled, compilation results for script files, and database update confirmations when changes are persisted.
Best Practices
Keep related files organized in their appropriate directories such as .gbdialog for scripts, .gbkb for knowledge base content, and .gbtheme for UI customizations. The monitor tracks changes but doesn’t maintain history, so use git or another version control system to track file revisions. For knowledge base documents larger than 10MB, consider splitting them into smaller files for better processing performance. During active development, the 30-second polling delay can be avoided by restarting the bot to force immediate reprocessing.
Limitations
The system is not truly real-time due to the 30-second polling interval, meaning changes aren’t reflected instantly. There is no conflict resolution mechanism, so concurrent modifications follow a last-write-wins policy. Memory usage for the file state cache is minimal since only ETags are stored rather than full file contents.
Future Enhancements
Planned improvements include WebSocket notifications from the storage layer for instant updates without polling, configurable polling intervals per file type to allow more frequent checks for critical files, differential sync for large knowledge bases to reduce processing time, and multi-version support for A/B testing different bot configurations.
Platform Capabilities
General Bots provides a unique combination of capabilities that differentiate it from other AI platforms. This document outlines what makes General Bots suitable for organizations seeking customizable, self-hosted AI automation.
Core Differentiators
Self-Hosted & Open Source
General Bots runs entirely on your infrastructure. Your data never leaves your servers, and you have full access to the source code under AGPL licensing.
| Capability | General Bots |
|---|---|
| Self-hosted deployment | ✅ Full control |
| Open source | ✅ AGPL licensed |
| Data sovereignty | ✅ Your infrastructure |
| Custom modifications | ✅ Full source access |
| Per-user licensing | ✅ None required |
Customization & Extensibility
Build exactly what you need with BASIC scripting and instant API creation:
' Create a custom API endpoint
WEBHOOK "customer-lookup"
customer_id = params.id
USE KB "customer-data"
response = LLM "Get information about customer " + customer_id
WITH result = NEW OBJECT
.customer_id = customer_id
.data = response
.generated_at = NOW()
END WITH
This creates a working API endpoint in seconds without separate deployment or infrastructure configuration.
You can build custom webhooks and APIs, automated workflows with BASIC scripts, integrations with any external system, multi-channel chatbots, document processing pipelines, and scheduled automation tasks.
Knowledge Base & RAG
Full control over your knowledge base with built-in retrieval-augmented generation:
' Load multiple knowledge sources
USE KB "company-policies"
USE KB "product-catalog"
USE KB "customer-faq"
SET CONTEXT "You are a helpful customer service agent."
answer = LLM user_question
' Save for training and analysis
INSERT "conversations", #{
question: user_question,
answer: answer,
timestamp: NOW()
}
The knowledge base system supports multiple vector collections, custom embedding models, semantic search, context compaction, semantic caching, and full document indexing.
Multi-Channel Deployment
Deploy once, reach users everywhere:
' Same code works across all channels
TALK "How can I help you today?"
HEAR question
response = LLM question
TALK response
Supported channels include web chat, WhatsApp Business, Teams, Slack, Telegram, SMS, email, and voice through LiveKit.
Database & Integration
Direct database access and unlimited API integrations:
' Direct SQL access
customers = FIND "customers", "region = 'EMEA'"
' REST APIs
data = GET "https://api.example.com/data"
POST "https://api.crm.com/leads", lead_data
' GraphQL
result = GRAPHQL "https://api.github.com/graphql", query, vars
There is no connector marketplace to buy into and no per-integration fees; anything that speaks HTTP can be connected.
AI Capabilities
Native AI integration without additional licensing:
| Feature | Implementation |
|---|---|
| Chat assistance | LLM keyword |
| Document Q&A | USE KB + RAG |
| Image generation | IMAGE keyword |
| Video generation | VIDEO keyword |
| Speech-to-text | HEAR AS AUDIO |
| Text-to-speech | AUDIO keyword |
| Vision/OCR | SEE keyword |
Use any LLM provider including OpenAI, Anthropic, or local models. You can run entirely offline with local inference.
Automation Power
BASIC scripting provides full programming capabilities:
SET SCHEDULE "every day at 9am"
' Daily report automation
sales = AGGREGATE "orders", "SUM", "total", "date = TODAY()"
count = AGGREGATE "orders", "COUNT", "id", "date = TODAY()"
SET CONTEXT "You are a business analyst."
summary = LLM "Sales: $" + sales + ", Orders: " + count
SEND MAIL TO "team@company.com" SUBJECT "Daily Report" BODY summary
Automation features include scheduled tasks using cron syntax, event-driven webhooks, database triggers, conditional logic, loops and iterations, error handling, and multi-step workflows.
When General Bots Excels
General Bots is the right choice when you need custom chatbots for customer service, internal support, or specialized domains where you control the knowledge base and conversation flow.
Workflow automation that goes beyond simple triggers benefits from the full programming logic with database access, API calls, and AI integration that General Bots provides.
Multi-channel deployment where the same bot serves users on web, mobile messaging, and enterprise platforms is straightforward with General Bots.
Data sovereignty requirements are met through self-hosted deployment that keeps all data on your infrastructure.
Cost control is achieved without per-user licensing that scales with your organization.
Integration flexibility allows connecting to any system without marketplace limitations.
Deployment Options
Self-Hosted
Run General Bots on your own infrastructure with single binary deployment, container support through LXC and Docker, horizontal scaling, and full observability.
Quick Start
./botserver
Access at http://localhost:8080 and start building.
Summary
General Bots combines self-hosting for complete data control, BASIC scripting for powerful automation, multi-channel support for broad reach, native AI without extra licensing, open source for transparency and customization, and no per-user fees for predictable costs.
For organizations that need more than a simple chatbot—those requiring custom integrations, complex workflows, and full control over their AI deployment—General Bots provides the foundation to build exactly what you need.
See Also
- Quick Start - Get running in minutes
- Keywords Reference - Full BASIC reference
- REST API - API documentation
- Projects - Team collaboration features
Enterprise Platform Migration
General Bots provides complete feature parity with enterprise cloud productivity suites while offering significant advantages: self-hosting, open source licensing, no per-user fees, and native AI integration.
Migration Overview
Organizations evaluating self-hosted alternatives find General Bots delivers equivalent functionality with full data sovereignty. The sections below map common enterprise APIs to their General Bots equivalents.
API Endpoint Mapping
Mail and Communication
Enterprise mail APIs handle sending, receiving, and managing email. General Bots provides the same capabilities through Stalwart Mail Server and BASIC keywords.
| Enterprise API | General Bots Equivalent | Implementation |
|---|---|---|
| Messages endpoint | Stalwart IMAP/JMAP | Full mailbox access |
| Send mail endpoint | SEND MAIL keyword | SEND MAIL TO email SUBJECT s BODY b |
| Mail folders | Stalwart folders | Standard IMAP folders |
| Attachments | File keywords | READ, WRITE with attachments |
The BASIC syntax is straightforward:
SEND MAIL TO "client@company.com" SUBJECT "Report Ready" BODY report_content
For receiving mail, configure webhooks or use scheduled scripts to process incoming messages through the Stalwart API.
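As a sketch of the scheduled-script approach (GET_EMAILS and its dialect appear in the Email Integration chapter; the schedule, folder, and forwarding address are placeholders):
SET SCHEDULE "*/15 * * * *"
' Poll the inbox every 15 minutes and forward urgent items
let incoming = GET_EMAILS("INBOX", "UNSEEN")
FOR EACH email IN incoming {
    if (email.subject CONTAINS "urgent") {
        SEND MAIL "oncall@example.com", "FW: " + email.subject, email.body
    }
}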
Calendar and Scheduling
Calendar APIs manage events, appointments, and scheduling. General Bots integrates CalDAV with the BOOK keyword.
| Enterprise API | General Bots Equivalent | Implementation |
|---|---|---|
| Calendar events | Calendar API | /api/calendar/events |
| Create event | BOOK keyword | BOOK "Meeting" AT datetime |
| Calendar view | Calendar range query | Date-filtered event retrieval |
| Free/busy lookup | Availability API | Schedule availability |
Schedule appointments conversationally:
TALK "When would you like to schedule your appointment?"
HEAR appointment_time AS DATE
BOOK "Consultation" AT appointment_time
TALK "Your appointment is confirmed for " + FORMAT(appointment_time, "MMMM d 'at' h:mm a")
Files and Storage
Cloud storage APIs handle file operations, versioning, and sharing. SeaweedFS provides S3-compatible storage with full versioning support.
| Enterprise API | General Bots Equivalent | Implementation |
|---|---|---|
| List files | LIST keyword | LIST "/documents/" |
| File listing | Drive API | /api/files/list |
| File content | READ keyword | content = READ "file.pdf" |
| File versions | Versions API | /api/files/versions |
| Permissions | Sharing API | Permission management |
File operations in BASIC:
files = LIST "/reports/"
FOR EACH file IN files
content = READ file.path
processed = LLM "Summarize this document: " + content
WRITE "/summaries/" + file.name + ".summary.txt", processed
NEXT file
Tasks and Planning
Task management APIs create, update, and track work items. General Bots implements a complete task system with project organization.
| Enterprise API | General Bots Equivalent | Implementation |
|---|---|---|
| Tasks endpoint | Tasks API | /api/tasks |
| Task lists | Task lists | Board-based organization |
| Create task | CREATE TASK keyword | Task creation |
| Task details | Task CRUD | Full task lifecycle |
Create tasks from conversations:
TALK "What task should I create?"
HEAR task_title
TALK "When is it due?"
HEAR due_date AS DATE
CREATE TASK task_title DUE due_date
TALK "Task created: " + task_title
Users and Directory
User management APIs handle identity, groups, and permissions. Zitadel provides enterprise-grade IAM with OIDC/OAuth2.
| Enterprise API | General Bots Equivalent | Implementation |
|---|---|---|
| Users endpoint | Users API | /api/users |
| Current user | Current user | Session context |
| Groups | Groups API | /api/groups |
| Directory | Directory API | Zitadel directory |
| Memberships | Membership API | Group memberships |
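Unlike the other mappings in this section, no BASIC example accompanies this table, so here is a hedged sketch that reads the directory through the Users API endpoint using the generic GET keyword; the localhost URL matches the Quick Start default, and the response fields are assumptions:
' Sketch: list directory users via the endpoint from the table above
users = GET "http://localhost:8080/api/users"
FOR EACH u IN users
    TALK u.name
NEXT u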
Automation and Workflows
Cloud automation platforms provide flow-based workflow design. General Bots offers BASIC scripting with more power and flexibility.
| Cloud Automation | General Bots Equivalent | Advantage |
|---|---|---|
| Scheduled flows | SET SCHEDULE | Cron syntax, unlimited |
| HTTP triggers | WEBHOOK | Instant API creation |
| Connectors | GET, POST, GraphQL | Any REST/GraphQL API |
| Conditions | IF/THEN/ELSE | Full programming logic |
| Loops | FOR EACH | Native iteration |
| Data operations | TABLE, INSERT, UPDATE | Direct database access |
A workflow that would require a visual designer elsewhere becomes simple BASIC:
SET SCHEDULE "0 9 * * 1-5"
' Daily sales report - runs weekdays at 9 AM
sales = AGGREGATE "orders", "SUM", "total", "date = TODAY()"
count = AGGREGATE "orders", "COUNT", "id", "date = TODAY()"
SET CONTEXT "You are a business analyst. Create a brief executive summary."
summary = LLM "Sales: $" + sales + ", Orders: " + count
SEND MAIL TO "executives@company.com" SUBJECT "Daily Sales Report" BODY summary
AI and Intelligence
Cloud AI assistants typically require additional per-user licensing. General Bots includes AI capabilities at no extra cost.
| Cloud AI Feature | General Bots Equivalent | Advantage |
|---|---|---|
| AI Assistant | LLM keyword | Free (bring your API key) |
| Document analysis | USE KB + LLM | Built-in RAG |
| Image generation | IMAGE keyword | Local generation available |
| Speech-to-text | HEAR AS AUDIO | Whisper integration |
| Text-to-speech | AUDIO keyword | TTS models |
| Vision/OCR | SEE keyword | Vision models |
AI integration is native:
USE KB "product-docs"
SET CONTEXT "You are a helpful product specialist."
TALK "How can I help you today?"
HEAR question
response = LLM question
TALK response
Feature Parity Matrix
Core Services
| Service Category | Enterprise Cloud | General Bots | Status |
|---|---|---|---|
| Mail | Cloud mail service | Stalwart Mail | ✅ Complete |
| Calendar | Cloud calendar | CalDAV + Calendar API | ✅ Complete |
| Files | Cloud storage | SeaweedFS | ✅ Complete |
| Video | Cloud meetings | LiveKit | ✅ Complete |
| Chat | Cloud messaging | Multi-channel | ✅ Complete |
| Tasks | Cloud tasks | Tasks Module | ✅ Complete |
| Identity | Cloud identity | Zitadel | ✅ Complete |
| Search | Cloud search | Qdrant Vectors | ✅ Semantic |
Automation
| Capability | Cloud Platform | General Bots | Status |
|---|---|---|---|
| Scheduled tasks | Scheduled flows | SET SCHEDULE | ✅ Complete |
| Webhooks | HTTP triggers | WEBHOOK | ✅ Complete |
| API calls | Connectors | HTTP keywords | ✅ Flexible |
| Custom logic | Expressions | Full BASIC | ✅ Powerful |
| Database | Cloud datastore | Direct SQL | ✅ Direct |
| Pricing | Per-user fees | Included | ✅ Free |
AI Capabilities
| Feature | Cloud AI (extra cost) | General Bots | Status |
|---|---|---|---|
| Chat assistance | ✅ | LLM keyword | ✅ Included |
| Document Q&A | ✅ | USE KB + RAG | ✅ Included |
| Code generation | ✅ | LLM with context | ✅ Included |
| Image generation | Limited | IMAGE keyword | ✅ Full |
| Video generation | ❌ | VIDEO keyword | ✅ Available |
| Custom models | ❌ | Any provider | ✅ Flexible |
Cost Comparison
Typical Per-User Cloud Licensing
| License Tier | Monthly Cost | 100 Users/Year |
|---|---|---|
| Basic | $6/user | $7,200 |
| Standard | $12.50/user | $15,000 |
| Premium | $22/user | $26,400 |
| + AI features | $30/user | $36,000 |
| Total Premium + AI | $52/user | $62,400 |
General Bots Self-Hosted
| Component | Monthly Cost | Notes |
|---|---|---|
| Software | $0 | AGPL licensed |
| Infrastructure | $50-200 | Your servers |
| LLM API (optional) | $50-500 | Pay per use |
| Total | $100-700 | Unlimited users |
For 100 users, General Bots costs roughly 2–13% of the Premium + AI tier ($1,200–$8,400 per year versus $62,400) while providing equivalent or better functionality.
Migration Approach
Phase 1: Assessment
Inventory current service usage and map to General Bots equivalents. Most organizations find complete feature coverage for core productivity scenarios.
Phase 2: Parallel Deployment
Run General Bots alongside existing services during transition. Configure identity federation between Zitadel and existing directory services.
Phase 3: Data Migration
Use provided migration tools and APIs:
' Example: Migrate files from external storage
files = GET "https://api.storage.example/files"
FOR EACH file IN files
content = DOWNLOAD file.url
WRITE "/" + file.name, content
NEXT file
Phase 4: Cutover
Redirect DNS, update client configurations, and deprecate cloud subscriptions.
What You Gain
Data Sovereignty - Your data stays on your infrastructure. No third-party access, no cross-border data concerns.
Cost Control - Predictable infrastructure costs instead of per-user licensing that scales with your organization.
Customization - Full source code access. Modify, extend, and integrate as needed.
AI Integration - Native LLM support without additional licensing. Use any provider or run models locally.
Automation Power - BASIC scripting provides more flexibility than visual flow builders with no per-automation limits.
No Vendor Lock-in - Open standards (IMAP, CalDAV, S3, OIDC) mean your data is always portable.
Migration Resources
General Bots provides tools and documentation for smooth migration:
- Import utilities for common data formats
- API compatibility layers for gradual transition
- Identity federation for single sign-on during migration
- Data validation tools to verify migration completeness
Summary
General Bots delivers enterprise productivity features without enterprise pricing:
- 100% API coverage for core productivity services
- Self-hosted deployment with full data sovereignty
- No per-user licensing fees
- Native AI integration without additional cost
- More powerful automation with BASIC scripting
- Open source with full code access
The choice between cloud and self-hosted depends on organizational priorities. For those valuing control, cost efficiency, and customization, General Bots delivers enterprise-grade productivity without enterprise-grade pricing.
See Also
- Quick Start - Deploy in minutes
- Keywords Reference - Full BASIC reference
- REST API Reference - Complete API documentation
- Migration Guide - Detailed migration steps
Projects
Projects organize work and enable team collaboration within General Bots. A project groups related tasks, conversations, documents, and team members into a shared workspace where everyone stays aligned.
Overview
Every bot starts with a default project. Users can create additional projects to organize different initiatives, clients, or departments. When chatting with the bot, selecting a project scopes the conversation to that context—the AI understands what you’re working on and who else is involved.
Projects connect three core capabilities:
- Tasks belong to projects, making it easy to track work across teams
- Conversations can be project-scoped, so the AI has relevant context
- Team members are assigned to projects, enabling collaboration
Creating Projects
Via Chat
User: Create a new project called Website Redesign
Bot: Project "Website Redesign" created. Would you like to add team members?
User: Yes, add maria@company.com and john@company.com
Bot: Added Maria and John to the project. You can now assign tasks and chat within this project context.
Via BASIC
' create-project.bas
PARAM name AS string
PARAM description AS string
DESCRIPTION "Create a new project workspace"
project_id = CREATE PROJECT name WITH DESCRIPTION description
TALK "Project created: " + name
' Add current user as owner
ADD USER TO PROJECT project_id, user.id, "owner"
Via API
POST /api/projects
{
"name": "Website Redesign",
"description": "Q2 website refresh initiative",
"members": ["user-id-1", "user-id-2"]
}
Project Structure
Each project contains:
Project
├── Settings
│ ├── Name and description
│ ├── Visibility (private/team/public)
│ └── Default assignee
├── Members
│ ├── Owners (full control)
│ ├── Members (can edit)
│ └── Viewers (read only)
├── Tasks
│ ├── Boards and columns
│ ├── Task items
│ └── Dependencies
├── Conversations
│ ├── Project chat history
│ └── AI context from project docs
└── Files
└── Project-specific documents
Working with Projects
Switching Project Context
In the chat interface, select a project from the project dropdown. All subsequent conversations use that project’s context:
[Project: Website Redesign ▼]
User: What tasks are due this week?
Bot: In the Website Redesign project, you have 3 tasks due this week:
1. Finalize homepage mockups (Friday)
2. Review navigation structure (Wednesday)
3. Content inventory spreadsheet (Thursday)
Project-Scoped AI
When a project is selected, the AI automatically:
- References project documents in responses
- Knows about team members and their roles
- Understands project-specific terminology from past conversations
- Suggests relevant tasks and next steps
User: Who's working on the design?
Bot: Maria is assigned to the design tasks. She completed the color palette
yesterday and is currently working on the homepage mockups. John is
handling the technical implementation once designs are approved.
Team Collaboration
Project members can chat together with AI assistance:
Maria: @bot summarize what we discussed about the navigation
Bot: Based on your project conversations, the team decided on:
- Simplified top nav with 5 main sections
- Mega menu for product categories
- Mobile-first hamburger menu
John raised concerns about dropdown accessibility that still need addressing.
BASIC Keywords for Projects
CREATE PROJECT
project_id = CREATE PROJECT "Marketing Campaign" WITH DESCRIPTION "Q3 launch"
GET PROJECT
project = GET PROJECT project_id
TALK "Project: " + project.name
TALK "Members: " + LEN(project.members)
TALK "Open tasks: " + project.task_count
LIST PROJECTS
' List user's projects
projects = LIST PROJECTS
FOR EACH p IN projects
TALK p.name + " (" + p.role + ")"
NEXT p
' List projects with filter
active = LIST PROJECTS WHERE "status = 'active'"
ADD USER TO PROJECT
ADD USER TO PROJECT project_id, user_id, "member"
ADD USER TO PROJECT project_id, email, "owner"
REMOVE USER FROM PROJECT
REMOVE USER FROM PROJECT project_id, user_id
SET PROJECT
Set the current conversation’s project context:
SET PROJECT project_id
' Subsequent operations use this project context
CREATE TASK "Review designs" ' Task created in the selected project
DELETE PROJECT
DELETE PROJECT project_id
' Or via dynamic path
DELETE "/projects/" + project_id
API Reference
List Projects
GET /api/projects
Returns projects the authenticated user can access.
Get Project
GET /api/projects/{id}
Returns project details including members and task summary.
Create Project
POST /api/projects
{
"name": "Project Name",
"description": "Optional description",
"visibility": "team",
"members": [
{"user_id": "...", "role": "owner"},
{"user_id": "...", "role": "member"}
]
}
Update Project
PUT /api/projects/{id}
{
"name": "Updated Name",
"description": "Updated description"
}
Delete Project
DELETE /api/projects/{id}
Project Members
GET /api/projects/{id}/members
POST /api/projects/{id}/members
DELETE /api/projects/{id}/members/{user_id}
Project Tasks
GET /api/projects/{id}/tasks
POST /api/projects/{id}/tasks
Project Conversations
GET /api/projects/{id}/conversations
Database Schema
Projects are stored in the projects table:
| Column | Type | Description |
|---|---|---|
| id | UUID | Project identifier |
| bot_id | UUID | Owning bot |
| name | TEXT | Project name |
| description | TEXT | Optional description |
| visibility | TEXT | private, team, or public |
| settings | JSONB | Project configuration |
| created_by | UUID | Creator user ID |
| created_at | TIMESTAMP | Creation time |
| updated_at | TIMESTAMP | Last update |
Project membership in project_members:
| Column | Type | Description |
|---|---|---|
| project_id | UUID | Project reference |
| user_id | UUID | User reference |
| role | TEXT | owner, member, or viewer |
| joined_at | TIMESTAMP | When user joined |
Default Project
Every bot has a default project that cannot be deleted. Tasks created without specifying a project go here. Users can:
- Rename the default project
- Move tasks from default to specific projects
- Use the default for personal/unorganized work
' Get the default project
default = GET DEFAULT PROJECT
TALK "Default project: " + default.name
Project Templates
Create projects from templates for common scenarios:
' Create from template
project_id = CREATE PROJECT FROM TEMPLATE "client-onboarding", "Acme Corp Onboarding"
' Available templates
templates = LIST PROJECT TEMPLATES
Built-in templates include:
- Client Onboarding - Tasks for new client setup
- Product Launch - Launch checklist and milestones
- Sprint - Two-week sprint with standard ceremonies
- Content Calendar - Monthly content planning
Best Practices
Keep projects focused. A project should represent a distinct initiative with clear boundaries. If a project grows too large, consider splitting it.
Assign clear ownership. Every project needs at least one owner responsible for keeping it organized and moving forward.
Use project context in chat. When discussing project-specific topics, select the project first so the AI has full context.
Archive completed projects. Rather than deleting, archive finished projects to preserve history:
UPDATE PROJECT project_id SET status = "archived"
Review project membership regularly. Remove users who are no longer involved to keep conversations relevant.
Integration with Tasks
Tasks belong to exactly one project. The task view opens on the default project, with options to filter by a specific project or to view all tasks across projects.
' Create task in specific project
SET PROJECT project_id
CREATE TASK "Design review" DUE DATEADD(NOW(), 7, "day")
' Or specify project directly
CREATE TASK "Design review" IN PROJECT project_id
See Also
- Tasks API - Task management endpoints
- Conversations API - Chat history
- Groups API - User group management
- SET CONTEXT - AI context configuration
Multi-Agent Office Suite - Complete Design Document
🎯 Vision: Beat Microsoft 365, Google Workspace & All AI Competitors
General Bots = Multi-Agent AI + Complete Office Suite + Research Engine + Banking + Everything
This document outlines the complete implementation plan to make General Bots the world’s most powerful FREE enterprise platform.
📋 Table of Contents
- BOT Keyword - Multi-Agent System
- Chat UI Enhancements
- Conversational Banking (bank.gbai)
- Excel Clone (HTMX/Rust)
- Word Editor for .docx
- M365/Office Competitive Analysis
- Google/MS Graph API Compatibility
- Copilot/Gemini Feature Parity
- Attachment System (Plus Button)
- Conversation Branching
- PLAY Keyword - Content Projector
- Implementation Priority
1. BOT Keyword - Multi-Agent System
Concept
Every conversation becomes a group conversation where multiple specialized bots can participate. Bots join based on triggers (tools, schedules, keywords) and collaborate to answer complex queries.
Keywords
' Add a bot to the conversation
ADD BOT "finance-expert" WITH TRIGGER "money, budget, invoice, payment"
ADD BOT "legal-advisor" WITH TRIGGER "contract, agreement, compliance"
ADD BOT "hr-assistant" WITH TRIGGER "employee, vacation, hiring"
' Add bot with tool-based trigger
ADD BOT "data-analyst" WITH TOOLS "AGGREGATE, CHART, REPORT"
' Add bot with schedule-based participation
ADD BOT "daily-reporter" WITH SCHEDULE "0 9 * * *"
' Remove bot from conversation
REMOVE BOT "finance-expert"
' List active bots
bots = LIST BOTS
' Set bot priority (who answers first)
SET BOT PRIORITY "legal-advisor", 1
' Bot-to-bot delegation
DELEGATE TO "specialist-bot" WITH CONTEXT current_conversation
' Create bot swarm for complex tasks
CREATE SWARM "research-team" WITH BOTS "researcher, analyst, writer"
Architecture
┌─────────────────────────────────────────────────────────────────┐
│ CONVERSATION ORCHESTRATOR │
├─────────────────────────────────────────────────────────────────┤
│ │
│ User Message ──▶ Trigger Analyzer ──▶ Bot Selector │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────┐ ┌──────────────┐ │
│ │ Keyword Triggers │ │ Tool Triggers │ │
│ │ - finance terms │ │ - AGGREGATE │ │
│ │ - legal terms │ │ - CHART │ │
│ │ - hr terms │ │ - specific │ │
│ └─────────────────┘ └──────────────┘ │
│ │ │ │
│ ▼ ▼ │
│ ┌─────────────────────────────────────┐ │
│ │ BOT RESPONSE AGGREGATOR │ │
│ │ - Merge responses │ │
│ │ - Resolve conflicts │ │
│ │ - Format for user │ │
│ └─────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
Database Schema
-- Bot definitions
CREATE TABLE bots (
id UUID PRIMARY KEY,
name VARCHAR(255) NOT NULL,
description TEXT,
system_prompt TEXT,
model_config JSONB,
tools JSONB,
is_active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW()
);
-- Bot triggers
CREATE TABLE bot_triggers (
id UUID PRIMARY KEY,
bot_id UUID REFERENCES bots(id),
trigger_type VARCHAR(50), -- 'keyword', 'tool', 'schedule', 'event'
trigger_config JSONB,
priority INT DEFAULT 0,
is_active BOOLEAN DEFAULT true
);
-- Session bot associations
CREATE TABLE session_bots (
id UUID PRIMARY KEY,
session_id UUID,
bot_id UUID REFERENCES bots(id),
joined_at TIMESTAMPTZ DEFAULT NOW(),
priority INT DEFAULT 0,
is_active BOOLEAN DEFAULT true
);
-- Bot message history
CREATE TABLE bot_messages (
id UUID PRIMARY KEY,
session_id UUID,
bot_id UUID REFERENCES bots(id),
content TEXT,
role VARCHAR(50),
created_at TIMESTAMPTZ DEFAULT NOW()
);
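trigger_config is free-form JSONB. Plausible shapes (assumptions that mirror the BotTrigger struct in the Rust sketch below) are {"keywords": ["money", "budget", "invoice"]} for a keyword trigger, or {"cron": "0 9 * * *"} for a schedule trigger.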
Rust Implementation
// src/basic/keywords/add_bot.rs
use crate::shared::models::UserSession;
use crate::shared::state::AppState;
use rhai::{Dynamic, Engine};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use uuid::Uuid;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BotTrigger {
    pub trigger_type: TriggerType,
    pub keywords: Option<Vec<String>>,
    pub tools: Option<Vec<String>>,
    pub schedule: Option<String>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum TriggerType {
    Keyword,
    Tool,
    Schedule,
    Event,
}

pub fn add_bot_keyword(state: Arc<AppState>, user: UserSession, engine: &mut Engine) {
    let state_clone = Arc::clone(&state);
    let user_clone = user.clone();

    // ADD BOT "name" WITH TRIGGER "keywords"
    engine.register_custom_syntax(
        &["ADD", "BOT", "$expr$", "WITH", "TRIGGER", "$expr$"],
        false,
        move |context, inputs| {
            let bot_name = context.eval_expression_tree(&inputs[0])?.to_string();
            let trigger = context.eval_expression_tree(&inputs[1])?.to_string();
            let state_for_thread = Arc::clone(&state_clone);
            let session_id = user_clone.id;
            let (tx, rx) = std::sync::mpsc::channel();
            std::thread::spawn(move || {
                let rt = tokio::runtime::Runtime::new().unwrap();
                let result = rt.block_on(async {
                    add_bot_to_session(
                        &state_for_thread,
                        session_id,
                        &bot_name,
                        BotTrigger {
                            trigger_type: TriggerType::Keyword,
                            keywords: Some(
                                trigger.split(',').map(|s| s.trim().to_string()).collect(),
                            ),
                            tools: None,
                            schedule: None,
                        },
                    )
                    .await
                });
                let _ = tx.send(result);
            });
            match rx.recv_timeout(std::time::Duration::from_secs(30)) {
                Ok(Ok(msg)) => Ok(Dynamic::from(msg)),
                Ok(Err(e)) => Err(Box::new(rhai::EvalAltResult::ErrorRuntime(
                    e.into(),
                    rhai::Position::NONE,
                ))),
                Err(_) => Err(Box::new(rhai::EvalAltResult::ErrorRuntime(
                    "ADD BOT timed out".into(),
                    rhai::Position::NONE,
                ))),
            }
        },
    );

    // ADD BOT "name" WITH TOOLS "tool1, tool2"
    // Registered with the same pattern as the TRIGGER form, using TriggerType::Tool.

    // ADD BOT "name" WITH SCHEDULE "cron"
    // Registered with the same pattern, using TriggerType::Schedule.
}

async fn add_bot_to_session(
    state: &AppState,
    session_id: Uuid,
    bot_name: &str,
    trigger: BotTrigger,
) -> Result<String, String> {
    // Insert into session_bots and persist the trigger (implementation elsewhere).
    todo!()
}
Multi-Agent Orchestrator
// src/core/multi_agent.rs
use std::collections::HashMap;
use std::sync::Arc;
use uuid::Uuid;

use crate::shared::state::AppState;

// BotInstance, BotResponse, TriggerType and Error are defined alongside the
// bot models; only the orchestration logic is sketched here.
pub struct MultiAgentOrchestrator {
    state: Arc<AppState>,
    active_bots: HashMap<Uuid, BotInstance>,
}

impl MultiAgentOrchestrator {
    pub async fn process_message(
        &self,
        session_id: Uuid,
        message: &str,
    ) -> Result<Vec<BotResponse>, Error> {
        // 1. Get all active bots for this session
        let bots = self.get_session_bots(session_id).await?;
        // 2. Analyze the message and match triggers
        let matching_bots = self.match_triggers(&bots, message).await;
        // 3. If no specific bot matches, fall back to the default bot
        if matching_bots.is_empty() {
            return self.default_bot_response(session_id, message).await;
        }
        // 4. Collect responses from all matching bots
        let mut responses = Vec::new();
        for bot in matching_bots {
            let response = self.get_bot_response(&bot, session_id, message).await?;
            responses.push(response);
        }
        // 5. Aggregate the responses into the final answer
        let final_response = self.aggregate_responses(responses).await?;
        Ok(final_response)
    }

    async fn match_triggers(&self, bots: &[BotInstance], message: &str) -> Vec<BotInstance> {
        let mut matching = Vec::new();
        let message_lower = message.to_lowercase();
        for bot in bots {
            if let Some(trigger) = &bot.trigger {
                match trigger.trigger_type {
                    TriggerType::Keyword => {
                        if let Some(keywords) = &trigger.keywords {
                            for keyword in keywords {
                                if message_lower.contains(&keyword.to_lowercase()) {
                                    matching.push(bot.clone());
                                    break;
                                }
                            }
                        }
                    }
                    TriggerType::Tool => {
                        // Check whether the message implies using specific tools
                    }
                    _ => {}
                }
            }
        }
        // Sort by priority, highest first
        matching.sort_by(|a, b| b.priority.cmp(&a.priority));
        matching
    }

    async fn aggregate_responses(
        &self,
        responses: Vec<BotResponse>,
    ) -> Result<Vec<BotResponse>, Error> {
        // Use an LLM to merge multiple bot responses into one coherent answer,
        // or return all responses with per-bot attribution.
        Ok(responses)
    }
}
2. Chat UI Enhancements
2.1 Poe/Perplexity-Style Features
Chat Interface Components
<!-- botserver/ui/suite/chat/enhanced-chat.html -->
<div class="chat-container" id="chat-app" hx-ext="ws" ws-connect="/ws">
<!-- Bot Selector Bar (Poe-style) -->
<div class="bot-selector-bar">
<div class="active-bots"
hx-get="/api/chat/active-bots"
hx-trigger="load, bot-changed from:body"
hx-swap="innerHTML">
<!-- Active bots appear here -->
</div>
<button class="add-bot-btn"
hx-get="/api/bots/available"
hx-target="#bot-modal"
hx-swap="innerHTML">
+ Add Bot
</button>
</div>
<!-- Connection Status -->
<div id="connection-status" class="connection-status">
<span class="status-dot"></span>
<span class="status-text">Connected</span>
</div>
<!-- Messages with Bot Attribution -->
<main id="messages" class="messages-container">
<!-- Messages load here with bot avatars and names -->
</main>
<!-- Typing Indicators for Multiple Bots -->
<div id="typing-indicators" class="typing-indicators hidden">
<!-- Shows which bots are "thinking" -->
</div>
<!-- Enhanced Input Area -->
<footer class="input-footer">
<!-- Suggestions -->
<div class="suggestions-container" id="suggestions"
hx-get="/api/suggestions"
hx-trigger="load"
hx-swap="innerHTML">
</div>
<!-- Attachment Preview -->
<div id="attachment-preview" class="attachment-preview hidden">
<!-- Previews of attached files -->
</div>
<!-- Input Form -->
<form class="input-container"
hx-post="/api/chat/send"
hx-target="#messages"
hx-swap="beforeend"
hx-encoding="multipart/form-data"
hx-on::after-request="this.reset(); clearAttachments();">
<!-- Plus Button for Attachments -->
<div class="attachment-menu">
<button type="button" class="plus-btn" onclick="toggleAttachmentMenu()">
<span>+</span>
</button>
<div id="attachment-dropdown" class="attachment-dropdown hidden">
<button type="button" onclick="attachImage()">
📷 Image
</button>
<button type="button" onclick="attachDocument()">
📄 Document
</button>
<button type="button" onclick="attachAudio()">
🎵 Audio
</button>
<button type="button" onclick="attachVideo()">
🎬 Video
</button>
<button type="button" onclick="attachCode()">
💻 Code
</button>
<button type="button" onclick="useCamera()">
📸 Camera
</button>
<button type="button" onclick="useScreenshot()">
🖥️ Screenshot
</button>
</div>
</div>
<!-- Hidden file inputs -->
<input type="file" id="image-input" accept="image/*" multiple hidden>
<input type="file" id="document-input" accept=".pdf,.doc,.docx,.xls,.xlsx,.ppt,.pptx,.txt,.csv" multiple hidden>
<input type="file" id="audio-input" accept="audio/*" hidden>
<input type="file" id="video-input" accept="video/*" hidden>
<input type="file" id="code-input" accept=".js,.ts,.py,.rs,.go,.java,.c,.cpp,.h,.css,.html,.json,.yaml,.xml,.sql,.sh,.bas" hidden>
<!-- Message Input -->
<textarea
name="content"
id="message-input"
placeholder="Message... (@ to mention a bot)"
rows="1"
autofocus
required
></textarea>
<!-- Voice Button -->
<button type="button" id="voice-btn" title="Voice Input"
hx-post="/api/voice/start"
hx-swap="none">
🎤
</button>
<!-- Send Button -->
<button type="submit" id="send-btn" title="Send">
↑
</button>
</form>
</footer>
<!-- Branch Indicator -->
<div id="branch-indicator" class="branch-indicator hidden">
<span>Branch from message #<span id="branch-from"></span></span>
<button onclick="cancelBranch()">Cancel</button>
</div>
<!-- Scroll to Bottom -->
<button class="scroll-to-bottom hidden" id="scroll-to-bottom">↓</button>
<!-- Projector/Player Modal -->
<div id="projector-modal" class="projector-modal hidden">
<div class="projector-header">
<span id="projector-title">Content Viewer</span>
<button onclick="closeProjector()">✕</button>
</div>
<div id="projector-content" class="projector-content">
<!-- Content plays here -->
</div>
<div class="projector-controls">
<button onclick="projectorPrev()">◀</button>
<button onclick="projectorPlayPause()">⏯</button>
<button onclick="projectorNext()">▶</button>
<button onclick="projectorFullscreen()">⛶</button>
</div>
</div>
</div>
2.2 Simple Chat/Talk UIs
Intercom-Style Widget
<!-- botserver/ui/widgets/intercom.html -->
<div class="intercom-widget" id="intercom-widget">
<button class="intercom-trigger" onclick="toggleIntercom()">
<span class="intercom-icon">💬</span>
<span class="intercom-badge" id="unread-count">0</span>
</button>
<div class="intercom-panel hidden" id="intercom-panel">
<div class="intercom-header">
<img src="/static/bot-avatar.png" class="bot-avatar">
<div class="bot-info">
<span class="bot-name">Assistant</span>
<span class="bot-status">Online</span>
</div>
<button onclick="closeIntercom()">✕</button>
</div>
<div class="intercom-messages" id="intercom-messages"
hx-get="/api/chat/messages"
hx-trigger="load"
hx-swap="innerHTML">
</div>
<form class="intercom-input"
hx-post="/api/chat/send"
hx-target="#intercom-messages"
hx-swap="beforeend">
<input type="text" name="content" placeholder="Type a message...">
<button type="submit">Send</button>
</form>
</div>
</div>
<style>
.intercom-widget {
position: fixed;
bottom: 20px;
right: 20px;
z-index: 9999;
}
.intercom-trigger {
width: 60px;
height: 60px;
border-radius: 50%;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
border: none;
cursor: pointer;
box-shadow: 0 4px 12px rgba(0,0,0,0.15);
display: flex;
align-items: center;
justify-content: center;
font-size: 24px;
position: relative;
}
.intercom-badge {
position: absolute;
top: -5px;
right: -5px;
background: #ff4444;
color: white;
border-radius: 50%;
width: 20px;
height: 20px;
font-size: 12px;
display: flex;
align-items: center;
justify-content: center;
}
.intercom-panel {
position: absolute;
bottom: 70px;
right: 0;
width: 350px;
height: 500px;
background: white;
border-radius: 16px;
box-shadow: 0 8px 32px rgba(0,0,0,0.15);
display: flex;
flex-direction: column;
overflow: hidden;
}
.intercom-header {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
color: white;
padding: 16px;
display: flex;
align-items: center;
gap: 12px;
}
.intercom-messages {
flex: 1;
overflow-y: auto;
padding: 16px;
}
.intercom-input {
padding: 12px;
border-top: 1px solid #eee;
display: flex;
gap: 8px;
}
.intercom-input input {
flex: 1;
padding: 10px;
border: 1px solid #ddd;
border-radius: 20px;
outline: none;
}
.intercom-input button {
padding: 10px 20px;
background: #667eea;
color: white;
border: none;
border-radius: 20px;
cursor: pointer;
}
</style>
PTT (Push-to-Talk) Interface
<!-- botserver/ui/widgets/ptt.html -->
<div class="ptt-interface" id="ptt-interface">
<div class="ptt-status" id="ptt-status">
<span class="status-icon">🔇</span>
<span class="status-text">Press and hold to talk</span>
</div>
<div class="ptt-visualizer" id="ptt-visualizer">
<!-- Audio waveform visualization -->
<canvas id="waveform-canvas"></canvas>
</div>
<button class="ptt-button"
id="ptt-button"
onmousedown="startRecording()"
onmouseup="stopRecording()"
ontouchstart="startRecording()"
ontouchend="stopRecording()">
<span class="ptt-icon">🎤</span>
<span class="ptt-label">PUSH TO TALK</span>
</button>
<div class="ptt-response" id="ptt-response">
<!-- Bot response plays here -->
</div>
<div class="ptt-history" id="ptt-history">
<!-- Conversation history -->
</div>
</div>
<style>
.ptt-interface {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
min-height: 100vh;
background: linear-gradient(180deg, #1a1a2e 0%, #16213e 100%);
color: white;
padding: 20px;
}
.ptt-button {
width: 150px;
height: 150px;
border-radius: 50%;
background: linear-gradient(145deg, #e74c3c 0%, #c0392b 100%);
border: 4px solid #fff;
cursor: pointer;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
font-size: 40px;
box-shadow: 0 8px 24px rgba(231, 76, 60, 0.4);
transition: all 0.2s ease;
user-select: none;
}
.ptt-button:active {
transform: scale(0.95);
background: linear-gradient(145deg, #27ae60 0%, #1e8449 100%);
box-shadow: 0 4px 16px rgba(39, 174, 96, 0.6);
}
.ptt-button.recording {
animation: pulse 1s infinite;
}
@keyframes pulse {
0% { box-shadow: 0 0 0 0 rgba(39, 174, 96, 0.7); }
70% { box-shadow: 0 0 0 30px rgba(39, 174, 96, 0); }
100% { box-shadow: 0 0 0 0 rgba(39, 174, 96, 0); }
}
.ptt-visualizer {
width: 100%;
max-width: 300px;
height: 100px;
margin: 20px 0;
}
.ptt-status {
margin-bottom: 20px;
font-size: 18px;
display: flex;
align-items: center;
gap: 10px;
}
</style>
<script>
let mediaRecorder;
let mediaStream;
let audioChunks = [];
async function startRecording() {
const button = document.getElementById('ptt-button');
button.classList.add('recording');
mediaStream = await navigator.mediaDevices.getUserMedia({ audio: true });
mediaRecorder = new MediaRecorder(mediaStream);
mediaRecorder.ondataavailable = (event) => {
audioChunks.push(event.data);
};
mediaRecorder.start();
document.getElementById('ptt-status').innerHTML =
'<span class="status-icon">🔴</span><span class="status-text">Recording...</span>';
}
async function stopRecording() {
if (!mediaRecorder || mediaRecorder.state === 'inactive') return;
const button = document.getElementById('ptt-button');
button.classList.remove('recording');
// Attach the handler before stopping so the final audio chunk is never missed.
mediaRecorder.onstop = async () => {
const audioBlob = new Blob(audioChunks, { type: 'audio/webm' });
audioChunks = [];
// Release the microphone between recordings.
mediaStream.getTracks().forEach((track) => track.stop());
// Send to server for transcription and response
const formData = new FormData();
formData.append('audio', audioBlob);
document.getElementById('ptt-status').innerHTML =
'<span class="status-icon">⏳</span><span class="status-text">Processing...</span>';
const response = await fetch('/api/voice/ptt', {
method: 'POST',
body: formData
});
const result = await response.json();
// Play response audio
if (result.audio_url) {
const audio = new Audio(result.audio_url);
audio.play();
}
document.getElementById('ptt-status').innerHTML =
'<span class="status-icon">🔇</span><span class="status-text">Press and hold to talk</span>';
};
mediaRecorder.stop();
}
</script>
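On the server side, /api/voice/ptt receives the recorded blob as multipart form data. A minimal sketch of such a handler, assuming an axum backend; transcribe and synthesize are hypothetical stubs for the STT → LLM → TTS pipeline, not part of the documented codebase:
use axum::{extract::Multipart, Json};
use serde::Serialize;

#[derive(Serialize)]
struct PttResponse {
    text: String,
    audio_url: Option<String>,
}

async fn ptt_handler(mut multipart: Multipart) -> Json<PttResponse> {
    // Pull the "audio" part uploaded by the PTT widget above.
    while let Ok(Some(field)) = multipart.next_field().await {
        if field.name() == Some("audio") {
            let bytes = field.bytes().await.unwrap_or_default();
            let text = transcribe(&bytes).await;
            let audio_url = synthesize(&text).await;
            return Json(PttResponse { text, audio_url: Some(audio_url) });
        }
    }
    Json(PttResponse { text: String::new(), audio_url: None })
}

// Placeholder stubs; a real implementation calls the configured STT/TTS engines.
async fn transcribe(_audio: &[u8]) -> String {
    String::from("(transcribed text)")
}
async fn synthesize(_text: &str) -> String {
    String::from("/static/tts/response.ogg")
}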
Totem/Kiosk Interface
<!-- botserver/ui/widgets/totem.html -->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Bot Totem</title>
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Segoe UI', sans-serif;
background: linear-gradient(135deg, #0f0f23 0%, #1a1a3e 100%);
color: white;
min-height: 100vh;
display: flex;
flex-direction: column;
overflow: hidden;
}
.totem-header {
padding: 30px;
text-align: center;
background: rgba(255,255,255,0.05);
}
.totem-logo {
font-size: 48px;
margin-bottom: 10px;
}
.totem-title {
font-size: 24px;
font-weight: 300;
}
.totem-main {
flex: 1;
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
padding: 40px;
}
.avatar-container {
width: 200px;
height: 200px;
border-radius: 50%;
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
display: flex;
align-items: center;
justify-content: center;
font-size: 80px;
margin-bottom: 40px;
box-shadow: 0 0 60px rgba(102, 126, 234, 0.5);
animation: breathe 3s infinite ease-in-out;
}
@keyframes breathe {
0%, 100% { transform: scale(1); }
50% { transform: scale(1.05); }
}
.avatar-container.listening {
animation: listening 0.5s infinite ease-in-out;
box-shadow: 0 0 80px rgba(39, 174, 96, 0.8);
}
@keyframes listening {
0%, 100% { transform: scale(1); }
50% { transform: scale(1.1); }
}
.message-display {
text-align: center;
font-size: 28px;
max-width: 800px;
line-height: 1.5;
margin-bottom: 40px;
}
.quick-actions {
display: grid;
grid-template-columns: repeat(3, 1fr);
gap: 20px;
max-width: 600px;
}
.quick-action {
padding: 30px 20px;
background: rgba(255,255,255,0.1);
border: 1px solid rgba(255,255,255,0.2);
border-radius: 16px;
cursor: pointer;
text-align: center;
transition: all 0.3s ease;
}
.quick-action:hover {
background: rgba(255,255,255,0.2);
transform: translateY(-5px);
}
.quick-action-icon {
font-size: 40px;
margin-bottom: 10px;
}
.quick-action-label {
font-size: 16px;
}
.totem-footer {
padding: 20px;
text-align: center;
background: rgba(0,0,0,0.3);
}
.touch-hint {
display: flex;
align-items: center;
justify-content: center;
gap: 10px;
font-size: 18px;
opacity: 0.7;
}
</style>
</head>
<body>
<header class="totem-header">
<div class="totem-logo">🤖</div>
<h1 class="totem-title">How can I help you today?</h1>
</header>
<main class="totem-main">
<div class="avatar-container" id="avatar">
🤖
</div>
<div class="message-display" id="message">
Touch any option below or tap the microphone to speak
</div>
<div class="quick-actions">
<div class="quick-action" onclick="quickAction('directions')">
<div class="quick-action-icon">🗺️</div>
<div class="quick-action-label">Directions</div>
</div>
<div class="quick-action" onclick="quickAction('schedule')">
<div class="quick-action-icon">📅</div>
<div class="quick-action-label">Schedule</div>
</div>
<div class="quick-action" onclick="quickAction('services')">
<div class="quick-action-icon">🏢</div>
<div class="quick-action-label">Services</div>
</div>
<div class="quick-action" onclick="quickAction('contact')">
<div class="quick-action-icon">📞</div>
<div class="quick-action-label">Contact</div>
</div>
<div class="quick-action" onclick="startVoice()">
<div class="quick-action-icon">🎤</div>
<div class="quick-action-label">Speak</div>
</div>
<div class="quick-action" onclick="quickAction('help')">
<div class="quick-action-icon">❓</div>
<div class="quick-action-label">Help</div>
</div>
</div>
</main>
<footer class="totem-footer">
<div class="touch-hint">
<span>👆</span>
<span>Touch to interact</span>
</div>
</footer>
<script>
async function quickAction(action) {
document.getElementById('message').textContent = 'Processing...';
const response = await fetch('/api/totem/action', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ action })
});
const result = await response.json();
document.getElementById('message').textContent = result.message;
if (result.audio_url) {
const audio = new Audio(result.audio_url);
audio.play();
}
}
async function startVoice() {
const avatar = document.getElementById('avatar');
avatar.classList.add('listening');
document.getElementById('message').textContent = 'Listening...';
// Implement voice recognition
}
</script>
</body>
</html>
3. Conversational Banking (bank.gbai)
Complete Banking Template
templates/bank.gbai/
├── bank.gbdialog/
│ └── start.json
├── bank.gbot/
│ └── config.csv
├── bank.gbkb/
│ └── banking-faq.md
├── dialogs/
│ ├── account.bas
│ ├── transfer.bas
│ ├── payment.bas
│ ├── loan.bas
│ ├── investment.bas
│ ├── cards.bas
│ └── support.bas
├── tables/
│ ├── accounts.csv
│ ├── transactions.csv
│ ├── cards.csv
│ ├── loans.csv
│ ├── beneficiaries.csv
│ └── scheduled_payments.csv
└── README.md
Bank Configuration
# bank.gbot/config.csv
key,value
bank-name,General Bank
bank-code,001
swift-code,GENBBRSP
support-phone,0800-123-4567
support-email,support@generalbank.com
pix-enabled,true
ted-enabled,true
doc-enabled,true
boleto-enabled,true
credit-card-enabled,true
debit-card-enabled,true
investment-enabled,true
loan-enabled,true
insurance-enabled,true
two-factor-auth,true
transaction-limit-default,5000.00
daily-limit-default,20000.00
Account Management
' dialogs/account.bas
' Show account balance
SUB ShowBalance()
user_id = GET USER ID
accounts = FIND "accounts.csv" WHERE user_id = user_id
IF LEN(accounts) = 0 THEN
TALK "You don't have any accounts registered. Would you like to open one?"
RETURN
END IF
TALK "Here are your account balances:"
TALK ""
total = 0
FOR EACH account IN accounts
TALK "📊 **" + account.account_type + " Account**"
TALK " Account: " + account.account_number
TALK " Balance: R$ " + FORMAT(account.balance, "0.00")
TALK " Available: R$ " + FORMAT(account.available_balance, "0.00")
TALK ""
total = total + account.balance
NEXT
TALK "💰 **Total Balance: R$ " + FORMAT(total, "0.00") + "**"
END SUB
' Show recent transactions
SUB ShowTransactions(account_number, days)
IF days = "" THEN days = 30 END IF
start_date = DATEADD(NOW(), -days, "day")
transactions = FIND "transactions.csv" WHERE account_number = account_number AND date >= start_date ORDER BY date DESC LIMIT 20
IF LEN(transactions) = 0 THEN
TALK "No transactions found in the last " + days + " days."
RETURN
END IF
TALK "📋 **Recent Transactions**"
TALK ""
FOR EACH tx IN transactions
IF tx.type = "credit" THEN
icon = "💵"
sign = "+"
ELSE
icon = "💸"
sign = "-"
END IF
TALK icon + " " + FORMAT(tx.date, "dd/MM") + " | " + tx.description
TALK " " + sign + "R$ " + FORMAT(tx.amount, "0.00") + " | Balance: R$ " + FORMAT(tx.balance_after, "0.00")
TALK ""
NEXT
END SUB
' Generate account statement
SUB GenerateStatement(account_number, start_date, end_date)
transactions = FIND "transactions.csv" WHERE account_number = account_number AND date >= start_date AND date <= end_date ORDER BY date
TABLE statement
COLUMN "Date" FORMAT "dd/MM/yyyy"
COLUMN "Description"
COLUMN "Type"
COLUMN "Amount" FORMAT "R$ #,##0.00"
COLUMN "Balance" FORMAT "R$ #,##0.00"
FOR EACH tx IN transactions
ROW tx.date, tx.description, tx.type, tx.amount, tx.balance_after
NEXT
END TABLE
' Export to PDF
pdf_file = EXPORT TABLE statement TO "pdf" WITH TITLE "Account Statement - " + account_number
TALK "Your statement is ready!"
TALK "📄 [Download Statement](" + pdf_file + ")"
' Send by email
email = GET USER email
IF email <> "" THEN
SEND MAIL email, "Your Account Statement", "Please find attached your account statement.", pdf_file
TALK "I've also sent a copy to your email."
END IF
END SUB
' Open new account
SUB OpenAccount(account_type)
user_id = GET USER ID
user = GET USER
' Verify KYC
IF NOT user.kyc_verified THEN
TALK "To open a new account, we need to verify your identity first."
CALL VerifyKYC()
RETURN
END IF
' Generate account number
account_number = GenerateAccountNumber()
' Create account
TABLE new_account
ROW account_number, user_id, account_type, 0.00, 0.00, NOW(), "active"
END TABLE
SAVE "accounts.csv", new_account
TALK "🎉 Congratulations! Your " + account_type + " account has been created!"
TALK ""
TALK "📋 **Account Details**"
TALK "Account Number: " + account_number
TALK "Type: " + account_type
TALK "Status: Active"
TALK ""
TALK "Your virtual debit card is being generated..."
' Create virtual card
CALL CreateVirtualCard(account_number)
END SUB
FUNCTION GenerateAccountNumber()
' Generate unique account number
branch = "0001"
sequence = GET BOT MEMORY "account_sequence"
IF sequence = "" THEN sequence = 10000 END IF
sequence = sequence + 1
SET BOT MEMORY "account_sequence", sequence
account = branch + "-" + FORMAT(sequence, "000000")
digit = CalculateCheckDigit(account)
RETURN account + "-" + digit
END FUNCTION
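CalculateCheckDigit is left unspecified above. As a placeholder, here is a mod-10 (Luhn-style) routine in Rust; real account check digits are bank-specific, so treat this purely as a sketch:
// Hypothetical mod-10 (Luhn) check digit over the digits of "branch-sequence".
fn calculate_check_digit(account: &str) -> u32 {
    let sum: u32 = account
        .chars()
        .filter_map(|c| c.to_digit(10))
        .rev()
        .enumerate()
        .map(|(i, d)| {
            // Double every second digit from the right, folding >9 back to one digit.
            let d = if i % 2 == 0 { d * 2 } else { d };
            if d > 9 { d - 9 } else { d }
        })
        .sum();
    (10 - (sum % 10)) % 10
}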
Money Transfers
' dialogs/transfer.bas
' PIX Transfer
SUB PIXTransfer()
TALK "Let's make a PIX transfer. What type of key will you use?"
ADD SUGGESTION "CPF/CNPJ"
ADD SUGGESTION "Phone"
ADD SUGGESTION "Email"
ADD SUGGESTION "Random Key"
key_type = HEAR
TALK "Enter the PIX key:"
pix_key = HEAR
' Validate and get recipient info
recipient = ValidatePIXKey(key_type, pix_key)
IF recipient.error THEN
TALK "❌ Invalid PIX key. Please check and try again."
RETURN
END IF
TALK "Recipient: **" + recipient.name + "**"
TALK "Bank: " + recipient.bank_name
TALK ""
TALK "Enter the amount to transfer:"
amount = HEAR
amount = ParseMoney(amount)
' Check balance and limits
account = GET USER primary_account
IF amount > account.available_balance THEN
TALK "❌ Insufficient balance. Available: R$ " + FORMAT(account.available_balance, "0.00")
RETURN
END IF
daily_used = GetDailyTransferTotal(account.account_number)
daily_limit = GET USER daily_transfer_limit
IF daily_used + amount > daily_limit THEN
TALK "❌ This transfer would exceed your daily limit."
TALK "Daily limit: R$ " + FORMAT(daily_limit, "0.00")
TALK "Already used: R$ " + FORMAT(daily_used, "0.00")
TALK "Available: R$ " + FORMAT(daily_limit - daily_used, "0.00")
RETURN
END IF
' Confirm transaction
TALK "📤 **Transfer Summary**"
TALK "To: " + recipient.name
TALK "PIX Key: " + MaskPIXKey(pix_key)
TALK "Amount: R$ " + FORMAT(amount, "0.00")
TALK ""
TALK "Confirm this transfer?"
ADD SUGGESTION "Yes, confirm"
ADD SUGGESTION "No, cancel"
confirmation = HEAR
IF confirmation CONTAINS "yes" OR confirmation CONTAINS "confirm" THEN
' Request 2FA
TALK "For your security, enter the code sent to your phone:"
code = HEAR
IF NOT Verify2FA(code) THEN
TALK "❌ Invalid code. Transfer cancelled for security."
RETURN
END IF
' Execute transfer
result = ExecutePIXTransfer(account.account_number, recipient, amount)
IF result.success THEN
TALK "✅ **Transfer completed!**"
TALK "Transaction ID: " + result.transaction_id
TALK "New balance: R$ " + FORMAT(result.new_balance, "0.00")
' Save transaction
TABLE transaction
ROW result.transaction_id, account.account_number, "pix_out", amount, result.new_balance, NOW(), recipient.pix_key, recipient.name, "completed"
END TABLE
SAVE "transactions.csv", transaction
ELSE
TALK "❌ Transfer failed: " + result.error
END IF
ELSE
TALK "Transfer cancelled."
END IF
END SUB
' TED Transfer
SUB TEDTransfer()
TALK "Let's make a TED transfer."
' Get recipient bank info
TALK "Enter the bank code (e.g., 001 for Banco do Brasil):"
bank_code = HEAR
TALK "Enter the branch number:"
branch = HEAR
TALK "Enter the account number (with digit):"
account_number = HEAR
TALK "Enter the recipient's full name:"
recipient_name = HEAR
TALK "Enter the recipient's CPF/CNPJ:"
document = HEAR
TALK "Enter the amount to transfer:"
amount = HEAR
amount = ParseMoney(amount)
' Validate and process similar to PIX
' ... (similar flow with bank validation)
END SUB
' Schedule recurring transfer
SUB ScheduleTransfer()
TALK "Let's schedule a recurring transfer."
TALK "How often should the transfer occur?"
ADD SUGGESTION "Weekly"
ADD SUGGESTION "Monthly"
ADD SUGGESTION "Custom"
frequency = HEAR
' Get transfer details
TALK "Enter the PIX key of the recipient:"
pix_key = HEAR
TALK "Enter the amount:"
amount = HEAR
TALK "When should the first transfer occur?"
start_date = HEAR
' Create scheduled payment
TABLE scheduled
ROW GenerateID(), GET USER ID, "pix", pix_key, amount, frequency, start_date, "active"
END TABLE
SAVE "scheduled_payments.csv", scheduled
' Set up the schedule
SET SCHEDULE frequency WITH START start_date
CALL ExecuteScheduledTransfer(scheduled.id)
END SCHEDULE
TALK "✅ Recurring transfer scheduled!"
TALK "First transfer: " + FORMAT(start_date, "dd/MM/yyyy")
TALK "Frequency: " + frequency
TALK "Amount: R$ " + FORMAT(amount, "0.00")
END SUB
Bill Payment
' dialogs/payment.bas
' Pay bill/boleto
SUB PayBoleto()
TALK "Enter the barcode or paste the boleto line:"
barcode = HEAR
' Parse boleto
boleto = ParseBoleto(barcode)
IF boleto.error THEN
TALK "❌ Invalid barcode. Please check and try again."
RETURN
END IF
TALK "📄 **Bill Details**"
TALK "Beneficiary: " + boleto.beneficiary
TALK "Amount: R$ " + FORMAT(boleto.amount, "0.00")
TALK "Due date: " + FORMAT(boleto.due_date, "dd/MM/yyyy")
IF boleto.is_overdue THEN
TALK "⚠️ This bill is overdue. Late fees may apply."
TALK "Original amount: R$ " + FORMAT(boleto.original_amount, "0.00")
TALK "Late fee: R$ " + FORMAT(boleto.late_fee, "0.00")
TALK "Interest: R$ " + FORMAT(boleto.interest, "0.00")
END IF
TALK ""
TALK "Pay this bill?"
ADD SUGGESTION "Yes, pay now"
ADD SUGGESTION "Schedule for due date"
ADD SUGGESTION "Cancel"
choice = HEAR
IF choice CONTAINS "now" THEN
' Process payment
result = ProcessBoletoPayment(boleto)
IF result.success THEN
TALK "✅ **Payment completed!**"
TALK "Transaction ID: " + result.transaction_id
TALK "Authentication: " + result.authentication
ELSE
TALK "❌ Payment failed: " + result.error
END IF
ELSEIF choice CONTAINS "schedule" THEN
' Schedule for due date
TABLE scheduled
ROW GenerateID(), GET USER ID, "boleto", barcode, boleto.amount, boleto.due_date, "pending"
END TABLE
SAVE "scheduled_payments.csv", scheduled
TALK "✅ Payment scheduled for " + FORMAT(boleto.due_date, "dd/MM/yyyy")
ELSE
TALK "Payment cancelled."
END IF
END SUB
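ParseBoleto must decode the 47-digit "linha digitável". A minimal Rust sketch of the amount extraction, assuming the standard FEBRABAN layout where the last 10 digits encode the amount in cents; due-date factor, beneficiary lookup, and check-digit validation are omitted:
// Sketch only: extracts the amount from a FEBRABAN 47-digit typeable line.
fn parse_boleto_amount(line: &str) -> Option<f64> {
    let digits: String = line.chars().filter(|c| c.is_ascii_digit()).collect();
    if digits.len() != 47 {
        return None; // not a standard typeable line
    }
    let cents: u64 = digits[37..47].parse().ok()?;
    Some(cents as f64 / 100.0)
}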
' Pay utilities
SUB PayUtility(utility_type)
TALK "Enter your " + utility_type + " account number or scan the bill:"
account = HEAR
' Fetch bill info
bill = FetchUtilityBill(utility_type, account)
IF bill.found THEN
TALK "📄 **" + utility_type + " Bill**"
TALK "Account: " + account
TALK "Reference: " + bill.reference
TALK "Amount: R$ " + FORMAT(bill.amount, "0.00")
TALK "Due date: " + FORMAT(bill.due_date, "dd/MM/yyyy")
TALK "Pay this bill?"
' ... continue payment flow
ELSE
TALK "No pending bill found for this account."
END IF
END SUB
Loans
' dialogs/loan.bas
' Loan simulation
SUB SimulateLoan()
TALK "Let's simulate a loan. What type of loan are you interested in?"
ADD SUGGESTION "Personal Loan"
ADD SUGGESTION "Payroll Loan"
ADD SUGGESTION "Home Equity"
ADD SUGGESTION "Vehicle Loan"
loan_type = HEAR
TALK "What amount do you need?"
amount = HEAR
amount = ParseMoney(amount)
TALK "In how many months would you like to pay?"
ADD SUGGESTION "12 months"
ADD SUGGESTION "24 months"
ADD SUGGESTION "36 months"
ADD SUGGESTION "48 months"
ADD SUGGESTION "60 months"
months = HEAR
months = ParseNumber(months)
' Get user's rate based on credit score
user = GET USER
rate = GetPersonalizedRate(user.id, loan_type)
' Calculate loan
monthly_payment = CalculatePMT(amount, rate, months)
total_amount = monthly_payment * months
total_interest = total_amount - amount
TALK "💰 **Loan Simulation**"
TALK ""
TALK "📊 **Summary**"
TALK "Loan type: " + loan_type
TALK "Amount: R$ " + FORMAT(amount, "0.00")
TALK "Term: " + months + " months"
TALK "Interest rate: " + FORMAT(rate * 100, "0.00") + "% per month"
TALK ""
TALK "📅 **Monthly Payment: R$ " + FORMAT(monthly_payment, "0.00") + "**"
TALK ""
TALK "Total to pay: R$ " + FORMAT(total_amount, "0.00")
TALK "Total interest: R$ " + FORMAT(total_interest, "0.00")
TALK ""
TALK "Would you like to proceed with this loan?"
ADD SUGGESTION "Yes, apply now"
ADD SUGGESTION "Try different values"
ADD SUGGESTION "Not now"
choice = HEAR
IF choice CONTAINS "apply" THEN
CALL ApplyForLoan(loan_type, amount, months, rate)
ELSEIF choice CONTAINS "different" THEN
CALL SimulateLoan()
ELSE
TALK "No problem! I'm here whenever you need."
END IF
END SUB
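For reference, CalculatePMT is the standard fixed-rate annuity payment. With principal P, monthly rate i, and term n months:
PMT = P × i × (1 + i)^n / ((1 + i)^n − 1)
This matches the PMT function in the Sheets formula engine later in this document (when the rate is 0, the payment degenerates to P / n).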
' Apply for loan
SUB ApplyForLoan(loan_type, amount, months, rate)
user = GET USER
' Check eligibility
eligibility = CheckLoanEligibility(user.id, loan_type, amount)
IF NOT eligibility.eligible THEN
TALK "❌ Unfortunately, we couldn't approve this loan at this time."
TALK "Reason: " + eligibility.reason
IF eligibility.alternative_amount > 0 THEN
TALK "However, you're pre-approved for up to R$ " + FORMAT(eligibility.alternative_amount, "0.00")
TALK "Would you like to apply for this amount instead?"
END IF
RETURN
END IF
TALK "✅ **Great news! You're pre-approved!**"
TALK ""
TALK "To complete your application, I need some additional information."
' Collect additional info
TALK "What is your monthly income?"
income = HEAR
TALK "What is your profession?"
profession = HEAR
TALK "Do you have any other loans? (yes/no)"
has_other_loans = HEAR
IF has_other_loans CONTAINS "yes" THEN
TALK "What is the total monthly payment of your other loans?"
other_loans_payment = HEAR
END IF
' Create loan application
application_id = GenerateID()
TABLE loan_application
ROW application_id, user.id, loan_type, amount, months, rate, income, profession, NOW(), "pending_analysis"
END TABLE
SAVE "loan_applications.csv", loan_application
TALK "🎉 **Application Submitted!**"
TALK ""
TALK "Application ID: " + application_id
TALK "Status: Under Analysis"
TALK ""
TALK "We'll analyze your application within 24 hours."
TALK "You'll receive updates via email and app notifications."
' Send notification
SEND MAIL user.email, "Loan Application Received", "Your loan application " + application_id + " has been received and is under analysis."
END SUB
Cards Management
' dialogs/cards.bas
' View cards
SUB ViewCards()
user_id = GET USER ID
cards = FIND "cards.csv" WHERE user_id = user_id AND status = "active"
IF LEN(cards) = 0 THEN
TALK "You don't have any active cards."
TALK "Would you like to request one?"
RETURN
END IF
TALK "💳 **Your Cards**"
TALK ""
FOR EACH card IN cards
IF card.card_type = "credit" THEN
icon = "💳"
ELSE
icon = "💵"
END IF
masked_number = "**** **** **** " + RIGHT(card.card_number, 4)
TALK icon + " **" + card.card_type + " Card**"
TALK " Number: " + masked_number
TALK " Expiry: " + card.expiry_date
IF card.card_type = "credit" THEN
TALK " Limit: R$ " + FORMAT(card.credit_limit, "0.00")
TALK " Available: R$ " + FORMAT(card.available_limit, "0.00")
TALK " Current bill: R$ " + FORMAT(card.current_bill, "0.00")
END IF
TALK " Status: " + card.status
TALK ""
NEXT
TALK "What would you like to do?"
ADD SUGGESTION "View transactions"
ADD SUGGESTION "Block card"
ADD SUGGESTION "Request new card"
ADD SUGGESTION "Increase limit"
END SUB
' Block card
SUB BlockCard(card_id)
TALK "⚠️ **Block Card**"
TALK "Are you sure you want to block this card?"
TALK "This action will prevent all transactions."
ADD SUGGESTION "Yes, block it"
ADD SUGGESTION "Cancel"
choice = HEAR
IF choice CONTAINS "yes" THEN
' Request reason
TALK "Please tell me why you're blocking the card:"
ADD SUGGESTION "Lost"
ADD SUGGESTION "Stolen"
ADD SUGGESTION "Suspicious activity"
ADD SUGGESTION "Temporary block"
reason = HEAR
' Update card status
UPDATE "cards.csv" SET status = "blocked", blocked_reason = reason WHERE id = card_id
' Log the action
TABLE card_log
ROW GenerateID(), card_id, "blocked", reason, NOW()
END TABLE
SAVE "card_logs.csv", card_log
TALK "✅ **Card blocked successfully**"
IF reason CONTAINS "stolen" OR reason CONTAINS "lost" THEN
TALK "For your security, we recommend requesting a new card."
TALK "Would you like to request a replacement?"
IF HEAR CONTAINS "yes" THEN
CALL RequestNewCard("replacement")
END IF
ELSE
TALK "You can unblock your card anytime through this chat or the app."
END IF
ELSE
TALK "Card block cancelled."
END IF
END SUB
' Request credit limit increase
SUB RequestLimitIncrease()
user_id = GET USER ID
cards = FIND "cards.csv" WHERE user_id = user_id AND card_type = "credit" AND status = "active"
IF LEN(cards) = 0 THEN
TALK "You don't have an active credit card."
RETURN
END IF
card = cards[0]
current_limit = card.credit_limit
' Check eligibility for increase
eligibility = CheckLimitIncreaseEligibility(card.id)
IF eligibility.eligible THEN
TALK "📈 **Good news! You're eligible for a limit increase!**"
TALK ""
TALK "Current limit: R$ " + FORMAT(current_limit, "0.00")
TALK "Maximum available: R$ " + FORMAT(eligibility.max_limit, "0.00")
TALK ""
TALK "What limit would you like?"
new_limit = HEAR
new_limit = ParseMoney(new_limit)
IF new_limit > eligibility.max_limit THEN
TALK "The maximum limit available is R$ " + FORMAT(eligibility.max_limit, "0.00")
new_limit = eligibility.max_limit
END IF
' Approve instantly
UPDATE "cards.csv" SET credit_limit = new_limit WHERE id = card.id
TALK "✅ **Limit increased!**"
TALK "New limit: R$ " + FORMAT(new_limit, "0.00")
TALK "Effective immediately."
ELSE
TALK "At this time, we cannot increase your limit."
TALK "Reason: " + eligibility.reason
TALK "Please try again in " + eligibility.wait_days + " days."
END IF
END SUB
Investment Module
' dialogs/investment.bas
' View investments
SUB ViewInvestments()
user_id = GET USER ID
investments = FIND "investments.csv" WHERE user_id = user_id
IF LEN(investments) = 0 THEN
TALK "You don't have any investments yet."
TALK "Would you like to explore our investment options?"
IF HEAR CONTAINS "yes" THEN
CALL ShowInvestmentOptions()
END IF
RETURN
END IF
total_invested = 0
total_earnings = 0
TALK "📊 **Your Investment Portfolio**"
TALK ""
FOR EACH inv IN investments
earnings = inv.current_value - inv.invested_amount
earnings_pct = (earnings / inv.invested_amount) * 100
IF earnings >= 0 THEN
icon = "📈"
color = "green"
ELSE
icon = "📉"
color = "red"
END IF
TALK icon + " **" + inv.product_name + "**"
TALK " Type: " + inv.product_type
TALK " Invested: R$ " + FORMAT(inv.invested_amount, "0.00")
TALK " Current: R$ " + FORMAT(inv.current_value, "0.00")
TALK " Return: " + FORMAT(earnings_pct, "0.00") + "%"
TALK ""
total_invested = total_invested + inv.invested_amount
total_earnings = total_earnings + earnings
NEXT
total_pct = (total_earnings / total_invested) * 100
TALK "💰 **Portfolio Summary**"
TALK "Total invested: R$ " + FORMAT(total_invested, "0.00")
TALK "Total value: R$ " + FORMAT(total_invested + total_earnings, "0.00")
TALK "Total return: " + FORMAT(total_pct, "0.00") + "%"
END SUB
' Show investment options
SUB ShowInvestmentOptions()
TALK "💎 **Investment Options**"
TALK ""
TALK "**Fixed Income:**"
TALK "📌 CDB - from 100% CDI"
TALK "📌 LCI/LCA - Tax-free, from 95% CDI"
TALK "📌 Treasury Bonds - Government backed"
TALK ""
TALK "**Variable Income:**"
TALK "📊 Stocks - Direct investment"
TALK "📊 ETFs - Diversified funds"
TALK "📊 REITs - Real estate funds"
TALK ""
TALK "**Crypto:**"
TALK "🪙 Bitcoin, Ethereum, and more"
TALK ""
TALK "What interests you?"
ADD SUGGESTION "Fixed Income"
ADD SUGGESTION "Stocks"
ADD SUGGESTION "Crypto"
ADD SUGGESTION "I need advice"
END SUB
4. Excel Clone (HTMX/Rust)
Architecture
┌─────────────────────────────────────────────────────────────────┐
│ GENERAL BOTS SHEETS │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────┐ ┌──────────────┐ ┌─────────────────┐ │
│ │ Browser │◄──►│ HTMX/WS │◄──►│ Rust Backend │ │
│ │ (Canvas) │ │ Updates │ │ (Calamine) │ │
│ └─────────────┘ └──────────────┘ └─────────────────┘ │
│ │ │ │
│ │ ▼ │
│ │ ┌─────────────────┐ │
│ │ │ File Storage │ │
│ │ │ (.gbdrive) │ │
│ │ └─────────────────┘ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────┐ │
│ │ FORMULA ENGINE │ │
│ │ - 400+ Excel functions │ │
│ │ - Array formulas │ │
│ │ - Cross-sheet references │ │
│ │ - Custom functions (BASIC integration) │ │
│ └─────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
Rust Backend
// src/sheets/mod.rs
use calamine::{Reader, Xlsx, DataType, Range};
use chrono::{DateTime, Utc};
use rust_xlsxwriter::Workbook;
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use uuid::Uuid;

pub mod engine;
pub mod formulas;
pub mod api;

use formulas::FormulaEngine;

#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct SpreadsheetState {
    pub id: Uuid,
    pub name: String,
    pub sheets: Vec<SheetState>,
    pub active_sheet: usize,
    pub modified: bool,
    pub last_saved: Option<DateTime<Utc>>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SheetState {
    pub name: String,
    pub cells: HashMap<CellRef, CellData>,
    pub col_widths: HashMap<usize, f64>,
    pub row_heights: HashMap<usize, f64>,
    pub frozen_rows: usize,
    pub frozen_cols: usize,
    pub selection: Selection,
}

// CellRef is used as a HashMap key, so it needs Eq + Hash.
#[derive(Debug, Clone, PartialEq, Eq, Hash, Serialize, Deserialize)]
pub struct CellRef {
    pub row: usize,
    pub col: usize,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CellData {
    pub value: CellValue,
    pub formula: Option<String>,
    pub format: CellFormat,
    pub style: CellStyle,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum CellValue {
    Empty,
    String(String),
    Number(f64),
    Boolean(bool),
    Error(String),
    DateTime(DateTime<Utc>),
}

#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct CellFormat {
    pub number_format: String,
    pub alignment: Alignment,
    pub wrap_text: bool,
}

#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct CellStyle {
    pub font: FontStyle,
    pub fill: FillStyle,
    pub border: BorderStyle,
}

// Sent to the client (via HTMX/WS) after a recalculation.
#[derive(Debug, Clone, Serialize)]
pub struct CellUpdate {
    pub row: usize,
    pub col: usize,
    pub value: CellValue,
}

// Selection, Alignment, FontStyle, FillStyle, BorderStyle, Error and
// DependencyGraph are defined elsewhere in the sheets module.

// Spreadsheet Engine
pub struct SpreadsheetEngine {
    state: SpreadsheetState,
    formula_engine: FormulaEngine,
    dependency_graph: DependencyGraph,
}

impl SpreadsheetEngine {
    pub fn new() -> Self {
        Self {
            state: SpreadsheetState::default(),
            formula_engine: FormulaEngine::new(),
            dependency_graph: DependencyGraph::new(),
        }
    }

    pub fn load_xlsx(&mut self, path: &str) -> Result<(), Error> {
        let mut workbook: Xlsx<_> = calamine::open_workbook(path)?;
        for sheet_name in workbook.sheet_names().to_owned() {
            if let Some(Ok(range)) = workbook.worksheet_range(&sheet_name) {
                let mut sheet = SheetState::new(&sheet_name);
                for (row_idx, row) in range.rows().enumerate() {
                    for (col_idx, cell) in row.iter().enumerate() {
                        let cell_ref = CellRef { row: row_idx, col: col_idx };
                        let cell_data = self.convert_calamine_cell(cell);
                        sheet.cells.insert(cell_ref, cell_data);
                    }
                }
                self.state.sheets.push(sheet);
            }
        }
        Ok(())
    }

    pub fn save_xlsx(&self, path: &str) -> Result<(), Error> {
        let mut workbook = Workbook::new();
        for sheet in &self.state.sheets {
            let worksheet = workbook.add_worksheet();
            worksheet.set_name(&sheet.name)?;
            for (cell_ref, cell_data) in &sheet.cells {
                match &cell_data.value {
                    CellValue::String(s) => {
                        worksheet.write_string(cell_ref.row as u32, cell_ref.col as u16, s)?;
                    }
                    CellValue::Number(n) => {
                        worksheet.write_number(cell_ref.row as u32, cell_ref.col as u16, *n)?;
                    }
                    CellValue::Boolean(b) => {
                        worksheet.write_boolean(cell_ref.row as u32, cell_ref.col as u16, *b)?;
                    }
                    _ => {}
                }
                // Write the formula if one exists
                if let Some(formula) = &cell_data.formula {
                    worksheet.write_formula(cell_ref.row as u32, cell_ref.col as u16, formula.as_str())?;
                }
            }
        }
        workbook.save(path)?;
        Ok(())
    }

    pub fn set_cell(&mut self, sheet: usize, row: usize, col: usize, value: &str) -> Vec<CellUpdate> {
        let cell_ref = CellRef { row, col };
        // Check whether the input is a formula
        if value.starts_with('=') {
            let formula = value[1..].to_string();
            let calculated = self.formula_engine.evaluate(&formula, &self.state.sheets[sheet]);
            self.state.sheets[sheet].cells.insert(cell_ref.clone(), CellData {
                value: calculated,
                formula: Some(formula.clone()),
                format: CellFormat::default(),
                style: CellStyle::default(),
            });
            // Update the dependency graph
            self.dependency_graph.update(&cell_ref, &formula);
        } else {
            // Parse as a plain value
            let cell_value = self.parse_value(value);
            self.state.sheets[sheet].cells.insert(cell_ref.clone(), CellData {
                value: cell_value,
                formula: None,
                format: CellFormat::default(),
                style: CellStyle::default(),
            });
        }
        // Recalculate dependents
        let updates = self.recalculate_dependents(&cell_ref);
        self.state.modified = true;
        updates
    }

    fn recalculate_dependents(&mut self, cell_ref: &CellRef) -> Vec<CellUpdate> {
        let mut updates = Vec::new();
        let dependents = self.dependency_graph.get_dependents(cell_ref);
        for dep in dependents {
            // Evaluate against an immutable borrow first, then write the result,
            // to avoid borrowing the sheet mutably while it is being read.
            let new_value = {
                let sheet = &self.state.sheets[self.state.active_sheet];
                sheet
                    .cells
                    .get(&dep)
                    .and_then(|c| c.formula.clone())
                    .map(|formula| self.formula_engine.evaluate(&formula, sheet))
            };
            if let Some(new_value) = new_value {
                if let Some(cell) = self.state.sheets[self.state.active_sheet].cells.get_mut(&dep) {
                    cell.value = new_value.clone();
                    updates.push(CellUpdate { row: dep.row, col: dep.col, value: new_value });
                }
            }
        }
        updates
    }
}
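The DependencyGraph used above is not defined in this document. A minimal sketch of an assumed shape, mapping each cell to the cells whose formulas reference it; extract_refs is a placeholder for the formula parser's reference extraction:
use std::collections::{HashMap, HashSet};

// Maps a cell to the set of cells whose formulas reference it, so that an
// edit can trigger recalculation of exactly its dependents.
pub struct DependencyGraph {
    dependents: HashMap<CellRef, HashSet<CellRef>>,
}

impl DependencyGraph {
    pub fn new() -> Self {
        Self { dependents: HashMap::new() }
    }

    /// Re-register `cell` as a dependent of every cell its formula references.
    pub fn update(&mut self, cell: &CellRef, formula: &str) {
        for referenced in extract_refs(formula) {
            self.dependents.entry(referenced).or_default().insert(cell.clone());
        }
    }

    /// Cells that must be recalculated when `cell` changes.
    pub fn get_dependents(&self, cell: &CellRef) -> Vec<CellRef> {
        self.dependents
            .get(cell)
            .map(|s| s.iter().cloned().collect())
            .unwrap_or_default()
    }
}

// Placeholder: a real implementation parses A1-style references out of the formula.
fn extract_refs(_formula: &str) -> Vec<CellRef> {
    Vec::new()
}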
Formula Engine
// src/sheets/formulas.rs
use chrono::{Datelike, Utc};
use std::collections::HashMap;

use super::{CellValue, SheetState};

// Note: as_number / as_bool / to_string helpers on CellValue are assumed to be
// defined alongside the enum in mod.rs.
pub struct FormulaEngine {
    functions: HashMap<String, Box<dyn Fn(Vec<CellValue>) -> CellValue>>,
}

impl FormulaEngine {
    pub fn new() -> Self {
        let mut engine = Self {
            functions: HashMap::new(),
        };
        engine.register_builtin_functions();
        engine
    }

    fn register_builtin_functions(&mut self) {
        // Math functions
        self.register("SUM", |args| {
            let sum: f64 = args.iter().filter_map(|v| v.as_number()).sum();
            CellValue::Number(sum)
        });
        self.register("AVERAGE", |args| {
            let numbers: Vec<f64> = args.iter().filter_map(|v| v.as_number()).collect();
            if numbers.is_empty() {
                CellValue::Error("#DIV/0!".to_string())
            } else {
                CellValue::Number(numbers.iter().sum::<f64>() / numbers.len() as f64)
            }
        });
        self.register("MIN", |args| {
            args.iter()
                .filter_map(|v| v.as_number())
                .min_by(|a, b| a.partial_cmp(b).unwrap())
                .map(CellValue::Number)
                .unwrap_or(CellValue::Error("#VALUE!".to_string()))
        });
        self.register("MAX", |args| {
            args.iter()
                .filter_map(|v| v.as_number())
                .max_by(|a, b| a.partial_cmp(b).unwrap())
                .map(CellValue::Number)
                .unwrap_or(CellValue::Error("#VALUE!".to_string()))
        });
        self.register("COUNT", |args| {
            CellValue::Number(args.iter().filter(|v| v.as_number().is_some()).count() as f64)
        });
        self.register("COUNTA", |args| {
            CellValue::Number(args.iter().filter(|v| !matches!(v, CellValue::Empty)).count() as f64)
        });

        // Text functions
        self.register("CONCATENATE", |args| {
            let result: String = args.iter().map(|v| v.to_string()).collect();
            CellValue::String(result)
        });
        self.register("LEFT", |args| {
            if args.len() >= 2 {
                let text = args[0].to_string();
                let n = args[1].as_number().unwrap_or(1.0) as usize;
                CellValue::String(text.chars().take(n).collect())
            } else {
                CellValue::Error("#VALUE!".to_string())
            }
        });
        self.register("RIGHT", |args| {
            if args.len() >= 2 {
                let text = args[0].to_string();
                let n = args[1].as_number().unwrap_or(1.0) as usize;
                // Count in characters, not bytes, so multi-byte text is safe.
                let start = text.chars().count().saturating_sub(n);
                CellValue::String(text.chars().skip(start).collect())
            } else {
                CellValue::Error("#VALUE!".to_string())
            }
        });
        self.register("MID", |args| {
            if args.len() >= 3 {
                let text = args[0].to_string();
                let start = (args[1].as_number().unwrap_or(1.0) as usize).saturating_sub(1);
                let n = args[2].as_number().unwrap_or(1.0) as usize;
                CellValue::String(text.chars().skip(start).take(n).collect())
            } else {
                CellValue::Error("#VALUE!".to_string())
            }
        });
        self.register("LEN", |args| {
            if let Some(text) = args.get(0) {
                CellValue::Number(text.to_string().chars().count() as f64)
            } else {
                CellValue::Number(0.0)
            }
        });
        self.register("TRIM", |args| {
            if let Some(text) = args.get(0) {
                CellValue::String(text.to_string().trim().to_string())
            } else {
                CellValue::String(String::new())
            }
        });
        self.register("UPPER", |args| {
            if let Some(text) = args.get(0) {
                CellValue::String(text.to_string().to_uppercase())
            } else {
                CellValue::String(String::new())
            }
        });
        self.register("LOWER", |args| {
            if let Some(text) = args.get(0) {
                CellValue::String(text.to_string().to_lowercase())
            } else {
                CellValue::String(String::new())
            }
        });

        // Logical functions
        self.register("IF", |args| {
            if args.len() >= 3 {
                let condition = args[0].as_bool().unwrap_or(false);
                if condition { args[1].clone() } else { args[2].clone() }
            } else {
                CellValue::Error("#VALUE!".to_string())
            }
        });
        self.register("AND", |args| {
            CellValue::Boolean(args.iter().all(|v| v.as_bool().unwrap_or(false)))
        });
        self.register("OR", |args| {
            CellValue::Boolean(args.iter().any(|v| v.as_bool().unwrap_or(false)))
        });
        self.register("NOT", |args| {
            if let Some(val) = args.get(0) {
                CellValue::Boolean(!val.as_bool().unwrap_or(false))
            } else {
                CellValue::Error("#VALUE!".to_string())
            }
        });

        // Lookup functions (placeholders)
        self.register("VLOOKUP", |_args| CellValue::Error("#N/A".to_string()));
        self.register("HLOOKUP", |_args| CellValue::Error("#N/A".to_string()));
        self.register("INDEX", |_args| CellValue::Error("#REF!".to_string()));
        self.register("MATCH", |_args| CellValue::Error("#N/A".to_string()));

        // Date functions
        self.register("TODAY", |_args| CellValue::DateTime(Utc::now()));
        self.register("NOW", |_args| CellValue::DateTime(Utc::now()));
        self.register("YEAR", |args| {
            if let Some(CellValue::DateTime(dt)) = args.get(0) {
                CellValue::Number(dt.year() as f64)
            } else {
                CellValue::Error("#VALUE!".to_string())
            }
        });
        self.register("MONTH", |args| {
            if let Some(CellValue::DateTime(dt)) = args.get(0) {
                CellValue::Number(dt.month() as f64)
            } else {
                CellValue::Error("#VALUE!".to_string())
            }
        });
        self.register("DAY", |args| {
            if let Some(CellValue::DateTime(dt)) = args.get(0) {
                CellValue::Number(dt.day() as f64)
            } else {
                CellValue::Error("#VALUE!".to_string())
            }
        });

        // Financial functions
        self.register("PMT", |args| {
            if args.len() >= 3 {
                let rate = args[0].as_number().unwrap_or(0.0);
                let nper = args[1].as_number().unwrap_or(0.0);
                let pv = args[2].as_number().unwrap_or(0.0);
                if rate == 0.0 {
                    CellValue::Number(-pv / nper)
                } else {
                    let pmt = pv * rate * (1.0 + rate).powf(nper)
                        / ((1.0 + rate).powf(nper) - 1.0);
                    CellValue::Number(-pmt)
                }
            } else {
                CellValue::Error("#VALUE!".to_string())
            }
        });

        // Add 400+ more functions...
    }

    fn register<F>(&mut self, name: &str, f: F)
    where
        F: Fn(Vec<CellValue>) -> CellValue + 'static,
    {
        self.functions.insert(name.to_uppercase(), Box::new(f));
    }

    pub fn evaluate(&self, formula: &str, sheet: &SheetState) -> CellValue {
        // Parse and evaluate the formula (tokenizer, parser, and AST evaluator
        // live in this module; omitted here).
        let tokens = self.tokenize(formula);
        let ast = self.parse(tokens);
        self.eval_ast(&ast, sheet)
    }
}
HTMX UI Component
<!-- templates/sheets.html -->
{% extends "base.html" %}
{% block title %}Sheets - General Bots{% endblock %}
{% block content %}
<div class="sheets-container" id="sheets-app" hx-ext="ws" ws-connect="/ws/sheets">
<!-- Toolbar -->
<div class="sheets-toolbar">
<div class="toolbar-section file-section">
<button hx-post="/api/sheets/new" hx-target="#sheet-content">
📄 New
</button>
<button onclick="openFile()">📂 Open</button>
<button hx-post="/api/sheets/save" hx-swap="none">💾 Save</button>
<button hx-get="/api/sheets/export?format=xlsx" hx-swap="none">
⬇️ Export
</button>
</div>
<div class="toolbar-section edit-section">
<button onclick="undo()">↩️</button>
<button onclick="redo()">↪️</button>
<button onclick="cut()">✂️</button>
<button onclick="copy()">📋</button>
<button onclick="paste()">📄</button>
</div>
<div class="toolbar-section format-section">
<select id="font-family" onchange="setFontFamily(this.value)">
<option value="Arial">Arial</option>
<option value="Calibri">Calibri</option>
<option value="Times New Roman">Times New Roman</option>
<option value="Courier New">Courier New</option>
</select>
<select id="font-size" onchange="setFontSize(this.value)">
<option value="8">8</option>
<option value="10">10</option>
<option value="11" selected>11</option>
<option value="12">12</option>
<option value="14">14</option>
<option value="18">18</option>
<option value="24">24</option>
</select>
<button onclick="toggleBold()"><b>B</b></button>
<button onclick="toggleItalic()"><i>I</i></button>
<button onclick="toggleUnderline()"><u>U</u></button>
<input type="color" id="text-color" onchange="setTextColor(this.value)" value="#000000">
<input type="color" id="fill-color" onchange="setFillColor(this.value)" value="#ffffff">
</div>
<div class="toolbar-section align-section">
<button onclick="alignLeft()">⬅️</button>
<button onclick="alignCenter()">↔️</button>
<button onclick="alignRight()">➡️</button>
</div>
<div class="toolbar-section number-section">
<select id="number-format" onchange="setNumberFormat(this.value)">
<option value="general">General</option>
<option value="number">Number</option>
<option value="currency">Currency</option>
<option value="percentage">Percentage</option>
<option value="date">Date</option>
<option value="time">Time</option>
<option value="text">Text</option>
</select>
</div>
<div class="toolbar-section ai-section">
<button onclick="openAIAssist()" class="ai-button">
🤖 AI Assist
</button>
</div>
</div>
<!-- Formula Bar -->
<div class="formula-bar">
<div class="cell-ref" id="cell-ref">A1</div>
<div class="fx-label">fx</div>
<input type="text" id="formula-input" class="formula-input"
placeholder="Enter value or formula"
onkeydown="handleFormulaInput(event)"
hx-trigger="change"
hx-post="/api/sheets/cell"
hx-vals='js:{cell: getCellRef(), value: this.value}'
hx-swap="none">
</div>
<!-- Spreadsheet Grid -->
<div class="sheet-grid-container">
<canvas id="sheet-canvas"
onmousedown="handleMouseDown(event)"
onmousemove="handleMouseMove(event)"
onmouseup="handleMouseUp(event)"
ondblclick="handleDoubleClick(event)"
oncontextmenu="handleContextMenu(event); return false;">
</canvas>
<!-- Cell Editor (shown on double-click) -->
<input type="text" id="cell-editor" class="cell-editor hidden"
onkeydown="handleCellEditorKey(event)"
onblur="commitCellEdit()">
</div>
<!-- Sheet Tabs -->
<div class="sheet-tabs">
<div class="sheet-tab-list" id="sheet-tabs"
hx-get="/api/sheets/tabs"
hx-trigger="load"
hx-swap="innerHTML">
<!-- Tabs load here -->
</div>
<button class="add-sheet-btn"
hx-post="/api/sheets/add-sheet"
hx-target="#sheet-tabs"
hx-swap="beforeend">
+
</button>
</div>
<!-- Status Bar -->
<div class="status-bar">
<span id="selection-info">Ready</span>
<span id="sum-info"></span>
<span id="average-info"></span>
<span id="count-info"></span>
</div>
<!-- Context Menu -->
<div id="context-menu" class="context-menu hidden">
<div onclick="cut()">✂️ Cut</div>
<div onclick="copy()">📋 Copy</div>
<div onclick="paste()">📄 Paste</div>
<hr>
<div onclick="insertRow()">Insert Row</div>
<div onclick="insertColumn()">Insert Column</div>
<div onclick="deleteRow()">Delete Row</div>
<div onclick="deleteColumn()">Delete Column</div>
<hr>
<div onclick="formatCells()">Format Cells...</div>
</div>
<!-- AI Assistant Modal -->
<div id="ai-modal" class="modal hidden">
<div class="modal-content">
<h3>🤖 AI Assistant</h3>
<textarea id="ai-prompt" placeholder="Describe what you want to do...
Examples:
- Create a formula to sum column A
- Format as currency
- Create a pivot table from this data
- Generate sample data for testing"></textarea>
<div class="modal-actions">
<button onclick="closeAIModal()">Cancel</button>
<button onclick="executeAICommand()" class="primary">Execute</button>
</div>
</div>
</div>
</div>
<style>
.sheets-container {
display: flex;
flex-direction: column;
height: calc(100vh - 60px);
background: white;
}
.sheets-toolbar {
display: flex;
gap: 16px;
padding: 8px 16px;
border-bottom: 1px solid #e0e0e0;
background: #f8f9fa;
flex-wrap: wrap;
}
.toolbar-section {
display: flex;
gap: 4px;
align-items: center;
padding-right: 16px;
border-right: 1px solid #e0e0e0;
}
.toolbar-section:last-child {
border-right: none;
}
.toolbar-section button {
padding: 6px 10px;
background: white;
border: 1px solid #ddd;
border-radius: 4px;
cursor: pointer;
}
.toolbar-section button:hover {
background: #e8e8e8;
}
.formula-bar {
display: flex;
align-items: center;
padding: 4px 8px;
border-bottom: 1px solid #e0e0e0;
background: white;
}
.cell-ref {
width: 80px;
padding: 4px 8px;
background: #f0f0f0;
border: 1px solid #ddd;
border-radius: 4px;
font-family: monospace;
text-align: center;
}
.fx-label {
padding: 0 8px;
font-style: italic;
color: #666;
}
.formula-input {
flex: 1;
padding: 4px 8px;
border: 1px solid #ddd;
border-radius: 4px;
font-family: 'Segoe UI', sans-serif;
}
.sheet-grid-container {
flex: 1;
overflow: hidden;
position: relative;
}
#sheet-canvas {
width: 100%;
height: 100%;
}
.cell-editor {
position: absolute;
border: 2px solid #1a73e8;
padding: 2px 4px;
font-family: 'Segoe UI', sans-serif;
font-size: 13px;
outline: none;
z-index: 100;
}
.sheet-tabs {
display: flex;
align-items: center;
padding: 4px 8px;
border-top: 1px solid #e0e0e0;
background: #f8f9fa;
}
.sheet-tab-list {
display: flex;
gap: 2px;
}
.sheet-tab {
padding: 6px 16px;
background: white;
border: 1px solid #ddd;
border-bottom: none;
border-radius: 4px 4px 0 0;
cursor: pointer;
}
.sheet-tab.active {
background: #1a73e8;
color: white;
}
.status-bar {
display: flex;
justify-content: space-between;
padding: 4px 16px;
background: #f0f0f0;
border-top: 1px solid #ddd;
font-size: 12px;
color: #666;
}
.context-menu {
position: fixed;
background: white;
border: 1px solid #ddd;
border-radius: 4px;
box-shadow: 0 4px 12px rgba(0,0,0,0.15);
z-index: 1000;
}
.context-menu div {
padding: 8px 16px;
cursor: pointer;
}
.context-menu div:hover {
background: #f0f0f0;
}
.ai-button {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%) !important;
color: white !important;
border: none !important;
}
</style>
<script>
// Spreadsheet rendering and interaction
const canvas = document.getElementById('sheet-canvas');
const ctx = canvas.getContext('2d');
const COL_WIDTH = 100;
const ROW_HEIGHT = 25;
const HEADER_HEIGHT = 25;
const ROW_HEADER_WIDTH = 50;
let cells = {};
let selection = { start: {row: 0, col: 0}, end: {row: 0, col: 0} };
let scrollOffset = { x: 0, y: 0 };
let isSelecting = false;
function resizeCanvas() {
canvas.width = canvas.offsetWidth * window.devicePixelRatio;
canvas.height = canvas.offsetHeight * window.devicePixelRatio;
ctx.scale(window.devicePixelRatio, window.devicePixelRatio);
render();
}
function render() {
ctx.clearRect(0, 0, canvas.width, canvas.height);
const width = canvas.offsetWidth;
const height = canvas.offsetHeight;
// Calculate visible range
const startCol = Math.floor(scrollOffset.x / COL_WIDTH);
const endCol = Math.ceil((scrollOffset.x + width - ROW_HEADER_WIDTH) / COL_WIDTH);
const startRow = Math.floor(scrollOffset.y / ROW_HEIGHT);
const endRow = Math.ceil((scrollOffset.y + height - HEADER_HEIGHT) / ROW_HEIGHT);
// Draw column headers
ctx.fillStyle = '#f8f9fa';
ctx.fillRect(0, 0, width, HEADER_HEIGHT);
ctx.fillStyle = '#333';
ctx.font = '12px Segoe UI';
ctx.textAlign = 'center';
ctx.textBaseline = 'middle';
for (let col = startCol; col <= endCol; col++) {
const x = ROW_HEADER_WIDTH + col * COL_WIDTH - scrollOffset.x;
ctx.fillText(colToLetter(col), x + COL_WIDTH / 2, HEADER_HEIGHT / 2);
// Column border
ctx.strokeStyle = '#e0e0e0';
ctx.beginPath();
ctx.moveTo(x + COL_WIDTH, 0);
ctx.lineTo(x + COL_WIDTH, height);
ctx.stroke();
}
// Draw row headers
ctx.fillStyle = '#f8f9fa';
ctx.fillRect(0, HEADER_HEIGHT, ROW_HEADER_WIDTH, height);
ctx.fillStyle = '#333';
ctx.textAlign = 'center';
for (let row = startRow; row <= endRow; row++) {
const y = HEADER_HEIGHT + row * ROW_HEIGHT - scrollOffset.y;
ctx.fillText(String(row + 1), ROW_HEADER_WIDTH / 2, y + ROW_HEIGHT / 2);
// Row border
ctx.strokeStyle = '#e0e0e0';
ctx.beginPath();
ctx.moveTo(0, y + ROW_HEIGHT);
ctx.lineTo(width, y + ROW_HEIGHT);
ctx.stroke();
}
// Draw cells
for (let row = startRow; row <= endRow; row++) {
for (let col = startCol; col <= endCol; col++) {
const x = ROW_HEADER_WIDTH + col * COL_WIDTH - scrollOffset.x;
const y = HEADER_HEIGHT + row * ROW_HEIGHT - scrollOffset.y;
const cellRef = `${colToLetter(col)}${row + 1}`;
const cell = cells[cellRef];
if (cell) {
// Cell background
if (cell.style?.fill) {
ctx.fillStyle = cell.style.fill;
ctx.fillRect(x + 1, y + 1, COL_WIDTH - 2, ROW_HEIGHT - 2);
}
// Cell text
ctx.fillStyle = cell.style?.color || '#000';
ctx.font = cell.style?.font || '13px Segoe UI';
ctx.textAlign = cell.format?.alignment || 'left';
ctx.textBaseline = 'middle';
const textX = ctx.textAlign === 'left' ? x + 4 :
ctx.textAlign === 'right' ? x + COL_WIDTH - 4 :
x + COL_WIDTH / 2;
ctx.fillText(formatCellValue(cell), textX, y + ROW_HEIGHT / 2);
}
}
}
// Draw selection
drawSelection();
}
function drawSelection() {
const startRow = Math.min(selection.start.row, selection.end.row);
const endRow = Math.max(selection.start.row, selection.end.row);
const startCol = Math.min(selection.start.col, selection.end.col);
const endCol = Math.max(selection.start.col, selection.end.col);
const x = ROW_HEADER_WIDTH + startCol * COL_WIDTH - scrollOffset.x;
const y = HEADER_HEIGHT + startRow * ROW_HEIGHT - scrollOffset.y;
const width = (endCol - startCol + 1) * COL_WIDTH;
const height = (endRow - startRow + 1) * ROW_HEIGHT;
// Selection fill
ctx.fillStyle = 'rgba(26, 115, 232, 0.1)';
ctx.fillRect(x, y, width, height);
// Selection border
ctx.strokeStyle = '#1a73e8';
ctx.lineWidth = 2;
ctx.strokeRect(x, y, width, height);
ctx.lineWidth = 1;
}
function colToLetter(col) {
let result = '';
while (col >= 0) {
result = String.fromCharCode(65 + (col % 26)) + result;
col = Math.floor(col / 26) - 1;
}
return result;
}
function handleMouseDown(event) {
const rect = canvas.getBoundingClientRect();
const x = event.clientX - rect.left;
const y = event.clientY - rect.top;
if (x > ROW_HEADER_WIDTH && y > HEADER_HEIGHT) {
const col = Math.floor((x - ROW_HEADER_WIDTH + scrollOffset.x) / COL_WIDTH);
const row = Math.floor((y - HEADER_HEIGHT + scrollOffset.y) / ROW_HEIGHT);
selection.start = { row, col };
selection.end = { row, col };
isSelecting = true;
updateCellRef();
render();
}
}
function handleMouseMove(event) {
if (!isSelecting) return;
const rect = canvas.getBoundingClientRect();
const x = event.clientX - rect.left;
const y = event.clientY - rect.top;
const col = Math.floor((x - ROW_HEADER_WIDTH + scrollOffset.x) / COL_WIDTH);
const row = Math.floor((y - HEADER_HEIGHT + scrollOffset.y) / ROW_HEIGHT);
selection.end = { row: Math.max(0, row), col: Math.max(0, col) };
render();
}
function handleMouseUp() {
isSelecting = false;
updateSelectionInfo();
}
function handleDoubleClick(event) {
showCellEditor(selection.start.row, selection.start.col);
}
function showCellEditor(row, col) {
const editor = document.getElementById('cell-editor');
const x = ROW_HEADER_WIDTH + col * COL_WIDTH - scrollOffset.x;
const y = HEADER_HEIGHT + row * ROW_HEIGHT - scrollOffset.y;
editor.style.left = x + 'px';
editor.style.top = y + 'px';
editor.style.width = COL_WIDTH + 'px';
editor.style.height = ROW_HEIGHT + 'px';
const cellRef = `${colToLetter(col)}${row + 1}`;
const cell = cells[cellRef];
editor.value = cell?.formula ? `=${cell.formula}` : (cell?.value || '');
editor.classList.remove('hidden');
editor.focus();
}
function commitCellEdit() {
const editor = document.getElementById('cell-editor');
const value = editor.value;
const cellRef = getCellRef();
// Send to server
htmx.ajax('POST', '/api/sheets/cell', {
values: { cell: cellRef, value: value }
});
editor.classList.add('hidden');
}
function getCellRef() {
return `${colToLetter(selection.start.col)}${selection.start.row + 1}`;
}
function updateCellRef() {
document.getElementById('cell-ref').textContent = getCellRef();
const cellRef = getCellRef();
const cell = cells[cellRef];
const formulaInput = document.getElementById('formula-input');
formulaInput.value = cell?.formula ? `=${cell.formula}` : (cell?.value || '');
}
// WebSocket for real-time updates
htmx.on('htmx:wsMessage', function(event) {
const data = JSON.parse(event.detail.message);
if (data.type === 'cell_update') {
cells[data.cell] = data.data;
render();
}
});
// Initialize
window.addEventListener('resize', resizeCanvas);
resizeCanvas();
</script>
{% endblock %}
5. Word Editor for .docx
Architecture
// src/docs/mod.rs
use docx_rs::{Docx, Paragraph, Run, Table, TableCell, TableRow};

pub struct DocumentEditor {
    document: Docx,
    file_path: Option<String>,
    modified: bool,
}

impl DocumentEditor {
    pub fn new() -> Self {
        Self {
            document: Docx::new(),
            file_path: None,
            modified: false,
        }
    }

    pub fn open(path: &str) -> Result<Self, Error> {
        // read_docx expects the raw bytes of the .docx file
        let buf = std::fs::read(path)?;
        let document = docx_rs::read_docx(&buf)?;
        Ok(Self {
            document,
            file_path: Some(path.to_string()),
            modified: false,
        })
    }

    pub fn save(&self, path: &str) -> Result<(), Error> {
        let file = std::fs::File::create(path)?;
        self.document.build().pack(file)?;
        Ok(())
    }

    pub fn add_paragraph(&mut self, text: &str, style: &ParagraphStyle) -> &mut Self {
        let mut paragraph = Paragraph::new();
        let mut run = Run::new().add_text(text);
        if style.bold {
            run = run.bold();
        }
        if style.italic {
            run = run.italic();
        }
        if let Some(size) = style.font_size {
            run = run.size(size * 2); // docx sizes are in half-points
        }
        paragraph = paragraph.add_run(run);
        self.document = std::mem::take(&mut self.document).add_paragraph(paragraph);
        self.modified = true;
        self
    }

    pub fn to_html(&self) -> String {
        // Convert the document to HTML for in-browser editing
        let mut html = String::new();
        // Implementation...
        html
    }

    pub fn from_html(&mut self, html: &str) -> Result<(), Error> {
        // Parse HTML and update the document
        Ok(())
    }
}
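The ParagraphStyle type referenced by add_paragraph is not shown in this excerpt. A minimal sketch, assuming it carries only the three attributes the method reads, together with typical usage:

// Assumed shape of ParagraphStyle; the real definition may carry more attributes.
pub struct ParagraphStyle {
    pub bold: bool,
    pub italic: bool,
    pub font_size: Option<usize>, // in points; add_paragraph converts to half-points
}

fn build_report() -> Result<(), Error> {
    let mut editor = DocumentEditor::new();
    editor.add_paragraph(
        "Quarterly Report",
        &ParagraphStyle { bold: true, italic: false, font_size: Some(24) },
    );
    editor.add_paragraph(
        "Revenue grew 12% quarter over quarter.", // illustrative sample text
        &ParagraphStyle { bold: false, italic: false, font_size: Some(12) },
    );
    editor.save("report.docx")
}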
HTMX Word Editor UI
<!-- templates/docs.html -->
{% extends "base.html" %}
{% block title %}Documents - General Bots{% endblock %}
{% block content %}
<div class="docs-container" id="docs-app" hx-ext="ws" ws-connect="/ws/docs">
<!-- Toolbar -->
<div class="docs-toolbar">
<div class="toolbar-section">
<button hx-post="/api/docs/new">📄 New</button>
<button onclick="openDocument()">📂 Open</button>
<button hx-post="/api/docs/save" hx-swap="none">💾 Save</button>
<button hx-get="/api/docs/export?format=docx">⬇️ Export</button>
<button hx-get="/api/docs/export?format=pdf">📑 PDF</button>
</div>
<div class="toolbar-section format-section">
<select id="style-select" onchange="applyStyle(this.value)">
<option value="normal">Normal</option>
<option value="heading1">Heading 1</option>
<option value="heading2">Heading 2</option>
<option value="heading3">Heading 3</option>
<option value="title">Title</option>
<option value="subtitle">Subtitle</option>
</select>
<select id="font-family" onchange="setFont(this.value)">
<option value="Calibri">Calibri</option>
<option value="Arial">Arial</option>
<option value="Times New Roman">Times New Roman</option>
<option value="Georgia">Georgia</option>
</select>
<select id="font-size" onchange="setFontSize(this.value)">
<option value="10">10</option>
<option value="11">11</option>
<option value="12" selected>12</option>
<option value="14">14</option>
<option value="16">16</option>
<option value="18">18</option>
<option value="24">24</option>
<option value="36">36</option>
</select>
</div>
<div class="toolbar-section">
<button onclick="execCommand('bold')"><b>B</b></button>
<button onclick="execCommand('italic')"><i>I</i></button>
<button onclick="execCommand('underline')"><u>U</u></button>
<button onclick="execCommand('strikeThrough')"><s>S</s></button>
</div>
<div class="toolbar-section">
<button onclick="execCommand('justifyLeft')">⬅️</button>
<button onclick="execCommand('justifyCenter')">↔️</button>
<button onclick="execCommand('justifyRight')">➡️</button>
<button onclick="execCommand('justifyFull')">☰</button>
</div>
<div class="toolbar-section">
<button onclick="execCommand('insertUnorderedList')">• List</button>
<button onclick="execCommand('insertOrderedList')">1. List</button>
<button onclick="execCommand('indent')">→ Indent</button>
<button onclick="execCommand('outdent')">← Outdent</button>
</div>
<div class="toolbar-section">
<button onclick="insertTable()">📊 Table</button>
<button onclick="insertImage()">🖼️ Image</button>
<button onclick="insertLink()">🔗 Link</button>
</div>
<div class="toolbar-section ai-section">
<button onclick="openAIWriter()" class="ai-button">
🤖 AI Writer
</button>
</div>
</div>
<!-- Ruler -->
<div class="ruler">
<div class="ruler-marks"></div>
</div>
<!-- Document Canvas -->
<div class="document-canvas">
<div class="page" id="document-editor"
contenteditable="true"
hx-trigger="blur"
hx-post="/api/docs/content"
hx-swap="none"
oninput="markModified()">
<!-- Document content here -->
</div>
</div>
<!-- Status Bar -->
<div class="status-bar">
<span id="page-info">Page 1 of 1</span>
<span id="word-count">0 words</span>
<span id="char-count">0 characters</span>
<span id="save-status">Saved</span>
</div>
<!-- AI Writer Modal -->
<div id="ai-writer-modal" class="modal hidden">
<div class="modal-content large">
<h3>🤖 AI Writer</h3>
<div class="ai-options">
<button onclick="aiAction('improve')">✨ Improve Writing</button>
<button onclick="aiAction('shorten')">📝 Make Shorter</button>
<button onclick="aiAction('expand')">📖 Expand</button>
<button onclick="aiAction('formal')">👔 Make Formal</button>
<button onclick="aiAction('casual')">😊 Make Casual</button>
<button onclick="aiAction('translate')">🌐 Translate</button>
</div>
<textarea id="ai-prompt" placeholder="Or describe what you want..."></textarea>
<div class="modal-actions">
<button onclick="closeAIWriter()">Cancel</button>
<button onclick="executeAI()" class="primary">Generate</button>
</div>
</div>
</div>
</div>
<style>
.docs-container {
display: flex;
flex-direction: column;
height: calc(100vh - 60px);
background: #525659;
}
.docs-toolbar {
display: flex;
gap: 12px;
padding: 8px 16px;
background: #f3f3f3;
border-bottom: 1px solid #d6d6d6;
flex-wrap: wrap;
}
.ruler {
height: 24px;
background: white;
border-bottom: 1px solid #ddd;
}
.document-canvas {
flex: 1;
overflow: auto;
padding: 40px;
display: flex;
justify-content: center;
}
.page {
width: 8.5in;
min-height: 11in;
background: white;
box-shadow: 0 2px 8px rgba(0,0,0,0.2);
padding: 1in;
font-family: 'Calibri', sans-serif;
font-size: 12pt;
line-height: 1.5;
outline: none;
}
.page:focus {
outline: none;
}
.status-bar {
display: flex;
justify-content: space-between;
padding: 4px 16px;
background: #f0f0f0;
font-size: 12px;
color: #666;
}
.ai-button {
background: linear-gradient(135deg, #667eea 0%, #764ba2 100%) !important;
color: white !important;
}
.ai-options {
display: flex;
flex-wrap: wrap;
gap: 8px;
margin-bottom: 16px;
}
.ai-options button {
padding: 8px 16px;
background: #f0f0f0;
border: 1px solid #ddd;
border-radius: 20px;
cursor: pointer;
}
.ai-options button:hover {
background: #e0e0e0;
}
</style>
<script>
function execCommand(command, value = null) {
document.execCommand(command, false, value);
document.getElementById('document-editor').focus();
}
function setFont(font) {
execCommand('fontName', font);
}
function setFontSize(size) {
execCommand('fontSize', size);
}
function applyStyle(style) {
const selection = window.getSelection();
if (!selection.rangeCount) return;
let tag = 'p';
switch (style) {
case 'heading1': tag = 'h1'; break;
case 'heading2': tag = 'h2'; break;
case 'heading3': tag = 'h3'; break;
case 'title': tag = 'h1'; break;
case 'subtitle': tag = 'h2'; break;
}
execCommand('formatBlock', tag);
}
function insertTable() {
const rows = prompt('Number of rows:', '3');
const cols = prompt('Number of columns:', '3');
if (rows && cols) {
let html = '<table border="1" style="border-collapse: collapse; width: 100%;">';
for (let r = 0; r < parseInt(rows); r++) {
html += '<tr>';
for (let c = 0; c < parseInt(cols); c++) {
html += '<td style="padding: 8px; border: 1px solid #ddd;"> </td>';
}
html += '</tr>';
}
html += '</table><p></p>';
execCommand('insertHTML', html);
}
}
function insertImage() {
const url = prompt('Image URL:');
if (url) {
execCommand('insertImage', url);
}
}
function insertLink() {
const url = prompt('Link URL:');
if (url) {
execCommand('createLink', url);
}
}
function markModified() {
document.getElementById('save-status').textContent = 'Modified';
updateWordCount();
}
function updateWordCount() {
const text = document.getElementById('document-editor').innerText;
const words = text.trim().split(/\s+/).filter(w => w.length > 0).length;
const chars = text.length;
document.getElementById('word-count').textContent = `${words} words`;
document.getElementById('char-count').textContent = `${chars} characters`;
}
function openAIWriter() {
document.getElementById('ai-writer-modal').classList.remove('hidden');
}
function closeAIWriter() {
document.getElementById('ai-writer-modal').classList.add('hidden');
}
async function aiAction(action) {
const selection = window.getSelection();
const selectedText = selection.toString();
if (!selectedText) {
alert('Please select some text first');
return;
}
const response = await fetch('/api/docs/ai', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ action, text: selectedText })
});
const result = await response.json();
if (result.text) {
execCommand('insertText', result.text);
}
}
</script>
{% endblock %}
6. M365/Office Competitive Analysis
Feature Comparison Matrix
| Feature | Microsoft 365 | Google Workspace | General Bots | Status |
|---|---|---|---|---|
| Email | Outlook | Gmail | ✅ Email | Complete |
| Calendar | Outlook Calendar | Google Calendar | ✅ Calendar | Complete |
| File Storage | OneDrive | Google Drive | ✅ .gbdrive | Complete |
| Word Processing | Word | Docs | 🔄 Docs Editor | In Progress |
| Spreadsheets | Excel | Sheets | 🔄 Sheets Editor | In Progress |
| Presentations | PowerPoint | Slides | 📋 Planned | Planned |
| Video Calls | Teams | Meet | 🔄 Meet | In Progress |
| Chat | Teams Chat | Google Chat | ✅ Chat | Complete |
| AI Assistant | Copilot | Gemini | ✅ Multi-LLM | Complete |
| Tasks | To Do/Planner | Tasks | ✅ Tasks | Complete |
| Forms | Forms | Forms | ✅ Forms | Complete |
| Notes | OneNote | Keep | 📋 Planned | Planned |
| Whiteboard | Whiteboard | Jamboard | 📋 Planned | Planned |
Missing Features to Implement
// Priority 1: Core Office Features
// - Presentations engine (PowerPoint/Slides equivalent)
// - Real-time collaboration (multiple users editing)
// - Version history and restore
// - Comments and suggestions mode

// Priority 2: Copilot/Gemini Parity
// - AI in documents (rewrite, summarize, expand)
// - AI in spreadsheets (formula generation, data analysis)
// - AI in email (compose, reply, summarize threads)
// - AI in meetings (transcription, summary, action items)

// Priority 3: Enterprise Features
// - Admin console
// - Compliance center (eDiscovery, legal hold)
// - Data loss prevention
// - Retention policies
// - Audit logs (basic version already exists)
7. Google/MS Graph API Compatibility
API Endpoints to Implement
// src/api/compat/google.rs
// Google Drive API compatible endpoints
// GET    /drive/v3/files
// POST   /drive/v3/files
// GET    /drive/v3/files/{fileId}
// DELETE /drive/v3/files/{fileId}
// PATCH  /drive/v3/files/{fileId}

// Google Calendar API compatible endpoints
// GET  /calendar/v3/calendars/{calendarId}/events
// POST /calendar/v3/calendars/{calendarId}/events
// GET  /calendar/v3/calendars/{calendarId}/events/{eventId}

// Google Gmail API compatible endpoints
// GET  /gmail/v1/users/{userId}/messages
// POST /gmail/v1/users/{userId}/messages/send
// GET  /gmail/v1/users/{userId}/threads

// src/api/compat/msgraph.rs
// Microsoft Graph API compatible endpoints
// GET  /v1.0/me/drive/root/children
// GET  /v1.0/me/messages
// POST /v1.0/me/sendMail
// GET  /v1.0/me/calendar/events
// POST /v1.0/me/calendar/events
// GET  /v1.0/me/contacts

pub fn configure_compat_routes(cfg: &mut web::ServiceConfig) {
    // Google API compatibility
    cfg.service(
        web::scope("/drive/v3")
            .route("/files", web::get().to(google_list_files))
            .route("/files", web::post().to(google_create_file))
            .route("/files/{fileId}", web::get().to(google_get_file)),
    );

    // MS Graph API compatibility
    cfg.service(
        web::scope("/v1.0")
            .route("/me/drive/root/children", web::get().to(graph_list_files))
            .route("/me/messages", web::get().to(graph_list_messages))
            .route("/me/sendMail", web::post().to(graph_send_mail)),
    );
}
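As an illustration of what one of these compat handlers might look like, here is a minimal sketch of google_list_files. The DriveFile type and list_drive_files helper are hypothetical stand-ins for the internal .gbdrive listing API, and a real handler would also honor Drive query parameters such as q and pageSize:

use actix_web::HttpResponse;
use serde_json::json;

// Hypothetical internal type, stubbed for illustration.
struct DriveFile {
    id: String,
    name: String,
    mime_type: String,
}

// Stand-in for the internal .gbdrive listing call.
async fn list_drive_files() -> Vec<DriveFile> {
    vec![DriveFile {
        id: "1".into(),
        name: "notes.md".into(),
        mime_type: "text/markdown".into(),
    }]
}

// Map internal entries onto the Drive v3 files.list response shape.
async fn google_list_files() -> HttpResponse {
    let files = list_drive_files().await;
    let items: Vec<_> = files
        .iter()
        .map(|f| json!({
            "kind": "drive#file",
            "id": f.id,
            "name": f.name,
            "mimeType": f.mime_type,
        }))
        .collect();
    HttpResponse::Ok().json(json!({ "kind": "drive#fileList", "files": items }))
}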
8. Copilot/Gemini Feature Parity
AI Features Checklist
| Feature | Copilot | Gemini | General Bots | BASIC Keyword |
|---|---|---|---|---|
| Chat with AI | ✅ | ✅ | ✅ | LLM |
| Web search | ✅ | ✅ | 📋 | SEARCH WEB |
| Image generation | ✅ | ✅ | ✅ | IMAGE |
| Code generation | ✅ | ✅ | ✅ | LLM |
| Document summary | ✅ | ✅ | ✅ | LLM with file |
| Email compose | ✅ | ✅ | ✅ | SEND MAIL |
| Meeting summary | ✅ | ✅ | 📋 | SUMMARIZE MEETING |
| Data analysis | ✅ | ✅ | ✅ | AGGREGATE |
| Create presentations | ✅ | ✅ | 📋 | CREATE PPT |
| Voice input | ✅ | ✅ | ✅ | Voice API |
| Multi-modal | ✅ | ✅ | ✅ | SEE, IMAGE |
| Tool use | ✅ | ✅ | ✅ | USE TOOL |
| Memory/context | ✅ | ✅ | ✅ | SET CONTEXT |
| Multi-turn | ✅ | ✅ | ✅ | Built-in |
9. Attachment System (Plus Button)
Implementation
// src/api/attachments.rs
#[derive(Debug, Serialize, Deserialize)]
pub struct Attachment {
    pub id: Uuid,
    pub message_id: Option<Uuid>,
    pub file_type: AttachmentType,
    pub file_name: String,
    pub file_size: i64,
    pub mime_type: String,
    pub storage_path: String,
    pub thumbnail_path: Option<String>,
    pub created_at: DateTime<Utc>,
}

#[derive(Debug, Serialize, Deserialize)]
pub enum AttachmentType {
    Image,
    Document,
    Audio,
    Video,
    Code,
    Archive,
    Other,
}

pub async fn upload_attachment(
    State(state): State<Arc<AppState>>,
    Extension(user): Extension<UserSession>,
    mut multipart: Multipart,
) -> Result<Json<Attachment>, ApiError> {
    while let Some(field) = multipart.next_field().await? {
        let file_name = field.file_name().unwrap_or("unnamed").to_string();
        let content_type = field
            .content_type()
            .unwrap_or("application/octet-stream")
            .to_string();
        let data = field.bytes().await?;

        // Determine attachment type from MIME type and file name
        let file_type = detect_attachment_type(&content_type, &file_name);

        // Store file
        let storage_path = store_attachment(&state, &user, &data, &file_name).await?;

        // Generate thumbnail for images/videos
        let thumbnail_path = if matches!(file_type, AttachmentType::Image | AttachmentType::Video) {
            Some(generate_thumbnail(&storage_path).await?)
        } else {
            None
        };

        // Create attachment record
        let attachment = Attachment {
            id: Uuid::new_v4(),
            message_id: None,
            file_type,
            file_name,
            file_size: data.len() as i64,
            mime_type: content_type,
            storage_path,
            thumbnail_path,
            created_at: Utc::now(),
        };

        // Save to database
        save_attachment(&state, &attachment).await?;
        return Ok(Json(attachment));
    }
    Err(ApiError::BadRequest("No file provided".to_string()))
}
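The handler above relies on a detect_attachment_type helper that is not shown. A plausible minimal sketch, classifying by MIME prefix first and falling back to the file extension (the exact extension lists are assumptions):

fn detect_attachment_type(mime: &str, file_name: &str) -> AttachmentType {
    // Lowercase extension after the last dot; empty string if there is no dot.
    let ext = file_name
        .rsplit_once('.')
        .map(|(_, e)| e.to_lowercase())
        .unwrap_or_default();
    if mime.starts_with("image/") {
        AttachmentType::Image
    } else if mime.starts_with("audio/") {
        AttachmentType::Audio
    } else if mime.starts_with("video/") {
        AttachmentType::Video
    } else if matches!(ext.as_str(), "zip" | "tar" | "gz" | "7z" | "rar") {
        AttachmentType::Archive
    } else if matches!(ext.as_str(), "rs" | "py" | "js" | "ts" | "bas" | "sql") {
        AttachmentType::Code
    } else if matches!(ext.as_str(), "pdf" | "docx" | "xlsx" | "pptx" | "txt" | "md") {
        AttachmentType::Document
    } else {
        AttachmentType::Other
    }
}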
10. Conversation Branching
Database Schema
-- Conversation branches
CREATE TABLE conversation_branches (
id UUID PRIMARY KEY,
parent_session_id UUID NOT NULL,
branch_session_id UUID NOT NULL,
branch_from_message_id UUID NOT NULL,
branch_name VARCHAR(255),
created_at TIMESTAMPTZ DEFAULT NOW(),
FOREIGN KEY (parent_session_id) REFERENCES sessions(id),
FOREIGN KEY (branch_session_id) REFERENCES sessions(id),
FOREIGN KEY (branch_from_message_id) REFERENCES messages(id)
);
Implementation
// src/api/branches.rs
pub async fn create_branch(
    State(state): State<Arc<AppState>>,
    Extension(user): Extension<UserSession>,
    Json(req): Json<CreateBranchRequest>,
) -> Result<Json<BranchResponse>, ApiError> {
    // Create a new session for the branch
    let branch_session = create_session(&state, user.user_id, user.bot_id).await?;

    // Copy messages from the current session up to the branch point
    copy_messages_to_branch(
        &state,
        user.id,
        branch_session.id,
        req.branch_from_message_id,
    )
    .await?;

    // Create the branch record
    let branch = ConversationBranch {
        id: Uuid::new_v4(),
        parent_session_id: user.id,
        branch_session_id: branch_session.id,
        branch_from_message_id: req.branch_from_message_id,
        branch_name: req.name,
        created_at: Utc::now(),
    };
    save_branch(&state, &branch).await?;

    Ok(Json(BranchResponse {
        branch_id: branch.id,
        session_id: branch_session.id,
    }))
}
UI Component
<!-- Message with branch option -->
<div class="message" data-message-id="{{ message.id }}">
<div class="message-content">{{ message.content }}</div>
<div class="message-actions">
<button onclick="branchFromMessage('{{ message.id }}')" title="Create branch">
🌿
</button>
<button onclick="copyMessage('{{ message.id }}')" title="Copy">
📋
</button>
</div>
</div>
<script>
async function branchFromMessage(messageId) {
const name = prompt('Name for this branch:', 'Branch ' + new Date().toLocaleString());
if (!name) return;
const response = await fetch('/api/chat/branch', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
branch_from_message_id: messageId,
name: name
})
});
const result = await response.json();
// result contains branch_id and session_id for the new branch
}
</script>
What’s New
This page has been consolidated into the Multi-Agent Orchestration documentation.
For the latest features, including:
- Agent-to-Agent (A2A) Protocol - Bots communicate and delegate tasks
- Cross-Session User Memory - User data persists across bots and sessions
- Dynamic Model Routing - Switch LLM models based on task requirements
- Hybrid RAG Search - Combined semantic + keyword search with RRF
- Code Sandbox - Safe Python/JavaScript/Bash execution
- Agent Reflection - Self-analysis for continuous improvement
- SSE Streaming - Real-time response streaming
Please see: Multi-Agent Orchestration
Quick Links
- Multi-Agent Orchestration - Complete guide to multi-agent systems
- Memory Management - User and bot memory systems
- Hybrid RAG Search - Search configuration and tuning
- Keywords Reference - All BASIC keywords
Multi-Agent Orchestration
General Bots supports sophisticated multi-agent systems where multiple specialized bots collaborate to handle complex tasks. This chapter covers the architecture, keywords, and best practices for building multi-agent solutions.
Overview
Multi-agent orchestration enables:
- Task specialization - Each bot focuses on what it does best
- Collaborative problem-solving - Bots work together on complex tasks
- Scalable architectures - Add new specialists without modifying existing bots
- Resilient systems - Failures are isolated and handled gracefully
Architecture
┌─────────────────────────────────────────────────────────────┐
│ Multi-Agent System │
├─────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────┐ A2A Protocol ┌──────────┐ │
│ │ │◄──────────────────►│ │ │
│ │ Sales │ │ Support │ │
│ │ Bot │ ┌──────────┐ │ Bot │ │
│ │ │◄──►│ │◄──►│ │ │
│ └──────────┘ │ Billing │ └──────────┘ │
│ │ Bot │ │
│ ┌──────────┐ │ │ ┌──────────┐ │
│ │ │◄──►└──────────┘◄──►│ │ │
│ │ Research │ │ Analytics│ │
│ │ Bot │ │ Bot │ │
│ │ │ │ │ │
│ └──────────┘ └──────────┘ │
│ │
└─────────────────────────────────────────────────────────────┘
Core Keywords
ADD BOT
Adds a bot to the current session with optional triggers and tools.
' Add a bot with keyword triggers
ADD BOT "billing-bot" TRIGGER ON "billing,invoice,payment"
' Add a bot with tool access
ADD BOT "analyst-bot" TOOLS "calculate,forecast,report"
' Add a bot with scheduled tasks
ADD BOT "monitor-bot" SCHEDULE "0 */1 * * *"
DELEGATE TO BOT
Sends a task to another bot and optionally waits for response.
' Fire-and-forget delegation
DELEGATE "Process this order" TO BOT "order-processor"
' Get response from delegation
result = DELEGATE "Calculate total for items" TO BOT "calculator-bot"
TALK "Total: " + result
' Delegation with timeout
result = DELEGATE "Analyze report" TO BOT "analyst-bot" TIMEOUT 60
BROADCAST TO BOTS
Sends a message to all bots in the session.
' Notify all bots of an event
BROADCAST "New customer signup: " + customerid TO BOTS
' Emergency shutdown signal
BROADCAST "SHUTDOWN" TO BOTS
TRANSFER CONVERSATION
Hands off the entire conversation to another bot.
' Transfer to specialist
TALK "Let me connect you with our billing specialist."
TRANSFER CONVERSATION TO "billing-bot"
' Transfer with context
SET CONTEXT "issue" AS "refund request"
SET CONTEXT "amount" AS "$150"
TRANSFER CONVERSATION TO "refunds-bot"
A2A Protocol
The Agent-to-Agent (A2A) protocol handles all inter-bot communication.
Message Types
| Type | Description | Use Case |
|---|---|---|
| Request | Ask bot to perform task | Task delegation |
| Response | Reply to a request | Return results |
| Broadcast | Message to all bots | Notifications |
| Delegate | Hand off task | Specialization |
| Collaborate | Joint task | Teamwork |
Message Structure
' A2A messages contain:
' - from_agent: Source bot ID
' - to_agent: Target bot ID
' - message_type: Request, Response, etc.
' - payload: The actual content
' - correlation_id: Links request/response
' - timestamp: When sent
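For reference, a sketch of how such an envelope could be represented in Rust; the field names follow the list above, and the payload is arbitrary JSON so bots can exchange any content (the platform's exact internal types are not shown in this chapter):

use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use uuid::Uuid;

#[derive(Debug, Serialize, Deserialize)]
pub enum A2aMessageType {
    Request,
    Response,
    Broadcast,
    Delegate,
    Collaborate,
}

#[derive(Debug, Serialize, Deserialize)]
pub struct A2aMessage {
    pub from_agent: String,
    pub to_agent: String,
    pub message_type: A2aMessageType,
    pub payload: serde_json::Value,   // arbitrary content
    pub correlation_id: Uuid,         // links a Response back to its Request
    pub timestamp: DateTime<Utc>,
}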
Configuration
name,value
a2a-enabled,true
a2a-timeout,30
a2a-max-hops,5
a2a-retry-count,3
a2a-queue-size,100
| Option | Default | Description |
|---|---|---|
| a2a-enabled | true | Enable A2A communication |
| a2a-timeout | 30 | Default timeout (seconds) |
| a2a-max-hops | 5 | Maximum delegation chain depth |
| a2a-retry-count | 3 | Retries on failure |
| a2a-queue-size | 100 | Max pending messages |
Memory Management
User Memory (Cross-Bot)
User memory is accessible across all bots, enabling seamless personalization.
' In any bot - store user preference
SET USER MEMORY "language", "pt-BR"
SET USER MEMORY "timezone", "America/Sao_Paulo"
' In any other bot - retrieve preference
language = GET USER MEMORY("language")
TALK "Olá!" IF language = "pt-BR"
Bot Memory (Per-Bot)
Bot memory is isolated to each bot for bot-specific state.
' In sales-bot
SET BOT MEMORY "deals_closed", dealscount
' In support-bot (different memory space)
SET BOT MEMORY "tickets_resolved", ticketcount
Session Memory (Temporary)
Session memory is shared within a conversation session.
' Store in session
SET "current_topic", "billing"
' Available to all bots in session
topic = GET "current_topic"
Memory Scope Comparison
| Memory Type | Scope | Persistence | Use Case |
|---|---|---|---|
| User Memory | Per user, all bots | Permanent | Preferences, profile |
| Bot Memory | Per bot, all users | Permanent | Bot state, counters |
| Session Memory | Per session | Session lifetime | Current context |
Model Routing
Different bots can use different models optimized for their tasks.
USE MODEL Keyword
' In customer service bot - use quality model
USE MODEL "quality"
' In quick-answer bot - use fast model
USE MODEL "fast"
' In code helper bot - use code model
USE MODEL "code"
' Let system decide
USE MODEL "auto"
Model Routing Strategies
| Strategy | Description |
|---|---|
| manual | Explicit model selection only |
| auto | System chooses based on query |
| load-balanced | Distribute for throughput |
| fallback | Try models in order |
Configuration
name,value
model-routing-strategy,auto
model-default,fast
model-fast,DeepSeek-R1-Distill-Qwen-1.5B-Q3_K_M.gguf
model-quality,claude-sonnet-4.5
model-code,codellama-7b.gguf
Hybrid RAG Search
Multi-agent systems benefit from shared knowledge bases with advanced search.
Configuration
name,value
rag-hybrid-enabled,true
rag-dense-weight,0.7
rag-sparse-weight,0.3
rag-reranker-enabled,true
How It Works
- Dense Search - Semantic/vector similarity (0.7 weight)
- Sparse Search - BM25 keyword matching (0.3 weight)
- Fusion - Reciprocal Rank Fusion combines results
- Reranking - Optional LLM reranking for quality
' Hybrid search is automatic when enabled
USE KB "company-knowledge"
result = FIND "customer refund policy"
' Returns results using both semantic and keyword matching
Code Sandbox
Bots can execute code in isolated sandboxes for data processing.
Supported Languages
' Python for data science
result = RUN PYTHON "
import pandas as pd
df = pd.DataFrame({'a': [1,2,3]})
print(df.sum().to_json())
"
' JavaScript for JSON processing
result = RUN JAVASCRIPT "
const data = {items: [1,2,3]};
console.log(JSON.stringify(data.items.map(x => x * 2)));
"
' Bash for system tasks
result = RUN BASH "ls -la /data"
Sandbox Configuration
name,value
sandbox-runtime,lxc
sandbox-timeout,30
sandbox-memory-mb,512
sandbox-cpu-percent,50
sandbox-network,false
Runtimes
| Runtime | Security | Performance | Requirements |
|---|---|---|---|
| LXC | High | Excellent | LXC installed |
| Docker | High | Good | Docker daemon |
| Firecracker | Highest | Good | Firecracker |
| Process | Low | Best | None (fallback) |
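A rough sketch of the lowest-security Process runtime (the no-dependency fallback) makes the tradeoff concrete: run the script with a wall-clock timeout and capture stdout. The container runtimes would additionally enforce the memory, CPU, and network limits from the configuration above. This sketch assumes tokio and the interpreter binaries are available; it is not the platform's actual sandbox code:

use std::time::Duration;
use tokio::{process::Command, time::timeout};

async fn run_process_sandbox(lang: &str, code: &str, secs: u64) -> std::io::Result<String> {
    // Pick an interpreter; bash is the fallback, mirroring RUN PYTHON/JAVASCRIPT/BASH.
    let (bin, args): (&str, Vec<&str>) = match lang {
        "python" => ("python3", vec!["-c", code]),
        "javascript" => ("node", vec!["-e", code]),
        _ => ("bash", vec!["-c", code]),
    };
    // Enforce the sandbox-timeout setting as a wall-clock limit.
    let output = timeout(Duration::from_secs(secs), Command::new(bin).args(args).output())
        .await
        .map_err(|_| std::io::Error::new(std::io::ErrorKind::TimedOut, "sandbox timeout"))??;
    Ok(String::from_utf8_lossy(&output.stdout).into_owned())
}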
Agent Reflection
Bots can self-analyze and improve through reflection.
Enable Reflection
' Enable self-reflection
BOT REFLECTION true
' Monitor specific metrics
BOT REFLECTION ON "conversation_quality"
BOT REFLECTION ON "response_accuracy"
Get Insights
' Retrieve reflection analysis
insights = BOT REFLECTION INSIGHTS()
PRINT "Quality Score: " + insights.qualityScore
PRINT "Issues: " + insights.issuesCount
FOR EACH suggestion IN insights.suggestions
PRINT "Suggestion: " + suggestion
NEXT suggestion
Reflection Metrics
| Metric | Description |
|---|---|
| conversation_quality | Overall conversation effectiveness |
| response_accuracy | Correctness of responses |
| user_satisfaction | Estimated user satisfaction |
| tone_appropriateness | Tone matches context |
| resolution_rate | Issues successfully resolved |
SSE Streaming
Real-time streaming for responsive multi-agent UIs.
Enable Streaming
name,value
sse-enabled,true
sse-heartbeat,30
sse-max-connections,1000
Client Integration
// Connect to SSE endpoint
const eventSource = new EventSource('/api/chat/stream?session=' + sessionId);
eventSource.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.type === 'token') {
// Streaming token
appendToMessage(data.content);
} else if (data.type === 'bot_switch') {
// Different bot responding
showBotIndicator(data.botName);
} else if (data.type === 'complete') {
// Response complete
finalizeMessage();
}
};
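On the server side, a handler along these lines could produce the event stream the client consumes. This is a minimal sketch using axum's SSE support, with a hard-coded token sequence standing in for real LLM output; the event shapes match the client code above, but the actual server implementation is not shown in this chapter:

use std::{convert::Infallible, time::Duration};
use axum::response::sse::{Event, KeepAlive, Sse};
use futures::stream::{self, Stream};
use serde_json::json;

async fn chat_stream() -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
    // In production these events would be driven by the LLM pipeline.
    let events = vec![
        json!({ "type": "token", "content": "Hel" }),
        json!({ "type": "token", "content": "lo!" }),
        json!({ "type": "complete" }),
    ];
    let stream = stream::iter(
        events
            .into_iter()
            .map(|e| Ok(Event::default().json_data(e).expect("serializable"))),
    );
    // Heartbeat interval mirrors the sse-heartbeat,30 setting above.
    Sse::new(stream).keep_alive(KeepAlive::new().interval(Duration::from_secs(30)))
}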
Patterns and Best Practices
Router Pattern
A central router bot directs queries to specialists.
' router-bot/start.bas
HEAR userquery
' Classify the query
category = LLM "Classify into: billing, technical, sales, general. Query: " + userquery
SWITCH category
CASE "billing"
result = DELEGATE userquery TO BOT "billing-bot"
CASE "technical"
result = DELEGATE userquery TO BOT "tech-bot"
CASE "sales"
result = DELEGATE userquery TO BOT "sales-bot"
CASE ELSE
result = LLM userquery
END SWITCH
TALK result
Expert Panel Pattern
Multiple bots provide perspectives on complex questions.
' Get input from multiple experts
question = "Should we expand into the European market?"
marketAnalysis = DELEGATE question TO BOT "market-analyst"
financialView = DELEGATE question TO BOT "finance-expert"
riskAssessment = DELEGATE question TO BOT "risk-assessor"
' Synthesize responses
synthesis = LLM "Synthesize these expert opinions into a recommendation:
Market: " + marketAnalysis + "
Finance: " + financialView + "
Risk: " + riskAssessment
BEGIN TALK
**Expert Panel Summary**
📊 **Market Analysis:** {marketAnalysis}
💰 **Financial View:** {financialView}
⚠️ **Risk Assessment:** {riskAssessment}
📋 **Recommendation:** {synthesis}
END TALK
Escalation Pattern
Automatic escalation when the bot can’t handle a query.
' First-line support bot
confidence = LLM "Rate your confidence (0-100) in answering: " + userquery
IF confidence < 50 THEN
' Escalate to specialist
TALK "Let me connect you with a specialist who can better help."
SET CONTEXT "escalation_reason" AS "low_confidence"
SET CONTEXT "original_query" AS userquery
TRANSFER CONVERSATION TO "senior-support-bot"
ELSE
' Handle normally
response = LLM userquery
TALK response
END IF
Supervisor Pattern
A supervisor bot monitors and coordinates workers.
' supervisor-bot/monitor.bas
SET SCHEDULE "*/5 * * * *" ' Run every 5 minutes
' Check all worker bots
workers = ["processor-1", "processor-2", "processor-3"]
FOR EACH worker IN workers
status = DELEGATE "HEALTH_CHECK" TO BOT worker TIMEOUT 10
IF status = "" OR status = "ERROR" THEN
' Worker unresponsive
SEND MAIL admin, "Bot Alert", worker + " is unresponsive"
DELEGATE "RESTART" TO BOT "bot-manager"
END IF
NEXT worker
Database Schema
Multi-agent systems use several database tables:
a2a_messages
Stores inter-agent communication.
| Column | Type | Description |
|---|---|---|
| id | UUID | Message ID |
| from_agent | VARCHAR | Sender bot ID |
| to_agent | VARCHAR | Recipient bot ID |
| message_type | VARCHAR | Request, Response, etc. |
| payload | JSONB | Message content |
| correlation_id | UUID | Links request/response |
| status | VARCHAR | pending, delivered, failed |
| created_at | TIMESTAMP | When created |
user_memory
Stores cross-bot user data.
| Column | Type | Description |
|---|---|---|
| user_id | UUID | User identifier |
| key | VARCHAR | Memory key |
| value | JSONB | Stored value |
| memory_type | VARCHAR | preference, fact, context |
| ttl | TIMESTAMP | Optional expiration |
agent_reflections
Stores reflection analysis results.
| Column | Type | Description |
|---|---|---|
| id | UUID | Reflection ID |
| bot_id | UUID | Bot that was analyzed |
| conversation_id | UUID | Analyzed conversation |
| quality_score | FLOAT | Overall quality (0-1) |
| insights | JSONB | Analysis details |
| created_at | TIMESTAMP | When analyzed |
Troubleshooting
Bot Not Responding to Delegation
- Check the bot is registered: LIST BOTS
- Verify A2A is enabled: a2a-enabled,true
- Check the timeout is sufficient
- Review bot logs for errors
Memory Not Sharing Between Bots
- Ensure you are using SET USER MEMORY, not SET BOT MEMORY
- Check user-memory-enabled,true
- Verify the same user identity across bots
Circular Delegation Detected
- Review delegation chains
- Increase a2a-max-hops if chains are legitimately deep
- Add guards to prevent loops:
hops = GET "delegation_hops"
IF hops > 3 THEN
TALK "I'll handle this directly."
' Don't delegate further
ELSE
SET "delegation_hops", hops + 1
DELEGATE task TO BOT "specialist"
END IF
See Also
- Memory Management - User and bot memory systems
- Hybrid RAG Search - Search configuration and tuning
Memory Management
General Bots provides a comprehensive memory system that enables persistent storage, cross-session continuity, and multi-agent data sharing. This chapter covers all memory types, their use cases, and best practices.
Overview
The memory system supports four distinct scopes:
| Memory Type | Scope | Persistence | Use Case |
|---|---|---|---|
| User Memory | Per user, all bots | Permanent | Preferences, profile, facts |
| Bot Memory | Per bot, all users | Permanent | Bot state, counters, config |
| Session Memory | Per session | Session lifetime | Current conversation context |
| Episodic Memory | Per conversation | Permanent | Conversation summaries |
User Memory
User memory follows users across all bots and sessions, enabling personalization and continuity.
Keywords
' Store user data
SET USER MEMORY "key", value
' Retrieve user data
value = GET USER MEMORY("key")
' Store a fact about the user
SET USER FACT "occupation", "software engineer"
' Get all user facts
facts = USER FACTS()
Examples
Personalized Greeting
' Check if returning user
name = GET USER MEMORY("name")
IF name = "" THEN
TALK "Hello! What's your name?"
HEAR name
SET USER MEMORY "name", name
TALK "Nice to meet you, " + name + "!"
ELSE
TALK "Welcome back, " + name + "!"
END IF
Cross-Bot Preferences
' In any bot - store preference
SET USER MEMORY "language", "pt-BR"
SET USER MEMORY "timezone", "America/Sao_Paulo"
' In any other bot - use preference
language = GET USER MEMORY("language")
IF language = "pt-BR" THEN
TALK "Olá! Como posso ajudar?"
ELSE
TALK "Hello! How can I help?"
END IF
User Facts for AI Context
' Store facts about the user
SET USER FACT "company", "Acme Corp"
SET USER FACT "role", "Product Manager"
SET USER FACT "interests", "AI, automation, productivity"
' Later, use facts to personalize AI responses
facts = USER FACTS()
SET CONTEXT "user_profile" AS facts
response = LLM "Help me draft a product roadmap"
' AI now knows user's role and interests
Database Schema
User memory is stored in the user_memory table:
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| user_id | UUID | User identifier |
| key | VARCHAR(255) | Memory key |
| value | JSONB | Stored value (any type) |
| memory_type | VARCHAR(50) | preference, fact, context |
| ttl | TIMESTAMP | Optional expiration |
| created_at | TIMESTAMP | Creation time |
| updated_at | TIMESTAMP | Last update |
Configuration
name,value
user-memory-enabled,true
user-memory-max-keys,1000
user-memory-default-ttl,0
| Option | Default | Description |
|---|---|---|
| user-memory-enabled | true | Enable user memory |
| user-memory-max-keys | 1000 | Max keys per user |
| user-memory-default-ttl | 0 | Default TTL (0 = no expiry) |
Bot Memory
Bot memory stores data at the bot level, shared across all users but isolated per bot.
Keywords
' Store bot data
SET BOT MEMORY "key", value
' Retrieve bot data
value = GET BOT MEMORY("key")
Examples
Bot Statistics
' Track bot usage
conversations = GET BOT MEMORY("total_conversations")
conversations = conversations + 1
SET BOT MEMORY "total_conversations", conversations
PRINT "This bot has had " + conversations + " conversations"
Feature Flags
' Store feature configuration
SET BOT MEMORY "enable_voice", true
SET BOT MEMORY "max_retries", 3
SET BOT MEMORY "welcome_message", "Hello! I'm your assistant."
' Use in logic
enableVoice = GET BOT MEMORY("enable_voice")
IF enableVoice THEN
' Enable voice features
END IF
Cache API Results
' Cache expensive API calls
cachedRates = GET BOT MEMORY("exchange_rates")
cachedTime = GET BOT MEMORY("exchange_rates_time")
IF cachedRates = "" OR (NOW() - cachedTime) > 3600 THEN
' Refresh cache
rates = GET "https://api.exchangerate.host/latest"
SET BOT MEMORY "exchange_rates", rates
SET BOT MEMORY "exchange_rates_time", NOW()
ELSE
rates = cachedRates
END IF
Use Cases
| Use Case | Example Key | Description |
|---|---|---|
| Counters | total_orders | Track bot-wide metrics |
| Config | max_items | Runtime configuration |
| Cache | api_cache_products | Cached API responses |
| State | last_sync_time | Operational state |
Session Memory
Session memory is temporary storage for the current conversation session.
Keywords
' Store in session
SET "key", value
' Retrieve from session
value = GET "key"
' Set context for AI
SET CONTEXT "topic" AS "billing inquiry"
Examples
Conversation State
' Track conversation flow
SET "current_step", "collecting_info"
SET "collected_name", username
SET "collected_email", useremail
' Later in conversation
step = GET "current_step"
IF step = "collecting_info" THEN
' Continue collecting
END IF
Multi-Turn Context
' Build context through conversation
SET CONTEXT "customer_id" AS customerid
SET CONTEXT "issue_type" AS "refund"
SET CONTEXT "order_id" AS orderid
' AI has full context for responses
response = LLM "Help resolve this customer issue"
Session Lifetime
- Created when user starts conversation
- Persists across messages in same conversation
- Cleared when conversation ends or times out
- Default timeout: 30 minutes of inactivity
Episodic Memory
Episodic memory stores summaries of past conversations for long-term context.
How It Works
- Conversation Ends - System detects conversation completion
- Summary Generated - LLM creates concise summary
- Stored - Summary saved with metadata
- Retrieved - Used in future conversations for context
Example
' System automatically creates episode summaries
' Example summary stored:
' {
' "conversation_id": "abc123",
' "summary": "User asked about refund policy, was satisfied with explanation",
' "topics": ["refunds", "policy"],
' "sentiment": "positive",
' "resolution": "resolved",
' "created_at": "2025-01-15T10:30:00Z"
' }
' In future conversations, retrieve relevant episodes
episodes = GET USER MEMORY("recent_episodes")
SET CONTEXT "previous_interactions" AS episodes
Configuration
name,value
episodic-memory-enabled,true
episodic-memory-threshold,4
episodic-memory-history,2
episodic-memory-model,fast
episodic-memory-max-episodes,100
episodic-memory-retention-days,365
episodic-memory-auto-summarize,true
Memory Patterns
Profile Builder Pattern
Build user profile progressively through conversations.
' Check what we know
profile = GET USER MEMORY("profile")
IF profile = "" THEN
profile = #{ }
END IF
' Fill in missing information naturally
IF profile.name = "" THEN
' Ask for name when appropriate
END IF
IF profile.preferences = "" THEN
' Learn preferences from behavior
END IF
' Update profile
SET USER MEMORY "profile", profile
Preference Learning Pattern
Learn preferences from user behavior.
' Track user choices
choice = HEAR selection
choices = GET USER MEMORY("choices_history")
IF choices = "" THEN choices = []
' Add new choice
choices = APPEND(choices, #{
choice: choice,
context: currentContext,
timestamp: NOW()
})
SET USER MEMORY "choices_history", choices
' Analyze patterns periodically
IF LEN(choices) >= 10 THEN
preferences = LLM "Analyze these choices and identify preferences: " + JSON(choices)
SET USER MEMORY "learned_preferences", preferences
END IF
Context Handoff Pattern
Pass context between bots in multi-agent scenarios.
' Sending bot: Store context for receiving bot
handoffContext = #{
topic: currentTopic,
userIntent: detectedIntent,
conversationSummary: summary,
relevantFacts: facts
}
SET USER MEMORY "handoff_context", handoffContext
' Transfer to specialist
TRANSFER CONVERSATION TO "specialist-bot"
' Receiving bot: Retrieve context
context = GET USER MEMORY("handoff_context")
SET CONTEXT "background" AS context.conversationSummary
SET CONTEXT "intent" AS context.userIntent
' Clear handoff context after use
SET USER MEMORY "handoff_context", ""
TTL Pattern
Use time-to-live for temporary data.
' Store with expiration (implementation depends on memory type)
' For session-like data in user memory:
SET USER MEMORY "temp_auth_code", #{
code: authCode,
expires: NOW() + 300 ' 5 minutes
}
' Check expiration
stored = GET USER MEMORY("temp_auth_code")
IF stored <> "" AND stored.expires > NOW() THEN
' Valid
ELSE
' Expired or not found
SET USER MEMORY "temp_auth_code", ""
END IF
Best Practices
Key Naming Conventions
' Use consistent prefixes
SET USER MEMORY "pref_language", "en" ' Preferences
SET USER MEMORY "pref_timezone", "UTC"
SET USER MEMORY "fact_name", "John" ' Facts
SET USER MEMORY "fact_company", "Acme"
SET USER MEMORY "ctx_last_topic", "sales" ' Context
SET USER MEMORY "cache_products", data ' Cached data
Don’t Store Sensitive Data
' ❌ DON'T: Store sensitive data
SET USER MEMORY "password", userPassword
SET USER MEMORY "ssn", socialSecurityNumber
SET USER MEMORY "credit_card", cardNumber
' ✅ DO: Store references only
SET USER MEMORY "payment_method_id", paymentId
SET USER MEMORY "verified", true
Handle Missing Data Gracefully
' Always check for empty/missing
name = GET USER MEMORY("name")
IF name = "" THEN
name = "there" ' Default value
END IF
TALK "Hello, " + name + "!"
Clean Up Old Data
' Periodic cleanup of old data
lastCleanup = GET BOT MEMORY("last_memory_cleanup")
IF lastCleanup = "" OR (NOW() - lastCleanup) > 86400 THEN
' Run cleanup logic
' Remove expired entries, old cache, etc.
SET BOT MEMORY "last_memory_cleanup", NOW()
END IF
Troubleshooting
Memory Not Persisting
- Check memory type - session memory doesn’t persist
- Verify database connection
- Check for key name typos (keys are case-sensitive)
- Review memory limits
Cross-Bot Memory Not Sharing
- Ensure you are using USER MEMORY, not BOT MEMORY
- Verify the same user identity
- Check the user-memory-enabled config
Memory Full Errors
- Clean up old/unused keys
- Increase user-memory-max-keys
- Use TTL for temporary data
- Consolidate related keys into objects
See Also
- SET USER MEMORY - Store user memory
- GET USER MEMORY - Retrieve user memory
- SET BOT MEMORY - Store bot memory
- GET BOT MEMORY - Retrieve bot memory
- Multi-Agent Orchestration - Cross-bot data sharing
Hybrid RAG Search
Hybrid search combines dense (semantic) and sparse (keyword) retrieval for better search quality than either method alone.
Overview
| Method | Strengths | Weaknesses |
|---|---|---|
| Dense (Semantic) | Synonyms, meaning, paraphrasing | Rare terms, exact matches |
| Sparse (BM25) | Exact terms, product codes, names | No semantic understanding |
| Hybrid | Best of both | Slightly more computation |
How It Works
User Query
│
├──────────────────┐
▼ ▼
Dense Search Sparse Search
(Weight: 0.7) (Weight: 0.3)
│ │
└────────┬─────────┘
▼
Reciprocal Rank Fusion
│
▼
Optional Reranking
│
▼
Final Results
Reciprocal Rank Fusion (RRF):
RRF_score(d) = Σ 1 / (k + rank_i(d))
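As a concrete sketch, the fusion step can be implemented in a few lines. Here k is the rag-rrf-k constant (default 60), and each list's contribution is scaled by its configured weight; per-list weighting is an assumption consistent with the rag-dense-weight/rag-sparse-weight settings below, not a confirmed detail of the implementation:

use std::collections::HashMap;

/// Weighted Reciprocal Rank Fusion over ranked lists of document IDs.
fn rrf_fuse(lists: &[(Vec<&str>, f64)], k: f64) -> Vec<(String, f64)> {
    let mut scores: HashMap<String, f64> = HashMap::new();
    for (docs, weight) in lists {
        for (i, doc) in docs.iter().enumerate() {
            let rank = (i + 1) as f64; // ranks are 1-based in the RRF formula
            *scores.entry((*doc).to_string()).or_insert(0.0) += weight / (k + rank);
        }
    }
    let mut fused: Vec<_> = scores.into_iter().collect();
    fused.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    fused
}

fn main() {
    let dense = (vec!["doc-a", "doc-b", "doc-c"], 0.7);
    let sparse = (vec!["doc-c", "doc-a", "doc-d"], 0.3);
    // doc-a ranks high in both lists, so it comes out on top after fusion.
    for (doc, score) in rrf_fuse(&[dense, sparse], 60.0) {
        println!("{doc}: {score:.5}");
    }
}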
Configuration
In config.csv:
name,value
rag-hybrid-enabled,true
rag-dense-weight,0.7
rag-sparse-weight,0.3
rag-top-k,10
rag-rrf-k,60
rag-reranker-enabled,false
Weight Tuning
| Content Type | Dense | Sparse | Use Case |
|---|---|---|---|
| Balanced | 0.7 | 0.3 | General purpose |
| Semantic-Heavy | 0.9 | 0.1 | Conversational, multilingual |
| Keyword-Heavy | 0.4 | 0.6 | Technical docs, product catalogs |
| Equal | 0.5 | 0.5 | When unsure |
Reranking
Optional LLM-based reranking for highest quality:
name,value
rag-reranker-enabled,true
rag-reranker-model,quality
rag-reranker-top-n,20
| Aspect | Without | With Reranking |
|---|---|---|
| Latency | ~50ms | ~500ms |
| Quality | Good | Excellent |
| Cost | None | LLM API cost |
Use for: Legal, medical, financial, compliance-critical queries.
Usage
Hybrid search is automatic when enabled. No code changes needed:
USE KB "company-policies"
' Queries automatically use hybrid search
Performance
| Metric | Target |
|---|---|
| MRR (Mean Reciprocal Rank) | > 0.7 |
| Recall@10 | > 0.9 |
| Latency P95 | < 200ms |
| Cache Hit Rate | > 40% |
Caching
name,value
rag-cache-enabled,true
rag-cache-ttl,3600
rag-cache-max-size,10000
Troubleshooting
| Issue | Solution |
|---|---|
| Poor results | Adjust weights for content type |
| High latency | Reduce rag-top-k, enable caching, disable reranking |
| Missing expected results | Check document indexed, verify no filters excluding it |
See Also
- Semantic Search - Dense search details
- Document Indexing - How documents are processed
- Knowledge Base - KB overview
Transfer to Human
The TRANSFER TO HUMAN keyword enables seamless handoff from bot conversations to human attendants. This is a critical feature for hybrid support workflows where complex issues require human intervention.
Overview
When a conversation requires human attention—whether due to customer request, issue complexity, or emotional escalation—the bot can transfer the conversation to a human attendant using the TRANSFER TO HUMAN keyword.
The system sets needs_human = true in the session context, which routes all subsequent messages from that customer to human attendants instead of the bot.
How It Works
Customer Message → Check needs_human
↓
┌───────────────┴───────────────┐
↓ ↓
needs_human=false needs_human=true
↓ ↓
Bot Processing Human Attendant
↓ ↓
TRANSFER TO HUMAN? Respond via
↓ Console/WhatsApp
Set needs_human=true ↓
↓ /resolve command
Notify Attendants ↓
needs_human=false
↓
Back to Bot
Configuration
Enable CRM Features
Add the following to your bot’s config.csv:
name,value
# Required: Enable CRM/Transfer functionality
crm-enabled,true
# Optional: Enable LLM-assisted attendant features
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true
The crm-enabled setting activates:
- Transfer to human functionality
- Attendant queue management
- WebSocket notifications
- LLM assist features (if configured)
Configure Attendants
Create attendant.csv in your bot’s .gbai folder:
id,name,channel,preferences,department,aliases
att-001,John Smith,all,sales,commercial,john;johnny;js
att-002,Jane Doe,web,support,customer-service,jane
att-003,Bob Wilson,whatsapp,technical,engineering,bob;bobby
att-004,Maria Santos,all,collections,finance,maria
| Column | Description |
|---|---|
| id | Unique identifier for the attendant |
| name | Display name shown to customers |
| channel | Channel they handle: all, web, whatsapp, teams, etc. |
| preferences | Type of work they prefer |
| department | Department for routing |
| aliases | Semicolon-separated nicknames for name matching |
The needs_human Flag
When TRANSFER TO HUMAN is called, the system sets needs_human = true in the session’s context data. This flag controls message routing:
| needs_human Value | Behavior |
|---|---|
| false (default) | Messages go to bot for processing |
| true | Messages go to human attendant |
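In routing terms, the dispatch reduces to a single check on this flag before the bot pipeline runs. A minimal sketch with hypothetical types (the platform's actual session and routing internals are not shown here):

// Hypothetical types for illustration only.
struct SessionContext {
    needs_human: bool,
}

enum Route {
    Bot,
    HumanAttendant,
}

fn route_message(ctx: &SessionContext) -> Route {
    // needs_human is set by TRANSFER TO HUMAN and cleared by /resolve.
    if ctx.needs_human {
        Route::HumanAttendant
    } else {
        Route::Bot
    }
}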
Checking the Flag in BASIC
' Check if conversation needs human
IF session.needs_human THEN
TALK "You're connected to our support team."
ELSE
TALK "I'm your AI assistant. How can I help?"
END IF
Manual Flag Control (Advanced)
' Force transfer without using keyword
SET SESSION "needs_human", true
SET SESSION "transfer_reason", "Customer requested human"
' Return to bot mode (usually done by attendant via /resolve)
SET SESSION "needs_human", false
Basic Usage
Transfer to Any Available Attendant
' Simple transfer to next available human
TRANSFER TO HUMAN
TALK result.message
Transfer to Specific Person
' Transfer to a specific attendant by name
TRANSFER TO HUMAN "John Smith"
' Also works with aliases
TRANSFER TO HUMAN "johnny"
' Or by ID
TRANSFER TO HUMAN "att-001"
Transfer to Department
' Transfer to sales department
TRANSFER TO HUMAN "sales"
' Transfer with priority
result = TRANSFER TO HUMAN "support", "high"
IF result.success THEN
TALK "You are now connected to " + result.assigned_to_name
ELSE
TALK result.message
END IF
Transfer with Context
' Transfer with department, priority, and context
TRANSFER TO HUMAN "technical", "urgent", "Customer needs help with API integration"
Advanced Usage
Extended Transfer with Named Parameters
' Using transfer_to_human_ex for full control
params = #{
name: "John",
department: "support",
priority: "high",
reason: "Complex billing issue",
context: "Customer has been a member since 2020, premium tier"
}
result = transfer_to_human_ex(params)
IF result.success THEN
TALK "Transferring you to " + result.assigned_to_name
TALK "Estimated wait time: " + result.estimated_wait_seconds + " seconds"
ELSE
TALK "Sorry, " + result.message
END IF
Conditional Transfer
' Transfer based on conversation context
sentiment = ANALYZE SENTIMENT conversation
IF sentiment.score < -0.5 THEN
' Frustrated customer - high priority
TRANSFER TO HUMAN "support", "urgent", "Customer appears frustrated"
ELSE IF topic = "billing" THEN
TRANSFER TO HUMAN "billing"
ELSE IF topic = "technical" THEN
TRANSFER TO HUMAN "technical"
ELSE
TRANSFER TO HUMAN
END IF
Check Availability Before Transfer
' Check if any attendants are available
attendants = GET "/api/attendance/attendants"
available = 0
FOR EACH att IN attendants
IF att.status = "online" THEN
available = available + 1
END IF
NEXT
IF available > 0 THEN
TRANSFER TO HUMAN
ELSE
TALK "Our team is currently unavailable. Would you like to:"
TALK "1. Leave a message"
TALK "2. Schedule a callback"
TALK "3. Continue with our AI assistant"
HEAR choice
END IF
Transfer Result
The TRANSFER TO HUMAN keyword returns a result object:
| Property | Type | Description |
|---|---|---|
| success | Boolean | Whether the transfer was successful |
| status | String | Status: queued, assigned, connected, no_attendants, crm_disabled, attendant_not_found, error |
| queue_position | Integer | Position in queue (if queued) |
| assigned_to | String | Attendant ID (if assigned) |
| assigned_to_name | String | Attendant name (if assigned) |
| estimated_wait_seconds | Integer | Estimated wait time |
| message | String | Human-readable message |
Handling Different Statuses
result = TRANSFER TO HUMAN "sales"
SELECT CASE result.status
CASE "assigned"
TALK "Great news! " + result.assigned_to_name + " will be with you shortly."
CASE "queued"
TALK "You are #" + result.queue_position + " in line."
TALK "Estimated wait: " + (result.estimated_wait_seconds / 60) + " minutes."
CASE "connected"
TALK "You are now connected with " + result.assigned_to_name
CASE "no_attendants"
TALK "No attendants are currently available."
TALK "Would you like to leave a message?"
CASE "attendant_not_found"
TALK "That person is not available. Let me find someone else."
TRANSFER TO HUMAN
CASE "crm_disabled"
TALK "I'm sorry, human support is not configured for this bot."
CASE ELSE
TALK "Something went wrong. Please try again."
END SELECT
LLM Tool Integration
The TRANSFER TO HUMAN keyword is automatically registered as an LLM tool, allowing the AI to decide when to transfer:
Tool Schema
{
"name": "transfer_to_human",
"description": "Transfer the conversation to a human attendant. Use when the customer explicitly asks to speak with a person, when the issue is too complex, or when emotional support is needed.",
"parameters": {
"type": "object",
"properties": {
"name": {
"type": "string",
"description": "If someone wants to talk to somebody specific, provide their name or alias"
},
"department": {
"type": "string",
"description": "Department to transfer to: sales, support, technical, billing, etc."
},
"priority": {
"type": "string",
"enum": ["normal", "high", "urgent"],
"default": "normal"
},
"reason": {
"type": "string",
"description": "Brief reason for the transfer"
}
}
}
}
AI-Initiated Transfer Example
When a customer says “I want to talk to a real person,” the LLM can automatically invoke:
{
"tool": "transfer_to_human",
"arguments": {
"reason": "Customer requested human assistance"
}
}
Priority Levels
| Priority | Value | Use Case |
|---|---|---|
| low | 0 | Non-urgent inquiries |
| normal | 1 | Standard requests (default) |
| high | 2 | Important customers, time-sensitive issues |
| urgent | 3 | Escalations, complaints, VIP customers |
Higher-priority conversations are served first in the queue.
Attendant Status
Attendants can have the following statuses:
| Status | Description |
|---|---|
| online | Available and ready for conversations |
| busy | Currently handling conversations |
| away | Temporarily unavailable |
| offline | Not working |
Only online attendants receive new conversation assignments.
Queue Status
Conversations in the queue have these statuses:
| Status | Description |
|---|---|
| waiting | Waiting for an attendant |
| assigned | Assigned but not yet active |
| active | Conversation in progress |
| resolved | Conversation completed |
| abandoned | Customer left before assignment |
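For example, a bot can report queue load before offering a transfer. The following is a minimal sketch mirroring the availability check shown earlier; it assumes each entry returned by /api/attendance/queue exposes a status field matching the table above.
' Sketch: count conversations still waiting (field names assumed)
queue = GET "/api/attendance/queue"
waiting = 0
FOR EACH item IN queue
IF item.status = "waiting" THEN
waiting = waiting + 1
END IF
NEXT
TALK "There are " + waiting + " customers waiting ahead of you."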
REST API Endpoints
Queue Management
| Endpoint | Method | Description |
|---|---|---|
| /api/attendance/queue | GET | List conversations in queue |
| /api/attendance/attendants | GET | List all attendants |
| /api/attendance/assign | POST | Assign conversation to attendant |
| /api/attendance/transfer | POST | Transfer between attendants |
| /api/attendance/resolve/:session_id | POST | Mark conversation resolved |
| /api/attendance/insights | GET | Get queue insights |
Example: Manual Transfer via API
' Transfer using direct API call
body = #{
session_id: session.id,
from_attendant_id: "att-001",
to_attendant_id: "att-002",
reason: "Specialist needed for technical issue"
}
result = POST "/api/attendance/transfer", body
Attendant Console
When CRM is enabled, the Attendant Console becomes available at /suite/attendant/. This provides a full-featured interface for human agents:
Features
- Queue Management: View and filter waiting conversations
- Real-time Updates: WebSocket-powered live updates
- AI Insights: Sentiment analysis, intent detection, suggested replies
- Transfer: Transfer conversations between attendants
- Customer Details: View customer history and information
- Quick Responses: Pre-configured response templates
Accessing the Console
- Enable crm-enabled,true in config.csv
- Create attendant.csv with your team
- Navigate to /suite/attendant/ or click “Attendant” in the Suite menu
WhatsApp Attendant Mode
Attendants can manage conversations directly from WhatsApp using commands:
Queue Commands
| Command | Description |
|---|---|
| /queue or /fila | View waiting conversations |
| /take or /pegar | Take next conversation |
| /status [online\|busy\|away\|offline] | Set availability |
| /transfer @name or /transferir | Transfer to another attendant |
| /resolve or /resolver | Mark complete, set needs_human=false |
AI Assist Commands
| Command | Description |
|---|---|
| /tips or /dicas | Get AI tips for current conversation |
| /polish <msg> or /polir | Polish message before sending |
| /replies or /respostas | Get 3 smart reply suggestions |
| /summary or /resumo | Get conversation summary |
| /help or /ajuda | Show all commands |
Example WhatsApp Session
Attendant: /queue
Bot: 📋 Queue (2 waiting)
1. João Silva (whatsapp) - Status: waiting
2. Maria Santos (web) - Status: waiting
Attendant: /take
Bot: ✅ Conversation assigned
Customer: João Silva
Session: abc12345
[Customer message arrives]
Customer: Preciso de ajuda com meu pedido
Attendant: /tips
Bot: 💡 Tips:
🎯 Customer needs help with order
✅ Ask for order number
📚 Check order status in system
Attendant: /polish oi joao, vou verificar seu pedido agora
Bot: ✨ Polished:
"Olá João! Vou verificar seu pedido agora mesmo."
Attendant: Olá João! Vou verificar seu pedido agora mesmo.
[Message sent to customer]
Attendant: /resolve
Bot: ✅ Conversation resolved
Customer returned to bot mode.
Best Practices
1. Set Clear Expectations
result = TRANSFER TO HUMAN
IF result.success AND result.status = "queued" THEN
TALK "You're now in line to speak with a team member."
TALK "Your position: #" + result.queue_position
TALK "While you wait, I can still help with simple questions."
END IF
2. Provide Context to Attendants
' Build context from conversation
context = "Customer inquiry about: " + detected_topic + ". "
context = context + "Sentiment: " + sentiment + ". "
context = context + "Key entities: " + entities.join(", ")
TRANSFER TO HUMAN "support", "normal", context
3. Handle Off-Hours
' Check business hours
hour = HOUR(NOW())
day = WEEKDAY(NOW())
IF day >= 1 AND day <= 5 AND hour >= 9 AND hour < 18 THEN
TRANSFER TO HUMAN
ELSE
TALK "Our team is available Monday-Friday, 9 AM - 6 PM."
TALK "Would you like to leave a message or schedule a callback?"
END IF
4. VIP Routing
' Check if customer is VIP
customer = FIND "customers", "email='" + user.email + "'"
IF customer.tier = "premium" OR customer.tier = "enterprise" THEN
TRANSFER TO HUMAN "vip-support", "high", "Premium customer"
ELSE
TRANSFER TO HUMAN
END IF
Troubleshooting
“CRM not enabled” Error
Add crm-enabled,true to your config.csv file.
“No attendants configured” Error
Create attendant.csv in your bot’s .gbai folder with at least one attendant.
Transfer Not Finding Attendant by Name
- Check that the name or alias is spelled correctly
- Verify the attendant exists in attendant.csv
- Aliases are case-insensitive and separated by semicolons
Queue Not Updating
- Ensure WebSocket connection is active
- Check that the attendant status is online
- Verify the bot has proper database permissions
Analytics & Insights
The attendance system provides analytics through the API:
Queue Insights
GET /api/attendance/insights/{session_id}
Response:
{
"session_id": "uuid",
"sentiment": "neutral",
"message_count": 15,
"suggested_reply": "How can I help?",
"key_topics": ["billing", "refund"],
"priority": "normal",
"language": "pt"
}
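These insights can also drive bot behavior directly. Below is a minimal BASIC sketch, assuming the priority field shown in the response above and the GET keyword used elsewhere in this guide.
' Sketch: escalate when insights flag a high-priority conversation
insights = GET "/api/attendance/insights/" + session.id
IF insights.priority = "high" THEN
TRANSFER TO HUMAN "support", "high", "Flagged high-priority by queue insights"
END IF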
LLM-Powered Analytics
When attendant-sentiment-analysis is enabled:
POST /api/attendance/llm/sentiment
Response:
{
"overall": "negative",
"score": -0.6,
"emotions": [{"name": "frustration", "intensity": 0.8}],
"escalation_risk": "high",
"urgency": "high",
"emoji": "😟"
}
Troubleshooting
Customer Stuck in Human Mode
If a customer is stuck with needs_human=true after the issue is resolved:
- Attendant uses the /resolve command
- Or manually via API: POST /api/attendance/resolve/{session_id}
Messages Not Reaching Attendant
- Check crm-enabled,true in config.csv
- Verify attendant.csv exists with valid entries
- Ensure attendant status is online
- Check WebSocket connection in browser console
Attendant Commands Not Working on WhatsApp
- Verify phone number is in attendant.csv
- Phone must match exactly (with country code)
- Check that bot is receiving webhooks
See Also
- LLM-Assisted Attendant - AI copilot features
- Attendance Queue Module - Full queue configuration
- Human Approval - Approval workflows
- CRM Automations - Sales, collections, scheduling
- WhatsApp Setup - Channel configuration
LLM-Assisted Attendant Features
General Bots provides AI-powered assistance to human attendants during customer conversations. These features help attendants respond faster, more professionally, and with better context awareness.
Overview
When the bot transfers a conversation to a human attendant (via TRANSFER TO HUMAN), the LLM orchestrator continues working in the background to assist the human. This creates a hybrid experience where AI augments human capability rather than replacing it.
The system uses the same PROMPT.md and bot personality configured for the bot, ensuring consistency between bot responses and attendant assistance.
Features
| Feature | Config Key | Description |
|---|---|---|
| Real-time Tips | attendant-llm-tips | Contextual tips when customer messages arrive |
| Message Polish | attendant-polish-message | Improve grammar and tone before sending |
| Smart Replies | attendant-smart-replies | Generate 3 contextual reply suggestions |
| Auto Summary | attendant-auto-summary | Summarize conversation when attendant joins |
| Sentiment Analysis | attendant-sentiment-analysis | Real-time emotional state tracking |
Configuration
Add these settings to your bot’s config.csv:
name,value
# Enable all LLM assist features
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true
# Optional: Set bot personality for context
bot-system-prompt,You are a friendly customer service assistant for Acme Corp
bot-description,Premium support for enterprise customers
Selective Enablement
Enable only the features you need:
name,value
attendant-smart-replies,true
attendant-sentiment-analysis,true
Feature Details
1. Real-time Tips (attendant-llm-tips)
When a customer sends a message, the LLM analyzes it and provides actionable tips to the attendant.
Tip Types
| Type | Icon | Description |
|---|---|---|
| intent | 🎯 | What the customer wants |
| action | ✅ | Suggested action to take |
| warning | ⚠️ | Sentiment or escalation concern |
| knowledge | 📚 | Relevant info to share |
| history | 📜 | Insight from conversation history |
| general | 💡 | General helpful advice |
Example Tips
Customer says: “This is ridiculous! I’ve been waiting 3 days for a response!”
Tips generated:
- ⚠️ Customer is frustrated - use empathetic language and apologize
- 🎯 Customer has been waiting for support response
- ✅ Acknowledge the delay and provide immediate assistance
API Usage
' Internal API - automatically called by UI
POST /api/attendance/llm/tips
{
"session_id": "uuid",
"customer_message": "message text",
"history": [{"role": "customer", "content": "..."}]
}
2. Message Polish (attendant-polish-message)
Before sending, attendants can polish their message with one click. The LLM improves grammar, clarity, and tone while preserving the original meaning.
Supported Tones
- professional (default)
- friendly
- empathetic
- formal
Example
Original: “ya we can do that but u need to wait til tmrw”
Polished: “Yes, we can certainly help with that! Please allow until tomorrow for us to process your request.”
Changes: Fixed grammar, improved clarity, added professional tone
API Usage
POST /api/attendance/llm/polish
{
"session_id": "uuid",
"message": "original message",
"tone": "professional"
}
Response:
{
"success": true,
"original": "ya we can do that...",
"polished": "Yes, we can certainly...",
"changes": ["Fixed grammar", "Improved tone"]
}
3. Smart Replies (attendant-smart-replies)
Generate three contextually appropriate reply suggestions based on the conversation history and bot personality.
Reply Categories
- greeting: Opening responses
- answer: Direct answers to questions
- acknowledgment: Empathetic acknowledgments
- solution: Problem-solving responses
- follow_up: Continuation questions
- closing: Conversation wrap-up
Example
Customer: “How do I reset my password?”
Suggested Replies:
- Empathetic: “I understand how frustrating it can be when you can’t access your account. I’ll help you reset your password right away.”
- Solution-focused: “To reset your password, please go to the login page and click ‘Forgot Password’. You’ll receive an email with reset instructions.”
- Follow-up: “I can help you with that! Are you trying to reset the password for your main account or a sub-account?”
API Usage
POST /api/attendance/llm/smart-replies
{
"session_id": "uuid",
"history": [
{"role": "customer", "content": "How do I reset my password?"},
{"role": "attendant", "content": "Hi! Let me help you with that."}
]
}
4. Auto Summary (attendant-auto-summary)
When an attendant takes a conversation, they receive an instant summary of what’s happened so far. This is especially useful for:
- Long conversations
- Transfers between attendants
- Complex multi-issue discussions
Summary Contents
| Field | Description |
|---|---|
| brief | One-sentence overview |
| key_points | Main discussion points |
| customer_needs | What the customer wants |
| unresolved_issues | Open items |
| sentiment_trend | Improving/stable/declining |
| recommended_action | What to do next |
| message_count | Number of messages |
| duration_minutes | Conversation length |
Example Summary
{
"brief": "Customer requesting refund for damaged product received yesterday",
"key_points": [
"Order #12345 arrived damaged",
"Customer sent photos as proof",
"Previous agent offered replacement"
],
"customer_needs": [
"Full refund instead of replacement",
"Confirmation email"
],
"unresolved_issues": [
"Refund approval pending"
],
"sentiment_trend": "stable",
"recommended_action": "Escalate to supervisor for refund approval"
}
API Usage
GET /api/attendance/llm/summary/{session_id}
5. Sentiment Analysis (attendant-sentiment-analysis)
Real-time analysis of customer emotional state to help attendants respond appropriately.
Analysis Components
| Component | Values | Description |
|---|---|---|
| overall | positive, neutral, negative | General sentiment |
| score | -1.0 to 1.0 | Numeric sentiment score |
| emotions | List | Detected emotions with intensity |
| escalation_risk | low, medium, high | Risk of escalation |
| urgency | low, normal, high, urgent | Message urgency |
| emoji | 😊😐😟 | Visual indicator |
Example Analysis
Customer message: “I’ve been trying to get help for TWO WEEKS! This is absolutely unacceptable!”
{
"overall": "negative",
"score": -0.8,
"emotions": [
{"name": "frustration", "intensity": 0.9},
{"name": "anger", "intensity": 0.7}
],
"escalation_risk": "high",
"urgency": "high",
"emoji": "😟"
}
The UI shows a warning: ⚠️ High escalation risk detected
API Usage
POST /api/attendance/llm/sentiment
{
"session_id": "uuid",
"message": "customer message",
"history": []
}
WhatsApp Attendant Commands
Attendants using WhatsApp can access LLM assist features via commands:
Queue Management
| Command | Description |
|---|---|
| /queue or /fila | View waiting conversations |
| /take or /pegar | Take next conversation |
| /status [online\|busy\|away\|offline] | Set your status |
| /transfer @name or /transferir | Transfer conversation |
| /resolve or /resolver | Mark as resolved |
AI Assistance
| Command | Description |
|---|---|
| /tips or /dicas | Get tips for current conversation |
| /polish <message> or /polir | Polish a message before sending |
| /replies or /respostas | Get smart reply suggestions |
| /summary or /resumo | Get conversation summary |
| /help or /ajuda | Show all commands |
Example Usage
Attendant: /queue
Bot: 📋 Queue (3 waiting)
1. John Smith (whatsapp)
Status: waiting | ID: abc123
2. Maria Santos (web)
Status: waiting | ID: def456
...
Attendant: /take
Bot: ✅ Conversation assigned
Customer: John Smith
Session: abc123
Attendant: /tips
Bot: 💡 Tips for this conversation
🎯 Customer is asking about billing
✅ Check their account status before responding
⚠️ Previous conversation ended unresolved
Attendant: /polish thank u for waiting ill check ur account now
Bot: ✨ Polished message
"Thank you for your patience! I'll check your account right now."
Changes:
• Fixed grammar
• Improved professionalism
Attendant: /resolve
Bot: ✅ Conversation resolved
Session abc123 marked as resolved.
Web Attendant Console
The web attendant console (/suite/attendant/) integrates LLM assist features directly:
Insights Panel
The right sidebar shows:
- Customer Sentiment - Real-time with emoji indicator
- Detected Intent - What the customer wants
- Conversation Summary - Auto-generated when you select a conversation
Suggested Replies
Below the insights, three AI-generated reply suggestions appear. Click any suggestion to insert it into the message input.
Polish Button
The ✨ button next to the message input polishes your text before sending.
Tips Notifications
When a customer sends a message:
- Tips appear in the insights panel
- High-priority tips show as toast notifications
- Warning tips (escalation risk) are highlighted
Bot Personality Integration
LLM assist uses your bot’s personality when generating suggestions. Set this in config.csv:
name,value
bot-system-prompt,You are a friendly tech support agent for CloudSoft Inc. Be helpful and patient.
bot-description,Enterprise software support
Or in your start.bas header:
REM CloudSoft Support Bot
REM Friendly, patient, and technically knowledgeable
REM Always offer to escalate complex issues
TALK "Welcome to CloudSoft Support!"
The LLM reads these comments to understand the bot’s personality and applies the same tone to:
- Smart reply suggestions
- Message polishing
- Tips generation
API Reference
Endpoints
| Method | Endpoint | Description |
|---|---|---|
| POST | /api/attendance/llm/tips | Generate tips |
| POST | /api/attendance/llm/polish | Polish message |
| POST | /api/attendance/llm/smart-replies | Generate replies |
| GET | /api/attendance/llm/summary/{session_id} | Get summary |
| POST | /api/attendance/llm/sentiment | Analyze sentiment |
| GET | /api/attendance/llm/config/{bot_id} | Get config |
Check Configuration
GET /api/attendance/llm/config/{bot_id}
Response:
{
"tips_enabled": true,
"polish_enabled": true,
"smart_replies_enabled": true,
"auto_summary_enabled": true,
"sentiment_enabled": true,
"any_enabled": true
}
Fallback Behavior
When the LLM is unavailable, the system provides fallback functionality:
| Feature | Fallback |
|---|---|
| Tips | Keyword-based analysis (urgent, problem, question) |
| Polish | Returns original message |
| Smart Replies | Generic template replies |
| Summary | Basic message count and duration |
| Sentiment | Keyword-based positive/negative detection |
Best Practices
1. Start with Smart Replies
If you’re unsure which features to enable, start with attendant-smart-replies. It provides immediate value with low overhead.
2. Enable Sentiment for High-Volume Support
For teams handling many conversations, attendant-sentiment-analysis helps prioritize frustrated customers.
3. Use Polish for Quality Consistency
Enable attendant-polish-message to ensure consistent, professional communication regardless of individual writing skills.
4. Tips for Complex Products
For products with many features or complex workflows, attendant-llm-tips helps attendants quickly understand context.
5. Summary for Shift Changes
Enable attendant-auto-summary if your team has shift changes or frequent transfers between attendants.
Troubleshooting
“Feature is disabled” Message
Add the feature to your config.csv:
attendant-smart-replies,true
Slow Response Times
LLM calls add latency. If responses are slow:
- Use a faster LLM model
- Enable only essential features
- Check your llm-url configuration
Generic Suggestions
If suggestions seem generic:
- Set bot-system-prompt in config.csv
- Add personality comments to start.bas
- Ensure conversation history is being passed
WhatsApp Commands Not Working
- Verify the attendant is registered in attendant.csv
- Check that the phone number matches exactly
- Ensure crm-enabled,true is set
See Also
- Transfer to Human - Bot-to-human handoff
- Attendance Queue - Queue configuration
- LLM Configuration - LLM setup
- config.csv Format - Configuration reference
Chapter 12: Authentication & Security
User authentication and permission management for BotServer.
Overview
BotServer provides enterprise-grade security with flexible authentication options and granular permissions.
Authentication Methods
| Method | Use Case |
|---|---|
| Session Token | Web/API access |
| OAuth2/OIDC | SSO integration |
| API Key | Service accounts |
| Bot Auth | Bot-to-bot communication |
Quick Start
' Check if user is authenticated
IF user.authenticated THEN
TALK "Welcome, " + user.name
ELSE
TALK "Please log in first"
END IF
Security Features
- Password Hashing: Argon2 with secure defaults
- Session Management: Cryptographic tokens, configurable expiry
- Rate Limiting: Prevent brute force attacks
- Audit Logging: Track all authentication events
- Encryption: AES-GCM for data at rest
Permission Levels
| Level | Access |
|---|---|
| admin | Full system access |
| manager | Bot management |
| user | Standard access |
| guest | Read-only |
Configuration
name,value
auth-session-ttl,3600
auth-max-attempts,5
auth-lockout-duration,900
Chapter Contents
- User Authentication - Login flows
- Password Security - Password policies
- API Endpoints - Auth API reference
- Bot Authentication - Service accounts
- Security Features - Protection mechanisms
- Security Policy - Best practices
- Compliance Requirements - GDPR, LGPD, HIPAA
- Permissions Matrix - Access control
- User vs System Context - Execution contexts
See Also
- REST API - API authentication
- Configuration - Auth settings
User Authentication
General Bots uses a directory service component for user authentication and authorization. No passwords are stored internally in General Bots.
Overview
Authentication in General Bots is handled entirely by the directory service, which provides:
- User identity management
- OAuth 2.0 / OpenID Connect (OIDC) authentication
- Single Sign-On (SSO) capabilities
- Multi-factor authentication (MFA)
- User and organization management
- Role-based access control (RBAC)
Architecture
Directory Service Integration
General Bots integrates with the directory service through:
- DirectoryClient: Client for API communication
- AuthService: Service layer for authentication operations
- OIDC Flow: Standard OAuth2/OIDC authentication flow
- Service Account: For administrative operations
No Internal Password Storage
- No password_hash columns: Users table only stores directory user IDs
- No Argon2 hashing: All password operations handled by directory service
- No password reset logic: Managed through directory service’s built-in flows
- Session tokens only: General Bots only manages session state
Authentication Flow
Authentication Architecture
User Registration
- User registration request sent to directory service
- Directory service creates user account
- User ID returned to BotServer
- General Bots creates local user reference
- Session established with General Bots
User Login
- User redirected to directory service login page
- Credentials validated by directory service
- OIDC tokens returned via callback
- General Bots validates tokens
- Local session created
- Session token issued to client
Token Validation
- Client includes session token
- General Bots validates local session
- Optional: Refresh with directory service if expired
- User context loaded from directory service
- Request processed with user identity
Directory Service Configuration
Auto-Configuration
During bootstrap, General Bots automatically:
- Installs directory service via installer.rs
- Configures directory service with PostgreSQL
- Creates default organization
- Sets up service account
- Creates initial admin user
- Configures OIDC application
Database Schema
Users Table (Simplified)
| Column | Type | Description |
|---|---|---|
| id | UUID | Internal General Bots ID |
| directory_id | TEXT | User ID in directory service |
| username | TEXT | Cached username |
| email | TEXT | Cached email |
| created_at | TIMESTAMPTZ | First login time |
| updated_at | TIMESTAMPTZ | Last sync with directory |
Note: No password_hash or any password-related fields exist.
User Sessions Table
| Column | Type | Description |
|---|---|---|
| id | UUID | Session ID |
| user_id | UUID | Reference to users table |
| session_token | TEXT | General Bots session token |
| directory_token | TEXT | Cached OIDC token |
| expires_at | TIMESTAMPTZ | Session expiration |
| created_at | TIMESTAMPTZ | Session start |
Authentication Endpoints
Login Initiation
GET /auth/login
Redirects to Zitadel login page with OIDC parameters.
OAuth Callback
GET /auth/callback?code=...&state=...
Handles return from Zitadel after successful authentication.
Logout
POST /auth/logout
Terminates local session and optionally triggers Zitadel logout.
Session Validation
GET /auth/validate
Headers: Authorization: Bearer {session_token}
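An illustrative response is shown below; the field names are assumptions based on the description of this endpoint later in the API Endpoints chapter (a validity flag, the user identifier, and the expiration timestamp).
{
  "valid": true,
  "user_id": "uuid",
  "expires_at": "2025-06-01T12:00:00Z"
}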
Directory Service Features
User Management
- Create, update, delete users
- Password reset flows
- Email verification
- Profile management
- Password policies (managed in Zitadel)
- Account locking
- Password recovery
Multi-Factor Authentication
Configured in Zitadel:
- TOTP (Time-based One-Time Passwords)
- WebAuthn/FIDO2
- SMS OTP (if configured)
- Email OTP
Single Sign-On
- One login for all applications
- Session management across services
- Centralized user directory
- External IdP integration
Organizations
- Multi-tenant support
- Organization-specific policies
- Delegated administration
- User isolation
Directory Service Integration
Directory Client Implementation
Located in src/directory/client.rs:
- Manages API communication
- Handles token refresh
- Caches access tokens
- Provides user operations
AuthService
Located in src/directory/mod.rs:
- High-level authentication operations
- Session management
- User profile caching
- Group/role management
Security Benefits
Centralized Security
- Professional identity platform
- Regular security updates
- Compliance certifications
- Audit logging
No Password Liability
- No password storage risks
- No hashing implementation errors
- No password database leaks
- Reduced compliance burden
Advanced Features
- Passwordless authentication
- Adaptive authentication
- Risk-based access control
- Session security policies
User Operations
Creating Users
Creating users via Directory Client:
- Username: john_doe
- Email: john@example.com
- First name: John
- Last name: Doe
- Password: Set through Directory UI or email flow
Getting User Info
User information is fetched from the Directory service using the directory ID.
Managing Sessions
Sessions are managed locally by General Bots but authenticated through Directory Service:
- Session creation after Directory auth
- Local session tokens for performance
- Periodic validation with Zitadel
- Session termination on logout
Default Users
During bootstrap, the system creates:
- Admin User
  - Username: admin (configurable)
  - Email: admin@localhost
  - Password: Randomly generated (displayed once during setup)
  - Role: Administrator
- Regular User
  - Username: user
  - Email: user@default
  - Password: Randomly generated (displayed once during setup)
  - Role: Standard user
Groups and Roles
Organization Management
- Organizations created in Zitadel
- Users assigned to organizations
- Roles defined per organization
- Permissions inherited from roles
Role-Based Access
- Admin: Full system access
- User: Standard bot interaction
- Custom roles: Defined in Zitadel
Monitoring and Audit
Directory Service Audit Logs
- All authentication events logged
- User actions tracked
- Administrative changes recorded
- Security events monitored
Session Metrics
General Bots tracks:
- Active sessions count
- Session creation rate
- Failed authentication attempts
- Token refresh frequency
Troubleshooting
Common Issues
- Zitadel Connection Failed
  - Check Zitadel is running on port 8080
  - Verify ZITADEL_ISSUER_URL
  - Check network connectivity
- Authentication Fails
  - Verify client credentials
  - Check redirect URI configuration
  - Review Zitadel logs
- Session Issues
  - Clear browser cookies
  - Check session expiry settings
  - Verify token refresh logic
Best Practices
- Use Zitadel UI: Manage users through Zitadel interface
- Configure MFA: Enable multi-factor for admin accounts
- Regular Updates: Keep Zitadel updated
- Monitor Logs: Review authentication logs regularly
- Session Timeout: Configure appropriate session duration
- Secure Communication: Use HTTPS in production
Migration from Other Systems
When migrating from password-based systems:
- Export user data (without passwords)
- Import users into Zitadel
- Force password reset for all users
- Update application to use OIDC flow
- Remove password-related code
Summary
General Bots’ integration with the Directory Service provides enterprise-grade authentication without the complexity and risk of managing passwords internally. All authentication operations are delegated to the Directory Service, while General Bots focuses on session management and bot interactions.
Password Security
General Bots delegates all password security to the Directory Service (currently Zitadel, with migration to Keycloak possible), an enterprise-grade identity management platform. No passwords are ever stored, hashed, or managed within General Bots itself.
Overview
Password security is handled entirely by Zitadel, which provides:
- Industry-standard password hashing (Argon2/bcrypt)
- Configurable password policies
- Password history and rotation
- Breach detection
- Self-service password recovery
No Internal Password Management
What General Bots Does NOT Do
- No password storage: No password or hash columns in database
- No hashing implementation: No Argon2/bcrypt code in BotServer
- No password validation: All validation done by Zitadel
- No password reset logic: Handled through Zitadel workflows
- No password policies: Configured in Zitadel admin console
What General Bots DOES Do
- Redirects to Zitadel for authentication
- Stores Zitadel user IDs
- Manages local session tokens
- Caches user profile information
- Validates sessions locally for performance
Zitadel Password Security
Hashing Algorithm
Zitadel uses industry-standard algorithms:
- Default: Argon2id (recommended)
- Alternative: bcrypt (for compatibility)
- Configurable: Parameters can be adjusted
- Automatic: Rehashing on algorithm updates
Password Policies
Configured in Zitadel admin console:
- Minimum length (default: 8 characters)
- Maximum length (configurable)
- Character requirements (uppercase, lowercase, numbers, symbols)
- Complexity rules
- Common password blacklist
- Password history (prevent reuse)
- Expiration policies
Password Storage in Zitadel
Zitadel stores:
- Hashed passwords (never plaintext)
- Salt per password
- Algorithm identifier
- Hash parameters
- Password history
- Last changed timestamp
Configuration
Setting Password Policies
Access Zitadel admin console:
- Navigate to Settings → Password Complexity
- Configure requirements:
- Min/max length
- Required character types
- Expiry settings
- Save changes (applies immediately)
Example Policy Configuration
In Zitadel UI or API:
{
"minLength": 12,
"maxLength": 128,
"hasUppercase": true,
"hasLowercase": true,
"hasNumber": true,
"hasSymbol": true,
"passwordHistory": 5,
"expiryDays": 90
}
Password Reset Flow
User-Initiated Reset
- User clicks “Forgot Password” on Zitadel login
- Email sent with reset link
- User clicks link (time-limited)
- New password entered in Zitadel UI
- Password validated against policy
- Hash updated in Zitadel database
- User can login with new password
Admin-Initiated Reset
- Admin accesses Zitadel console
- Navigates to user management
- Triggers password reset
- User receives reset email
- Same flow as user-initiated
Security Features
Breach Detection
Zitadel includes:
- Have I Been Pwned integration
- Checks passwords against breach databases
- Warns users of compromised passwords
- Forces reset if detected in breach
Multi-Factor Authentication
Additional security beyond passwords:
- TOTP (Google Authenticator, etc.)
- WebAuthn/FIDO2 keys
- SMS OTP (if configured)
- Email verification codes
Account Protection
- Account lockout after failed attempts
- CAPTCHA after threshold
- IP-based rate limiting
- Suspicious activity detection
- Passwordless options available
Integration Points
Bootstrap Process
During setup, General Bots:
- Installs Directory Service (Zitadel)
- Configures database connection
- Creates admin account with randomly generated password
- Password is displayed once during initial setup
Authentication Flow
- User enters credentials in Directory Service UI
- Directory Service validates password
- OIDC tokens issued
- General Bots receives tokens
- No password ever touches General Bots
Session Management
After Directory Service authentication:
- General Bots creates local session
- Session token generated (not password-related)
- User ID linked to Directory Service ID
- No password data stored
Default Credentials
Initial Admin Account
Created during bootstrap:
- Username: admin
- Password: Randomly generated
- Displayed once during initial setup
- Should be stored securely or changed immediately
Initial User Account
Created during bootstrap:
- Username: user
- Password: Randomly generated
- Displayed once during initial setup
- Must be changed on first login
Best Practices
For Administrators
- Secure Initial Passwords: Store or change randomly generated passwords immediately
- Configure Policies: Set appropriate password requirements
- Enable MFA: Require for admin accounts
- Monitor Logs: Review authentication attempts
- Update Regularly: Keep Zitadel updated
- Test Recovery: Verify password reset works through Directory Service
For Developers
- Never Touch Passwords: Let Zitadel handle everything
- Use OIDC Flow: Standard OAuth2/OpenID Connect
- Validate Tokens: Check with Zitadel when needed
- Cache Carefully: Don’t cache sensitive data
- Log Safely: Never log authentication details
For Users
- Use Strong Passwords: Follow policy requirements
- Enable MFA: Add extra security layer
- Unique Passwords: Don’t reuse across services
- Regular Updates: Change periodically if required
- Report Issues: Alert admins of problems
Compliance
Zitadel’s password handling helps meet:
- GDPR: Data protection requirements
- NIST 800-63B: Modern password guidelines
- OWASP: Security best practices
- PCI DSS: Payment card standards
- HIPAA: Healthcare requirements
- SOC 2: Security controls
Troubleshooting
Common Password Issues
- Password Reset Not Working
  - Check email configuration
  - Verify SMTP settings in Zitadel
  - Check spam folders
- Policy Not Enforced
  - Review Zitadel configuration
  - Check the policy is active
  - Verify the user’s organization settings
- Account Locked
  - Check lockout policy
  - Admin can unlock via console
  - Wait for timeout period
- MFA Issues
  - Verify time sync for TOTP
  - Check backup codes
  - Admin can reset MFA
Security Benefits
Delegated Security
- Professional Implementation: Security experts maintain Zitadel
- Regular Updates: Security patches applied by Zitadel team
- Compliance: Certifications maintained by Zitadel
- No Liability: Password breaches not BotServer’s responsibility
Reduced Attack Surface
- No password code to exploit
- No hashing vulnerabilities
- No timing attacks possible
- No password database to breach
Advanced Features
Available through Zitadel:
- Passwordless authentication
- Biometric support
- Hardware key support
- Risk-based authentication
- Adaptive security
Migration Guide
From Internal Passwords
If migrating from a system with internal passwords:
- Export Users: Username and email only (no passwords)
- Import to Zitadel: Create accounts
- Force Reset: All users must set new passwords
- Remove Old Code: Delete password-related code
- Update Docs: Reflect new authentication flow
Password Policy Migration
- Document existing policy
- Configure equivalent in Zitadel
- Test with sample accounts
- Communicate changes to users
- Provide support during transition
Summary
General Bots achieves enterprise-grade password security by not handling passwords at all. The Directory Service provides professional identity management with all the security features needed for production deployments. This separation of concerns allows General Bots to focus on bot functionality while delegating security to a specialized platform.
API Endpoints
This chapter provides a comprehensive reference for the API endpoints exposed by General Bots. The platform offers RESTful endpoints for authentication, session management, user operations, and bot interactions, as well as WebSocket connections for real-time communication.
Authentication Endpoints
Authentication in General Bots is delegated to the Directory Service, which implements industry-standard OAuth2 and OpenID Connect protocols. The authentication endpoints primarily serve as integration points with this external identity provider.
OAuth Login
The login process begins when a client application directs the user to the /auth/login endpoint using a GET request. This endpoint does not require any request body or authentication headers, as its purpose is to initiate the OAuth2 flow. Upon receiving this request, the server generates appropriate OAuth2 parameters and redirects the user’s browser to the Zitadel login page, where they can enter their credentials securely within the identity provider’s domain.
OAuth Callback
After successful authentication with the Directory Service, the user’s browser is redirected back to /auth/callback with authorization parameters. This GET endpoint expects two query parameters: a code parameter containing the authorization code issued by the Directory Service, and a state parameter that serves as a CSRF protection mechanism to ensure the callback corresponds to a legitimate login attempt.
When the callback is processed successfully, the server exchanges the authorization code for access tokens, creates a local session, sets a session cookie in the response, and redirects the user to the main application interface. This seamless flow means users typically don’t notice the redirect chain happening in the background.
Logout
To terminate a session, clients send a POST request to /auth/logout. This endpoint requires the current session token in the Authorization header using the Bearer scheme. The server invalidates the session both locally and with the Directory Service, returning a JSON response confirming successful logout. After logout, the session token becomes invalid and cannot be used for further requests.
Session Validation
The /auth/validate endpoint allows clients to verify whether their current session token remains valid without performing any other operation. By sending a GET request with the session token in the Authorization header, clients receive a JSON response indicating whether the token is valid, the associated user identifier, and the session’s expiration timestamp. This endpoint is particularly useful for single-page applications that need to check session status before making other API calls.
Session Management
Session management endpoints provide control over the user’s active sessions and their associations with bots.
Current Session Information
Clients can retrieve information about their current session by sending a GET request to /api/session. The response includes the session identifier, the user’s identifier, the currently selected bot identifier if any, and timestamps indicating when the session was created and when it will expire. This information helps applications understand the current authentication context and present appropriate interface elements.
Creating Bot Sessions
When a user wants to interact with a specific bot, the application creates a bot session by sending a POST request to /api/session/create. The request body contains a JSON object with the target bot’s identifier. If the user has permission to access the requested bot, the server creates a new session linking the user to that bot and returns the session details including its identifier, the associated bot identifier, and the session’s active status.
This separation between authentication sessions and bot sessions allows users to maintain their login while switching between different bots without requiring re-authentication.
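A hedged request/response sketch follows; the bot_id key and the response field names are assumptions, since the text above specifies only that the body carries the target bot's identifier and that the response includes the session identifier, the associated bot identifier, and the active status.
POST /api/session/create
Authorization: Bearer {session_token}
{
  "bot_id": "uuid-of-bot"
}
Response:
{
  "session_id": "uuid",
  "bot_id": "uuid-of-bot",
  "active": true
}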
Terminating Sessions
To end a specific session, clients send a DELETE request to /api/session/:id, where the path parameter identifies the session to terminate. The server validates that the requester has permission to terminate the specified session, typically by verifying they own it, and then invalidates it. The response confirms whether the termination succeeded.
User Management
User endpoints provide access to profile information and allow limited profile modifications.
Retrieving User Information
The /api/users/me endpoint responds to GET requests with the current user’s profile information. This includes their unique identifier, username, email address, and account creation timestamp. Since user data is managed in the Directory Service, this endpoint essentially proxies information from that system into a format convenient for the application.
Profile Updates
Users can update certain profile fields by sending a PUT request to /api/users/me with a JSON body containing the fields to modify. Supported fields typically include email address, first name, and last name. It’s important to note that these updates are actually propagated to the Directory Service, which serves as the authoritative source for user information. The endpoint validates the requested changes and forwards them to Zitadel for persistence.
Bot Interaction
Real-time communication with bots occurs primarily through WebSocket connections, though REST endpoints exist for bot discovery.
WebSocket Communication
The primary channel for bot interaction is the WebSocket endpoint at /ws. After establishing a connection, clients send JSON-formatted messages containing a message type, the content of the message, and the session identifier. The server processes these messages, routes them to the appropriate bot, and sends responses back through the same WebSocket connection.
This real-time bidirectional communication enables responsive conversational experiences without the overhead of repeated HTTP connections. The WebSocket connection maintains state throughout the conversation, allowing for context-aware responses.
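A hedged example of the message shape (key names are assumptions; the text specifies only a message type, the content, and the session identifier):
{
  "type": "message",
  "content": "What are your opening hours?",
  "session_id": "uuid"
}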
Bot Discovery
Users discover available bots by sending a GET request to /api/bots. The response contains an array of bot objects, each including the bot’s identifier, display name, description, and current operational status. Only bots that the authenticated user has permission to access appear in this list, ensuring users see a curated view appropriate to their organizational role and permissions.
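An illustrative response (field names assumed from the description above):
[
  {
    "id": "uuid",
    "name": "Support Bot",
    "description": "Customer support assistant",
    "status": "active"
  }
]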
Administrative Endpoints
Administrative endpoints provide system management capabilities for users with appropriate privileges. The system status endpoint at /api/admin/system/status returns health information about the various system components. The metrics endpoint at /api/admin/system/metrics provides operational statistics useful for monitoring and capacity planning. Both endpoints require administrative privileges, which are validated against the user’s roles in the Directory Service.
Group Management
Group management endpoints support the organization’s permission structure. The /api/groups/create endpoint accepts POST requests to establish new groups. The /api/groups/list endpoint returns all groups visible to the requesting user. Individual group membership can be queried through /api/groups/:id/members. These endpoints work in conjunction with the Directory Service to maintain consistent group definitions across the platform.
Rate Limiting
To ensure fair resource allocation and protect against abuse, all API endpoints implement rate limiting. Public endpoints, including the health check, allow 60 requests per hour from unauthenticated clients. Authenticated users can make up to 1000 requests per hour across all endpoints. Administrative users receive a higher limit of 5000 requests per hour to accommodate their management responsibilities.
Rate limit information is communicated through response headers. The X-RateLimit-Limit header indicates the maximum requests allowed in the current window, X-RateLimit-Remaining shows how many requests remain, and X-RateLimit-Reset provides a Unix timestamp indicating when the limit resets. Applications should monitor these headers and implement appropriate backoff strategies when approaching limits.
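For example, a response to an authenticated user might carry headers like these (values are illustrative):
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Reset: 1735689600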
Error Handling
All API endpoints return errors in a consistent JSON format. The response body contains an error object with a machine-readable code, a human-readable message, and an optional details object providing additional context. Common error codes include UNAUTHORIZED for missing or invalid authentication, FORBIDDEN when the user lacks required permissions, NOT_FOUND for requests targeting non-existent resources, RATE_LIMITED when request quotas are exceeded, and SERVER_ERROR for internal failures.
Clients should implement error handling that examines the error code to determine appropriate recovery actions. Authentication errors might prompt a re-login flow, while rate limiting errors should trigger request throttling.
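A hedged example of the error shape (the details payload is an assumption; only the code, message, and optional details structure is specified above):
{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Request quota exceeded; please retry later",
    "details": { "retry_after_seconds": 60 }
  }
}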
Cross-Origin Resource Sharing
The API supports Cross-Origin Resource Sharing (CORS) to enable browser-based applications hosted on different domains. In development environments, the server accepts requests from any origin. Production deployments should configure specific allowed origins to prevent unauthorized cross-domain access. The allowed methods include GET, POST, PUT, DELETE, and OPTIONS, with Content-Type and Authorization as permitted headers.
Health Monitoring
The /health endpoint provides a simple way to verify the server is operational. Unlike other endpoints, this one requires no authentication, making it suitable for external monitoring systems and load balancer health checks. The response includes a status indicator and a timestamp, providing basic confirmation that the server can process requests.
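An illustrative response (field names assumed; the text specifies only a status indicator and a timestamp):
{
  "status": "ok",
  "timestamp": "2025-06-01T12:00:00Z"
}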
Implementation Status
The current implementation provides full support for WebSocket communication, administrative endpoints, group management, and health checking. OAuth authentication flows through the Directory Service are functional but continue to evolve. Session management endpoints work for basic scenarios with ongoing enhancements planned. Some user profile endpoints and direct REST messaging capabilities remain under development, with batch operations planned for future releases.
Security Considerations
Several security practices should guide API usage. With the exception of the health endpoint, all API calls require valid authentication. Administrative operations additionally verify that the requester holds appropriate roles within the Directory Service. Session tokens must be treated as secrets, stored securely on clients, and never logged or exposed. Production deployments must use HTTPS to encrypt all API traffic. Applications performing state-changing operations should implement CSRF protection through the state parameter and appropriate token validation.
Recommended Practices
Effective API integration follows several patterns. Always include the session token in the Authorization header for authenticated requests. Implement graceful handling of token expiration by detecting authentication errors and prompting re-login when necessary. Use exponential backoff for retry logic, starting with short delays and increasing them progressively for repeated failures. Cache responses where appropriate to reduce server load and improve application responsiveness. Prefer WebSocket connections for conversational interactions where real-time response is important. Monitor rate limit headers proactively to avoid hitting limits during normal operation.
Bot Authentication
This section describes how General Bots handles bot authentication through its session-based architecture. Unlike traditional systems where bots might have independent credentials, General Bots implements a model where bots operate within the context of authenticated user sessions.
Overview
Bot authentication in General Bots follows a fundamentally different approach from conventional bot platforms. Rather than assigning credentials directly to bots, the system ties all bot operations to user sessions. When a user authenticates through the Directory Service, they gain access to interact with bots based on their organizational membership and assigned permissions. This design eliminates the complexity of managing separate bot credentials while maintaining robust security through user-based access control.
The key principle underlying this architecture is that bots are resources accessed by users, not independent actors with their own identities. This approach simplifies security management and creates a clear audit trail linking all bot activities to specific authenticated users.
Bot Registration
When the system bootstraps, bots are registered in the database through an automated discovery process. The system scans the templates/ directory for any folder ending in .gbai and creates corresponding entries in the database.
Database Storage
Each bot entry in the bots table contains a UUID primary key that uniquely identifies the bot, the bot’s display name, an organization association that determines which users can access it, and timestamps tracking when the bot was created and last modified. This minimal schema reflects the philosophy that bots themselves don’t require authentication credentials—they simply need to be identifiable and associable with organizations.
Configuration Management
Bot-specific settings are stored separately in the bot_configuration table, which maintains key-value pairs loaded from the bot’s config.csv file. This table holds runtime parameters, feature flags, LLM configuration, and any other settings that control the bot’s behavior. By separating configuration from the core bot record, administrators can update settings without affecting the bot’s fundamental identity or registration status.
Session-Based Bot Access
The session-based access model forms the foundation of how users interact with bots. When a user wants to communicate with a bot, they must first authenticate through the Directory Service using standard OAuth2/OIDC flows. Once authenticated, the user can select from available bots based on their permissions, and the system creates a session that links that specific user to their chosen bot.
Session Structure
The user_sessions table maintains the critical relationship between users and bots. Each session record contains a unique identifier, references to both the user and the selected bot, a session token for subsequent requests, and an expiration timestamp. All operations within that session are automatically scoped to the associated bot, preventing any accidental or intentional cross-bot data access.
This session structure means that when a user sends a message or requests information, the system automatically knows which bot should handle the request and which data stores should be queried. The session token serves as proof of both user authentication and bot selection, streamlining the authorization process for each subsequent request.
Data Isolation
General Bots implements strict data isolation between bots to ensure that information from one bot cannot leak to another. Each bot maintains its own isolated storage for message history, memories, knowledge bases, configuration, and drive bucket contents.
Cross-Bot Protection
The isolation model works at multiple levels. Sessions are locked to a single bot for their entire duration, meaning the system cannot accidentally route requests to the wrong bot. All database queries include the bot identifier as a filter condition, ensuring that even if a bug existed in the application logic, the database layer would prevent cross-bot data access. Storage buckets in the drive system are segregated by bot, with each bot’s files residing in a dedicated bucket that other bots cannot access.
This defense-in-depth approach means that data isolation doesn’t depend on any single mechanism being perfect. Multiple independent safeguards work together to maintain separation between bots.
Bot Discovery and Selection
Users access bots through a discovery process that respects organizational boundaries and permission assignments. The available bots for any given user depend on their organization membership, any direct bot assignments they’ve received, whether specific bots are marked as publicly available, and their role-based access permissions.
When starting a new conversation, users are presented with a list of bots they’re authorized to access. After selecting a bot, the system creates a new session linking the user to that bot, loads the bot’s context including its configuration and any persistent memories, and the conversation begins with the bot’s welcome message or startup script.
Bot Lifecycle
Understanding the bot lifecycle helps administrators manage their bot deployments effectively. Bots move through several states from creation to active operation.
Creation Process
During the bootstrap process, the system discovers bot templates and registers them in the database. For each template found, the system creates a bot record with generated identifiers, loads configuration from the bot’s config.csv file, uploads the bot’s resources to the drive storage system, and indexes any knowledge base documents into the vector database. This automated process means that deploying a new bot is as simple as adding its folder to the templates directory and restarting the server.
Activation Requirements
A bot becomes active and available for user access when its registration is complete, its configuration passes validation, all required resources are available in storage, and no critical errors occurred during initialization. If any of these conditions aren’t met, the bot remains in an inactive state and won’t appear in users’ available bot lists.
Updating Bots
Bot updates follow a similar automated process. Changes to configuration files are detected and applied, modified scripts are reloaded, and knowledge base updates trigger reindexing. Importantly, none of these updates require any authentication changes because bots don’t have their own credentials to manage.
Permission Levels
Bot access is controlled through a hierarchy of visibility settings that administrators configure per bot. At the most open level, public bots can be accessed by anyone with a valid user account. Organization-level bots restrict access to members of the bot’s associated organization. Private bots limit access to specifically assigned users. Admin-level bots require administrative privileges to access.
These permission levels work in conjunction with the Directory Service’s group and role system, allowing fine-grained control over who can access which bots within an organization.
Configuration Settings
Bot identity and access configuration are specified in the bot’s config.csv file. The identity settings include the bot’s display name and its organization association. Access configuration specifies the visibility level, which roles are permitted to access the bot, and operational limits like maximum concurrent sessions.
For example, a customer service bot might be configured with organization-level access, allowing any authenticated member of the organization to interact with it, while an HR bot might restrict access to members of the HR role group.
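As a purely illustrative example (the exact column layout and key names vary by deployment), such a customer service bot’s config.csv might contain entries along these lines:

name,value
display-name,Customer Service
organization,acme-corp
visibility,organization
allowed-roles,support
max-concurrent-sessions,200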
Security Considerations
The design decision to not give bots their own credentials has significant security implications, all of them positive. Bots cannot authenticate independently, which means there’s no possibility of a bot’s credentials being compromised or misused. Every bot operation requires a valid user context, creating a complete audit trail. There’s no mechanism for unauthorized bot-to-bot communication because bots can’t initiate actions without a user session.
Preventing Bot Impersonation
Because bots have no credentials, they cannot be impersonated through stolen credentials. An attacker would need to compromise an actual user account to interact with a bot, and even then, their actions would be logged against that user account. This makes detecting and investigating security incidents straightforward—every bot interaction traces back to a specific authenticated user.
API Integration
All programmatic access to bots follows the same user-authenticated model as interactive access. API requests must include a valid user session token in the Authorization header, along with the target bot identifier in the request body or URL.
There are no separate bot API keys or service accounts for bot access. This uniformity simplifies the security model and ensures that API access receives the same level of auditing and access control as interactive access through the web interface.
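A request therefore looks like any other user-authenticated API call. The route and payload below are illustrative rather than the documented API surface; as noted above, the bot identifier could equally travel in the request body.

POST /api/v1/bots/<bot-id>/messages HTTP/1.1
Host: app.example.com
Authorization: Bearer <user-session-token>
Content-Type: application/json

{"text": "Hello"}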
Multi-Bot Scenarios
Users who need to work with multiple bots can do so through several mechanisms. They can end their current bot session and start a new one with a different bot, with their conversation context switching to the new bot while history from each bot remains preserved separately. For users who need simultaneous access to multiple bots, the system supports concurrent sessions with different session identifiers, separate conversation contexts, and fully isolated data access.
This flexibility allows power users to leverage multiple bots for different tasks without the complexity of managing separate credentials or authentication contexts.
Monitoring and Auditing
Administrators can monitor bot access patterns through built-in metrics and logging capabilities. Authentication metrics track sessions per bot, user engagement levels, access attempts, and permission denials. Audit logging captures session creation events, bot selection actions, configuration changes, and any access violations.
These monitoring capabilities support both operational oversight and compliance requirements, providing the visibility needed to understand how bots are being used across the organization.
Best Practices
Successful bot deployment follows several established patterns. Organizing bots by organization groups them logically and simplifies permission management. Configuring appropriate access levels ensures that sensitive bots aren’t accidentally exposed to unauthorized users. Monitoring usage patterns helps identify both popular bots that might need additional resources and underutilized bots that might need better documentation or training. Regular permission audits ensure that access levels remain appropriate as organizational roles change. Maintaining documentation for each bot helps users understand what each bot can do and when to use it. Testing data isolation periodically verifies that the security boundaries between bots remain intact.
Troubleshooting Common Issues
When users report that a bot isn’t accessible, several common causes should be investigated. The user might not be a member of the bot’s organization, they might lack sufficient permissions for the bot’s access level, the bot might not have completed its activation process, or there might be a configuration error preventing the bot from loading properly.
Session-related issues typically stem from expired sessions requiring re-authentication, invalid bot identifiers in API requests, concurrent session limits being exceeded, or database connectivity problems preventing session validation.
Implementation Notes
Bot authentication is not implemented as a separate module but is integrated throughout the session management, user authentication, and database query systems. This integration reflects the fundamental design principle that bot access is a function of user authentication rather than an independent system.
Future versions might consider enhancements such as bot-specific API tokens for automated workflows, service accounts for scheduled bot operations, controlled bot-to-bot communication for complex scenarios, and webhook authentication for external system integration. However, any such features would be implemented as extensions of the user-session model rather than as independent bot credentials.
Summary
The bot authentication model in General Bots achieves security through simplicity. By tying all bot access to authenticated user sessions, the system eliminates an entire class of credential management problems while maintaining complete auditability of all bot interactions. This design allows organizations to focus on building useful bots rather than managing complex authentication infrastructure, while still meeting enterprise security requirements.
General Bots Security Features Guide
Overview
This document provides a comprehensive overview of all security features and configurations available in General Bots, designed for security experts and enterprise deployments. Understanding these features enables organizations to deploy General Bots with confidence in regulated environments.
Feature Flags
Core Security Features
Security features are configured through Cargo.toml or via build flags at compile time. A basic build with desktop UI uses cargo build --features desktop. A full security-enabled build uses cargo build --features "desktop,vectordb,email". A server-only build without desktop UI uses cargo build --no-default-features --features "vectordb,email".
Available Features
The desktop feature provides the Tauri desktop UI with a sandboxed runtime and controlled system access, and is enabled by default. The vectordb feature enables Qdrant integration for AI-powered threat detection and semantic search, and must be explicitly enabled. The email feature provides IMAP and SMTP support, which requires secure credential storage, and must also be explicitly enabled.
Enterprise Security Features
Enterprise-ready security features include built-in encryption for data at rest via the aes-gcm library, comprehensive audit logging capabilities, role-based access control implemented through the Directory Service, multi-factor authentication available via the Directory Service, and SAML/OIDC single sign-on support also through the Directory Service.
Authentication and Authorization
Directory Service Integration
General Bots uses the Directory Service as the primary identity provider. Currently this is Zitadel, though it can be migrated to Keycloak or other OIDC providers. The integration provides OAuth2 and OIDC authentication, JWT token validation, user and group management, permission management, and session handling.
Password Security
Password hashing uses the Argon2id algorithm, which is memory-hard and GPU-resistant. The configuration uses 19456 KiB (19 MiB) of memory, 2 iterations, a parallelism factor of 1, and a random 32-byte salt. This configuration provides strong protection against both online and offline attacks while maintaining reasonable authentication performance.
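A minimal sketch of this configuration using the argon2 crate (the library listed under Cryptographic Libraries below) follows; the surrounding integration code is illustrative.

// Sketch only: parameter values mirror the documented configuration.
use argon2::password_hash::{rand_core::OsRng, PasswordHasher, SaltString};
use argon2::{Algorithm, Argon2, Params, Version};

fn hash_password(password: &str) -> String {
    // 19456 KiB of memory, 2 iterations, parallelism 1, default output length.
    let params = Params::new(19_456, 2, 1, None).expect("valid Argon2 parameters");
    let argon2 = Argon2::new(Algorithm::Argon2id, Version::V0x13, params);
    // SaltString::generate embeds a random salt in the PHC string; a 32-byte
    // salt can be supplied explicitly via SaltString::encode_b64.
    let salt = SaltString::generate(&mut OsRng);
    argon2
        .hash_password(password.as_bytes(), &salt)
        .expect("hashing should not fail")
        .to_string()
}

fn main() {
    println!("{}", hash_password("correct horse battery staple"));
}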
Token Management
Access tokens use JWT format with RS256 signing for verifiable authentication. Refresh tokens consist of secure random 256-bit values for session renewal. Session tokens use UUID v4 format with cache storage for fast validation. Token rotation happens automatically when tokens approach expiry, ensuring continuous secure access without user interruption.
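For illustration, validating an RS256 access token in Rust might look like the sketch below. The jsonwebtoken crate and the claim fields are assumptions made for this example, not a statement about the platform’s internal implementation.

use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
use serde::Deserialize;

// Hypothetical claim set; real tokens carry deployment-specific claims.
#[derive(Deserialize)]
struct Claims {
    sub: String,
    exp: usize,
}

fn validate(token: &str, public_key_pem: &[u8]) -> jsonwebtoken::errors::Result<Claims> {
    let key = DecodingKey::from_rsa_pem(public_key_pem)?;
    // Validation::new(RS256) verifies both the signature and the exp claim.
    let data = decode::<Claims>(token, &key, &Validation::new(Algorithm::RS256))?;
    Ok(data.claims)
}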
Encryption and Cryptography
Cryptographic Libraries
The platform uses well-vetted cryptographic libraries for all security operations. The aes-gcm library version 0.10 provides authenticated encryption using AES-256-GCM. The argon2 library version 0.5 handles password hashing with Argon2id. The sha2 library version 0.10.9 provides cryptographic hashing with SHA-256. The hmac library version 0.12.1 enables message authentication using HMAC-SHA256. The rand library version 0.9.2 provides cryptographic random number generation using ChaCha20.
Data Encryption
Encryption at rest protects stored data throughout the system. Database encryption applies column-level encryption to sensitive fields. File storage encryption uses AES-256-GCM for all uploaded files. Configuration encryption protects secrets using a master key.
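A minimal sketch of field-level encryption with the aes-gcm crate listed above is shown below; key management (deriving, storing, and rotating the master key) is deliberately omitted.

use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm,
};

fn main() {
    let key = Aes256Gcm::generate_key(&mut OsRng); // in production, a managed master key
    let cipher = Aes256Gcm::new(&key);
    // A unique nonce per message, stored alongside the ciphertext.
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng);
    let ciphertext = cipher.encrypt(&nonce, b"sensitive field".as_ref()).unwrap();
    let plaintext = cipher.decrypt(&nonce, ciphertext.as_ref()).unwrap();
    assert_eq!(plaintext, b"sensitive field");
}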
Encryption in transit protects data during transmission. All external communications use TLS 1.3 for strong protection. Service-to-service communication uses mutual TLS (mTLS) for bidirectional authentication. Certificate pinning applies to critical services to prevent man-in-the-middle attacks.
Network Security
API Security
Rate limiting through Caddy protects against abuse. Per-IP limits default to 100 requests per minute. Per-user limits default to 1000 requests per hour. These limits are configured in the Caddyfile and can be adjusted for specific deployment requirements.
CORS configuration through Caddy controls cross-origin requests. Origins use a strict whitelist approach. Credentials are enabled for authenticated requests. HTTP methods are explicitly allowed rather than using wildcards.
Input validation protects against injection attacks. Schema validation applies to all inputs before processing. SQL injection prevention uses PostgreSQL prepared statements exclusively. XSS protection applies output encoding to all user-generated content. Path traversal prevention validates all file paths against allowed directories.
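The prepared-statement pattern is worth showing concretely. The sketch below uses the tokio-postgres crate purely for illustration, and the table and column names are hypothetical; whichever client library is in use, the principle is the same: parameters are bound server-side and never interpolated into the SQL text.

use tokio_postgres::{Client, Error};

// Hypothetical query: parameters travel separately from the SQL string,
// so attacker-controlled input can never alter the statement's structure.
async fn count_sessions(client: &Client, user_id: i64, bot_id: i64) -> Result<usize, Error> {
    let rows = client
        .query(
            "SELECT id FROM sessions WHERE user_id = $1 AND bot_id = $2",
            &[&user_id, &bot_id],
        )
        .await?;
    Ok(rows.len())
}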
WebSocket Security
WebSocket connections require authentication before establishment. Message size limits default to 10MB to prevent resource exhaustion. Heartbeat and ping-pong mechanisms validate connection health. Suspicious activity triggers automatic disconnection to protect the system.
Data Protection
Database Security
PostgreSQL security features provide comprehensive database protection. Row-level security (RLS) restricts access to specific rows based on user context. Column encryption protects personally identifiable information. Audit logging records all database access. Connection pooling limits resource consumption. Prepared statements prevent SQL injection. SSL/TLS connections are enforced for all database communication.
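As an illustration of row-level security, a policy of the following shape could scope every query to the requesting user; the table, column, and setting names are hypothetical.

-- Hypothetical RLS policy: each query sees only rows owned by the current user.
ALTER TABLE sessions ENABLE ROW LEVEL SECURITY;
CREATE POLICY session_owner ON sessions
    USING (user_id = current_setting('app.user_id')::bigint);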
File Storage Security
Drive configuration provides secure object storage. Bucket encryption uses AES-256 for all stored objects. Policy-based access control restricts file access. Versioning enables recovery from accidental changes. Support for immutable objects prevents tampering. TLS encryption protects data in transit.
Local storage follows security best practices. Directory permissions are set to 700 for restricted access. File permissions are set to 600 for owner-only access. Temporary files undergo secure deletion to prevent data leakage.
Memory Protection
Memory protection measures prevent sensitive data exposure. Zeroization clears sensitive data from memory after use. Logging configurations exclude secrets from log output. Secure random generation uses cryptographic sources. Protected memory pages safeguard cryptographic keys during operation.
Audit and Compliance
Log Security
Structured logging configuration ensures comprehensive audit trails. Log level uses INFO in production and DEBUG in development. Format uses JSON for machine parsing and analysis. Rotation occurs daily with 30-day retention by default. Sensitive data is automatically redacted from log output.
Audit Events
The system automatically logs security-relevant events including authentication attempts both successful and failed, authorization failures when users attempt unauthorized actions, data access operations for both reads and writes, configuration changes by administrators, administrative actions across the system, API calls with relevant parameters, and security violations when detected.
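With JSON-formatted logs, a single audit event might be recorded along the following lines; the field names are illustrative rather than the platform’s exact schema.

{"timestamp":"2025-01-15T10:42:07Z","level":"INFO","event":"auth.login.failed","user":"j.doe","ip":"203.0.113.7","reason":"invalid_password"}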
Compliance Support
GDPR compliance features include data deletion capabilities and data export for portability. SOC2 compliance is supported through comprehensive audit trails and access controls. HIPAA compliance can be achieved with encryption and access logging configuration. PCI DSS requirements are addressed through no credit card storage and tokenization support for payment processing.
Security Configuration
Environment Variables
Required security settings include BOTSERVER_JWT_SECRET as a 256-bit hex string for token signing, BOTSERVER_ENCRYPTION_KEY as a 256-bit hex string for data encryption, and DATABASE_ENCRYPTION_KEY as a 256-bit hex string for database field encryption.
Directory service configuration requires ZITADEL_DOMAIN pointing to your Zitadel instance, ZITADEL_CLIENT_ID with your application client ID, and ZITADEL_CLIENT_SECRET with your application secret.
Drive configuration requires MINIO_ENDPOINT for the storage server address, MINIO_ACCESS_KEY and MINIO_SECRET_KEY for authentication, and MINIO_USE_SSL set to true for encrypted connections.
Cache configuration requires CACHE_URL pointing to the Redis-compatible server and CACHE_PASSWORD for authentication.
Optional security enhancements include BOTSERVER_ENABLE_AUDIT to enable comprehensive audit logging, BOTSERVER_REQUIRE_MFA to enforce multi-factor authentication, BOTSERVER_SESSION_TIMEOUT to set session duration in seconds, BOTSERVER_MAX_LOGIN_ATTEMPTS to limit failed login attempts, and BOTSERVER_LOCKOUT_DURATION to set account lockout time in seconds.
Network security settings include BOTSERVER_ALLOWED_ORIGINS for CORS whitelist, BOTSERVER_RATE_LIMIT_PER_IP for per-IP request limits, BOTSERVER_RATE_LIMIT_PER_USER for per-user request limits, and BOTSERVER_MAX_UPLOAD_SIZE for maximum file upload size in bytes.
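Pulled together, an environment file might look like the following. Every value shown is a placeholder; generate your own 256-bit secrets (64 hexadecimal characters) and never commit this file to version control.

# Required
BOTSERVER_JWT_SECRET=<64 hex characters>
BOTSERVER_ENCRYPTION_KEY=<64 hex characters>
DATABASE_ENCRYPTION_KEY=<64 hex characters>
ZITADEL_DOMAIN=auth.example.com
ZITADEL_CLIENT_ID=<client id>
ZITADEL_CLIENT_SECRET=<client secret>
MINIO_ENDPOINT=drive.example.com:9000
MINIO_ACCESS_KEY=<access key>
MINIO_SECRET_KEY=<secret key>
MINIO_USE_SSL=true
CACHE_URL=redis://cache.example.com:6379
CACHE_PASSWORD=<password>

# Optional hardening (example values)
BOTSERVER_ENABLE_AUDIT=true
BOTSERVER_REQUIRE_MFA=true
BOTSERVER_SESSION_TIMEOUT=28800
BOTSERVER_MAX_LOGIN_ATTEMPTS=10
BOTSERVER_LOCKOUT_DURATION=900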
Database Configuration
PostgreSQL security settings should be added to postgresql.conf to enable SSL with ssl set to on, specify certificate files with ssl_cert_file and ssl_key_file, configure strong ciphers with ssl_ciphers, enable server cipher preference with ssl_prefer_server_ciphers, and set the ECDH curve with ssl_ecdh_curve. The database connection string should include sslmode=require to enforce encrypted connections.
Caddy Configuration
Caddy provides secure reverse proxy functionality with automatic HTTPS. Global options should disable the admin interface and enable automatic HTTPS. TLS configuration should enforce TLS 1.3 only with strong cipher suites. Security headers should include Strict-Transport-Security, X-Frame-Options, X-Content-Type-Options, X-XSS-Protection, Referrer-Policy, and Content-Security-Policy. Rate limiting should be configured per remote host. The reverse proxy should forward appropriate headers including X-Real-IP, X-Forwarded-For, and X-Forwarded-Proto. Access logging should output to files in JSON format for analysis.
Best Practices
Development Practices
Dependency management requires regular security updates. Run cargo audit to check for known vulnerabilities. Run cargo update to apply security patches. Use cargo audit --deny warnings in CI to prevent vulnerable dependencies.
Code quality is enforced through Cargo.toml lints. Unsafe code is prohibited in application code. Unwrap calls are forbidden in production code paths. Panic macros are not allowed. Complete error handling is required for all fallible operations.
Security testing validates protection mechanisms. Run the security test suite with cargo test --features security_tests. Fuzzing for input validation uses cargo fuzz run api_fuzzer to find edge cases.
Deployment Practices
Container security for LXC deployments requires disabling privileged mode with security.privileged set to false, enabling isolated ID mapping with security.idmap.isolated set to true, and disabling nesting with security.nesting set to false. Applications should run as non-root users within containers.
Container security profiles should specify resource limits including CPU and memory caps. Root device configuration should use appropriate storage pools. Security settings should prevent privilege escalation.
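An LXD profile fragment expressing these settings might look as follows; the resource limits and storage pool name are illustrative.

config:
  security.privileged: "false"
  security.idmap.isolated: "true"
  security.nesting: "false"
  limits.cpu: "2"
  limits.memory: 4GiB
devices:
  root:
    path: /
    pool: default
    type: disk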
Network policies should restrict traffic appropriately. Ingress should only be allowed from the Caddy proxy. Egress should be limited to PostgreSQL, Drive, Qdrant, and Cache. All other traffic should be blocked. Internal communication between components should use isolated networks.
Monitoring Practices
Security metrics to track include failed authentication rate, unusual API access patterns, resource usage anomalies, and geographic access patterns for detecting account compromise.
Alerting thresholds should trigger warnings at 5 or more failed logins, lock accounts at 10 or more failed logins, alert on unusual geographic access patterns, and issue critical alerts for any privilege escalation attempts.
Incident response capabilities include automatic session termination when threats are detected, account lockout for repeated failures, and comprehensive logging for forensic analysis.
Security Checklist
Before deploying General Bots in production, verify that all environment variables are set with strong random values, TLS is properly configured with valid certificates, database connections use SSL, file storage uses encryption, audit logging is enabled, rate limiting is configured appropriately, security headers are set in the reverse proxy, monitoring and alerting are configured, backup and recovery procedures are tested, and incident response procedures are documented.
See Also
The Security Policy chapter provides organizational security policies and procedures. The Password Security chapter details password requirements and implementation. The User Authentication chapter covers authentication flows and configuration. The Compliance Requirements chapter addresses regulatory compliance in detail.
General Bots Security Policy
Overview
This comprehensive security policy establishes the framework for protecting General Bots systems, data, and operations. It covers information security, access control, data protection, incident response, and ongoing maintenance procedures. All personnel, contractors, and third parties with access to General Bots systems must understand and comply with this policy.
1. Information Security Policy
1.1 Purpose and Scope
This Information Security Policy applies to all users, systems, and data within the General Bots infrastructure. It establishes the standards for protecting confidential information, maintaining system integrity, and ensuring business continuity across all operations.
1.2 Information Classification
We classify information into categories to ensure proper protection and appropriate resource allocation. Unclassified information can be made public without implications for the company, including marketing materials and public documentation. Employee Confidential information encompasses personal employee data such as medical records, salary information, performance reviews, and contact details. Company Confidential information includes business-critical assets such as contracts, source code, business plans, passwords for critical IT systems, client contact records, financial accounts, and strategic plans. Client Confidential information covers client personally identifiable information, passwords to client systems, client business plans, new product information, and market-sensitive information.
1.3 Security Objectives
Our security framework aims to reduce the risk of IT problems through proactive measures and continuous monitoring. We plan for problems and establish procedures to deal with them effectively when they occur. Our systems are designed to keep working even when something goes wrong through redundancy and failover capabilities. We protect company, client, and employee data through encryption, access controls, and monitoring. We keep valuable company information such as plans and designs confidential through strict access controls. We meet our legal obligations under the General Data Protection Regulation and other applicable laws. We fulfill our professional obligations towards our clients and customers through transparent practices and reliable service.
1.4 Roles and Responsibilities
Rodrigo Rodriguez serves as the director with overall responsibility for IT security strategy and policy approval. Pragmatismo Data Center functions as the IT partner organization we use to help with planning and technical support. The Data Protection Officer advises on data protection laws and best practices, reporting directly to senior management. All employees are responsible for following security policies and reporting security incidents promptly. System administrators are responsible for implementing and maintaining security controls according to this policy. Department heads are responsible for ensuring their teams comply with security policies and complete required training.
1.5 Review Process
We review this policy annually, with the next review scheduled for the date indicated in the document control section. Questions, suggestions, or feedback should be directed to security@pragmatismo.com.br for consideration during the review process or for immediate clarification.
2. Access Control Policy
2.1 Access Management Principles
Our access management follows four core principles. The Least Privilege principle ensures users receive only the minimum access rights necessary to perform their job functions. The Need-to-Know principle restricts access to confidential information to those who require it for their specific duties. Separation of Duties divides critical functions among multiple people to prevent fraud and error. Regular Reviews conducted quarterly ensure access rights remain appropriate as roles and responsibilities evolve.
2.2 User Account Management
Account creation follows a controlled process where new accounts are created only upon approval from the user’s manager. Default accounts are disabled immediately after system installation to prevent unauthorized access. Each user has a unique account; shared accounts are strictly prohibited to maintain accountability.
Account modification requires manager approval for any access changes. Privilege escalation requires security team approval in addition to manager approval. All changes are logged and reviewed monthly to detect anomalies.
Account termination procedures ensure accounts are disabled within 2 hours of employment termination. Access is revoked immediately for terminated employees without exception. Contractor accounts expire automatically at contract end. All company devices and access credentials must be returned before departure.
2.3 Access Review Procedures
Monthly reviews examine privileged account usage patterns, check for inactive accounts that have been dormant for more than 30 days, and verify that administrative access justifications remain valid.
Quarterly reviews require department heads to review all team member access, remove unnecessary permissions, and document review results along with any actions taken.
Annual reviews conduct a comprehensive examination of all user accounts, validate role-based access assignments against current organizational structure, and audit system administrator privileges for appropriateness.
3. Password Policy
3.1 Password Requirements
Password complexity requirements mandate a minimum of 12 characters for standard users and 16 characters for administrative accounts. Passwords must include uppercase letters, lowercase letters, numbers, and special characters. Passwords cannot contain the username or common dictionary words.
Password lifetime requirements specify 90-day rotation for standard accounts, 60-day rotation for administrative accounts, and 180-day rotation for service accounts with documented exceptions approved by the security team.
Password history settings ensure the system remembers the last 12 passwords, and users cannot reuse any of these previous passwords when setting a new one.
3.2 Password Storage and Transmission
All passwords are hashed using the Argon2id algorithm, which provides strong resistance against both CPU and GPU-based attacks. Passwords are never stored in plaintext under any circumstances. Passwords are never transmitted via email or unencrypted channels. Password managers are recommended for secure storage of credentials.
3.3 Multi-Factor Authentication
Multi-factor authentication is required for all administrative accounts, remote access connections, access to confidential data, and financial system access.
Acceptable MFA methods include Time-based One-Time Passwords (TOTP) as the preferred method, hardware tokens such as YubiKey, SMS codes only as a backup method due to SIM-swapping risks, and biometric authentication where available and appropriate.
4. Data Protection Policy
4.1 Data Encryption
Encryption at rest protects stored data across all systems. Databases use AES-256-GCM encryption for sensitive fields. File storage applies AES-256-GCM encryption to all uploaded files. Backups are encrypted before transmission and storage. Mobile devices require full-disk encryption.
Encryption in transit protects data during transmission. All external communications use TLS 1.3. Service-to-service communication uses mutual TLS (mTLS). Remote access requires VPN connections. Certificate pinning applies to critical services to prevent man-in-the-middle attacks.
4.2 Data Retention and Disposal
Retention periods define how long different data types are kept. User data is retained as long as the account is active plus 30 days after closure. Audit logs are retained for 7 years to meet compliance requirements. Full backups are retained for 90 days while incremental backups are retained for 30 days. Email is retained for 2 years unless a legal hold applies.
Secure disposal ensures data cannot be recovered after deletion. Digital data undergoes secure deletion with multiple overwrites. Physical media is destroyed through shredding or degaussing. Certificates of destruction are maintained for 3 years as proof of proper disposal.
4.3 Data Privacy and GDPR Compliance
We classify and process only information necessary for the completion of our duties. We limit access to personal data to only those who need it for processing. Our classification system ensures information is protected properly and that we allocate security resources appropriately based on sensitivity levels.
User rights under GDPR are fully supported. Users have the right to access their personal data upon request. Users have the right to correction of inaccurate data. Users have the right to deletion, also known as the right to be forgotten. Users have the right to data portability in machine-readable formats. Users have the right to restrict processing of their data.
Data breach notification follows strict timelines. Breach assessment must be completed within 24 hours of discovery. Notification to authorities occurs within 72 hours if required by regulation. User notification happens without undue delay when their data is affected. All breaches are documented regardless of whether notification is required.
5. Incident Response Plan
5.1 Incident Classification
Incidents are classified into four severity levels to guide response priorities and resource allocation.
Critical incidents (P1) include active data breaches with confirmed data exfiltration, ransomware infections affecting production systems, complete system outages affecting all users, and compromise of administrative credentials. These require immediate response with all available resources.
High priority incidents (P2) include suspected data breaches under investigation, malware infections on non-critical systems, unauthorized access attempts that were detected, and partial system outages affecting critical services.
Medium priority incidents (P3) include failed security controls requiring attention, policy violations without immediate risk, minor system vulnerabilities discovered, and isolated user account compromises.
Low priority incidents (P4) include security alerts requiring investigation, policy clarification needs, security awareness issues, and minor configuration issues.
5.2 Incident Response Procedures
Detection and reporting occurs within the first 0-15 minutes. Security incidents are detected via monitoring systems or reported by users. Initial assessment determines severity level. The incident is logged in the tracking system. The security team is notified immediately for P1 and P2 incidents, or within 1 hour for P3 and P4 incidents.
Containment occurs from 15 minutes to 2 hours after detection. Affected systems are isolated from the network. Compromised accounts are disabled. Evidence is preserved for investigation. Temporary security controls are implemented. Management and stakeholders are notified.
Investigation occurs from 2 to 24 hours after containment. Logs and forensic evidence are gathered. Attack vectors and scope are analyzed. Root cause is identified. Findings are documented. A determination is made whether external authorities need notification.
Eradication typically takes 1-3 days. Malware and unauthorized access are removed. Vulnerabilities are patched. Compromised credentials are reset. Additional security controls are applied. Systems are verified to be clean.
Recovery typically takes 1-5 days. Systems are restored from clean backups if needed. Systems gradually return to production. Enhanced monitoring watches for re-infection. System functionality is validated. User communication and support is provided.
Post-incident review occurs within 1 week. The complete incident timeline is documented. Response effectiveness is analyzed. Lessons learned are identified. Security controls are updated. Detection capabilities are improved. Incident response procedures are updated based on findings.
5.3 Contact Information
Internal contacts for security matters include the Security Team at security@pragmatismo.com.br, IT Support at support@pragmatismo.com.br, and Management through Rodrigo Rodriguez.
External contacts should be maintained in a separate secure document and include local law enforcement authorities, legal counsel, the relevant Data Protection Authority, and the cyber insurance provider.
5.4 Communication Plan
Internal communication follows escalation timelines. The security team and management are notified immediately. Affected department heads are notified within 2 hours. All staff are notified within 4 hours if the impact is widespread. Daily updates continue during active incidents.
External communication follows regulatory requirements. Customers are notified within 24 hours if their data is affected. Partners are notified within 12 hours if systems are shared. Authorities are notified within 72 hours per GDPR requirements. Public and media communication occurs only through the designated spokesperson.
6. Backup and Recovery Procedures
6.1 Backup Schedule
Full backups run weekly on Sundays at 2:00 AM and include all databases, file storage, and configurations. Full backups are retained for 12 weeks and stored in a geographically separate location.
Incremental backups run daily at 2:00 AM and include only changed files and database transactions since the last backup. Incremental backups are retained for 30 days and stored both locally and replicated off-site.
Continuous backups capture database transaction logs every 15 minutes and critical configuration changes immediately. These are retained for 7 days and enable point-in-time recovery to any moment within that window.
6.2 Backup Verification
Automated testing runs continuously. Daily tests verify backup completion. Weekly tests restore sample files. Monthly tests perform full database restoration to an isolated environment.
Manual testing occurs on a scheduled basis. Quarterly tests conduct full disaster recovery drills. Bi-annual tests perform complete system restoration to an alternate site. Annual tests execute a full business continuity exercise with stakeholders.
6.3 Recovery Procedures
Recovery Time Objectives (RTO) define maximum acceptable downtime. Critical systems must recover within 4 hours. Important systems must recover within 24 hours. Non-critical systems must recover within 72 hours.
Recovery Point Objectives (RPO) define maximum acceptable data loss. Critical data has an RPO of 15 minutes. Important data has an RPO of 24 hours. Non-critical data has an RPO of 1 week.
Recovery steps follow a systematic process. First, assess damage and determine recovery scope. Second, verify backup integrity before beginning restoration. Third, restore to an isolated environment first for validation. Fourth, validate data integrity and completeness. Fifth, test system functionality thoroughly. Sixth, switch users to recovered systems. Seventh, monitor for issues during the transition period. Eighth, document the recovery process and timing for future reference.
7. Change Management Procedures
7.1 Change Categories
Standard changes are pre-approved routine modifications. These include security patches applied within 48 hours of release and user account modifications. Standard changes require only manager sign-off without additional approval.
Normal changes are non-emergency modifications requiring testing. These include software updates, new features, and infrastructure modifications. Normal changes require Change Advisory Board approval before implementation.
Emergency changes address critical security issues or outages. These include critical security patches, system outage fixes, and active threat mitigation. Emergency changes receive expedited approval from the Security Director.
7.2 Change Request Process
The change process follows eight steps. Submission requires completing the change request form with full details. Risk assessment evaluates potential security impact. Approval is obtained based on change type requirements. Testing validates the change in a non-production environment. Scheduling places the change during an appropriate maintenance window. Implementation executes the change with a rollback plan ready. Verification confirms the change was successful. Documentation updates configuration records to reflect the change.
7.3 Change Testing Requirements
Test cases must cover functionality validation, security control verification, performance impact assessment, user acceptance testing, and rollback procedure verification.
Test environments progress through stages. Development supports individual developer testing. Staging handles integration and security testing. Pre-production hosts user acceptance testing. Production uses phased rollout with enhanced monitoring.
8. Security Incident Procedures
8.1 Reporting Security Incidents
Incidents can be reported through several channels. Email reports go to security@pragmatismo.com.br. Phone reports use the security hotline. Web reports use the internal incident reporting portal. In-person reports can be made directly to the IT department.
Reportable events include suspicious emails or phishing attempts, lost or stolen devices, unauthorized access or unusual system behavior, malware alerts, data leaks or exposures, policy violations, and any security concerns or vulnerabilities discovered.
Timing requirements specify immediate reporting for critical incidents, reporting within 1 hour for high-priority incidents, and same business day reporting for medium and low priority incidents.
8.2 Employee Response to Incidents
When an incident occurs, employees should report immediately to the security team, preserve evidence by not deleting suspicious emails, disconnect their device from the network if it may be compromised, document what happened while details are fresh, and follow instructions from the security team.
Employees should avoid trying to fix the problem themselves, deleting or modifying potential evidence, discussing the incident on social media, blaming others, or ignoring suspicious activity hoping it will resolve itself.
9. Data Breach Response Procedures
9.1 Immediate Response
Within the first 24 hours, the response team must contain the breach to stop ongoing data exposure, assess the situation to determine scope and data affected, notify the security team and management, preserve logs and forensic data as evidence, and begin documenting the incident timeline.
9.2 Investigation Phase
During the 1-3 day investigation phase, forensic specialists conduct detailed analysis of the breach. The scope determination identifies all affected systems and data. Root cause analysis determines how the breach occurred. Impact analysis assesses damage and ongoing risks. Legal review consults with the legal team on notification obligations.
9.3 Notification Requirements
Internal notification follows escalation timelines. Management is notified immediately. Legal is notified within 2 hours. PR and Communications are notified within 4 hours. Affected departments are notified within 8 hours.
External notification follows regulatory requirements. Data Protection Authorities must be notified within 72 hours per GDPR requirements. Affected individuals must be notified without undue delay. Business partners must be notified within 24 hours if their data is affected. Law enforcement is notified as required by jurisdiction.
9.4 Remediation and Prevention
Following a breach, the organization applies security patches and fixes to close vulnerabilities. Compromised credentials are reset across all affected systems. Monitoring and detection capabilities are enhanced to catch similar attacks. Security controls are updated based on lessons learned. Additional security training is provided to affected teams. Policies are reviewed and updated to address gaps. All lessons learned are implemented to prevent recurrence.
10. Regular Maintenance Tasks
10.1 Weekly Tasks
Security updates are reviewed and critical security patches are applied. Antivirus and antimalware signatures are updated. Security alerts and events are reviewed. Backup completion status is checked. System resource usage is monitored for anomalies.
Automated processes run continuously including vulnerability scans, log analysis and correlation, backup integrity checks, and certificate expiration monitoring.
10.2 Monthly Tasks
Access reviews examine new user accounts created during the month, audit privileged account usage, check for inactive accounts dormant for more than 30 days, review failed login attempts for patterns, and validate group memberships remain appropriate.
System maintenance applies non-critical patches, reviews system performance metrics, updates system documentation, tests disaster recovery procedures, and reviews incident reports from the month.
10.3 Quarterly Tasks
Compliance audits review security policy compliance, audit access controls and permissions, verify encryption implementations, check backup and recovery processes, and validate security configurations against baselines.
Security assessments conduct internal vulnerability assessments, run phishing simulation exercises, deliver security awareness training, review third-party security posture, and update risk assessments.
10.4 Annual Tasks
Penetration testing engages a certified firm for external penetration testing, conducts internal network penetration testing, performs application security testing, executes social engineering assessments, and remediates all findings within 90 days.
Disaster recovery testing conducts a full disaster recovery drill, tests alternate site failover, executes a business continuity exercise, updates recovery procedures based on results, and documents lessons learned.
Policy and documentation work includes annual policy review and updates, security training for all staff, updating security documentation, reviewing vendor security agreements, and strategic security planning for the coming year.
10.5 Bi-Annual Tasks
Disaster recovery testing at the semi-annual level includes complete system restoration to an alternate site, database recovery to a specific point-in-time, application functionality verification, network failover testing, and communication system testing.
Business continuity testing includes testing emergency communication procedures, verifying contact information is current, reviewing and updating the business continuity plan, testing backup data center capabilities, and validating recovery time objectives are achievable.
11. Employees Joining and Leaving
We provide comprehensive training to new staff and ongoing support for existing staff to implement this policy. Initial training covers an introduction to IT security including risks, basic security measures, company policies, and where to get help. Each employee completes appropriate security awareness training. Training covers how to use company systems and security software properly. Staff can request a security health check on their computer, tablet, or phone. Access to systems and resources is granted based on job role requirements. Appropriate security tools are assigned including VPN access, password manager, and MFA devices.
The onboarding security checklist ensures all steps are completed. Background checks are completed where applicable. The security policy acknowledgment is signed. Security training is completed. NDA and confidentiality agreements are signed. User accounts are created with appropriate permissions. MFA is configured for all accounts. Company devices are issued and configured. VPN access is configured if needed. A password manager account is created. Emergency contact information is collected.
When people leave a project or the company, we promptly revoke their access privileges to all systems.
The offboarding security checklist ensures thorough access removal. All user accounts are disabled within 2 hours of departure. VPN and remote access are revoked. The former employee is removed from all groups and distribution lists. Company devices including laptops, phones, and tokens are collected. Access cards and keys are collected. Any shared account passwords the person knew are reset. The person is removed from third-party systems such as GitHub and AWS. Ownership of documents and files is transferred. An exit interview covers ongoing security obligations. Documentation confirms all access revocation is completed.
12. Data Protection Officer Responsibilities
The company ensures the Data Protection Officer is given all appropriate resources to carry out their tasks and maintain their expert knowledge. The DPO reports directly to the highest level of management and must not carry out any other tasks that could result in a conflict of interest.
The DPO’s duties include monitoring compliance with GDPR and other privacy regulations, advising on data protection impact assessments, cooperating with supervisory authorities, acting as the contact point for data subjects exercising their rights, maintaining records of processing activities, providing data protection training to staff, conducting privacy audits, and reviewing privacy policies and procedures for adequacy.
13. Technical Documentation Requirements
13.1 Network Architecture Documentation
Required network documentation includes network topology diagrams showing both logical and physical layouts, IP address allocation schemes, firewall rules and security zone definitions, VPN configurations, DMZ architecture, network device inventory, VLAN configurations, and routing protocols and tables.
This documentation must be updated within 48 hours of any network change to remain accurate.
13.2 System Configuration Documentation
Required system documentation includes server inventory with roles and specifications, operating system versions and patch levels, installed software and versions, service configurations, database schemas and configurations, application architecture diagrams, API documentation, and integration points and dependencies.
This documentation must be updated within 24 hours of configuration changes.
13.3 Security Controls Documentation
Security control documentation covers access control lists, security group configurations, intrusion detection and prevention rules, data loss prevention policies, endpoint protection configurations, email security settings, web filtering rules, and security monitoring dashboards.
This documentation is reviewed monthly with a comprehensive review conducted quarterly.
13.4 Encryption Standards Documentation
Encryption documentation specifies encryption algorithms in use such as AES-256-GCM and TLS 1.3, key management procedures, certificate inventory and renewal schedule, data classification and encryption requirements, encryption at rest implementations, encryption in transit configurations, and cryptographic library versions.
This documentation must be updated immediately upon any encryption-related change.
13.5 Logging and Monitoring Documentation
Logging documentation covers log sources and types collected, log retention periods, log storage locations and capacity, log analysis tools and procedures, alert thresholds and escalation paths, monitoring dashboards and reports, and SIEM configuration and rules.
This documentation is reviewed quarterly with an annual comprehensive audit.
14. Compliance Records Management
14.1 Risk Assessment Reports
Risk assessments are conducted annually for comprehensive organizational assessment, quarterly for targeted assessments of new systems and services, and ad-hoc after significant incidents or changes.
Risk assessment reports contain identified assets and their value to the organization, threat identification and analysis, vulnerability assessment, risk likelihood and impact ratings, risk treatment plans, residual risk acceptance decisions, and review and approval signatures.
Risk assessment records are retained for 7 years.
14.2 Audit Logs
Log types collected include authentication and authorization events, administrative actions, data access operations including reads, writes, and deletes, configuration changes, security events and alerts, system errors and failures, and network traffic logs.
Retention periods vary by log type. Security logs are retained for 7 years. System logs are retained for 1 year. Application logs are retained for 90 days. Network logs are retained for 30 days.
Log protection requirements specify that logs are read-only after creation, encrypted in transit and at rest, backed up daily, and monitored for tampering.
14.3 Training Records
Training requirements include new hire security orientation within the first week of employment, annual security awareness training for all staff, role-specific security training as applicable to job function, phishing simulation exercises quarterly, and incident response training for the security team annually.
Training documentation includes training completion dates, training content and version delivered, assessment scores if applicable, certificates of completion, and refresher training schedules.
Training records are retained for the duration of employment plus 3 years.
14.4 Incident Reports
Incident reports must include the detection date and time, incident classification and severity, systems and data affected, timeline of events, response actions taken, root cause analysis, lessons learned, and corrective actions implemented.
Reports are distributed internally to management, the security team, and affected departments. External distribution follows regulatory and contractual requirements.
Incident reports are retained for 7 years.
14.5 Access Review Records
Review documentation includes the date of review, reviewer name and title, list of accounts reviewed, access changes made, justification for access granted, exceptions and approvals, and follow-up actions required.
Review schedules specify quarterly reviews for standard users, monthly reviews for privileged users, and bi-annual reviews for service accounts.
Access review records are retained for 3 years.
15. Compliance Framework
15.1 Applicable Regulations
GDPR compliance requires data protection impact assessments for high-risk processing, privacy by design and by default in all systems, user consent management, data subject rights fulfillment, and breach notification procedures.
SOC 2 compliance requires security controls documentation, availability monitoring, confidentiality protection measures, privacy practices documentation, and annual audit compliance verification.
ISO 27001 compliance requires an information security management system, risk assessment and treatment processes, security controls implementation, continuous improvement processes, and regular internal audits.
15.2 Compliance Monitoring
Automated monitoring tracks security control effectiveness, policy compliance through scanning, configuration drift detection, vulnerability management status, and patch compliance levels.
Manual reviews include quarterly compliance assessments, annual third-party audits, internal audit programs, management review meetings, and regulatory requirement updates.
16. Third-Party Security
16.1 Vendor Security Assessment
Pre-contract assessment requires security questionnaire completion, security certification review for SOC 2 and ISO 27001, data processing agreement execution, security requirements in the contract, and incident notification requirements.
Ongoing monitoring includes annual security re-assessment, review of security incidents involving the vendor, audit report review, performance measurement against SLAs, and security scorecard maintenance.
16.2 Data Sharing with Third Parties
Data sharing requirements include having a data processing agreement in place, sharing only the minimum necessary data, encryption for all data in transit, access controls and monitoring, and the right to audit vendor security practices.
The approval process requires security team review, legal review of agreements, privacy impact assessment, management approval for sensitive data sharing, and documentation in the vendor register.
17. Vulnerability Management
17.1 Vulnerability Identification
Vulnerabilities are identified through multiple sources including automated vulnerability scanning conducted weekly, annual penetration testing by external firms, security research and advisories from vendors and researchers, bug bounty program submissions, internal security testing, and third-party security assessments.
17.2 Vulnerability Remediation
Response times are based on severity. Critical vulnerabilities must be remediated within 24 hours. High severity vulnerabilities must be remediated within 7 days. Medium severity vulnerabilities must be remediated within 30 days. Low severity vulnerabilities must be remediated within 90 days or formally accepted as risk.
The remediation process follows a structured approach. First, the vulnerability is confirmed and documented. Second, impact and exploitability are assessed. Third, a remediation plan is developed. Fourth, the patch or fix is tested in non-production. Fifth, the change management process is followed. Sixth, the fix is deployed to production. Seventh, verification testing confirms the fix is effective. Eighth, documentation is updated.
17.3 Reporting a Vulnerability
External security researchers can report vulnerabilities by email to security@pragmatismo.com.br. A PGP key is available on the website for encrypted communication. Initial response is provided within 48 hours. A bug bounty program provides rewards for qualifying vulnerabilities.
Internal staff should report vulnerabilities via the internal security portal or email the security team directly for critical issues. Reports should include a description of the vulnerability, affected systems, and steps to reproduce the issue. Response is provided within 24 hours.
18. Security Metrics and KPIs
18.1 Key Performance Indicators
Security metrics track operational effectiveness. Mean time to detect (MTTD) incidents has a target of less than 15 minutes. Mean time to respond (MTTR) to incidents has a target of less than 4 hours. Percentage of systems with latest patches has a target of greater than 95%. Failed login attempts per day are baselined at less than 100. Security training completion rate has a target of 100%. Vulnerabilities remediated within SLA has a target of greater than 90%. Backup success rate has a target of 100%. Access review completion has a target of 100% on schedule.
Reporting occurs at multiple intervals. Weekly reports cover security incidents and critical metrics. Monthly reports provide a comprehensive security dashboard. Quarterly reports analyze metrics trends. Annual reports assess overall security posture.
19. Policy Enforcement
19.1 Policy Violations
Types of violations include unauthorized access attempts, password sharing, installation of unauthorized software, data exfiltration or leakage, policy non-compliance, and failure to report incidents.
Consequences follow progressive discipline. First offense results in a warning and mandatory retraining. Second offense results in a written warning and management review. Third offense results in suspension or termination. Severe violations result in immediate termination and potential legal action.
19.2 Exception Process
Exception requests require written justification, a completed risk assessment, identification of compensating controls, time-limited approval with a maximum of 90 days, approval from both management and the security team, and regular review while the exception remains active.
20. Document Control
This document is owned by Rodrigo Rodriguez, Security Director. The last update date and next review date are indicated in the document header. The current version is 2.0 with approved status.
The change history shows Version 1.0 as the initial policy creation and Version 2.0 as the comprehensive expansion with detailed procedures.
Distribution includes all employees via the internal portal, availability to clients upon request, and a summary published on the company website.
Approval authority, approval date, and next review date are recorded in the document management system.
Contact Information
The Security Team can be reached by email at security@pragmatismo.com.br, by phone at the emergency hotline maintained in internal systems, or through the internal security portal.
Specific inquiries should be directed to appropriate addresses. Security incidents go to security@pragmatismo.com.br. Privacy concerns go to privacy@pragmatismo.com.br. Compliance questions go to compliance@pragmatismo.com.br. General IT support requests go to support@pragmatismo.com.br.
Compliance Requirements Checklist
Overview
This document provides a comprehensive checklist for security and compliance requirements across multiple frameworks (GDPR, SOC 2, ISO 27001, HIPAA, LGPD) using the actual components deployed in General Bots.
Component Stack
| Component | Purpose | License |
|---|---|---|
| Caddy | Reverse proxy, TLS termination, web server | Apache 2.0 |
| PostgreSQL | Relational database | PostgreSQL License |
| General Bots Directory | Identity and access management (Zitadel/Keycloak) | Apache 2.0 |
| Drive | S3-compatible object storage | AGPLv3 |
| Stalwart | Mail server (SMTP/IMAP) | AGPLv3 |
| Qdrant | Vector database | Apache 2.0 |
| Cache (Valkey) | In-memory cache (Redis-compatible) | BSD 3-Clause |
| LiveKit | Video conferencing | Apache 2.0 |
| Ubuntu | Operating system | Various |
Compliance Requirements Matrix
Legend
- ✅ = Implemented and configured
- ⚠️ = Partially implemented, needs configuration
- ⬜ = Not yet implemented
- 🔄 = Automated process
- 📝 = Manual process required
Network & Web Server (Caddy)
| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | TLS 1.3 Configuration | Caddy | All | Automatic TLS 1.3 with modern ciphers |
| ✅ | Access Logging | Caddy | All | JSON format logs to /var/log/caddy/access.log |
| ✅ | Rate Limiting | Caddy | ISO 27001 | Per-IP rate limiting in Caddyfile |
| ⚠️ | WAF Rules | Caddy | HIPAA | Consider Caddy security plugins or external WAF |
| ✅ | Security Headers | Caddy | All | HSTS, CSP, X-Frame-Options, X-Content-Type-Options |
| ✅ | Reverse Proxy Security | Caddy | All | Secure forwarding with real IP preservation |
| ✅ | Certificate Management | Caddy | All | Automatic Let’s Encrypt with auto-renewal |
| 🔄 | HTTPS Redirect | Caddy | All | Automatic HTTP to HTTPS redirect |
Configuration File: /etc/caddy/Caddyfile
app.example.com {
tls {
protocols tls1.3
ciphers TLS_AES_256_GCM_SHA384
}
header {
Strict-Transport-Security "max-age=31536000"
X-Frame-Options "SAMEORIGIN"
X-Content-Type-Options "nosniff"
Content-Security-Policy "default-src 'self'"
}
rate_limit {
zone static {
key {remote_host}
events 100
window 1m
}
}
reverse_proxy localhost:3000
}
Identity & Access Management (General Bots Directory)
| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | MFA Implementation | Directory | All | TOTP/SMS/Hardware token support |
| ✅ | RBAC Configuration | Directory | All | Role-based access control with custom roles |
| ✅ | Password Policy | Directory | All | Min 12 chars, complexity requirements, history |
| ✅ | OAuth2/OIDC Setup | Directory | ISO 27001 | OAuth 2.0 and OpenID Connect flows |
| ✅ | Audit Logging | Directory | All | Comprehensive user activity logs |
| ✅ | Session Management | Directory | All | Configurable timeouts and invalidation |
| ✅ | SSO Support | Directory | Enterprise | SAML and OIDC SSO integration |
| ⚠️ | Password Rotation | Directory | HIPAA | Configure 90-day rotation policy |
| 📝 | Access Reviews | Directory | All | Quarterly manual review of user permissions |
Configuration: Directory Admin Console (http://localhost:8080)
Key Settings:
- Password min length: 12 characters
- MFA: Required for admins
- Session timeout: 8 hours
- Idle timeout: 30 minutes
Database (PostgreSQL)
| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | Encryption at Rest | PostgreSQL | All | File-system level encryption (LUKS) |
| ✅ | Encryption in Transit | PostgreSQL | All | TLS/SSL connections enforced |
| ✅ | Access Control | PostgreSQL | All | Role-based database permissions |
| ✅ | Audit Logging | PostgreSQL | All | pgAudit extension for detailed logging |
| ✅ | Connection Pooling | PostgreSQL | All | Built-in connection management |
| ⚠️ | Row-Level Security | PostgreSQL | HIPAA | Configure RLS policies for sensitive tables |
| ⚠️ | Column Encryption | PostgreSQL | GDPR | Encrypt PII columns with pgcrypto |
| 🔄 | Automated Backups | PostgreSQL | All | Daily backups via pg_dump/pg_basebackup |
| ✅ | Point-in-Time Recovery | PostgreSQL | HIPAA | WAL archiving enabled |
Configuration: Installed and configured automatically via installer.rs
# Enable SSL
ssl = on
ssl_cert_file = '/path/to/server.crt'
ssl_key_file = '/path/to/server.key'
ssl_ciphers = 'HIGH:!aNULL:!3DES'   # drop legacy 3DES from the default cipher list
# Enable audit logging
shared_preload_libraries = 'pgaudit'
pgaudit.log = 'write, ddl'
pgaudit.log_catalog = off
# Connection settings
max_connections = 100
password_encryption = scram-sha-256
# Logging
log_connections = on
log_disconnections = on
log_duration = on
log_statement = 'ddl'   # 'all' would write every query, including potential PII, to the logs
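The two partially implemented rows above (row-level security and column encryption) can be closed with a short SQL pass. A minimal sketch, assuming a hypothetical messages table with a user_id column; adapt table and column names to your schema:
sudo -u postgres psql -d generalbots <<'SQL'
-- pgcrypto supplies pgp_sym_encrypt/pgp_sym_decrypt for PII columns
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- Row-level security: each user sees only their own rows
ALTER TABLE messages ENABLE ROW LEVEL SECURITY;
CREATE POLICY user_isolation ON messages
    USING (user_id = current_setting('app.user_id')::uuid);
SQL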
Object Storage (Drive)
| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | Encryption at Rest | Drive | All | Server-side encryption (SSE-S3) |
| ✅ | Encryption in Transit | Drive | All | TLS for all connections |
| ✅ | Bucket Policies | Drive | All | Fine-grained access control policies |
| ✅ | Object Versioning | Drive | HIPAA | Version control for data recovery |
| ✅ | Access Logging | Drive | All | Detailed audit logs for all operations |
| ⚠️ | Lifecycle Rules | Drive | LGPD | Configure data retention and auto-deletion |
| ✅ | Immutable Objects | Drive | Compliance | WORM (Write-Once-Read-Many) support |
| 🔄 | Replication | Drive | HIPAA | Multi-site replication for DR |
| ✅ | IAM Integration | Drive | All | Integration with Directory Service via OIDC |
Configuration: /conf/drive/config.env
Bucket Policy Example:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {"AWS": ["arn:aws:iam::*:user/app-user"]},
"Action": ["s3:GetObject"],
"Resource": ["arn:aws:s3:::bucket-name/*"]
}
]
}
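The lifecycle row flagged ⚠️ above can be addressed with the MinIO client. A hedged sketch, assuming a drive alias and a generalbots bucket; flag names vary between mc releases, so confirm with mc ilm --help:
# Expire objects after 365 days to enforce retention limits
mc ilm add --expiry-days 365 drive/generalbots
mc ilm ls drive/generalbots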
Email Server (Stalwart)
| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | DKIM Signing | Stalwart | All | Domain key authentication |
| ✅ | SPF Records | Stalwart | All | Sender policy framework |
| ✅ | DMARC Policy | Stalwart | All | Domain-based message authentication |
| ✅ | Mail Encryption | Stalwart | All | TLS for SMTP/IMAP (STARTTLS + implicit) |
| ✅ | Content Filtering | Stalwart | All | Spam and malware filtering |
| ⚠️ | Mail Archiving | Stalwart | HIPAA | Configure long-term email archiving |
| ✅ | Sieve Filtering | Stalwart | All | Server-side mail filtering |
| ✅ | Authentication | Stalwart | All | OIDC integration with Directory Service |
| 📝 | Retention Policy | Stalwart | GDPR/LGPD | Define and implement email retention |
Configuration: /conf/mail/config.toml
[server.listener."smtp"]
bind = ["0.0.0.0:25"]
protocol = "smtp"
[server.listener."smtp-submission"]
bind = ["0.0.0.0:587"]
protocol = "smtp"
tls.implicit = false
[server.listener."smtp-submissions"]
bind = ["0.0.0.0:465"]
protocol = "smtp"
tls.implicit = true
[authentication]
mechanisms = ["plain", "login"]
directory = "oidc"
[directory."oidc"]
type = "oidc"
issuer = "http://localhost:8080"
DNS Records:
; SPF Record
example.com. IN TXT "v=spf1 ip4:203.0.113.0/24 -all"
; DKIM Record
default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCS..."
; DMARC Record
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
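Once published, the records can be verified from any host with standard DNS tooling:
dig TXT example.com +short
dig TXT default._domainkey.example.com +short
dig TXT _dmarc.example.com +short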
Cache (Valkey)
| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | Authentication | Valkey | All | Password-protected access |
| ✅ | TLS Support | Valkey | All | Encrypted connections |
| ✅ | Access Control | Valkey | All | ACL-based permissions |
| ⚠️ | Persistence | Valkey | Data Recovery | RDB/AOF for data persistence |
| ✅ | Memory Limits | Valkey | All | Maxmemory policies configured |
| 📝 | Data Expiration | Valkey | GDPR | Set TTL for cached personal data |
Configuration: /etc/valkey/valkey.conf
# Authentication
requirepass SecurePassword123!
# TLS
tls-port 6380
tls-cert-file /path/to/cert.pem
tls-key-file /path/to/key.pem
tls-protocols "TLSv1.3"
# ACL
aclfile /etc/valkey/users.acl
# Memory management
maxmemory 2gb
maxmemory-policy allkeys-lru
# Persistence
save 900 1
save 300 10
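For the data-expiration item marked 📝 above, cached personal data should always carry a TTL so erasure is bounded in time. A minimal sketch with valkey-cli; the key name is illustrative:
# Cache a user profile for 24 hours; stale personal data ages out automatically
valkey-cli -a 'SecurePassword123!' SET "session:user-123:profile" '{"name":"..."}' EX 86400
valkey-cli -a 'SecurePassword123!' TTL "session:user-123:profile"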
Vector Database (Qdrant)
| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ✅ | API Authentication | Qdrant | All | API key authentication |
| ✅ | TLS Support | Qdrant | All | HTTPS enabled |
| ✅ | Access Control | Qdrant | All | Collection-level permissions |
| ⚠️ | Data Encryption | Qdrant | HIPAA | File-system level encryption |
| 🔄 | Backup Support | Qdrant | All | Snapshot-based backups |
| 📝 | Data Retention | Qdrant | GDPR | Implement collection cleanup policies |
Configuration: /etc/qdrant/config.yaml
service:
host: 0.0.0.0
http_port: 6333
grpc_port: 6334
security:
api_key: "your-secure-api-key"
read_only_api_key: "read-only-key"
storage:
storage_path: /var/lib/qdrant/storage
snapshots_path: /var/lib/qdrant/snapshots
telemetry:
enabled: false
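Snapshot backups (the 🔄 row above) are driven through the HTTP API; the api-key header must match the key configured above:
# Create, then list, snapshots for the botserver collection
curl -X POST "http://localhost:6333/collections/botserver/snapshots" \
  -H "api-key: your-secure-api-key"
curl "http://localhost:6333/collections/botserver/snapshots" \
  -H "api-key: your-secure-api-key"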
Operating System (Ubuntu)
| Status | Requirement | Component | Standard | Implementation |
|---|---|---|---|---|
| ⚠️ | System Hardening | Ubuntu | All | Apply CIS Ubuntu Linux benchmarks |
| ✅ | Automatic Updates | Ubuntu | All | Unattended-upgrades for security patches |
| ⚠️ | Audit Daemon | Ubuntu | All | Configure auditd for system events |
| ✅ | Firewall Rules | Ubuntu | All | UFW configured with restrictive rules |
| ⚠️ | Disk Encryption | Ubuntu | All | LUKS full-disk encryption |
| ⚠️ | AppArmor | Ubuntu | All | Enable mandatory access control |
| 📝 | User Management | Ubuntu | All | Disable root login, use sudo |
| 📝 | SSH Hardening | Ubuntu | All | Key-based auth only, disable password auth |
Firewall Configuration:
# UFW firewall rules
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp # SSH
ufw allow 80/tcp # HTTP
ufw allow 443/tcp # HTTPS
ufw allow 25/tcp # SMTP
ufw allow 587/tcp # SMTP submission
ufw allow 993/tcp # IMAPS
ufw enable
Automatic Updates:
# /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}-security";
};
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";
Audit Rules: /etc/audit/rules.d/audit.rules
# Monitor authentication
-w /var/log/auth.log -p wa -k auth_log
-w /etc/passwd -p wa -k user_modification
-w /etc/group -p wa -k group_modification
# Monitor network
-a always,exit -F arch=b64 -S connect -k network_connect
# Monitor file access
-w /etc/shadow -p wa -k shadow_modification
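After editing the rules file, load and verify the ruleset with the standard auditd tooling:
sudo augenrules --load                     # compile rules.d/ into the active ruleset
sudo auditctl -l                           # list loaded rules
sudo ausearch -k auth_log --start today    # query today's events by key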
Cross-Component Requirements
Monitoring & Logging
| Status | Requirement | Implementation | Standard |
|---|---|---|---|
| ✅ | Centralized Logging | All logs to /var/log/ with rotation | All |
| ⚠️ | Log Aggregation | ELK Stack or similar SIEM | ISO 27001 |
| ✅ | Health Monitoring | Prometheus + Grafana | All |
| 📝 | Alert Configuration | Set up alerts for security events | All |
| ✅ | Metrics Collection | Component-level metrics | All |
Backup & Recovery
| Status | Requirement | Implementation | Standard |
|---|---|---|---|
| 🔄 | Automated Backups | Daily automated backups | All |
| ✅ | Backup Encryption | AES-256 encrypted backups | All |
| ✅ | Off-site Storage | Drive replication to secondary site | HIPAA |
| 📝 | Backup Testing | Quarterly restore tests | All |
| ✅ | Retention Policy | 90 days for full, 30 for incremental | All |
Backup Script: /usr/local/bin/backup-system.sh
#!/bin/bash
set -euo pipefail
BACKUP_DATE=$(date +%Y%m%d_%H%M%S)
# PostgreSQL backup; -pass file: avoids an interactive prompt under cron
pg_dump -h localhost -U postgres generalbots | \
gzip | \
openssl enc -aes-256-cbc -salt -pbkdf2 -pass file:/etc/backup.key \
    -out /backup/pg_${BACKUP_DATE}.sql.gz.enc
# Drive backup
mc mirror drive/generalbots /backup/drive_${BACKUP_DATE}/
# Qdrant snapshot (API key required; see Qdrant configuration)
curl -X POST "http://localhost:6333/collections/botserver/snapshots" \
    -H "api-key: your-secure-api-key"
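A backup only counts once it has been restored. A minimal verification sketch, reusing the key file and naming scheme from the script above (the timestamp is illustrative):
# Decrypt a backup and load it into a scratch database
createdb -U postgres restore_test
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:/etc/backup.key \
    -in /backup/pg_20240115_020000.sql.gz.enc | gunzip | \
    psql -U postgres restore_test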
Network Security
| Status | Requirement | Implementation | Standard |
|---|---|---|---|
| ✅ | Network Segmentation | Component isolation via firewall | All |
| ✅ | Internal TLS | TLS between all components | ISO 27001 |
| ⚠️ | VPN Access | WireGuard VPN for admin access | All |
| ✅ | Rate Limiting | Caddy rate limiting | All |
| 📝 | DDoS Protection | CloudFlare or similar | Production |
Compliance-Specific Requirements
GDPR
| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Data Encryption | AES-256 at rest, TLS 1.3 in transit |
| ✅ | Right to Access | API endpoints for data export |
| ✅ | Right to Deletion | Data deletion workflows implemented |
| ✅ | Right to Portability | JSON export functionality |
| ✅ | Consent Management | Zitadel consent flows |
| 📝 | Data Processing Records | Document all data processing activities |
| ✅ | Breach Notification | Incident response plan includes 72h notification |
SOC 2
| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Access Controls | RBAC via Zitadel |
| ✅ | Audit Logging | Comprehensive logging across all components |
| ✅ | Change Management | Version control and deployment procedures |
| ✅ | Monitoring | Real-time monitoring with Prometheus |
| 📝 | Risk Assessment | Annual risk assessment required |
| ✅ | Encryption | Data encrypted at rest and in transit |
ISO 27001
| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Asset Inventory | Documented component list |
| ✅ | Access Control | Zitadel RBAC |
| ✅ | Cryptography | Modern encryption standards |
| 📝 | Physical Security | Data center security documentation |
| ✅ | Operations Security | Automated patching and monitoring |
| 📝 | Incident Management | Documented incident response procedures |
| 📝 | Business Continuity | DR plan and testing |
HIPAA
| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Encryption | PHI encrypted at rest and in transit |
| ✅ | Access Controls | Role-based access with MFA |
| ✅ | Audit Controls | Comprehensive audit logging |
| ⚠️ | Integrity Controls | Checksums and versioning |
| ✅ | Transmission Security | TLS 1.3 for all communications |
| 📝 | Business Associate Agreements | Required for third-party vendors |
| ⚠️ | Email Archiving | Stalwart archiving configuration needed |
LGPD (Brazilian GDPR)
| Status | Requirement | Implementation |
|---|---|---|
| ✅ | Data Encryption | Same as GDPR |
| ✅ | User Rights | Same as GDPR |
| ✅ | Consent | Zitadel consent management |
| 📝 | Data Protection Officer | Designate DPO |
| ⚠️ | Data Retention | Configure lifecycle policies in Drive |
| ✅ | Breach Notification | Same incident response as GDPR |
Implementation Priority
High Priority (Critical for Production)
- ✅ TLS 1.3 everywhere (Caddy, PostgreSQL, Drive, Stalwart)
- ✅ MFA for all admin accounts (Zitadel)
- ✅ Firewall configuration (UFW)
- ✅ Automated security updates (unattended-upgrades)
- 🔄 Automated encrypted backups
Medium Priority (Required for Compliance)
- ⚠️ Disk encryption (LUKS)
- ⚠️ Audit daemon (auditd)
- ⚠️ WAF rules (Caddy plugins or external)
- 📝 Access reviews (quarterly)
- ⚠️ Email archiving (Stalwart)
Lower Priority (Enhanced Security)
- ⚠️ VPN access (WireGuard)
- ⚠️ Log aggregation (ELK Stack)
- ⚠️ AppArmor/SELinux
- 📝 CIS hardening
- 📝 Penetration testing
Verification Checklist
Weekly Tasks
- Review security logs (Caddy, PostgreSQL, Zitadel)
- Check backup completion status
- Review failed authentication attempts
- Update security patches
Monthly Tasks
- Access review for privileged accounts
- Review audit logs for anomalies
- Test backup restoration
- Update vulnerability database
Quarterly Tasks
- Full access review for all users
- Compliance check (run automated checks)
- Security configuration audit
- Disaster recovery drill
Annual Tasks
- Penetration testing
- Full compliance audit
- Risk assessment update
- Security policy review
- Business continuity test
Quick Start Implementation
# 1. Enable firewall
sudo ufw enable
sudo ufw allow 22,80,443,25,587,993/tcp
# 2. Configure automatic updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
# 3. Enable PostgreSQL SSL
sudo -u postgres psql -c "ALTER SYSTEM SET ssl = 'on';"
sudo systemctl restart postgresql
# 4. Set Drive encryption
mc admin config set drive/ server-side-encryption-s3 on
# 5. Configure Zitadel MFA
# Via web console: Settings > Security > MFA > Require for admins
# 6. Enable Caddy security headers
# Add to Caddyfile (see Network & Web Server section)
# 7. Set up daily backups
sudo crontab -e
# Add: 0 2 * * * /usr/local/bin/backup-system.sh
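A quick smoke test confirms that the critical settings took effect:
# Verify firewall state and database TLS
sudo ufw status verbose
sudo -u postgres psql -tc "SHOW ssl;"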
Support & Resources
- Internal Security Team: security@pragmatismo.com.br
- Compliance Officer: compliance@pragmatismo.com.br
- Documentation: https://docs.pragmatismo.com.br
- Component Documentation: See “Component Security Documentation” in security-features.md
Document Control
- Version: 1.0
- Last Updated: 2024-01-15
- Next Review: 2024-07-15
- Owner: Security Team
- Approved By: CTO
Permissions Matrix
This chapter documents the permission system in General Bots, explaining how role-based access control governs what users can do within the platform. Understanding this permission model is essential for administrators configuring access policies and developers building applications that respect security boundaries.
Understanding the Permission Model
General Bots implements a role-based access control (RBAC) system that integrates with Zitadel, the platform’s Directory Service. The permission architecture consists of three interconnected layers that work together to determine what any given user can do.
At the highest level, realms establish permission boundaries that typically correspond to organizations. Within each realm, groups collect users who share common access needs. Permissions represent specific actions that can be granted to groups, and users inherit the combined permissions of all groups to which they belong.
This layered approach provides flexibility while maintaining manageability. Rather than assigning permissions directly to individual users, administrators create groups with appropriate permission sets and then add users to those groups. When a user’s responsibilities change, their access can be adjusted simply by modifying their group memberships.
User Context and System Context
APIs in General Bots operate in one of two security contexts, each with distinct characteristics and use cases.
User context operations execute on behalf of an authenticated user, using their OAuth token for authorization. When an API operates in user context, it sees and modifies only resources that belong to or are shared with that user. Reading files, sending messages, accessing calendars, managing tasks, and viewing email all occur in user context. The principle of least privilege applies naturally here—users can only access what they own or what has been explicitly shared with them.
System context operations execute on behalf of the bot or system itself, using a service account token. These operations have broader access because they serve cross-cutting concerns that don’t belong to any individual user. Bot-initiated messages, scheduled task execution, system monitoring, cross-user analytics, and backup operations all require system context to function properly.
The distinction between contexts ensures that normal user operations remain appropriately scoped while still allowing the system to perform necessary administrative functions.
File Operations
The drive system provides file storage and management capabilities with granular permission controls. Listing files through the API shows different results depending on context—in user context, only the user’s files appear, while system context reveals all files within the bot’s storage. Similarly, file uploads target the user’s folder in user context but can write to any location in the bot’s storage when operating in system context.
File deletion and sharing follow the same pattern. Users can delete and share their own files, while system context permits these operations on any file. The corresponding permissions are files:read for viewing and downloading, files:write for uploading and modifying, files:delete for removal, and files:share for granting access to others.
Communication Capabilities
Email functionality provides access to messaging through the organization’s mail system. Reading inbox contents and drafts requires the email:read permission and operates strictly in user context—there’s no meaningful system context for reading another user’s email. Sending messages requires email:send and can operate in either context, with user context messages appearing to come from the user and system context messages appearing to come from the bot.
Meeting integration enables video conferencing coordination. Viewing room information uses meet:read, with user context showing only visible rooms and system context revealing all rooms. Creating meetings requires meet:create, where user context establishes the creator as organizer while system context creates bot-organized meetings. Joining requires meet:join and inviting others requires meet:invite, with system context allowing invitations to any meeting regardless of ownership.
Calendar operations manage scheduling and appointments. Reading events with calendar:read shows user events in user context or bot calendar events in system context. Creating events requires calendar:write and targets the appropriate calendar based on context. Booking appointments with calendar:book makes the user an attendee in user context or establishes the bot as organizer in system context.
Task management follows similar patterns. The tasks:read permission shows user tasks in user context or all tasks in system context. Creating and modifying tasks with tasks:write assigns tasks appropriately based on context. Completing tasks with tasks:complete allows users to mark their own tasks complete or, in system context, to complete any task.
Administrative Functions
Administrative endpoints provide system management capabilities reserved for privileged users. Managing users requires admin:users, managing bot configurations requires admin:bots, modifying system configuration requires admin:config, and accessing monitoring data requires admin:monitor. All administrative operations execute in system context and require explicit administrative privileges.
These elevated permissions should be granted sparingly, typically only to IT staff responsible for system operation. The audit system tracks all administrative actions to maintain accountability.
Permission Definitions
The permission system defines specific capabilities organized by functional area. Core permissions govern fundamental platform features: chat:read allows viewing conversation history, chat:write enables sending messages, and the file permissions control document management as described above.
Communication permissions extend to the various messaging channels: email read and send capabilities, meeting room operations, and calendar management. Productivity permissions cover task management operations.
Administrative permissions form a separate category with broader impact: admin:users for user management, admin:groups for group administration, admin:bots for bot configuration, admin:config for system settings, admin:monitor for accessing operational metrics, and admin:backup for data protection operations.
Default Group Configuration
General Bots creates several default groups during initialization, each designed for common organizational roles.
The Administrators group receives all permissions, including the complete set of administrative capabilities. Members of this group can perform any operation in the system. This group should contain only trusted IT personnel responsible for platform operation.
The Managers group provides access to productivity features plus basic monitoring capabilities. Managers can fully utilize chat, files including sharing, email, meetings, calendar, and tasks. They can also view monitoring data to understand system usage but cannot modify system configuration or manage users.
The Users group establishes standard access for regular employees. Users can participate in chat, work with files without sharing capabilities, read and send email, view and join meetings, manage their calendars, and handle their tasks. This permission set enables full participation in daily work without administrative capabilities.
The Guests group provides minimal access for anonymous or temporary users. Guests can only participate in chat, without access to any other system features. This restricted access suits scenarios where external parties need limited interaction with bots.
Permission Configuration
Configuring permissions involves coordinating settings between Zitadel and the General Bots configuration.
In Zitadel, administrators access the admin console and navigate to Organization settings, then to Roles. Here they create roles that correspond to the permissions defined in General Bots. These roles are then assigned to groups, and users are added to appropriate groups based on their organizational responsibilities.
The config.csv file for each bot can map Zitadel roles to General Bots permissions. The permission mapping entries define which local permissions correspond to each Zitadel role. The default anonymous permission setting establishes what capabilities unauthenticated users receive.
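For illustration only, such a mapping might look like the following in config.csv; the key names here are hypothetical, not a documented schema:
key,value
role-map-manager,chat:read|chat:write|files:share|admin:monitor
role-map-user,chat:read|chat:write|files:read|files:write
anonymous-permissions,chat:read|chat:write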
Anonymous Access Considerations
The chat interface supports anonymous users who haven’t authenticated, though with significant restrictions. Anonymous users can chat with the default bot only, using a session that exists solely on the server. They cannot access conversation history, the drive, email, tasks, meetings, or any settings. Essentially, anonymous access provides a preview of bot capabilities without exposing organizational resources.
Organizations can customize the default anonymous permissions if they want to provide different capabilities to unauthenticated users, though most deployments restrict anonymous access to basic chat functionality.
Permission Checking in Scripts
BASIC scripts can query user roles to implement conditional logic based on permissions. By retrieving the role from the session, scripts can present different options or perform different actions depending on the user’s access level.
For example, a script might offer administrative functions only to users with the admin role, provide reporting features to managers, and present standard assistance to regular users. This capability allows bots to adapt their behavior to each user’s organizational context.
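A minimal sketch of that pattern in BASIC, assuming a hypothetical GET SESSION accessor for the role value (the exact keyword name may differ in your build):
' Hypothetical accessor; substitute your build's session/role keyword
role = GET SESSION "role"
IF role = "admin" THEN
    TALK "Admin tools unlocked. Type 'users' to manage accounts."
ELSE
    TALK "How can I help you today?"
END IF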
Audit Trail
All permission checks are logged, creating a comprehensive audit trail of access attempts. Administrators can query these logs through the admin API to review permission-related events. Each log entry captures the timestamp, user identifier, attempted action, accessed resource, result indicating whether access was allowed or denied, and when denied, the reason for denial.
This audit capability supports security reviews, compliance requirements, and troubleshooting access issues. Organizations with regulatory obligations can demonstrate that appropriate access controls are in place and functioning correctly.
Related Documentation
For deeper understanding of the authentication and authorization system, the User Authentication chapter explains the login and session management processes. The User Context vs System Context chapter provides detailed exploration of how context affects API behavior. The Security Policy chapter establishes guidelines for secure platform operation. The API Endpoints chapter documents the full API surface including permission requirements for each endpoint.
User Context vs System Context
This chapter explains the two execution contexts in General Bots: User Context and System Context. Understanding these contexts is essential for building secure, properly scoped bot interactions.
Overview
Every API call and BASIC script execution happens in one of two contexts:
| Context | Identity | Use Case |
|---|---|---|
| User Context | Logged-in user | Interactive operations on user’s behalf |
| System Context | Bot service account | Automated/scheduled operations |
User Context
Definition
User Context means the operation is performed as the authenticated user, using their identity and permissions.
Characteristics
- Identity: The logged-in user’s ID
- Permissions: Limited to what the user can access
- Scope: Only user’s own resources
- Token: User’s OAuth access token
When User Context Applies
- Interactive Chat: User sends a message
- File Operations: User uploads/downloads files
- Email Access: User reads their inbox
- Calendar: User views their schedule
- Tasks: User manages their task list
Example Flow
User logs in → OAuth token issued → User asks bot to send email
↓
Bot sends email AS the user
↓
Email "From:" shows user's address
BASIC Script Example
' This runs in User Context when triggered by user interaction
' The email is sent from the logged-in user's account
TALK "Who should I email?"
recipient = HEAR
TALK "What's the subject?"
subject = HEAR
TALK "What's the message?"
body = HEAR
SEND MAIL recipient, subject, body
TALK "Email sent from your account to " + recipient
Access Boundaries
In User Context, the bot can only access:
| Resource | Access Level |
|---|---|
| Files | User’s files and shared files |
| Email | User's mailbox only |
| Calendar | User’s calendar only |
| Tasks | User’s tasks only |
| Contacts | User’s contacts |
| Meet | Meetings user is invited to |
System Context
Definition
System Context means the operation is performed by the bot system itself, using a service account with elevated permissions.
Characteristics
- Identity: Bot’s service account
- Permissions: Defined by admin configuration
- Scope: Cross-user or system-wide resources
- Token: Service account credentials
When System Context Applies
- Scheduled Tasks: Cron-based script execution via SET SCHEDULE
- Event Handlers: ON keyword triggers
- Admin Operations: User management
- Analytics: Cross-user reporting
- Backups: System-wide data export
- Bot-Initiated Messages: Proactive notifications
Example Flow
Schedule triggers at 9:00 AM → System context activated
↓
Bot sends summary to all managers
↓
Email "From:" shows bot's address
BASIC Script Example
' This runs in System Context (scheduled task)
' The bot sends emails from its own account
SET SCHEDULE "0 9 * * 1" ' Every Monday at 9 AM
' Bot processes data and sends notifications
summary = LLM "Generate weekly summary"
SEND MAIL "team@example.com", "Weekly Summary", summary
PRINT "Weekly summary sent"
Access Boundaries
In System Context, the bot can access:
| Resource | Access Level |
|---|---|
| Files | All bot storage |
| Email | Send as bot identity |
| Calendar | Bot’s calendar, create events |
| Tasks | Create/assign to any user |
| Users | Read user directory |
| Meet | Join any meeting (if configured) |
| Config | Read bot configuration |
Determining Context
Automatic Detection
General Bots automatically determines context based on how the script is triggered:
| Trigger | Context |
|---|---|
| User sends message | User Context |
| SET SCHEDULE execution | System Context |
| ON event handler | System Context |
| HTTP API with user token | User Context |
| Internal service call | System Context |
Context in Scripts
The context is determined by the trigger, not by keywords in the script:
' User-triggered script (User Context)
' - Runs when user interacts
' - Uses user's permissions
name = HEAR "What's your name?"
TALK "Hello, " + name
' Scheduled script (System Context)
' - Runs on schedule
' - Uses bot's permissions
SET SCHEDULE "0 8 * * *" ' Daily at 8 AM
TALK "Good morning! Here's your daily briefing."
Security Implications
User Context Security
| Benefit | Consideration |
|---|---|
| Limited blast radius | Cannot access others’ data |
| Audit trail to user | User responsible for actions |
| Respects user permissions | May limit bot functionality |
System Context Security
| Benefit | Consideration |
|---|---|
| Full bot capabilities | Must be carefully controlled |
| Cross-user operations | Audit critical for compliance |
| Scheduled automation | Service account must be secured |
Configuration
Service Account Setup
The bot’s system identity is managed through the Directory service (Zitadel). Configure in config.csv:
key,value
system-account-email,bot@yourdomain.com
system-context-permissions,files:read|email:send|calendar:write
Context Restrictions
Limit what System Context can do:
key,value
system-allow-email,true
system-allow-file-delete,false
system-allow-user-create,false
system-allow-config-change,false
Audit Logging
All operations are logged with context:
{
"timestamp": "2024-01-15T10:30:00Z",
"context": "user",
"user_id": "user-123",
"action": "email:send",
"resource": "email to client@example.com",
"result": "success"
}
{
"timestamp": "2024-01-15T09:00:00Z",
"context": "system",
"service_account": "bot-service-account",
"action": "email:send",
"resource": "weekly-summary to 47 recipients",
"trigger": "schedule:weekly-summary",
"result": "success"
}
Best Practices
Use User Context When
- User initiates the action
- Operation affects only the user
- Audit trail should point to user
- Respecting user permissions is required
Use System Context When
- Scheduled or automated tasks
- Cross-user operations needed
- Bot needs elevated permissions
- System-wide actions required
Security Guidelines
- Minimize System Context: Use only when necessary
- Audit Everything: Log all system context operations
- Rotate Credentials: Change service account tokens regularly
- Limit Scope: Grant minimal permissions to service account
- Review Access: Periodically audit system context usage
Troubleshooting
“Permission Denied” Errors
Check if the operation is running in the expected context:
- User-triggered actions run in User Context with user permissions
- Scheduled actions run in System Context with bot permissions
If a scheduled task fails with permission errors, verify the bot’s service account has the required permissions in Zitadel.
Unexpected “From” Address in Emails
The sender depends on context:
- User Context: Sends as logged-in user
- System Context: Sends as bot account
Ensure your script is triggered in the intended way for the correct sender.
See Also
- Permissions Matrix - Full permission reference
- Bot Authentication - Service account setup
- Security Policy - Security guidelines
- SET SCHEDULE - Scheduled execution
Chapter 13: Contributing
Join the General Bots community and help improve the platform.
Quick Links
| Resource | Purpose |
|---|---|
| GitHub | Source code, issues |
| Discussions | Q&A, ideas |
| Blog | Updates, tutorials |
How to Contribute
Code Contributions
- Fork the repository
- Create a feature branch
- Make your changes
- Write tests
- Submit a pull request
Documentation
- Fix typos and errors
- Add examples
- Improve clarity
- Translate content
Community Support
- Answer questions in discussions
- Share your bots and templates
- Report bugs with reproduction steps
- Suggest features
Development Setup
git clone https://github.com/GeneralBots/botserver
cd botserver
cargo build
./target/debug/botserver
What We Accept
✅ Bug fixes with tests
✅ Performance improvements
✅ New BASIC keywords (if broadly useful)
✅ Documentation improvements
✅ Security enhancements
What We Don’t Accept
❌ Vendor-specific integrations
❌ Undocumented code
❌ Code without tests
❌ Features achievable with existing BASIC + LLM
Chapter Contents
- Development Setup - Build environment
- Testing Guide - Running tests
- Documentation - Writing docs
- Pull Requests - PR process
- Community Guidelines - Code of conduct
- IDEs - Editor support
See Also
- Architecture - System design
- BASIC Reference - Scripting language
Development Setup
This guide covers setting up a development environment for contributing to General Bots.
Prerequisites
Required Software
- Rust: 1.70 or later
  curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
- PostgreSQL: Installed automatically during bootstrap
- Git: For version control
  git --version  # Should be 2.0 or later
Optional Components
- Drive: For S3-compatible storage (auto-installed by bootstrap)
- Cache (Valkey): For caching (auto-installed by bootstrap)
- LXC: For containerized development
Getting Started
1. Clone the Repository
git clone https://github.com/GeneralBots/BotServer.git
cd BotServer
2. Environment Setup
The .env file is created automatically during bootstrap with secure random credentials. No manual configuration is needed.
# Bootstrap creates everything automatically
./botserver
A generated .env looks like the following; the values shown are placeholders, and real credentials are randomized:
DRIVE_SECRET=minioadmin
SERVER_HOST=127.0.0.1
SERVER_PORT=8080
3. Install Rust Dependencies
cargo fetch
4. Run Bootstrap
The bootstrap process installs and configures all required services:
cargo run
On first run, bootstrap will:
- Install PostgreSQL (if needed)
- Install drive (S3-compatible storage)
- Install cache (Valkey)
- Create database schema
- Upload bot templates
- Generate secure credentials
Development Workflow
Building the Project
# Debug build (faster compilation)
cargo build
# Release build (optimized)
cargo build --release
Running Tests
# Run all tests
cargo test
# Run specific test
cargo test test_name
# Run with output
cargo test -- --nocapture
Code Formatting
# Format all code
cargo fmt
# Check formatting without changes
cargo fmt -- --check
Linting
# Run clippy for lint checks
cargo clippy -- -D warnings
Project Structure
botserver/
├── src/
│ ├── main.rs # Application entry point
│ ├── lib.rs # Library root
│ ├── api_router.rs # API routes
│ ├── core/ # Core functionality
│ │ ├── bootstrap/ # System initialization
│ │ ├── bot/ # Bot management
│ │ ├── config/ # Configuration
│ │ ├── session/ # Session management
│ │ └── shared/ # Shared utilities
│ ├── basic/ # BASIC interpreter
│ │ ├── compiler/ # Script compilation
│ │ └── keywords/ # Keyword implementations
│ ├── drive/ # Storage integration
│ └── llm/ # LLM providers
├── templates/ # Bot templates
├── migrations/ # Database migrations
├── web/ # Web interface
└── Cargo.toml # Dependencies
Database Setup
Manual Database Creation
If bootstrap doesn’t create the database:
# Connect to PostgreSQL
psql -U postgres
# Create user and database
CREATE USER gbuser WITH PASSWORD 'SecurePassword123!';
CREATE DATABASE generalbots OWNER gbuser;
\q
Running Migrations
Migrations run automatically, but can be run manually:
# Install diesel CLI
cargo install diesel_cli --no-default-features --features postgres
# Run migrations
diesel migration run
Common Development Tasks
Adding a New Keyword
- Create a new file in src/basic/keywords/
- Implement the keyword function
- Register it in src/basic/keywords/mod.rs
- Add tests
Adding an API Endpoint
- Define the handler in the appropriate module
- Add the route in src/api_router.rs
- Update the OpenAPI documentation
- Write integration tests
Modifying Database Schema
- Create a migration: diesel migration generate migration_name
- Edit up.sql and down.sql
- Run the migration: diesel migration run
- Update models in src/core/shared/models.rs
Remote Development Setup
SSH Configuration for Stable Connections
When developing on remote Linux servers, configure SSH for stable monitoring connections:
Edit ~/.ssh/config:
Host *
ServerAliveInterval 60
ServerAliveCountMax 5
This configuration:
- ServerAliveInterval 60: Sends keepalive packets every 60 seconds
- ServerAliveCountMax 5: Allows up to 5 missed keepalives before disconnecting
- Prevents SSH timeouts during long compilations or debugging sessions
- Maintains stable connections for monitoring logs and services
Remote Monitoring Tips
# Monitor BotServer logs in real-time
ssh user@server 'tail -f botserver.log'
# Watch compilation progress
ssh user@server 'cd /path/to/botserver && cargo build --release'
# Keep terminal session alive
ssh user@server 'tmux new -s botserver'
Debugging
Debug Mode
Run with verbose output to troubleshoot issues:
RUST_LOG=trace cargo run
Check logs in the console output for debugging information.
Using VS Code
.vscode/launch.json:
{
"version": "0.2.0",
"configurations": [
{
"type": "lldb",
"request": "launch",
"name": "Debug BotServer",
"cargo": {
"args": ["build"],
"filter": {
"name": "botserver",
"kind": "bin"
}
},
"args": [],
"cwd": "${workspaceFolder}"
}
]
}
Performance Profiling
Using Flamegraph
# Install flamegraph
cargo install flamegraph
# Profile the application
cargo flamegraph --bin botserver
Memory Profiling
# Install valgrind (Linux)
sudo apt-get install valgrind
# Run the compiled binary under memcheck (profiling the cargo wrapper is not useful)
valgrind --tool=memcheck ./target/debug/botserver
Testing with Different Features
Feature Flags
# Build with specific features
cargo build --features "llm,drive"
# Build without default features
cargo build --no-default-features
# Test with all features
cargo test --all-features
Troubleshooting
Common Issues
-
Database Connection Failed
- Check PostgreSQL is running
- Verify DATABASE_URL is correct
- Check user permissions
-
Drive Connection Failed
- Ensure drive is running on port 9000
- Check DRIVE_ACCESSKEY and DRIVE_SECRET
-
Port Already in Use
- Change SERVER_PORT in .env
- Kill existing process:
lsof -i :8080
-
Compilation Errors
- Update Rust:
rustup update - Clean build:
cargo clean - Check dependencies:
cargo tree
- Update Rust:
LXC Development
Using LXC Containers
# Create development containers
lxc-create -n botserver-dev-db -t download -- -d alpine -r 3.18 -a amd64
lxc-create -n botserver-dev-drive -t download -- -d alpine -r 3.18 -a amd64
lxc-create -n botserver-dev-cache -t download -- -d alpine -r 3.18 -a amd64
# Configure PostgreSQL container
lxc-start -n botserver-dev-db
lxc-attach -n botserver-dev-db -- sh -c "
apk add postgresql14 postgresql14-client
rc-service postgresql setup
rc-service postgresql start
psql -U postgres -c \"CREATE USER gbuser WITH PASSWORD 'password';\"
psql -U postgres -c \"CREATE DATABASE botserver OWNER gbuser;\"
"
# Configure MinIO (Drive) container
lxc-start -n botserver-dev-drive
lxc-attach -n botserver-dev-drive -- sh -c "
wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
MINIO_ROOT_USER=driveadmin MINIO_ROOT_PASSWORD=driveadmin ./minio server /data --console-address ':9001' &
"
# Configure Redis (Cache) container
lxc-start -n botserver-dev-cache
lxc-attach -n botserver-dev-cache -- sh -c "
apk add redis
rc-service redis start
"
# Get container IPs
DB_IP=$(lxc-info -n botserver-dev-db -iH)
DRIVE_IP=$(lxc-info -n botserver-dev-drive -iH)
CACHE_IP=$(lxc-info -n botserver-dev-cache -iH)
echo "Database: $DB_IP:5432"
echo "Drive: $DRIVE_IP:9000"
echo "Cache: $CACHE_IP:6379"
Start all services:
lxc-start -n botserver-dev-db
lxc-start -n botserver-dev-drive
lxc-start -n botserver-dev-cache
Contributing Guidelines
See Contributing Guidelines for:
- Code style requirements
- Commit message format
- Pull request process
- Code review expectations
Getting Help
- Check existing issues on GitHub
- Join the community discussions
- Review the documentation
- Ask questions in pull requests
Next Steps
- Read the Architecture Overview
- Explore the BASIC Language Reference
- Review Code Standards
- Start with a good first issue
Testing
BotServer follows comprehensive testing practices to ensure reliability, performance, and maintainability of the codebase.
Overview
Testing in BotServer covers:
- Unit tests for individual functions
- Integration tests for components
- End-to-end tests for workflows
- Performance benchmarks
- BASIC script testing
Test Organization
Directory Structure
src/
├── module/
│ ├── mod.rs # Module code
│ └── mod.test.rs # Module tests
├── basic/keywords/
│ ├── keyword.rs # Keyword implementation
│ └── keyword.test.rs # Keyword tests
tests/
├── integration/ # Integration tests
└── e2e/ # End-to-end tests
Test Files
Tests are colocated with source code:
- module.rs - Implementation
- module.test.rs - Tests
- Or inline #[cfg(test)] modules
Running Tests
All Tests
# Run all tests
cargo test
# Run with output
cargo test -- --nocapture
# Run specific test
cargo test test_name
# Run tests in module
cargo test module_name::
Test Coverage
# Install tarpaulin
cargo install cargo-tarpaulin
# Generate coverage report
cargo tarpaulin --out Html
# View coverage
open tarpaulin-report.html
Unit Testing
Basic Test Structure
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_function_success() {
        // Arrange
        let input = "test";
        // Act
        let result = function_under_test(input);
        // Assert
        assert_eq!(result, expected);
    }

    #[test]
    #[should_panic(expected = "error message")]
    fn test_function_failure() {
        function_that_panics();
    }
}
Async Tests
#[tokio::test]
async fn test_async_function() {
    let result = async_function().await;
    assert!(result.is_ok());
}
Integration Testing
Database Tests
#[test]
fn test_database_operation() {
    // Use test database
    let conn = establish_test_connection();
    // Run migrations
    run_pending_migrations(&conn).unwrap();
    // Test operation
    let result = create_user(&conn, "test_user");
    assert!(result.is_ok());
    // Cleanup
    rollback_transaction(&conn);
}
API Tests
#[tokio::test]
async fn test_api_endpoint() {
    // Create test app
    let app = create_test_app().await;
    // Make request
    let response = app
        .oneshot(
            Request::builder()
                .uri("/api/health")
                .body(Body::empty())
                .unwrap(),
        )
        .await
        .unwrap();
    // Assert response
    assert_eq!(response.status(), StatusCode::OK);
}
BASIC Script Testing
Testing Keywords
#[test]
fn test_custom_keyword() {
    let mut engine = Engine::new();
    let state = create_test_state();
    // Register keyword
    register_keyword(&state, &mut engine);
    // Execute script
    let script = r#"
        let result = MY_KEYWORD("input");
        result
    "#;
    let result: String = engine.eval(script).unwrap();
    assert_eq!(result, "expected output");
}
Testing Script Compilation
#[test]
fn test_script_compilation() {
    let compiler = BasicCompiler::new(test_state(), test_bot_id());
    let script_path = "test.bas";
    let result = compiler.compile_file(script_path, "work_dir");
    assert!(result.is_ok());
    assert!(result.unwrap().mcp_tool.is_some());
}
Test Utilities
Test Fixtures
// test_utils.rs
pub fn create_test_state() -> Arc<AppState> {
    Arc::new(AppState {
        conn: create_test_pool(),
        config: test_config(),
        // ... other fields
    })
}

pub fn create_test_user() -> User {
    User {
        id: Uuid::new_v4(),
        username: "test_user".to_string(),
        email: "test@example.com".to_string(),
        // ...
    }
}
Mock Objects
use mockall::*;

#[automock]
trait EmailService {
    fn send_email(&self, to: &str, subject: &str, body: &str) -> Result<()>;
}

#[test]
fn test_with_mock() {
    let mut mock = MockEmailService::new();
    mock.expect_send_email()
        .times(1)
        .returning(|_, _, _| Ok(()));
    // Use mock in test
}
Performance Testing
Benchmarks
#![feature(test)]
extern crate test;

#[cfg(test)]
mod bench {
    use super::*;
    use test::Bencher;

    #[bench]
    fn bench_function(b: &mut Bencher) {
        b.iter(|| function_to_benchmark());
    }
}
Load Testing
# Using cargo-stress
cargo install cargo-stress
cargo stress --test load_test
# Custom load test
#[test]
#[ignore] // Run with --ignored flag
fn test_high_load() {
let handles: Vec<_> = (0..100)
.map(|_| {
thread::spawn(|| {
// Simulate load
})
})
.collect();
for handle in handles {
handle.join().unwrap();
}
}
Test Best Practices
Test Naming
// Good: Descriptive names
#[test]
fn test_user_creation_with_valid_email_succeeds() {}

#[test]
fn test_user_creation_with_invalid_email_fails() {}

// Bad: Generic names
#[test]
fn test1() {}
Test Independence
// Each test should be independent
#[test]
fn test_independent_1() {
    let state = create_fresh_state();
    // Test logic
}

#[test]
fn test_independent_2() {
    let state = create_fresh_state(); // Fresh state
    // Test logic
}
Test Data
// Use builders for test data
struct UserBuilder {
    username: String,
    email: String,
}

impl UserBuilder {
    fn new() -> Self {
        Self {
            username: "test_user".to_string(),
            email: "test@example.com".to_string(),
        }
    }

    fn with_username(mut self, username: &str) -> Self {
        self.username = username.to_string();
        self
    }

    fn build(self) -> User {
        User {
            username: self.username,
            email: self.email,
            // ...
        }
    }
}
Continuous Integration
GitHub Actions
name: Tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - run: cargo test --all-features
      - run: cargo clippy -- -D warnings
      - run: cargo fmt -- --check
Test Documentation
Document Test Purpose
/// Tests that user creation fails when email is invalid.
///
/// This test ensures that the email validation logic
/// properly rejects malformed email addresses.
#[test]
fn test_invalid_email_rejection() {
    // Test implementation
}
Common Testing Patterns
Arrange-Act-Assert
#[test]
fn test_pattern() {
    // Arrange
    let input = prepare_test_data();
    let expected = "expected result";
    // Act
    let result = function_under_test(input);
    // Assert
    assert_eq!(result, expected);
}
Given-When-Then
#[test]
fn test_user_story() {
    // Given: A user with valid credentials
    let user = create_valid_user();
    // When: The user attempts to login
    let result = login(user.username, user.password);
    // Then: The login should succeed
    assert!(result.is_ok());
}
Summary
Comprehensive testing ensures BotServer’s reliability and makes refactoring safe. Focus on writing clear, independent tests that cover both success and failure cases, and maintain good test coverage across the codebase.
Documentation
Good documentation is essential for maintaining and growing BotServer. This guide covers documentation standards and practices for contributors.
Overview
BotServer documentation includes code documentation through inline comments and doc comments, API documentation, user guides, the BASIC language reference, architecture documentation, and README files throughout the repository.
Documentation Structure
Repository Documentation
The repository follows a structured documentation layout. The root contains README.md for the project overview and CHANGELOG.md for version history. The docs/ directory contains mdBook documentation with source files in docs/src/. Each template directory also includes its own README file explaining that specific template.
mdBook Documentation
The main documentation lives in docs/src/ and covers user guides, developer guides, API references, architecture documentation, and the BASIC language reference.
Code Documentation
Rust Doc Comments
Use triple slashes for public items to generate documentation that integrates with Rust’s documentation system:
/// Creates a new user session for the specified bot.
///
/// # Arguments
/// * `user_id` - The unique identifier of the user
/// * `bot_id` - The bot instance to connect to
///
/// # Returns
/// * `Result<Session>` - The created session or an error
///
/// # Example
/// ```
/// let session = create_session(user_id, bot_id)?;
/// println!("Session created: {}", session.id);
/// ```
pub fn create_session(user_id: Uuid, bot_id: Uuid) -> Result<Session> {
    // Implementation
}
Module Documentation
Document modules with //! at the top of the file to provide context for the entire module:
//! # Session Management Module
//!
//! This module handles user sessions and bot interactions.
//!
//! ## Features
//! - Session creation and validation
//! - Token management
//! - Session persistence
//!
//! ## Usage
//! ```
//! use crate::session::{Session, create_session};
//! ```

// Module code follows
Inline Comments
Use inline comments for complex logic where the code’s purpose isn’t immediately obvious:
// Calculate the exponential backoff delay
// Using the formula: delay = base * 2^attempt
let delay = Duration::from_millis(100 * 2_u64.pow(attempt));

// Check if we've exceeded max retries
// This prevents infinite loops in case of permanent failures
if attempt > MAX_RETRIES {
    return Err("Max retries exceeded");
}
API Documentation
Endpoint Documentation
Document REST endpoints clearly with the HTTP method, path, purpose, request format, response format, and possible error codes:
## Create User
**POST** `/api/users`

Creates a new user account.

### Request
{
  "username": "john_doe",
  "email": "john@example.com"
}

### Response
{
  "user_id": "user-123",
  "created_at": "2024-01-20T10:00:00Z"
}

### Errors
- `400` - Invalid input data
- `409` - Username already exists
WebSocket Documentation
Document WebSocket protocols with connection details, message formats for both directions, and any special handling requirements:

## WebSocket Protocol

### Connection
ws://localhost:8080/ws

### Message Format
Client → Server:
{
  "type": "message",
  "content": "Hello",
  "session_id": "session-123"
}
Server → Client:
{
  "type": "response",
  "content": "Bot response",
  "is_complete": true
}
BASIC Script Documentation
Keyword Documentation
Document BASIC keywords with syntax, parameters, and working examples:

## TALK Keyword
Sends a message to the user.

### Syntax
TALK message

### Parameters
- message - The text to send to the user

### Examples
TALK "Hello, how can I help?"
let greeting = "Welcome!"
TALK greeting
Script Examples
Provide complete working examples that demonstrate real-world usage patterns:

' greeting.bas
' A simple greeting bot that personalizes messages

' Get user's name
TALK "What's your name?"
let name = HEAR

' Create personalized greeting
let greeting = "Hello, " + name + "!"
TALK greeting

' Store for future use
SET_BOT_MEMORY "user_name", name
Markdown Best Practices
Structure
Use clear hierarchy with headings that progress logically from broad concepts to specific details. Start with a main title using a single hash, then use second-level headings for major sections, third-level for subsections, and so on.
Code Blocks
Always specify the language for syntax highlighting in code blocks. Use rust for Rust code, bash for shell commands, basic for BASIC scripts, json for JSON data, and toml for configuration files.
Tables
Use tables for structured data where comparison or quick reference is useful, such as parameter lists, feature comparisons, or API endpoints.
Links
Use relative links for internal documentation to ensure links work regardless of where the documentation is hosted. For example, link to authentication documentation as ../chapter-11/README.md rather than using absolute URLs.
Writing Style
Be Clear and Concise
Write directly and avoid unnecessary words. Instead of “The system employs a sophisticated relational database management system, specifically PostgreSQL, for the purpose of persisting structured information,” simply write “BotServer uses PostgreSQL for structured data storage.”
Use Active Voice
Prefer active voice over passive voice for clarity. Write “The function returns an error if validation fails” rather than “An error is returned by the function if validation is failed.”
Provide Context
Explain not just what something does, but why it matters. Instead of only stating “Sessions expire after 24 hours,” add the reasoning: “Sessions expire after 24 hours to balance security with user convenience.”
Documentation Process
When to Document
Document before coding to clarify design and API structure. Add inline comments while coding to explain complex logic. After coding, update documentation with any learnings and add examples. During code review, ensure documentation is complete and accurate.
Documentation Checklist
Before submitting a pull request, verify that all public functions have doc comments, complex logic has inline comments explaining the reasoning, README files are updated if the PR affects them, examples are provided for new features, API documentation reflects any changes, breaking changes are noted prominently, and the CHANGELOG is updated.
Tools
Documentation Generation
Generate Rust documentation with cargo doc --open, which builds and opens the documentation in your browser.
Documentation Serving
Serve mdBook documentation locally during development:
cd docs
mdbook serve
Spell Checking
Install and use cargo-spellcheck to catch spelling errors:
cargo install cargo-spellcheck
cargo spellcheck check
Common Mistakes
Missing Context
Avoid comments that merely restate the code. Instead of commenting “Increment counter” above counter += 1, explain why: “Increment retry counter to track failed attempts. This is used for exponential backoff calculation.”
Outdated Documentation
Always update documentation when code changes. This includes parameter changes, behavior modifications, new error conditions, and deprecated features. Outdated documentation is often worse than no documentation.
Unclear Examples
Examples should be complete and demonstrate realistic usage. Instead of terse, unclear examples with generic variable names, provide full examples with meaningful names, comments explaining each step, and realistic use cases.
Contributing Documentation
Where to Contribute
Documentation contributions are welcome in many forms. Fix typos and errors anywhere you find them. Add examples to existing documentation. Clarify unclear sections. Document undocumented features. Translate documentation to other languages.
Documentation PRs
Documentation-only pull requests are welcome and valuable. They can be merged quickly, don’t require extensive testing, help new users get started, and improve overall project quality.
Summary
Good documentation makes BotServer accessible to users and maintainable for developers. Always consider documentation as part of the development process, not an afterthought. Clear, accurate, and up-to-date documentation is as valuable as the code itself.
Pull Requests
This guide covers the pull request process for contributing to BotServer, from creation to merge. Understanding this process helps ensure your contributions are reviewed efficiently and merged successfully.
Overview
Pull requests are the primary method for contributing code to BotServer. They provide a structured way to propose changes, enabling code review, discussion, and automated testing before changes are merged into the main codebase. Every contribution, whether a bug fix, new feature, or documentation update, follows this process.
Before Creating a PR
Check Existing Work
Before starting work on a contribution, search existing pull requests to avoid duplicating effort. Check the issue tracker for related discussions that might affect your approach. For major changes, open an issue first to discuss the design with maintainers and get feedback before investing significant time in implementation.
Prepare Your Branch
Create a feature branch from the main branch for your work. Keep your branch updated by regularly fetching from origin and rebasing on the latest main. This practice reduces merge conflicts and ensures your changes work with the most recent codebase.
git checkout -b feature/your-feature
git fetch origin
git rebase origin/main
Make Your Changes
Follow the established code standards documented in the standards guide. Write tests for any new functionality you add. Update documentation to reflect your changes. Keep commits atomic and logical, with each commit representing a single coherent change.
Creating a Pull Request
PR Title
Use clear, descriptive titles that follow the conventional commit format. Good titles include prefixes like “feat:” for new features, “fix:” for bug fixes, “docs:” for documentation updates, and “refactor:” for code restructuring. Examples of good titles include “feat: Add email notification support” and “fix: Resolve session timeout issue”. Avoid vague titles like “Fix bug” or “Update code” that do not convey what the PR actually does.
PR Description
The description should explain what the PR does and why. Start with a brief description of the change. Indicate the type of change, whether it is a bug fix, new feature, breaking change, documentation update, performance improvement, or refactoring. List specific changes made with technical details and any side effects. Document testing performed, including unit tests, integration tests, and manual testing. Note any documentation updates made. Link related issues using keywords like “Fixes #123” to automatically close issues when the PR merges. Include screenshots for UI changes.
PR Best Practices
Keep It Small
Focus each PR on one feature or fix rather than bundling multiple changes together. Aim for fewer than 500 lines changed when possible. Split large changes into multiple smaller PRs that can be reviewed independently. Smaller PRs are easier and faster to review, leading to quicker merge times and higher quality feedback.
Commit Organization
Organize commits logically with each commit representing a complete, working change. Good commit organization might include separate commits for adding a feature, adding tests for that feature, and updating documentation. Avoid mixing unrelated changes in a single commit. Well-organized commits make it easier to understand the progression of changes and to bisect issues if problems arise later.
Self-Review First
Before requesting review from others, review your own changes thoroughly. Check for any debug code or temporary changes that should not be committed. Verify there are no accidental changes to unrelated files. Ensure formatting is consistent with the codebase style. Test edge cases that the CI might not catch. This self-review catches obvious issues before they consume reviewer time.
Code Review Process
Requesting Review
When your PR is ready for review, mark it as ready if it was previously a draft. Request specific reviewers if you know who has relevant expertise. Add appropriate labels to categorize the PR. Link related issues in the description. Add comments on particularly complex areas of code to help reviewers understand your approach.
Responding to Feedback
Engage constructively with review feedback. Acknowledge feedback and note when you have addressed it with a commit reference. Explain your decisions when you chose a particular approach for good reasons. Ask for clarification when feedback is unclear. If you disagree with feedback, express your perspective respectfully and be open to discussion.
Making Changes
Address review comments promptly to keep the review process moving. Commit changes that address feedback with clear commit messages. You can amend commits if the changes are small corrections. Use force push with lease, rather than a plain force push, to update your PR branch without silently overwriting work pushed by others.
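For example, assuming the branch name used earlier, a safe update looks like this:
# refuses the push if someone else has pushed to the branch since your last fetch
git push --force-with-lease origin feature/your-feature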
Review Guidelines
For Reviewers
When reviewing PRs, examine code correctness to ensure the implementation is sound. Check test coverage to verify new code is properly tested. Verify documentation is updated to reflect changes. Consider performance implications of the changes. Evaluate security considerations, especially for code handling user input or authentication. Ensure code style consistency with the rest of the codebase.
Provide constructive feedback with specific suggestions. Include code examples when they would clarify your point. Recognize good work when you see it. Remember that the goal is to improve the code while supporting the contributor.
Review Comments
Good review feedback is specific and actionable. Instead of saying “This is wrong,” explain what the issue is and suggest a solution. For example, you might suggest using a more idiomatic Rust pattern and show what the improved code would look like. This approach helps contributors learn and makes it clear how to address the feedback.
CI/CD Checks
Required Checks
All PRs must pass the automated CI checks before merging. These include cargo build for compilation verification, cargo test for unit tests, cargo fmt --check for code formatting, cargo clippy for linting, and documentation builds. The CI runs automatically when you push changes to your PR branch.
Fixing Failed Checks
When CI checks fail, fix the issues locally before pushing updates. Run cargo fmt to fix formatting issues. Run cargo clippy with the fix flag to automatically fix many linting issues. Run cargo test locally to debug test failures with the nocapture flag to see output. Fix all issues and push updates to trigger a new CI run.
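The same checks can be run locally with standard Cargo commands:
cargo fmt                  # rewrite files to the canonical style
cargo clippy --fix         # apply machine-applicable lint fixes (add --allow-dirty for uncommitted work)
cargo test -- --nocapture  # run tests and print captured output from failures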
Merge Process
Merge Requirements
Before a PR can be merged, all CI checks must pass, at least one maintainer must approve the changes, all review conversations must be resolved, the branch must be up-to-date with main, and there must be no merge conflicts.
Merge Methods
Squash and merge is the preferred method for most PRs. This combines all commits into a single commit on main, keeping the history clean and making it easy to revert changes if needed. Rebase and merge preserves the individual commit history and is appropriate for PRs with well-organized, meaningful commits. Merge commits are rarely used and reserved for special circumstances.
After Merge
Clean Up
After your PR is merged, delete your local feature branch. GitHub automatically deletes the remote branch if configured to do so. Update your local main branch by checking out main and pulling the latest changes. This keeps your local repository clean and up-to-date.
Follow Up
Monitor the codebase after your changes merge to catch any issues that emerge. Respond to questions from other contributors about your changes. Update related documentation if you discover gaps. Close any related issues that were not automatically closed by the PR.
Common Issues
Merge Conflicts
When merge conflicts occur, update your branch with the latest main by fetching and rebasing. Git will pause at each conflict, allowing you to resolve it. Edit the conflicted files to resolve the conflicts, add the resolved files, and continue the rebase. If the conflicts become too complex, you can abort the rebase and try a different approach.
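A typical conflict-resolution session follows this shape; the file path below is a placeholder:
git fetch origin
git rebase origin/main
# rebase stops at each conflict; edit the listed files, then:
git add src/conflicted_file.rs
git rebase --continue
# if the conflicts prove too tangled, back out safely:
git rebase --abort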
Large PR
If a PR becomes too large during development, consider closing it and splitting the work into smaller PRs. Create a tracking issue to coordinate the smaller PRs. Link all related PRs together so reviewers understand the bigger picture. Smaller, focused PRs are more likely to receive thorough review and merge quickly.
Stale PR
If a PR goes without activity for an extended period, ping the reviewers with a comment. Rebase on the latest main to ensure the changes still apply cleanly. Add a comment explaining the current status. If the PR is no longer relevant, close it with an explanation so others know not to wait for it.
Tips for Success
Communicate early about what you plan to implement to avoid wasted effort and get valuable design feedback. Test thoroughly rather than relying solely on CI since you understand your changes better than automated tests can. Be patient because reviewers have limited time and thorough review takes effort. Be responsive to feedback to keep the review process moving efficiently. Learn from reviews by treating feedback as an opportunity to improve your skills. Help others by reviewing other PRs when you have time, which builds goodwill and helps you learn the codebase.
Summary
Successful pull requests are well-prepared with a clear purpose, properly documented and tested, responsive to feedback, and focused on a single change. Following these guidelines helps maintain code quality and makes the review process smooth for everyone involved. The time invested in creating a good PR pays off in faster reviews, fewer revision cycles, and a better end result.
Community Guidelines
Welcome to the General Bots community! These guidelines ensure a welcoming environment.
Our Values
- Inclusivity: Welcome all contributors
- Respect: Professional interactions
- Collaboration: Work together effectively
- Quality: Maintain high standards
- Learning: Share knowledge openly
Expected Behavior
- Be respectful and considerate
- Welcome newcomers
- Give constructive feedback
- Focus on community benefit
- Show empathy
Communication
- GitHub Issues for bugs/features
- GitHub Discussions for questions
- Pull Requests for contributions
IDEs
General Bots supports development with any text editor or IDE. Choose the one that works best for your workflow.
Zed Editor (Best for Rust Development)
Zed is a high-performance, collaborative code editor that excels at Rust development and is recommended for working with General Bots core. The editor provides native Rust support with excellent syntax highlighting, delivers fast performance with minimal resource usage, includes built-in collaboration features, and offers a modern, clean interface.
Installation
# Install Zed
curl https://zed.dev/install.sh | sh
Other Popular IDEs
You can use any IDE or text editor you prefer. Visual Studio Code offers an extensive extension marketplace, good BASIC syntax highlighting with custom extensions, an integrated terminal for running General Bots, and Git integration. IntelliJ IDEA and RustRover provide excellent Rust support, powerful refactoring tools, and database tools for PostgreSQL integration. Neovim appeals to developers who prefer a lightweight, fast, highly customizable, terminal-based workflow. Sublime Text is known for being fast and responsive, with multiple cursors, powerful search capabilities, and customizable syntax highlighting.
BASIC Script Support
For editing .bas files (General Bots dialog scripts), you can configure your editor with custom key bindings and project settings.
Key Bindings Configuration
{
"bindings": {
"cmd-shift-b": "botserver:run-script",
"cmd-shift-d": "botserver:deploy-bot",
"cmd-shift-l": "botserver:view-logs"
}
}
Project Settings
Create .zed/settings.json in your bot project:
{
"file_types": {
"BASIC": ["*.bas", "*.gbdialog"],
"Config": ["*.csv", "*.gbot"]
},
"format_on_save": true,
"tab_size": 2
}
Vim/Neovim Plugin
Installation
The Vim plugin can be installed using vim-plug by adding the following to your configuration:
" ~/.vimrc or ~/.config/nvim/init.vim
Plug 'botserver/vim-botserver'
For Neovim users preferring lazy.nvim, use this Lua configuration:
-- ~/.config/nvim/lua/plugins/botserver.lua
return {
'botserver/nvim-botserver',
config = function()
require('botserver').setup({
server_url = 'http://localhost:8080',
default_bot = 'edu'
})
end
}
Features
The plugin includes syntax files for BASIC highlighting:
" ~/.vim/syntax/basic.vim
syn keyword basicKeyword TALK HEAR SET GET LLM
syn keyword basicConditional IF THEN ELSE END
syn keyword basicRepeat FOR EACH NEXT
syn match basicComment "^REM.*$"
syn match basicComment "'.*$"
The plugin provides several commands for interacting with BotServer. Use :BotDeploy to deploy the current bot, :BotRun to run the current script, :BotLogs to view server logs, and :BotConnect to connect to the server.
Emacs Mode
Installation
Add the BotServer mode to your Emacs configuration:
;; ~/.emacs.d/init.el
(add-to-list 'load-path "~/.emacs.d/botserver-mode")
(require 'botserver-mode)
(add-to-list 'auto-mode-alist '("\\.bas\\'" . botserver-mode))
Features
The major mode definition provides BASIC script editing support:
(define-derived-mode botserver-mode prog-mode "BotServer"
"Major mode for editing BotServer BASIC scripts."
(setq-local comment-start "REM ")
(setq-local comment-end "")
(setq-local indent-line-function 'botserver-indent-line))
The mode includes convenient key bindings: C-c C-c runs the current script, C-c C-d deploys the bot, and C-c C-l displays the logs.
Sublime Text Package
Installation
The package can be installed via Package Control by opening the command palette with Cmd+Shift+P, selecting “Package Control: Install Package”, and searching for “BotServer”. For manual installation, clone the repository directly:
cd ~/Library/Application\ Support/Sublime\ Text/Packages
git clone https://github.com/botserver/sublime-botserver BotServer
The package provides BASIC syntax highlighting, a build system for running scripts, snippets for common patterns, and project templates.
TextMate Bundle
Installation
Clone the bundle to your TextMate bundles directory:
cd ~/Library/Application\ Support/TextMate/Bundles
git clone https://github.com/botserver/botserver.tmbundle
The bundle includes a language grammar for BASIC, commands for deployment, and tab triggers for snippets.
Language Server Protocol (LSP)
BotServer includes an LSP server that works with any LSP-compatible editor. This enables a consistent development experience across different editors and platforms.
Starting the LSP Server
botserver --lsp --stdio
The LSP server provides completion suggestions, hover documentation, go to definition, find references, diagnostics for error detection, and code actions for quick fixes.
Configuration Example
For any LSP client, use this configuration:
{
"command": ["botserver", "--lsp", "--stdio"],
"filetypes": ["basic", "bas"],
"rootPatterns": [".gbai", "config.csv"],
"initializationOptions": {
"bot": "default"
}
}
Common Features Across All Editors
Snippets
All editor integrations include useful snippets to speed up development. The tool definition snippet creates parameter blocks:
PARAM ${name} AS ${type} LIKE "${example}" DESCRIPTION "${description}"
DESCRIPTION "${tool_description}"
${body}
The dialog flow snippet sets up conversation structures:
TALK "${greeting}"
HEAR response
IF response = "${expected}" THEN
${action}
END IF
The knowledge base snippet configures KB access:
USE KB "${collection}"
' System AI now has access to the KB
TALK "How can I help you with ${collection}?"
CLEAR KB
File Associations
| Extension | File Type | Purpose |
|---|---|---|
| .bas | BASIC Script | Dialog logic |
| .gbdialog | Dialog Package | Contains .bas files |
| .gbkb | Knowledge Base | Document collections |
| .gbot | Bot Config | Contains config.csv |
| .gbtheme | Theme Package | CSS themes |
| .gbai | Bot Package | Root container |
Debugging Support
Breakpoints
Set breakpoints in BASIC scripts by adding a comment marker:
TALK "Before breakpoint"
' BREAKPOINT
TALK "After breakpoint"
Watch Variables
Monitor variable values during execution by adding watch comments:
' WATCH: user_name
' WATCH: greeting
user_name = GET "name"
greeting = "Hello " + user_name
Step Execution
The debugger supports several execution control modes. Step Over executes the current line and moves to the next. Step Into enters function calls to debug their internals. Step Out exits the current function and returns to the caller. Continue resumes normal execution until the next breakpoint.
Best Practices
Effective IDE configuration significantly improves development productivity. Enable format on save to keep code consistently formatted across your project. Configure linting to catch errors early in the development cycle. Set up keyboard shortcuts for common tasks like deployment and script execution to speed up your workflow. Create and use snippets to reduce repetitive typing when writing common patterns. Finally, keep your extensions updated to benefit from the latest features and bug fixes.
Troubleshooting
When the LSP server fails to start, verify that the botserver binary is in your PATH, confirm the server is running on the expected port, and review the LSP logs in your editor’s output panel.
If syntax highlighting is missing, ensure file extensions are properly associated with the BASIC language mode, restart your editor after installing the extension, and check that the language mode is correctly set for open files.
When commands are not working, verify your server connection settings are correct, check API credentials if authentication is required, and review the editor console for error messages that might indicate the cause.
Chapter 14: Migration Guide
Migrate from cloud services to self-hosted General Bots with complete data sovereignty.
Why Migrate?
| Cloud Services | General Bots |
|---|---|
| Data on vendor servers | Data on YOUR servers |
| $40-60/user/month | ~$7/user/month |
| Vendor-controlled AI | Transparent, traceable logic |
| Black box processing | Extensible via BASIC |
| Subscription forever | One-time deployment |
Core Principles
Component Architecture
Install only what you need:
./botserver package install mail # Email
./botserver package install drive # Storage
./botserver package install directory # Users
./botserver package install meet # Video
Standard Protocols
- Storage: S3 API (MinIO)
- Email: SMTP/IMAP/JMAP
- Auth: OIDC/SAML/LDAP
- Video: WebRTC
Knowledge Base Integration
USE KB "company_docs"
USE WEBSITE "https://sharepoint.company.com/docs"
' Documents now searchable via natural language
Migration Timeline
| Phase | Duration | Activities |
|---|---|---|
| Assessment | Week 1-2 | Inventory services, identify dependencies |
| Infrastructure | Week 2-3 | Deploy BotServer, configure auth/storage |
| Data Migration | Week 3-6 | Users, email, files, documents |
| Process Migration | Week 6-8 | Convert workflows to .gbdialog |
| Validation | Week 8-10 | Testing, training, documentation |
| Cutover | Week 10-12 | User migration, decommission old |
Migration Paths
| Source | Guide |
|---|---|
| Microsoft 365 | M365 Migration |
| Google Workspace | Google Migration |
| Dialogflow | Dialogflow Migration |
| Botpress | Botpress Migration |
| n8n / Zapier / Make | Automation Migration |
| Notion | Notion Migration |
Prerequisites Checklist
- Executive sponsorship
- Infrastructure provisioned
- Backup strategy defined
- Rollback plan documented
- User communication ready
Success Metrics
- Performance: Response times, availability
- Adoption: User login frequency
- Cost: TCO reduction (target: 80%+)
- Security: Compliance achievement
See Also
- Common Concepts - Shared migration patterns
- Comparison Matrix - Feature mapping
- KB Migration - Document conversion
- Validation - Testing procedures
Migration Overview
This page has been consolidated with Migration Concepts.
Please see the main chapter page for:
- Migration paradigms and philosophy
- Architecture translation concepts
- Assessment and planning frameworks
- Success metrics
For specific migration guides, see the Migration Paths table above.
Platform Comparison Matrix
This comprehensive comparison helps organizations evaluate General Bots against major productivity, automation, and AI platforms.
Executive Summary
General Bots uniquely combines self-hosted deployment, open source licensing, native AI integration, and powerful BASIC scripting—capabilities that typically require multiple expensive subscriptions across competing platforms.
Complete Platform Comparison
Deployment & Licensing
| Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier |
|---|---|---|---|---|---|---|---|---|
| Self-hosted | ✅ Full | ❌ Cloud only | ❌ Cloud only | ✅ Available | ❌ Cloud only | ❌ Cloud only | ❌ Cloud only | ❌ Cloud only |
| Open source | ✅ AGPL | ❌ Proprietary | ❌ Proprietary | ✅ Fair-code | ❌ Proprietary | ❌ Proprietary | ❌ Proprietary | ❌ Proprietary |
| Data sovereignty | ✅ Your servers | ❌ Microsoft servers | ❌ Google servers | ✅ Self-host option | ❌ AWS/GCP | ❌ Their servers | ❌ Anthropic servers | ❌ Their servers |
| Per-user licensing | ✅ None | ❌ $12-57/user/mo | ❌ $6-18/user/mo | ⚠️ Cloud version | ❌ $10-15/user/mo | ❌ $20/mo | ❌ $20/mo | ❌ Per-task pricing |
| Source code access | ✅ Full | ❌ None | ❌ None | ✅ Available | ❌ None | ❌ None | ❌ None | ❌ None |
| Modify & extend | ✅ Unlimited | ❌ API only | ❌ API only | ✅ Possible | ❌ API only | ❌ None | ❌ None | ❌ None |
Productivity Suite
| Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier |
|---|---|---|---|---|---|---|---|---|
| Email | ✅ Stalwart | ✅ Exchange | ✅ Gmail | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None |
| Calendar | ✅ CalDAV | ✅ Outlook | ✅ Calendar | ❌ None | ❌ Basic | ❌ None | ❌ None | ❌ None |
| File storage | ✅ SeaweedFS | ✅ OneDrive | ✅ Drive | ❌ None | ⚠️ Limited | ❌ None | ❌ None | ❌ None |
| Tasks/Projects | ✅ Full | ✅ Planner | ✅ Tasks | ❌ None | ✅ Strong | ❌ None | ❌ None | ❌ None |
| Video meetings | ✅ LiveKit | ✅ Teams | ✅ Meet | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None |
| Team chat | ✅ Multi-channel | ✅ Teams | ✅ Chat | ❌ None | ⚠️ Comments | ❌ None | ❌ None | ❌ None |
| Document editing | ✅ Available | ✅ Office apps | ✅ Docs/Sheets | ❌ None | ✅ Pages | ❌ None | ❌ None | ❌ None |
| Identity/SSO | ✅ Zitadel | ✅ Entra ID | ✅ Identity | ❌ None | ⚠️ Basic | ❌ None | ❌ None | ❌ None |
AI & Intelligence
| Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier |
|---|---|---|---|---|---|---|---|---|
| LLM integration | ✅ Any provider | ⚠️ Copilot ($30/user) | ⚠️ Gemini (extra) | ⚠️ Via nodes | ⚠️ Limited | ✅ Built-in | ✅ Built-in | ⚠️ Via connectors |
| Custom prompts | ✅ Full control | ⚠️ Limited | ⚠️ Limited | ✅ Available | ⚠️ Basic | ⚠️ Limited | ✅ Available | ⚠️ Limited |
| RAG/Knowledge base | ✅ Built-in | ⚠️ Extra cost | ⚠️ Extra cost | ⚠️ Custom build | ⚠️ Page search | ⚠️ Pro only | ⚠️ Projects | ❌ None |
| Image generation | ✅ Local SD | ⚠️ Designer | ⚠️ Limited | ⚠️ Via API | ❌ None | ⚠️ Limited | ✅ Available | ⚠️ Via API |
| Video generation | ✅ Zeroscope | ❌ None | ❌ None | ⚠️ Via API | ❌ None | ❌ None | ❌ None | ⚠️ Via API |
| Speech-to-text | ✅ Whisper | ⚠️ Extra | ⚠️ Extra | ⚠️ Via API | ❌ None | ❌ None | ❌ None | ⚠️ Via API |
| Vision/OCR | ✅ BLIP2 | ⚠️ Extra | ⚠️ Extra | ⚠️ Via API | ❌ None | ❌ None | ✅ Available | ⚠️ Via API |
| Local/offline AI | ✅ Full support | ❌ None | ❌ None | ⚠️ Possible | ❌ None | ❌ None | ❌ None | ❌ None |
| AI cost | ✅ Bring your key | ❌ $30/user/mo | ❌ $20/user/mo | ⚠️ API costs | ❌ $10/user/mo | ❌ $20/mo | ❌ $20/mo | ⚠️ Per operation |
Automation & Integration
| Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier |
|---|---|---|---|---|---|---|---|---|
| Workflow automation | ✅ BASIC scripts | ⚠️ Power Automate ($) | ⚠️ AppSheet ($) | ✅ Visual builder | ⚠️ Basic | ❌ None | ❌ None | ✅ Visual builder |
| Scheduled tasks | ✅ Cron + natural | ⚠️ Extra license | ⚠️ Limited | ✅ Available | ❌ None | ❌ None | ❌ None | ✅ Available |
| Webhooks | ✅ Instant creation | ⚠️ Complex setup | ⚠️ Limited | ✅ Available | ⚠️ Limited | ❌ None | ❌ None | ✅ Available |
| Custom APIs | ✅ One line | ❌ Azure required | ❌ GCP required | ✅ Possible | ❌ None | ❌ None | ✅ API available | ❌ None |
| Database access | ✅ Direct SQL | ⚠️ Dataverse ($) | ⚠️ BigQuery ($) | ✅ Multiple DBs | ⚠️ Notion DBs | ❌ None | ❌ None | ⚠️ Limited |
| REST API calls | ✅ GET/POST/etc | ⚠️ Premium connectors | ⚠️ Limited | ✅ HTTP nodes | ❌ None | ❌ None | ❌ None | ✅ HTTP module |
| GraphQL | ✅ Native | ❌ None | ❌ None | ✅ Available | ❌ None | ❌ None | ❌ None | ⚠️ Limited |
| SOAP/Legacy | ✅ Supported | ⚠️ Limited | ❌ None | ✅ Available | ❌ None | ❌ None | ❌ None | ⚠️ Limited |
| Automation pricing | ✅ Unlimited | ❌ Per-flow fees | ❌ Per-run fees | ⚠️ Execution limits | ❌ None | ❌ None | ❌ None | ❌ Per-task fees |
Multi-Channel Communication
| Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier |
|---|---|---|---|---|---|---|---|---|
| Web chat | ✅ Built-in | ⚠️ Bot Framework | ❌ None | ❌ None | ❌ None | ✅ Web only | ✅ Web only | ❌ None |
| WhatsApp | ✅ Native | ⚠️ Extra setup | ❌ None | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector |
| Teams | ✅ Native | ✅ Native | ❌ None | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector |
| Slack | ✅ Native | ⚠️ Connector | ⚠️ Limited | ⚠️ Via nodes | ⚠️ Integration | ❌ None | ⚠️ Integration | ⚠️ Connector |
| Telegram | ✅ Native | ❌ None | ❌ None | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector |
| SMS | ✅ Native | ⚠️ Extra | ❌ None | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector |
| Email bot | ✅ Native | ⚠️ Complex | ⚠️ Limited | ⚠️ Via nodes | ❌ None | ❌ None | ❌ None | ⚠️ Connector |
| Voice | ✅ LiveKit | ⚠️ Extra | ⚠️ Extra | ❌ None | ❌ None | ❌ None | ❌ None | ❌ None |
Developer Experience
| Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier |
|---|---|---|---|---|---|---|---|---|
| Scripting language | ✅ BASIC (simple) | ⚠️ Power Fx | ⚠️ Apps Script | ✅ JavaScript | ❌ None | ❌ None | ❌ None | ❌ Visual only |
| No-code option | ✅ Conversational | ⚠️ Power Apps | ⚠️ AppSheet | ✅ Visual builder | ✅ Pages | ✅ Chat | ✅ Chat | ✅ Visual builder |
| Custom keywords | ✅ Rust extensible | ❌ None | ❌ None | ✅ Custom nodes | ❌ None | ❌ None | ❌ None | ❌ None |
| API-first | ✅ Full REST | ✅ Graph API | ✅ Workspace API | ✅ REST API | ⚠️ Limited | ⚠️ Limited | ✅ Full API | ⚠️ Limited |
| Debugging | ✅ Console + logs | ⚠️ Complex | ⚠️ Complex | ✅ Execution logs | ❌ None | ❌ None | ❌ None | ⚠️ Limited |
| Version control | ✅ File-based | ⚠️ Limited | ⚠️ Limited | ✅ Git support | ⚠️ Page history | ❌ None | ❌ None | ⚠️ Limited |
Security & Compliance
| Capability | General Bots | Microsoft 365 | Google Workspace | n8n | Notion | Perplexity | Claude | Make/Zapier |
|---|---|---|---|---|---|---|---|---|
| Data residency control | ✅ Your choice | ⚠️ Limited regions | ⚠️ Limited regions | ✅ Self-host | ❌ US/EU only | ❌ No control | ❌ No control | ❌ No control |
| GDPR compliance | ✅ Self-managed | ✅ Available | ✅ Available | ✅ Self-host | ⚠️ Depends | ⚠️ Limited | ⚠️ Limited | ⚠️ Limited |
| HIPAA capable | ✅ Self-managed | ⚠️ Extra cost | ⚠️ Extra cost | ✅ Self-host | ❌ No | ❌ No | ❌ No | ❌ No |
| Audit logs | ✅ Full control | ✅ Available | ✅ Available | ✅ Available | ⚠️ Limited | ❌ Limited | ❌ Limited | ⚠️ Limited |
| Encryption at rest | ✅ Configurable | ✅ Standard | ✅ Standard | ✅ Configurable | ✅ Standard | ✅ Standard | ✅ Standard | ✅ Standard |
| SSO/OIDC | ✅ Zitadel | ✅ Entra | ✅ Identity | ⚠️ Enterprise | ⚠️ Business | ❌ Basic | ⚠️ Enterprise | ⚠️ Enterprise |
| MFA | ✅ Built-in | ✅ Built-in | ✅ Built-in | ⚠️ Configure | ⚠️ Basic | ⚠️ Basic | ⚠️ Basic | ⚠️ Basic |
Cost Analysis (100 Users, Annual)
| Platform | Base License | AI Features | Automation | Storage | Total Annual |
|---|---|---|---|---|---|
| General Bots | $0 | $0 (bring key) | $0 | Included | $3,000-12,000* |
| Microsoft 365 E3 + Copilot | $43,200 | $36,000 | $12,000+ | Included | $91,200+ |
| Google Workspace Business + Gemini | $21,600 | $24,000 | $6,000+ | Included | $51,600+ |
| n8n Cloud + separate tools | $0-6,000 | API costs | Included | None | $20,000+ |
| Notion Team + AI | $12,000 | $12,000 | None | Limited | $24,000 |
| Multiple point solutions | Varies | Varies | Varies | Varies | $50,000+ |
*General Bots cost = infrastructure + optional LLM API usage
Feature Availability by Use Case
Customer Service Bot
| Requirement | General Bots | Microsoft | Google | n8n | Notion | AI Assistants |
|---|---|---|---|---|---|---|
| Knowledge base | ✅ | ⚠️ Extra | ⚠️ Extra | ⚠️ Build | ⚠️ Limited | ⚠️ Limited |
| WhatsApp channel | ✅ | ⚠️ Complex | ❌ | ⚠️ Build | ❌ | ❌ |
| Web widget | ✅ | ⚠️ Complex | ❌ | ❌ | ❌ | ❌ |
| Ticket creation | ✅ | ⚠️ Extra | ⚠️ Extra | ✅ | ⚠️ Manual | ❌ |
| Human handoff | ✅ | ⚠️ Extra | ❌ | ⚠️ Build | ❌ | ❌ |
| Analytics | ✅ | ⚠️ Extra | ⚠️ Extra | ⚠️ Build | ❌ | ❌ |
Internal Automation
| Requirement | General Bots | Microsoft | Google | n8n | Notion | AI Assistants |
|---|---|---|---|---|---|---|
| Scheduled reports | ✅ | ⚠️ Extra | ⚠️ Extra | ✅ | ❌ | ❌ |
| Database sync | ✅ | ⚠️ Extra | ⚠️ Extra | ✅ | ❌ | ❌ |
| API orchestration | ✅ | ⚠️ Premium | ⚠️ Limited | ✅ | ❌ | ❌ |
| Document processing | ✅ | ⚠️ Extra | ⚠️ Extra | ⚠️ Build | ❌ | ⚠️ Limited |
| Email automation | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ |
| Custom logic | ✅ | ⚠️ Limited | ⚠️ Limited | ✅ | ❌ | ❌ |
Team Collaboration
| Requirement | General Bots | Microsoft | Google | n8n | Notion | AI Assistants |
|---|---|---|---|---|---|---|
| Project management | ✅ | ✅ | ✅ | ❌ | ✅ | ❌ |
| Team chat | ✅ | ✅ | ✅ | ❌ | ⚠️ | ❌ |
| File sharing | ✅ | ✅ | ✅ | ❌ | ⚠️ | ❌ |
| Video meetings | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
| AI assistant | ✅ | ⚠️ Extra | ⚠️ Extra | ⚠️ Build | ⚠️ Extra | ✅ |
| Self-hosted | ✅ | ❌ | ❌ | ✅ | ❌ | ❌ |
Migration Complexity
| From Platform | To General Bots | Effort | Data Portability | Tool Support |
|---|---|---|---|---|
| Microsoft 365 | Full migration | Medium | Good (APIs) | Scripts provided |
| Google Workspace | Full migration | Medium | Good (APIs) | Scripts provided |
| n8n | Automation only | Low | Easy (JSON) | Direct import |
| Notion | Content migration | Low | Good (Export) | Scripts provided |
| Zapier/Make | Workflow rebuild | Medium | Manual | Templates available |
| Custom solution | Varies | Varies | Depends | API compatible |
Decision Matrix
Choose General Bots when you need:
- ✅ Complete data sovereignty and self-hosting
- ✅ No per-user licensing costs at scale
- ✅ Native AI without additional subscriptions
- ✅ Full productivity suite in one platform
- ✅ Multi-channel chatbot deployment
- ✅ Powerful automation without limits
- ✅ Open source transparency and extensibility
- ✅ Custom integrations and modifications
Consider alternatives when:
- You require specific certifications only available from large vendors
- Your organization mandates a particular cloud provider
- You have no infrastructure or IT capacity for self-hosting
- You need only a single narrow feature (e.g., just document editing)
Summary
General Bots provides the most comprehensive feature set for organizations seeking:
| Advantage | Impact |
|---|---|
| 75-95% cost reduction | Eliminate per-user fees, AI add-ons, automation limits |
| Complete data control | Self-hosted, your infrastructure, your rules |
| Unified platform | Email, files, chat, automation, AI in one system |
| No artificial limits | Unlimited users, workflows, API calls, storage |
| Full transparency | Open source code, audit everything |
| Future-proof | No vendor lock-in, standard formats, portable data |
The combination of enterprise productivity features, native AI, powerful automation, and self-hosted deployment makes General Bots unique in the market—delivering capabilities that would otherwise require subscriptions to multiple expensive platforms.
See Also
- Migration Overview - Getting started
- Migration Resources - Tools and templates
- Enterprise Platform Migration - Detailed migration guide
- Quick Start - Deploy in minutes
Migration Resources
General Bots provides comprehensive tools and resources for organizations transitioning from cloud-based productivity platforms to self-hosted infrastructure.
Migration Toolkit
Data Import Utilities
General Bots includes import tools for common enterprise data formats.
For email migration, the toolkit supports IMAP sync for mailbox migration, PST file import, calendar import via ICS format, and contact import through VCF and CardDAV.
File migration capabilities include bulk file upload via the S3 API, folder structure preservation, metadata retention, and version history import where the source system provides it.
User migration supports SCIM provisioning, LDAP directory sync, CSV user import, and bulk credential generation.
BASIC Migration Scripts
Template scripts simplify common migration tasks. The file migration script connects to external storage and transfers files:
' migrate-files.bas
PARAM source_api AS string
PARAM auth_token AS string
DESCRIPTION "Migrate files from external storage"
SET HEADER "Authorization", "Bearer " + auth_token
files = GET source_api + "/files"
FOR EACH file IN files
content = DOWNLOAD file.download_url
WRITE "/" + file.path, content
TALK "Migrated: " + file.name
NEXT file
TALK "Migration complete: " + LEN(files) + " files"
The user migration script imports users from a CSV export:
' migrate-users.bas
PARAM csv_path AS string
DESCRIPTION "Import users from CSV export"
users = READ csv_path
FOR EACH row IN users
CREATE USER row.email WITH NAME row.name
NEXT row
API Compatibility
REST API Mapping
General Bots REST APIs follow familiar patterns that map to common operations:
| Common Operation | General Bots Endpoint |
|---|---|
| List files | GET /api/files/list |
| Upload file | POST /api/files/write |
| Download file | GET /api/files/{path} |
| List users | GET /api/users |
| Create user | POST /api/users |
| Send email | POST /api/email/send |
| List calendar events | GET /api/calendar/events |
| Create task | POST /api/tasks |
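As a quick smoke test against these endpoints, a curl call might look like the following; the host and bearer token are placeholders for your own deployment:
curl -H "Authorization: Bearer $TOKEN" https://bots.example.com/api/files/list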
Identity Federation
Identity federation keeps single sign-on working during migration. It enables OIDC provider integration, SAML support via Zitadel, hybrid authentication during transition periods, and gradual user migration without disrupting access.
Configure federation in config.csv:
key,value
oidc-provider-url,https://identity.example.com
oidc-client-id,general-bots-client
oidc-client-secret,your-secret
Industry Templates
Pre-built configurations address common industry requirements.
Healthcare templates provide HIPAA-compliant configuration, patient communication templates, appointment scheduling workflows, and secure document handling.
Financial services templates include SOC 2 aligned settings, secure data handling, audit logging enabled by default, and compliance reporting.
Education templates offer student enrollment flows, course management, parent communication channels, and assignment tracking.
Professional services templates cover client onboarding, project management workflows, time tracking integration, and invoice generation.
Deployment Guides
Infrastructure Sizing
| Organization Size | CPU | RAM | Storage | Users |
|---|---|---|---|---|
| Small | 2 cores | 4 GB | 100 GB | 1-50 |
| Medium | 4 cores | 8 GB | 500 GB | 50-500 |
| Large | 8 cores | 16 GB | 2 TB | 500-5000 |
| Enterprise | 16+ cores | 32+ GB | 10+ TB | 5000+ |
High Availability
For production deployments requiring high availability, configure PostgreSQL replication for database resilience, load-balanced botserver instances for horizontal scaling, distributed SeaweedFS storage for file redundancy, and Redis/Valkey clustering for cache availability.
Backup Strategy
Configure automated backups to protect your data:
SET SCHEDULE "every day at 2am"
' Database backup
result = POST "https://backup.internal/postgres", #{database: "botserver"}
' File storage backup
result = POST "https://backup.internal/seaweedfs", #{bucket: "all"}
' Notify on completion
SEND MAIL TO "ops@company.com" SUBJECT "Backup Complete" BODY result
Training Resources
Administrator Training
Administrator training covers initial setup and configuration, user management, security settings, and monitoring and maintenance procedures.
Developer Training
Developer training includes BASIC scripting fundamentals, API integration patterns, custom keyword development, and package creation.
End User Training
End user training addresses chat interface usage, file management, calendar and tasks, and mobile access.
ROI Calculator
Estimate savings with self-hosted deployment:
| Factor | Cloud (100 users) | General Bots |
|---|---|---|
| Annual licensing | $15,000-60,000 | $0 |
| AI assistant add-on | $36,000 | $0 |
| Infrastructure | Included | $2,400-6,000 |
| LLM API costs | Included | $600-6,000 |
| Total Annual | $51,000-96,000 | $3,000-12,000 |
Typical savings are a 75-95% reduction in annual costs.
Support Resources
Documentation
Documentation resources include the complete keyword reference, API documentation, configuration guides, and troubleshooting guides.
Community
Community support is available through GitHub discussions, issue tracking, feature requests, and community contributions.
Professional Services
For enterprise deployments requiring additional support, professional services include migration planning, custom development, training programs, and support contracts.
Contact: partners@pragmatismo.com.br
Migration Checklist
Pre-Migration
Before beginning migration, inventory current services and usage, identify data to migrate, plan user communication, set up a test environment, and configure identity federation if needed.
Migration
During migration, deploy General Bots infrastructure, import users and groups, migrate files and documents, transfer email if applicable, and set up integrations.
Post-Migration
After migration, verify data integrity, test all workflows, train users, update DNS and routing, decommission old services, and monitor and optimize the new environment.
Case Study Template
Document your migration for internal reference using this structure.
The organization profile section captures size and industry, previous platform, and key requirements.
The migration scope section documents services migrated, data volume, and timeline.
The results section records cost savings achieved, performance improvements, and user feedback.
The lessons learned section captures challenges encountered, solutions implemented, and recommendations for future migrations.
See Also
Review the Migration Overview for getting started with migration concepts. See Validation and Testing to verify migration success. The Enterprise Platform Migration guide provides detailed feature mapping. Start with the Quick Start guide for initial deployment.
Common Migration Concepts
This chapter establishes the foundational concepts that apply across all migration scenarios, regardless of which cloud platform you’re leaving or which specific services you’re transitioning. Understanding these common patterns helps plan effective migrations and avoid pitfalls that derail projects.
Understanding the Fundamental Shift
Migrating from cloud services to self-hosted infrastructure represents more than a technical change—it’s a philosophical shift in how your organization relates to its data and systems.
With cloud services, your data resides on vendor servers under their terms of service. Monthly subscription costs accumulate indefinitely, and you have limited control over when updates occur or which features change. Your integrations depend on vendor-specific APIs that can evolve without your input.
Self-hosted infrastructure inverts this relationship. Your data lives on infrastructure you control, whether physical servers, your own cloud accounts, or hybrid arrangements. Setup costs replace ongoing subscriptions, and you decide when to update and which versions to run. Standard protocols replace proprietary APIs, giving you freedom to swap components without rewriting integrations.
This shift brings responsibility alongside freedom. You become accountable for security, backups, and availability. The trade-off is worthwhile for organizations that value data sovereignty, predictable costs, and independence from vendor decisions.
Component Equivalencies
Understanding how cloud services map to self-hosted alternatives helps plan migrations systematically. Cloud storage services translate to S3-compatible object storage like MinIO, which implements the same API that applications expect. Email services map to self-hosted mail servers using standard SMTP and IMAP protocols. Identity providers correspond to authentication servers implementing OIDC and SAML standards.
These mappings matter because they define what changes and what stays the same. Applications using standard protocols often work unchanged after migration—you simply point them at new endpoints. Applications tightly coupled to vendor-specific features require more adaptation.
General Bots leverages this standardization extensively. Its components communicate through standard protocols, making it compatible with various backend implementations. This design philosophy means migrating to General Bots doesn’t lock you into another proprietary ecosystem.
The Migration Process
Successful migrations follow a predictable sequence of stages, each building on the previous one.
The assessment stage inventories what exists in your current environment. Which services are in use? How much data do they contain? What integrations depend on them? Who uses them and how? This inventory becomes the foundation for all subsequent planning.
Planning translates the assessment into actionable steps. For each service and dataset, you determine how it will move, in what order, and with what dependencies. This stage identifies risks, establishes timelines, and allocates resources. Thorough planning prevents the chaos that results from ad-hoc migration attempts.
Testing validates your approach before committing to it. Migrate sample data and verify it arrived correctly. Connect applications to test instances and confirm they function. Identify issues while stakes are low and corrections are easy.
Execution performs the actual migration according to your plan. Depending on your situation, this might happen all at once during a maintenance window or gradually over weeks as different components transition. The plan determines the approach; execution follows it.
Validation confirms that everything works correctly in the new environment. Users can access their data. Applications function normally. No content was lost or corrupted. This stage provides confidence that the migration succeeded and that you can decommission source systems.
Common Challenges
Certain challenges appear across virtually all migration projects, regardless of source platform or destination infrastructure.
Data volume creates logistical complexity. Large datasets take significant time to transfer, especially when bandwidth is limited or costs apply. Storage must be provisioned in advance to receive the data. Planning must account for the reality that moving terabytes takes time, and some services remain unavailable during transfer.
Authentication presents a particular challenge because passwords cannot be exported from cloud providers. Users will need to establish new credentials in your self-hosted identity system, either through password reset flows or by setting up federation between old and new systems during a transition period.
Dependencies between services complicate migration sequencing. If Service B depends on Service A, you can’t migrate B before A is ready. Complex environments have webs of such dependencies that constrain migration order. Identifying these dependencies during assessment prevents blocked migrations during execution.
Custom workflows built on cloud-specific features need attention. Automations using proprietary APIs, integrations with cloud-native services, and customizations that assume cloud infrastructure all require evaluation and potentially reconstruction using self-hosted alternatives.
Tools and Approaches
Different migration scenarios call for different tools, but categories remain consistent across platforms.
File migration tools handle moving documents and media. Some sync directly between cloud storage and your new object storage. Others export to intermediate formats for later import. Bulk download utilities retrieve everything for offline transfer when direct sync isn’t available.
Email migration requires specialized attention due to the complexity of mailbox data. IMAP synchronization tools can copy messages while preserving folder structure. Export utilities produce archive formats that import tools can consume. The specific tools depend on both source and destination platforms.
User migration extracts identity information for recreation in your new system. Directory export tools produce CSV or LDIF files containing usernames, email addresses, group memberships, and other attributes. APIs enable programmatic extraction when bulk exports aren’t available.
Managing Risk
Migration inherently involves risk—the possibility of data loss, extended downtime, or failed transitions. Thoughtful risk management makes these possibilities manageable rather than catastrophic.
Always create backups before beginning migration activities. Even if you trust your tools and process, having verified backups means that mistakes are recoverable. Test backup restoration to confirm backups actually work.
Start with small datasets to validate your approach before scaling up. Migrate one user or one department, verify success, then expand. This incremental approach catches problems early when impact is limited.
Keep source data intact until migration is completely validated. The ability to access original data prevents a migration problem from becoming a data loss disaster. Only decommission source systems after thorough validation and an appropriate waiting period.
Document everything about your migration—the process, the decisions, the exceptions, the issues encountered. This documentation helps troubleshoot problems, supports auditing requirements, and creates institutional knowledge for future projects.
Maintain rollback plans even if you hope never to use them. Know how you would restore service if migration fails partway through. Having this plan reduces pressure during execution and provides a safety net that enables confident decision-making.
Moving Forward
With these common concepts established, subsequent chapters address platform-specific migration guidance. The Microsoft 365 Migration chapter details extracting data from Microsoft’s ecosystem. The Google Workspace Migration chapter covers Google-specific considerations. The Knowledge Base Migration chapter explains how to transform documents from any source into searchable bot knowledge.
Each platform-specific guide builds on the concepts covered here, applying them to particular tools, APIs, and data formats while following the same fundamental migration philosophy.
Knowledge Base Migration
Converting documents from cloud storage to General Bots knowledge bases.
Overview
Knowledge base migration involves moving documents from various sources into .gbkb packages where they become searchable through General Bots.
Source Systems
Common document sources include SharePoint document libraries, Google Drive folders, OneDrive and Dropbox storage, Confluence spaces, and traditional file servers.
Document Types
General Bots supports a variety of document formats for knowledge base ingestion. These include PDF files, Office documents such as Word, Excel, and PowerPoint, plain text files, Markdown files, and HTML pages.
Migration Process
1. Export
Begin by downloading documents from the source system. Preserve the folder structure to maintain organizational context, and retain metadata where possible for future reference.
2. Organize
Group related documents into logical collections. Create meaningful organizational structures and remove any duplicate documents that would clutter the knowledge base.
3. Import
Place the organized documents in .gbkb folders within your bot package. General Bots indexes these documents automatically, making them searchable for RAG-powered responses.
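A minimal layout might look like this sketch, where the bot and folder names are illustrative:
mybot.gbai/
  mybot.gbkb/
    policies/
      employee-handbook.pdf
    products/
      catalog-2024.xlsx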
Considerations
Volume
Large document sets require additional time to index. Consider staging the migration in batches rather than importing everything at once. Monitor disk space throughout the process to ensure adequate storage remains available.
Quality
Before migration, clean up outdated content that no longer reflects current information. Remove duplicate documents to avoid confusing the AI with conflicting information. Fix any broken or corrupted files that would fail during indexing.
Structure
Maintain logical organization within your knowledge base. Use meaningful folder names that describe the content within. Group documents by topic or department to improve retrieval accuracy.
Format Conversion
Some formats require conversion before import. Web pages should be converted to PDF or Markdown for reliable indexing. Database content should be exported to CSV format. Proprietary formats from specialized applications need conversion to standard formats that the indexing system can process.
Testing
After migration, verify the knowledge base functions correctly. Test that search works across the imported documents. Check that users can access all migrated content. Run sample queries to ensure the AI provides accurate responses based on the imported knowledge.
Next Steps
Review the Overview for general migration concepts. See Validation for detailed testing procedures to verify your migration succeeded.
Google Workspace Migration Guide
Migrating from Google Workspace to self-hosted infrastructure.
Overview
Google Workspace (formerly G Suite) provides integrated cloud services that need to be migrated to various self-hosted alternatives.
Service Mapping
| Google Workspace Service | Self-Hosted Alternative | Migration Tool |
|---|---|---|
| Gmail | Mail server (Stalwart, etc.) | imapsync, Got Your Back |
| Google Drive | MinIO or Nextcloud | rclone |
| Google Docs/Sheets/Slides | LibreOffice, OnlyOffice | Export to Office formats |
| Google Calendar | CalDAV server | ical export/import |
| Google Meet | Jitsi, LiveKit | No migration (ephemeral) |
| Google Chat | General Bots | API export |
Migration Steps
1. Email Migration (Gmail)
Before beginning email migration, ensure IMAP is enabled in Google Admin, you have app-specific passwords or OAuth configured, and your target mail server is ready to receive data.
The migration process uses imapsync or Got Your Back (GYB) to transfer mailboxes. Migrate labels as folders since Gmail’s labeling system differs from traditional folder structures. Preserve read and unread status to maintain inbox organization.
Consider that Gmail labels don’t map perfectly to folders, which may require some reorganization. Some users may have 15 or more years of email history, making this a time-intensive process. Attachments can consume significant storage space on the target system.
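As a sketch, a single-mailbox transfer with imapsync might look like this; the hosts, accounts, and passwords are placeholders:
imapsync \
  --host1 imap.gmail.com --ssl1 --user1 alice@company.com --password1 'app-password' \
  --host2 mail.company.internal --ssl2 --user2 alice --password2 'new-password'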
2. File Migration (Google Drive)
Prerequisites include Google Drive API access, service account or OAuth credentials, and proper storage capacity planning on your target system.
Use rclone with the Google Drive backend for the migration process. Export Google Docs to portable formats since they exist as pointers rather than actual files. Maintain the folder structure during transfer to preserve organizational context.
Keep in mind that Google Docs need format conversion to work offline. Shared drives require separate handling from personal drives. Comments and suggestions on documents are typically lost in the conversion process.
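A sketch of such a transfer, assuming rclone remotes named gdrive and minio have already been set up via rclone config:
# convert Google-native documents to Office formats on the way out
rclone copy gdrive: minio:drive-archive --drive-export-formats docx,xlsx,pptx --progress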
3. User Migration (Google Directory)
You’ll need Google Admin SDK access and your target identity provider configured before starting.
Export users via the Admin SDK or GAM tool. Transform the exported data to your target format such as LDIF or JSON. Import the transformed data to your new identity management system.
Note that passwords cannot be exported from Google, so all users will need to set new passwords. Two-factor authentication settings need reconfiguration on the new system. Groups and organizational units need mapping to equivalent structures.
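With GAM, for example, a basic CSV export might look like this; the field list is illustrative:
# dump core identity fields for every user in the domain
gam print users fields primaryEmail,name,suspended > users.csv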
Google-Specific Challenges
Format Conversion
Google’s proprietary formats require conversion to standard formats. Google Docs should be converted to .docx or .odt files. Google Sheets become .xlsx or .ods files. Google Slides convert to .pptx or .odp format. Google Drawings export as .svg or .png images.
API Quotas
Google enforces strict quotas on API usage. The Drive API allows 1,000 queries per 100 seconds. The Gmail API permits 250 quota units per user per second. The Admin SDK has various limits depending on which specific API you’re accessing. Plan your migration to work within these constraints.
Data Takeout
Google Takeout provides a user-initiated bulk export option that includes most Google services. However, the resulting ZIP files can be enormous, sometimes exceeding 100GB. This approach is not suitable for organization-wide migration but can help individual users verify their data transferred correctly.
Tools and Utilities
Google Admin Tools
GAM (Google Apps Manager) provides a command-line interface for managing Google Workspace. GAMADV-XTD is an enhanced version with additional capabilities. The Google Admin console offers manual export options for smaller migrations.
Got Your Back (GYB)
GYB is a Python-based Gmail backup tool that supports full mailbox export and can restore to different accounts, making it useful for migration scenarios.
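Typical usage pairs a backup with a restore into the new account; the addresses and paths here are placeholders:
gyb --email alice@olddomain.com --action backup --local-folder ./alice-mail
gyb --email alice@newdomain.com --action restore --local-folder ./alice-mail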
rclone Configuration
rclone supports team drives, handles Google Photos separately from Drive, and can preserve modification times during transfer.
Common Issues
Large Attachments
Gmail allows attachments up to 25MB, but some mail servers have lower limits. You may need to store large attachments separately or adjust your target server’s configuration.
Shared Resources
Shared drives need owner reassignment before migration. Calendar sharing must be recreated on the new system. Document collaboration links will break and need updating.
Google Photos
Google Photos is not part of standard Google Drive storage and needs a separate migration approach. Consider whether you want original quality or compressed versions.
Migration Strategy
Phased Approach
Start with a pilot group to identify issues before the broader migration. Migrate inactive users first to reduce impact if problems occur. Schedule department by department to manage support load. Keep Google active during the transition period for rollback capability.
Hybrid Period
MX records can split email delivery between old and new systems during transition. Users can access both systems simultaneously. Gradual cutover reduces risk compared to a single migration event.
Data Validation
After migration, compare file counts between source and destination. Verify email folders transferred correctly. Check that user access permissions work as expected.
Post-Migration
User Training
Document key differences for users. Explain that real-time collaboration like Google Docs may work differently. Walk through the changed UI and UX in alternative applications. Demonstrate new sharing workflows.
Feature Gaps
Some features may be lost in migration. Smart Compose in Gmail won’t transfer to other mail clients. Google Assistant integration is Google-specific. Automatic photo organization depends on Google’s ML systems. Version history in Docs may not fully transfer.
Maintaining Archive Access
For historical data access, you might keep a reduced Google license for archive purposes, export everything to static storage for reference, or convert all documents to standard formats for long-term preservation.
Cost Factors
Google Workspace Pricing
Business Starter costs $6 per user per month. Business Standard costs $12 per user per month. Business Plus costs $18 per user per month. You’ll need to maintain these subscriptions during the migration period.
Data Export Costs
There are no direct egress fees from Google, but API quotas may extend your timeline. Consider bandwidth costs on your receiving infrastructure.
Timeline Estimates
Migration duration depends on several factors including number of users, data volume per user, available bandwidth, and conversion requirements.
Typical timelines range from 2-3 weeks for small organizations under 50 users, 1-3 months for medium organizations between 50-500 users, and 3-6 months for large organizations with over 500 users.
Best Practices
Inventory your existing environment first by documenting what you have before starting. Test thoroughly by piloting with your IT team before broader rollout. Communicate often to keep users informed throughout the process. Plan for rollback by having contingency plans if issues arise. Archive everything by keeping backups of original data in case you need to reference it later.
Next Steps
Review Common Concepts for general migration principles. See Validation for testing procedures to verify your migration succeeded.
Microsoft 365 Migration Guide
Migrating from Microsoft 365 to self-hosted infrastructure.
Overview
Microsoft 365 (formerly Office 365) includes multiple services that need to be migrated to different self-hosted components.
Service Mapping
| Microsoft 365 Service | Self-Hosted Alternative | Migration Tool |
|---|---|---|
| Exchange Online | Mail server (Stalwart, etc.) | imapsync |
| OneDrive | MinIO or Nextcloud | rclone |
| SharePoint | MinIO + Wiki/CMS | rclone + export tools |
| Teams | Mattermost, General Bots, etc. | Export/Import APIs |
| Azure AD | Keycloak, Authentik, Zitadel | LDAP export |
Migration Steps
1. Email Migration (Exchange Online)
Before beginning the email migration, ensure IMAP access is enabled in Exchange Online, your target mail server is configured, and you have user credentials or app passwords available.
The migration process uses imapsync for mailbox migration. Migrate in batches to avoid throttling from Microsoft’s servers, and preserve folder structure and flags during transfer.
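A minimal imapsync invocation for a single mailbox might look like this (hostnames and credentials are placeholders; outlook.office365.com is Exchange Online’s IMAP endpoint):
imapsync --host1 outlook.office365.com --user1 alice@company.com --password1 secret1 --host2 mail.company.com --user2 alice@company.com --password2 secret2
In practice you would loop over a user list and pause between mailboxes to stay under Microsoft’s throttling limits.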
Keep in mind that Exchange uses proprietary features such as categories that may not transfer cleanly. Calendar and contacts require separate migration using CalDAV and CardDAV protocols. Shared mailboxes require special handling and may need to be migrated individually.
2. File Migration (OneDrive/SharePoint)
Prerequisites include having the OneDrive sync client or API access configured, sufficient storage on the target system, and adequate network bandwidth for the transfer.
Use rclone with the OneDrive backend for the migration process. Maintain folder structure during transfer and preserve timestamps where possible.
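A sketch of the transfer, assuming a onedrive remote and an S3-compatible minio remote are already configured in rclone:
rclone sync onedrive: minio:migration-bucket/alice --create-empty-src-dirs --progress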
Be aware that SharePoint metadata won’t transfer automatically and may need manual recreation. Version history is typically lost during migration. Permissions need to be recreated on the target system.
3. User Migration (Azure AD)
Prepare for user migration by setting up Azure AD Connect or API access, and ensure your target identity provider is ready to receive users.
Export users via PowerShell or Graph API, transform the data to the target format such as LDIF or JSON, then import to your new identity provider.
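As a hedged sketch, you can also pull the user list with a General Bots script through the Graph API (the graph-token config key and the exported_users table are assumptions; paging via @odata.nextLink is omitted):
' Export the first page of Azure AD users via Microsoft Graph
SET HEADER "Authorization", "Bearer " + GET CONFIG "graph-token"
users = GET "https://graph.microsoft.com/v1.0/users"
FOR EACH user IN users.value
INSERT "exported_users", #{
name: user.displayName,
email: user.userPrincipalName,
exported_at: NOW()
}
NEXT user
TALK "Exported " + LEN(users.value) + " users"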
Important considerations include that passwords cannot be exported from Azure AD, so users will need to reset their passwords. MFA settings require reconfiguration on the new system. Group memberships need mapping to equivalent structures in the target system.
Common Challenges
API Throttling
Microsoft throttles API calls to protect their infrastructure. Plan for a slow, steady migration rather than attempting bulk transfers. Use batch operations where possible and consider running migrations during off-peak hours.
Data Volume
Large organizations may have accumulated terabytes of OneDrive and SharePoint data, years of email history, and thousands of users. Factor this into your timeline and resource planning.
Feature Parity
Some M365 features have no direct equivalent in self-hosted solutions. Power Automate workflows will need to be recreated using different automation tools. SharePoint lists and forms require alternative solutions. Teams channel history may be difficult to preserve in its original format.
Tools and Utilities
PowerShell for Export
The Azure AD PowerShell module handles user export operations. Exchange Online PowerShell provides mailbox information. SharePoint Online PowerShell helps with site inventory and metadata export.
Graph API
The Graph API provides programmatic access to most M365 services and is useful for custom migration scripts. Using it requires app registration and appropriate permissions in your Azure tenant.
Third-Party Tools
Commercial options include BitTitan MigrationWiz and Sharegate, which provide guided migration experiences. Various open-source scripts are available on GitHub for more customized approaches.
Post-Migration
DNS Changes
Update MX records to point to your new email server. Update autodiscover records for email client configuration. Consider keeping a hybrid setup temporarily to catch any missed emails during the transition.
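In zone-file terms, the records to update typically look like this (hostnames are placeholders):
example.com. IN MX 10 mail.example.com.
autodiscover.example.com. IN CNAME mail.example.com.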
User Communication
Provide new login credentials to all users. Document any changed procedures and differences from the M365 experience. Offer training sessions on the new tools to ensure smooth adoption.
Validation
Verify email delivery works correctly in both directions. Test file access to ensure permissions transferred properly. Confirm authentication works for all migrated users.
Cost Considerations
Subscription Overlap
You may need to maintain M365 subscriptions during the migration period. Consider read-only licenses for archive access if you need to retain access to historical data.
Data Transfer Costs
Factor in egress charges from Microsoft when transferring large amounts of data. Account for bandwidth costs if transferring over the internet rather than dedicated connections.
Timeline Estimates
Small organizations with fewer than 50 users typically complete migration in 1-2 weeks. Medium organizations with 50-500 users usually require 1-2 months. Large organizations with more than 500 users should plan for 2-6 months.
Factors affecting timeline include data volume, network speed, complexity of the existing setup, and user training needs.
Next Steps
Review the Common Concepts guide for general migration principles. See Validation for detailed testing procedures to verify your migration succeeded.
n8n Migration Guide
Migrating workflows and automations from n8n to General Bots.
Overview
n8n is a workflow automation platform with a visual node-based editor. General Bots provides equivalent automation capabilities through BASIC scripting, offering more flexibility and integrated features without execution limits.
Why Migrate
| Aspect | n8n | General Bots |
|---|---|---|
| Automation | Visual workflows | BASIC scripts (more powerful) |
| Pricing | Per-execution limits | Unlimited executions |
| AI Integration | Via API nodes | Native LLM keywords |
| Chat/Bot | Not included | Full multi-channel |
| Productivity Suite | Not included | Email, calendar, files, tasks |
| Knowledge Base | Not included | Built-in RAG |
| Self-hosting | Available | Available |
Workflow Mapping
Triggers
| n8n Trigger | General Bots Equivalent |
|---|---|
| Schedule Trigger | SET SCHEDULE |
| Webhook | WEBHOOK |
| Email Trigger (IMAP) | ON "email:received" |
| Database Trigger | ON "table:tablename:insert" |
| Manual Trigger | Direct script execution |
| Cron | SET SCHEDULE "cron expression" |
Common Nodes
| n8n Node | General Bots Equivalent |
|---|---|
| HTTP Request | GET, POST, PUT, DELETE |
| Set | Variable assignment |
| IF | IF/THEN/ELSE/END IF |
| Switch | SWITCH/CASE/END SWITCH |
| Code (JavaScript) | BASIC script |
| Function | BASIC subroutines |
| Merge | Array operations |
| Split In Batches | FOR EACH |
| Wait | WAIT |
| Send Email | SEND MAIL |
| Slack | POST to Slack webhook |
| Discord | POST to Discord webhook |
| Google Sheets | GET/POST to Sheets API |
| Airtable | GET/POST to Airtable API |
| MySQL/PostgreSQL | FIND, INSERT, UPDATE, DELETE |
| MongoDB | GET/POST to MongoDB API |
Migration Examples
Scheduled Data Sync
n8n workflow:
Schedule Trigger → HTTP Request → IF → Google Sheets
General Bots equivalent:
SET SCHEDULE "every hour"
data = GET "https://api.example.com/data"
IF data.status = "active" THEN
FOR EACH item IN data.items
INSERT "synced_data", #{
id: item.id,
name: item.name,
value: item.value,
synced_at: NOW()
}
NEXT item
END IF
TALK "Synced " + LEN(data.items) + " items"
Webhook Processing
n8n workflow:
Webhook → Set → IF → Send Email + Slack
General Bots equivalent:
WEBHOOK "order-received"
order = body
customer_name = order.customer.name
order_total = order.total
IF order_total > 1000 THEN
SEND MAIL TO "sales@company.com" SUBJECT "Large Order" BODY "Order from " + customer_name + ": $" + order_total
POST "https://hooks.slack.com/services/xxx", #{
text: "Large order received: $" + order_total
}
END IF
Multi-Step API Orchestration
n8n workflow:
Webhook → HTTP Request (API 1) → Code → HTTP Request (API 2) → IF → Multiple outputs
General Bots equivalent:
WEBHOOK "process-lead"
lead = body
' Step 1: Enrich lead data
enriched = POST "https://api.clearbit.com/enrich", #{email: lead.email}
' Step 2: Score the lead
WITH lead_data
.email = lead.email
.company = enriched.company.name
.industry = enriched.company.industry
.size = enriched.company.employees
END WITH
score = SCORE LEAD lead_data
' Step 3: Route based on score
IF score.status = "hot" THEN
POST "https://api.salesforce.com/leads", lead_data
SEND MAIL TO "sales@company.com" SUBJECT "Hot Lead" BODY lead_data
ELSEIF score.status = "warm" THEN
POST "https://api.hubspot.com/contacts", lead_data
ELSE
INSERT "cold_leads", lead_data
END IF
Error Handling
n8n approach: Error Trigger node
General Bots equivalent:
SET SCHEDULE "every 5 minutes"
TRY
result = GET "https://api.example.com/health"
IF result.status <> "healthy" THEN
THROW "Service unhealthy: " + result.message
END IF
CATCH
SEND MAIL TO "ops@company.com" SUBJECT "Alert: Service Down" BODY ERROR_MESSAGE
POST "https://hooks.slack.com/services/xxx", #{text: "Service alert: " + ERROR_MESSAGE}
END TRY
Exporting n8n Workflows
Export Process
- In n8n, select the workflow
- Click the three-dot menu → Download
- Save the JSON file
- Analyze nodes and connections
- Translate to BASIC script
JSON Structure Analysis
n8n exports workflows as JSON:
{
"nodes": [
{"type": "n8n-nodes-base.httpRequest", "parameters": {...}},
{"type": "n8n-nodes-base.if", "parameters": {...}}
],
"connections": {...}
}
Map each node type to the equivalent BASIC keyword.
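For instance, the two nodes in the JSON above might translate to a few lines of BASIC (the URL and condition are illustrative):
' The httpRequest node becomes a GET; the if node becomes IF/THEN
data = GET "https://api.example.com/data"
IF data.status = "active" THEN
' downstream nodes become the code inside this branch
END IF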
Feature Comparison
What You Gain
Native AI integration:
USE KB "company-docs"
response = LLM "Analyze this data and provide insights: " + data
Multi-channel chat:
TALK "How can I help you?"
HEAR question
answer = LLM question
TALK answer
Built-in productivity:
CREATE TASK "Follow up with " + customer_name DUE DATEADD(NOW(), 3, "day")
BOOK "Meeting with " + customer_name AT meeting_time
SEND MAIL TO customer_email SUBJECT "Confirmation" BODY message
Knowledge base:
USE KB "product-docs"
USE KB "pricing-info"
answer = LLM customer_question
What Changes
| n8n Approach | General Bots Approach |
|---|---|
| Visual drag-and-drop | Text-based BASIC scripts |
| Node connections | Sequential code flow |
| Credentials UI | config.csv settings |
| Execution history UI | Log files + monitoring |
| Community nodes | HTTP keywords + custom code |
Credentials Migration
n8n Credentials
n8n stores credentials separately. Export and configure in General Bots:
config.csv:
key,value
slack-webhook-url,https://hooks.slack.com/services/xxx
api-key-clearbit,your-api-key
salesforce-token,your-token
Usage in BASIC:
slack_url = GET CONFIG "slack-webhook-url"
POST slack_url, #{text: "Message"}
Migration Checklist
Pre-Migration
- Export all n8n workflows as JSON
- Document active schedules and triggers
- List all credentials and API keys
- Identify critical workflows for priority migration
- Set up General Bots environment
Migration
- Translate workflows to BASIC scripts
- Configure credentials in config.csv
- Set up webhooks with same endpoints
- Configure schedules
- Test each workflow individually
Post-Migration
- Run parallel execution for verification
- Compare outputs between systems
- Monitor for errors
- Decommission n8n workflows
- Document new BASIC scripts
Common Patterns
Batch Processing
n8n: Split In Batches node
General Bots:
items = GET "https://api.example.com/items"
batch_size = 10
total = LEN(items)
FOR i = 0 TO total - 1 STEP batch_size
batch = SLICE(items, i, i + batch_size)
FOR EACH item IN batch
PROCESS_ITEM(item)
NEXT item
WAIT 1000 ' Rate limiting
NEXT i
Conditional Branching
n8n: IF node with multiple branches
General Bots:
SWITCH status
CASE "new"
HANDLE_NEW()
CASE "pending"
HANDLE_PENDING()
CASE "complete"
HANDLE_COMPLETE()
DEFAULT
HANDLE_UNKNOWN()
END SWITCH
Data Transformation
n8n: Set node or Code node
General Bots:
' Transform data
WITH transformed
.full_name = data.first_name + " " + data.last_name
.email = LOWER(data.email)
.created = NOW()
.source = "api"
END WITH
Best Practices
Start with simple workflows. Migrate straightforward automations first to build familiarity with BASIC syntax.
Use descriptive variable names. BASIC scripts are more readable than node graphs when well-written.
Add comments. Document your scripts for future maintenance:
' Daily sales report - sends summary to management
' Runs at 6 PM on weekdays
SET SCHEDULE "0 18 * * 1-5"
Leverage native features. Don’t just replicate n8n workflows—take advantage of General Bots’ integrated AI, chat, and productivity features.
Test incrementally. Verify each migrated workflow before moving to the next.
See Also
- SET SCHEDULE - Scheduling reference
- WEBHOOK - Webhook creation
- HTTP Keywords - API integration
- Platform Comparison - Full feature comparison
Notion Migration Guide
Migrating content and workflows from Notion to General Bots.
Overview
Notion is a collaborative workspace combining notes, databases, and project management. General Bots provides equivalent functionality through its knowledge base, task management, and AI-powered conversation features—with the added benefit of self-hosting and native automation.
Why Migrate
| Aspect | Notion | General Bots |
|---|---|---|
| Hosting | Cloud only | Self-hosted |
| Pricing | $10-15/user/month | No per-user fees |
| AI Features | $10/user/month add-on | Native (any LLM) |
| Automation | Limited | Full BASIC scripting |
| Chat/Bot | Not included | Multi-channel |
| API Creation | Not available | Instant webhooks |
| Data Sovereignty | Their servers | Your infrastructure |
Content Mapping
Notion to General Bots
| Notion Feature | General Bots Equivalent |
|---|---|
| Pages | Knowledge base documents (.gbkb) |
| Databases | Tables (CSV/database) |
| Kanban boards | Task boards |
| Calendar | Calendar API |
| Comments | Conversation history |
| Templates | Bot templates |
| Integrations | BASIC scripts + webhooks |
| Notion AI | LLM keyword |
Migration Process
Step 1: Export Notion Content
Navigate to Settings & Members, then Settings, and scroll to Export content. Choose the Markdown & CSV export format and download the ZIP file.
The export includes pages as Markdown files, databases as CSV files, and attachments in folders.
Step 2: Prepare Knowledge Base
Organize exported content for General Bots:
my-bot.gbkb/
├── company-info/
│ ├── about.md
│ ├── policies.md
│ └── procedures.md
├── products/
│ ├── catalog.md
│ └── pricing.md
└── support/
├── faq.md
└── troubleshooting.md
Step 3: Import Documents
Place Markdown files in your .gbkb folder. General Bots automatically indexes them for RAG:
USE KB "company-info"
USE KB "products"
USE KB "support"
TALK "How can I help you?"
HEAR question
answer = LLM question
TALK answer
Step 4: Convert Databases
Transform Notion CSV exports to General Bots tables:
' Import Notion database export
data = READ "notion-export/Projects.csv"
FOR EACH row IN data
INSERT "projects", #{
name: row["Name"],
status: row["Status"],
due_date: row["Due Date"],
assignee: row["Assignee"],
imported_at: NOW()
}
NEXT row
TALK "Imported " + LEN(data) + " projects"
Step 5: Recreate Task Boards
Convert Notion Kanban boards to General Bots task boards:
' Create project for Notion board
project_id = CREATE PROJECT "Product Roadmap" WITH DESCRIPTION "Migrated from Notion"
' Import tasks
tasks = READ "notion-export/Roadmap.csv"
FOR EACH task IN tasks
status = SWITCH task["Status"]
CASE "Not Started" : "todo"
CASE "In Progress" : "in_progress"
CASE "Done" : "done"
DEFAULT : "todo"
END SWITCH
CREATE TASK task["Name"] IN PROJECT project_id WITH STATUS status
NEXT task
Notion AI to General Bots
Document Summarization
Notion AI allows highlighting text and selecting “Summarize” but is limited to Notion content. General Bots provides broader capability:
USE KB "documents"
summary = LLM "Summarize the key points from our Q3 report"
TALK summary
Content Generation
Where Notion AI uses the /ai command for basic prompting, General Bots offers full control:
SET CONTEXT "You are a technical writer. Write clear, concise documentation."
TALK "What would you like me to write?"
HEAR topic
content = LLM "Write comprehensive documentation about: " + topic
WRITE "/docs/" + SLUGIFY(topic) + ".md", content
TALK "Documentation created!"
Q&A on Documents
Notion AI answers questions about a single page’s content. General Bots searches across your entire knowledge base:
' Load entire knowledge base
USE KB "all-docs"
USE KB "wiki"
USE KB "procedures"
' Answer questions across all content
TALK "Ask me anything about our documentation"
HEAR question
answer = LLM question
TALK answer
Automation Migration
Notion Automations (Limited)
Notion has basic automations for status changes, due date reminders, and Slack notifications.
General Bots Equivalent
Status change automation:
ON "table:projects:update"
IF params.old_status <> params.new_status THEN
IF params.new_status = "complete" THEN
SEND MAIL TO params.owner_email SUBJECT "Project Completed" BODY "Your project " + params.name + " is now complete!"
END IF
END IF
END ON
Due date reminders:
SET SCHEDULE "every day at 9am"
upcoming = FIND "tasks", "due_date = DATEADD(NOW(), 1, 'day') AND status <> 'done'"
FOR EACH task IN upcoming
SEND MAIL TO task.assignee_email SUBJECT "Task Due Tomorrow" BODY "Reminder: " + task.name + " is due tomorrow"
NEXT task
Slack notifications:
ON "table:tasks:insert"
POST "https://hooks.slack.com/services/xxx", #{
text: "New task created: " + params.name,
channel: "#tasks"
}
END ON
Database Migration
Notion Database Properties
| Notion Property | General Bots Equivalent |
|---|---|
| Title | TEXT column |
| Text | TEXT column |
| Number | NUMERIC column |
| Select | TEXT with validation |
| Multi-select | JSONB array |
| Date | DATE/TIMESTAMP column |
| Person | User reference |
| Files | File path references |
| Checkbox | BOOLEAN column |
| URL | TEXT column |
| Email | TEXT column |
| Phone | TEXT column |
| Formula | Computed in BASIC |
| Relation | Foreign key |
| Rollup | AGGREGATE queries |
Formula Migration
Notion formulas like prop("Price") * prop("Quantity") translate to BASIC calculations:
' Calculate on insert/update
total = price * quantity
INSERT "orders", #{item: item, price: price, quantity: quantity, total: total}
' Or query with calculation
SELECT "*, price * quantity as total FROM orders"
Relation Migration
Notion relations link databases together. General Bots uses foreign keys:
' Create related tables
CREATE TABLE "projects" (id, name, status)
CREATE TABLE "tasks" (id, project_id, name, assignee)
' Query with join
tasks = FIND "tasks", "project_id = '" + project_id + "'"
' Or use JOIN keyword
result = JOIN "projects", "tasks", "projects.id = tasks.project_id"
Template Migration
Notion Templates
Notion templates are pre-filled pages. Convert to General Bots templates as BASIC scripts.
Meeting notes template:
' meeting-notes.bas
PARAM meeting_title AS string
PARAM attendees AS string
PARAM date AS date
DESCRIPTION "Create meeting notes document"
template = "# " + meeting_title + "
**Date:** " + FORMAT(date, "MMMM d, yyyy") + "
**Attendees:** " + attendees + "
## Agenda
1.
2.
3.
## Discussion Notes
## Action Items
- [ ]
- [ ]
## Next Meeting
"
WRITE "/meetings/" + FORMAT(date, "yyyy-MM-dd") + "-" + SLUGIFY(meeting_title) + ".md", template
TALK "Meeting notes created: " + meeting_title
Project Template
' new-project.bas
PARAM project_name AS string
PARAM owner AS string
DESCRIPTION "Create new project with standard structure"
project_id = CREATE PROJECT project_name WITH DESCRIPTION "Created by template"
ADD USER TO PROJECT project_id, owner, "owner"
' Create standard tasks
CREATE TASK "Define requirements" IN PROJECT project_id
CREATE TASK "Create timeline" IN PROJECT project_id
CREATE TASK "Assign resources" IN PROJECT project_id
CREATE TASK "Kickoff meeting" IN PROJECT project_id
CREATE TASK "First milestone review" IN PROJECT project_id
TALK "Project '" + project_name + "' created with 5 starter tasks"
What You Gain
Self-Hosting
Your data stays on your infrastructure. No concerns about Notion’s data practices or service availability.
Native AI Without Extra Cost
Notion charges $10/user/month for AI features. General Bots includes AI at no additional cost—use any LLM provider.
Full Automation
Go beyond Notion’s limited automations with complete BASIC scripting:
SET SCHEDULE "every monday at 9am"
' Generate weekly report
projects = FIND "projects", "status = 'active'"
tasks_completed = AGGREGATE "tasks", "COUNT", "id", "completed_at > DATEADD(NOW(), -7, 'day')"
SET CONTEXT "You are a project manager. Create a concise weekly summary."
report = LLM "Summarize: " + LEN(projects) + " active projects, " + tasks_completed + " tasks completed this week"
SEND MAIL TO "team@company.com" SUBJECT "Weekly Project Summary" BODY report
Multi-Channel Access
Access your knowledge base through any channel:
' Same bot works on web, WhatsApp, Teams, Slack
TALK "How can I help you today?"
HEAR question
USE KB "company-wiki"
answer = LLM question
TALK answer
Custom APIs
Create APIs instantly—something not possible in Notion:
WEBHOOK "project-status"
project = FIND "projects", "id = '" + params.id + "'"
tasks = FIND "tasks", "project_id = '" + params.id + "'"
WITH response
.project = project
.task_count = LEN(tasks)
.completed = LEN(FILTER(tasks, "status = 'done'"))
END WITH
Migration Checklist
Pre-Migration
Before starting, export all Notion content in Markdown & CSV format. Inventory your databases and their properties. Document active integrations. Identify critical templates that need recreation. Set up your General Bots environment.
Migration
During the migration, organize Markdown files into the .gbkb structure. Import database CSVs to tables. Convert formulas to BASIC calculations. Recreate task boards as projects. Migrate templates to BASIC scripts. Set up automations to replace Notion workflows.
Post-Migration
After migration, verify all content is searchable in the knowledge base. Test database queries. Confirm automations work correctly. Train your team on the new interface. Redirect any Notion integrations to General Bots.
Best Practices
Organize your knowledge base thoughtfully by grouping related documents in collections for better RAG results.
Simplify database structures because Notion encourages complex relations while General Bots works best with cleaner schemas.
Leverage AI for migration by using General Bots’ LLM to help transform and summarize Notion content:
content = READ "notion-export/long-document.md"
summary = LLM "Create a concise summary of this document: " + content
WRITE "/summaries/document-summary.md", summary
Keep templates as scripts since BASIC templates are more powerful than Notion’s static templates.
See Also
- Knowledge Base - KB configuration
- Projects - Project management
- Template Variables - Dynamic content
- Platform Comparison - Full feature comparison
Perplexity Migration Guide
Migrating from Perplexity to General Bots for AI-powered search and knowledge retrieval.
Overview
Perplexity is an AI-powered search assistant that answers questions with web citations. General Bots provides equivalent and expanded capabilities through its knowledge base, RAG system, and LLM integration—with the advantage of using your own documents, self-hosting, and full customization.
Why Migrate
| Aspect | Perplexity | General Bots |
|---|---|---|
| Hosting | Cloud only | Self-hosted |
| Pricing | $20/month Pro | No subscription |
| Knowledge Source | Web search | Your documents + optional web |
| Customization | None | Full BASIC scripting |
| Data Privacy | Queries logged | Complete privacy |
| API Access | Limited | Full REST API |
| Multi-channel | Web only | Web, WhatsApp, Teams, etc. |
| Automation | None | Full workflow automation |
| Integration | None | Any system via API |
Feature Comparison
Search and Q&A
| Perplexity Feature | General Bots Equivalent |
|---|---|
| Web search | USE WEBSITE + LLM |
| Document Q&A (Pro) | USE KB + LLM |
| Citation generation | RAG with sources |
| Focus modes | SET CONTEXT |
| Collections (Pro) | Multiple .gbkb folders |
| File upload | Knowledge base indexing |
What Perplexity Does
- Searches the web for relevant information
- Synthesizes answers from multiple sources
- Provides citations and links
- Allows follow-up questions
What General Bots Does
- Searches your private knowledge base
- Optionally fetches web content
- Synthesizes answers with full context
- Provides source references
- Allows conversation and follow-ups
- Automates actions based on answers
- Deploys to any channel
Migration Approach
Step 1: Build Your Knowledge Base
Instead of relying on web search, create a curated knowledge base:
my-bot.gbkb/
├── company/
│ ├── policies.pdf
│ ├── procedures.md
│ └── org-chart.pdf
├── products/
│ ├── catalog.pdf
│ ├── specifications.xlsx
│ └── pricing.csv
├── support/
│ ├── faq.md
│ ├── troubleshooting.md
│ └── known-issues.md
└── industry/
├── regulations.pdf
└── best-practices.md
Step 2: Configure RAG
Enable retrieval-augmented generation:
' Load knowledge collections
USE KB "company"
USE KB "products"
USE KB "support"
' Set assistant behavior
SET CONTEXT "You are a knowledgeable assistant. Answer questions based on the provided documents. Always cite your sources."
' Handle questions
TALK "What would you like to know?"
HEAR question
answer = LLM question
TALK answer
Step 3: Add Web Search (Optional)
For real-time information, add website sources:
USE KB "internal-docs"
USE WEBSITE "https://docs.example.com"
USE WEBSITE "https://industry-news.com"
answer = LLM "What are the latest updates on " + topic
Recreating Perplexity Features
Focus Modes
Perplexity Focus: Academic
SET CONTEXT "You are an academic research assistant. Provide scholarly, well-cited responses based on peer-reviewed sources and academic literature. Be precise and thorough."
USE KB "research-papers"
USE KB "academic-journals"
answer = LLM question
Perplexity Focus: Writing
SET CONTEXT "You are a professional writing assistant. Help with content creation, editing, and improving text. Focus on clarity, style, and engagement."
answer = LLM "Help me write: " + topic
Perplexity Focus: Code
SET CONTEXT "You are an expert programmer. Provide accurate, well-documented code examples. Explain your reasoning and suggest best practices."
USE KB "code-documentation"
USE KB "api-references"
answer = LLM question
Collections
Perplexity Collections organize related searches.
General Bots equivalent:
' Create specialized search contexts
WEBHOOK "search-products"
USE KB "products"
SET CONTEXT "You are a product specialist."
answer = LLM body.query
END WEBHOOK
WEBHOOK "search-support"
USE KB "support"
SET CONTEXT "You are a support technician."
answer = LLM body.query
END WEBHOOK
WEBHOOK "search-legal"
USE KB "legal"
SET CONTEXT "You are a legal advisor. Always include disclaimers."
answer = LLM body.query
END WEBHOOK
Pro Search (Deep Research)
Perplexity Pro Search performs multi-step research.
General Bots equivalent:
' Deep research workflow
PARAM topic AS string
DESCRIPTION "Perform comprehensive research on a topic"
SET CONTEXT "You are a research analyst. Conduct thorough analysis with multiple perspectives."
USE KB "all-documents"
' Step 1: Initial analysis
initial = LLM "Provide an overview of: " + topic
' Step 2: Deep dive
details = LLM "Now provide detailed analysis with specific examples for: " + topic
' Step 3: Alternative perspectives
alternatives = LLM "What are alternative viewpoints or counterarguments regarding: " + topic
' Step 4: Synthesis
WITH research_prompt
.instruction = "Synthesize a comprehensive report"
.overview = initial
.details = details
.alternatives = alternatives
END WITH
final_report = LLM "Create a comprehensive report combining: " + research_prompt
TALK final_report
Citation and Sources
Perplexity shows numbered citations with links.
General Bots provides source references through RAG:
USE KB "documents"
SET CONTEXT "When answering, always cite which document your information comes from. Format citations as [Source: document name]."
answer = LLM question
TALK answer
What You Gain
Private Knowledge Base
Your proprietary documents stay private:
USE KB "confidential-data"
USE KB "internal-reports"
' All queries against your own data
' Nothing sent to external search engines
answer = LLM sensitive_question
Custom AI Behavior
Fine-tune responses for your specific needs:
SET CONTEXT "You are the customer service assistant for Acme Corp.
- Always be friendly and professional
- If you don't know something, offer to connect with a human
- Never discuss competitor products
- Emphasize our satisfaction guarantee"
answer = LLM customer_question
Multi-Channel Deployment
Access your AI assistant anywhere:
' Same knowledge base, any channel
' Web chat, WhatsApp, Teams, Slack, SMS, Email
TALK "How can I help you?"
HEAR question
USE KB "company-knowledge"
answer = LLM question
TALK answer
Automation Beyond Q&A
Take action based on queries:
USE KB "products"
TALK "What are you looking for?"
HEAR query
answer = LLM query
' If user wants to order, take action
IF CONTAINS(LOWER(query), "order") OR CONTAINS(LOWER(query), "buy") THEN
TALK "Would you like me to start an order?"
HEAR confirm AS BOOLEAN
IF confirm THEN
CREATE TASK "Follow up on order inquiry" DUE DATEADD(NOW(), 1, "day")
SEND MAIL TO "sales@company.com" SUBJECT "Order Inquiry" BODY "Customer asked: " + query
END IF
END IF
TALK answer
API for Integration
Create search APIs for your applications:
WEBHOOK "search"
USE KB params.collection
SET CONTEXT params.context
answer = LLM params.query
WITH response
.answer = answer
.query = params.query
.timestamp = NOW()
END WITH
Call from any application:
POST /api/search
{
"collection": "products",
"context": "You are a product expert",
"query": "What's the best option for enterprise?"
}
Migration Checklist
Pre-Migration
- Identify information sources you frequently search
- Gather documents to build knowledge base
- Determine required focus modes/contexts
- Plan deployment channels
- Set up General Bots environment
Migration
- Organize documents into .gbkb collections
- Create context configurations
- Build specialized search endpoints
- Test with common queries
- Configure multi-channel access
Post-Migration
- Compare answer quality
- Train team on new interface
- Monitor and refine contexts
- Add automation workflows
- Expand knowledge base as needed
Example: Complete Search Assistant
' search-assistant.bas
' A Perplexity-like search experience with General Bots
' Load knowledge bases
USE KB "company-docs"
USE KB "product-info"
USE KB "industry-knowledge"
' Configure AI behavior
SET CONTEXT "You are an intelligent search assistant.
Provide accurate, well-sourced answers.
When citing information, mention the source document.
If you're uncertain, acknowledge the limitations.
Be concise but comprehensive."
' Main conversation loop
TALK "Hello! I can search our knowledge base and help answer your questions. What would you like to know?"
LOOP
HEAR query
IF LOWER(query) = "exit" OR LOWER(query) = "quit" THEN
TALK "Goodbye!"
EXIT LOOP
END IF
' Generate response with sources
answer = LLM query
TALK answer
' Offer follow-up
TALK "Would you like to know more about any aspect of this?"
LOOP
Best Practices
Curate your knowledge base. Quality documents produce better answers than random web search.
Use specific contexts. Tailor the AI’s behavior for different use cases rather than using generic settings.
Iterate on prompts. Refine your SET CONTEXT instructions based on the quality of responses.
Combine sources strategically. Mix internal documents with curated external sources for comprehensive coverage.
Add automation. Go beyond Q&A—let your assistant take actions, create tasks, and integrate with workflows.
See Also
- Knowledge Base - Building effective KBs
- USE KB - Knowledge base keyword
- SET CONTEXT - AI configuration
- Platform Comparison - Full feature comparison
Zapier and Make Migration Guide
Migrating workflows from Zapier or Make (formerly Integromat) to General Bots.
Overview
Zapier and Make are visual automation platforms connecting apps through triggers and actions. General Bots provides equivalent automation through BASIC scripting, offering more power and flexibility without per-task pricing.
Why Migrate
| Aspect | Zapier/Make | General Bots |
|---|---|---|
| Pricing | Per-task/operation | Unlimited executions |
| Automation | Visual workflows | BASIC scripts |
| AI Integration | Via paid apps | Native LLM keywords |
| Chat/Bot | Not included | Multi-channel |
| Productivity Suite | Not included | Email, calendar, files |
| Custom Logic | Limited | Full programming |
| Self-hosting | Not available | Full control |
| API Creation | Not available | Instant webhooks |
Cost Comparison
Zapier Pricing
| Plan | Tasks/Month | Cost |
|---|---|---|
| Free | 100 | $0 |
| Starter | 750 | $19.99 |
| Professional | 2,000 | $49 |
| Team | 50,000 | $69/user |
| Company | 100,000+ | Custom |
Make Pricing
| Plan | Operations/Month | Cost |
|---|---|---|
| Free | 1,000 | $0 |
| Core | 10,000 | $9 |
| Pro | 10,000 | $16 |
| Teams | 10,000 | $29/user |
| Enterprise | Custom | Custom |
General Bots
| Plan | Operations | Cost |
|---|---|---|
| Self-hosted | Unlimited | Infrastructure only |
Trigger Mapping
| Zapier/Make Trigger | General Bots Equivalent |
|---|---|
| Schedule | SET SCHEDULE |
| Webhook | WEBHOOK |
| New Email | ON "email:received" |
| New Row (Sheets) | ON "table:name:insert" |
| Form Submission | ON FORM SUBMIT |
| New File | ON "file:created" |
| RSS Feed | Scheduled GET |
| App-specific | API polling or webhooks |
Action Mapping
| Zapier/Make Action | General Bots Equivalent |
|---|---|
| Send Email | SEND MAIL |
| HTTP Request | GET, POST, PUT, DELETE |
| Create Row | INSERT |
| Update Row | UPDATE |
| Filter | IF/THEN/ELSE |
| Formatter | String/date functions |
| Delay | WAIT |
| Paths | IF branches |
| Loop | FOR EACH |
| Code (JS/Python) | BASIC script |
| Slack Message | POST to Slack webhook |
| Create Task | CREATE TASK |
| Send SMS | SMS integration |
Migration Examples
Simple Zap: Form to Email
Zapier:
Typeform → Gmail (Send Email)
General Bots:
ON FORM SUBMIT "contact-form"
name = fields.name
email = fields.email
message = fields.message
SEND MAIL TO "support@company.com" SUBJECT "New Contact: " + name BODY "From: " + email + "\n\nMessage:\n" + message
END ON
Multi-Step Zap: Lead Processing
Zapier:
Webhook → Filter → Clearbit Enrich → Salesforce (Create Lead) → Slack (Send Message)
General Bots:
WEBHOOK "new-lead"
lead = body
' Filter
IF lead.email = "" OR NOT CONTAINS(lead.email, "@") THEN
RETURN #{status: "invalid", reason: "Invalid email"}
END IF
' Enrich
SET HEADER "Authorization", "Bearer " + GET CONFIG "clearbit-key"
enriched = GET "https://person.clearbit.com/v2/people/find?email=" + lead.email
' Create in CRM
WITH salesforce_lead
.Email = lead.email
.FirstName = enriched.name.givenName
.LastName = enriched.name.familyName
.Company = enriched.employment.name
.Title = enriched.employment.title
END WITH
SET HEADER "Authorization", "Bearer " + GET CONFIG "salesforce-token"
result = POST "https://yourinstance.salesforce.com/services/data/v52.0/sobjects/Lead", salesforce_lead
' Notify Slack
POST GET CONFIG "slack-webhook", #{
text: "New lead: " + lead.email + " from " + enriched.employment.name
}
RETURN #{status: "success", salesforce_id: result.id}
Scheduled Sync
Make Scenario:
Schedule → HTTP Request → Iterator → Google Sheets (Add Row)
General Bots:
SET SCHEDULE "every hour"
data = GET "https://api.example.com/new-orders"
FOR EACH order IN data.orders
INSERT "orders", #{
order_id: order.id,
customer: order.customer_name,
total: order.total,
status: order.status,
synced_at: NOW()
}
NEXT order
TALK "Synced " + LEN(data.orders) + " orders"
Error Handling
Zapier: Error handling path or retry
General Bots:
SET SCHEDULE "every 5 minutes"
TRY
result = POST "https://api.example.com/sync", data
IF result.status <> 200 THEN
THROW "API returned " + result.status
END IF
CATCH
' Log error
INSERT "error_log", #{
error: ERROR_MESSAGE,
timestamp: NOW(),
data: data
}
' Alert
SEND MAIL TO "ops@company.com" SUBJECT "Sync Error" BODY ERROR_MESSAGE
POST GET CONFIG "slack-alerts", #{text: "Sync failed: " + ERROR_MESSAGE}
END TRY
Conditional Paths
Zapier Paths:
Trigger → Path A (if condition) → Actions
→ Path B (else) → Actions
General Bots:
WEBHOOK "order-status"
order = body
IF order.total > 1000 THEN
' High-value order path
SEND MAIL TO "vip-team@company.com" SUBJECT "High-Value Order" BODY order
POST GET CONFIG "slack-vip", #{text: "VIP Order: $" + order.total}
priority = "high"
ELSEIF order.is_rush = true THEN
' Rush order path
SEND MAIL TO "fulfillment@company.com" SUBJECT "RUSH Order" BODY order
priority = "rush"
ELSE
' Standard order path
priority = "normal"
END IF
INSERT "orders", #{
id: order.id,
total: order.total,
priority: priority,
created: NOW()
}
Data Transformation
Make/Zapier Formatter:
- Split text
- Format dates
- Math operations
- Lookup tables
General Bots:
' String operations
full_name = first_name + " " + last_name
email_domain = SPLIT(email, "@")[1]
slug = LOWER(REPLACE(title, " ", "-"))
' Date formatting
formatted_date = FORMAT(created_at, "MMMM d, yyyy")
due_date = DATEADD(NOW(), 7, "day")
days_ago = DATEDIFF("day", created_at, NOW())
' Math
subtotal = price * quantity
tax = subtotal * 0.08
total = subtotal + tax
discount = IIF(total > 100, total * 0.1, 0)
' Lookup
status_label = SWITCH status
CASE "new" : "New Order"
CASE "processing" : "In Progress"
CASE "shipped" : "On the Way"
CASE "delivered" : "Completed"
DEFAULT : "Unknown"
END SWITCH
App-Specific Migrations
Gmail/Email
Zapier: Gmail trigger/action
General Bots:
' Send email
SEND MAIL TO recipient SUBJECT subject BODY body
' With attachments
SEND MAIL TO recipient SUBJECT subject BODY body ATTACH "/files/report.pdf"
' Process incoming (via Stalwart webhook)
ON "email:received"
IF CONTAINS(params.subject, "Order") THEN
PROCESS_ORDER(params)
END IF
END ON
Slack
Zapier: Slack app
General Bots:
' Simple message
POST "https://hooks.slack.com/services/xxx", #{text: "Hello!"}
' Rich message
WITH slack_message
.channel = "#general"
.blocks = [
#{type: "header", text: #{type: "plain_text", text: "New Order"}},
#{type: "section", text: #{type: "mrkdwn", text: "*Customer:* " + customer_name}},
#{type: "section", text: #{type: "mrkdwn", text: "*Total:* $" + total}}
]
END WITH
POST GET CONFIG "slack-webhook", slack_message
Google Sheets
Zapier: Google Sheets app
General Bots:
' Read from sheet
SET HEADER "Authorization", "Bearer " + GET CONFIG "google-token"
data = GET "https://sheets.googleapis.com/v4/spreadsheets/{spreadsheetId}/values/Sheet1!A1:D100"
' Append row
POST "https://sheets.googleapis.com/v4/spreadsheets/{spreadsheetId}/values/Sheet1!A1:append?valueInputOption=USER_ENTERED", #{
values: [[name, email, phone, NOW()]]
}
' Or use General Bots tables directly
INSERT "contacts", #{name: name, email: email, phone: phone}
Airtable
Zapier: Airtable app
General Bots:
SET HEADER "Authorization", "Bearer " + GET CONFIG "airtable-key"
' Read records
records = GET "https://api.airtable.com/v0/{baseId}/{tableName}"
' Create record
POST "https://api.airtable.com/v0/{baseId}/{tableName}", #{
fields: #{
Name: name,
Email: email,
Status: "New"
}
}
HubSpot
Zapier: HubSpot app
General Bots:
SET HEADER "Authorization", "Bearer " + GET CONFIG "hubspot-token"
' Create contact
POST "https://api.hubapi.com/crm/v3/objects/contacts", #{
properties: #{
email: email,
firstname: first_name,
lastname: last_name,
company: company
}
}
' Create deal
POST "https://api.hubapi.com/crm/v3/objects/deals", #{
properties: #{
dealname: deal_name,
amount: amount,
pipeline: "default",
dealstage: "appointmentscheduled"
}
}
What You Gain
No Operation Limits
' Process thousands of records without worrying about limits
SET SCHEDULE "every hour"
records = GET "https://api.example.com/all-records"
FOR EACH record IN records
PROCESS_RECORD(record) ' No per-operation cost
NEXT record
Native AI Integration
' AI-powered automation
USE KB "company-docs"
incoming_email = params.body
category = LLM "Categorize this email as: support, sales, billing, or other: " + incoming_email
IF category = "support" THEN
response = LLM "Draft a helpful support response to: " + incoming_email
SEND MAIL TO params.from SUBJECT "Re: " + params.subject BODY response
END IF
Multi-Channel Chat
' Same automation works across channels
TALK "How can I help you?"
HEAR request
USE KB "help-docs"
answer = LLM request
TALK answer
' Available on Web, WhatsApp, Teams, Slack, Telegram, SMS
Built-in Productivity
' No need for separate calendar, task, email apps
CREATE TASK "Follow up with " + customer_name DUE DATEADD(NOW(), 3, "day")
BOOK "Call with " + customer_name AT meeting_time
SEND MAIL TO customer_email SUBJECT "Confirmation" BODY message
Migration Checklist
Pre-Migration
- Export Zap/Scenario descriptions
- Document all triggers and schedules
- List all connected apps and credentials
- Identify critical automations
- Set up General Bots environment
Migration
- Create BASIC scripts for each workflow
- Configure credentials in config.csv
- Set up webhooks with same URLs
- Configure schedules
- Test each automation
Post-Migration
- Run parallel for verification
- Compare execution results
- Monitor for errors
- Disable Zapier/Make automations
- Cancel subscriptions
Best Practices
Start with simple Zaps. Migrate basic workflows first to learn BASIC syntax.
Combine multiple Zaps. Often several Zaps can become one General Bots script.
Use native features. Don’t replicate Zapier patterns—leverage AI, chat, and productivity features.
Add error handling. BASIC provides better error handling than visual builders.
Document your scripts. Add comments explaining what each script does.
' Daily sales report
' Runs at 6 PM on weekdays
' Aggregates daily orders and sends summary to management
SET SCHEDULE "0 18 * * 1-5"
' ... implementation
See Also
- SET SCHEDULE - Scheduling
- WEBHOOK - Webhooks
- HTTP Keywords - API calls
- Platform Comparison - Full comparison
Intercom Migration Guide
Migrating customer messaging and support from Intercom to General Bots.
Overview
Intercom is a customer messaging platform with live chat, chatbots, and help desk features. General Bots provides equivalent capabilities with self-hosting, no per-seat pricing, and native AI integration.
Why Migrate
| Aspect | Intercom | General Bots |
|---|---|---|
| Pricing | $39-139/seat/month | No per-seat fees |
| Hosting | Cloud only | Self-hosted |
| AI Features | Fin AI ($0.99/resolution) | Native LLM (any provider) |
| Channels | Web, email, mobile | Web, WhatsApp, Teams, Slack, SMS, more |
| Automation | Limited workflows | Full BASIC scripting |
| Knowledge Base | Included | Built-in RAG |
| Data Ownership | Their servers | Your infrastructure |
| Customization | Limited | Full source access |
Cost Comparison
Intercom Pricing (per seat/month)
| Plan | Cost | Features |
|---|---|---|
| Essential | $39 | Basic chat, inbox |
| Advanced | $99 | Automation, reporting |
| Expert | $139 | Full platform |
| Fin AI | $0.99/resolution | AI answers |
For a team of 10 support agents on the Advanced or Expert plan, Intercom costs $990-1,390 per month, plus AI resolution fees.
General Bots
| Component | Cost |
|---|---|
| Software | $0 |
| Infrastructure | $50-200/month |
| LLM API (optional) | Usage-based |
The same 10-agent team would spend approximately $100-300 per month total with General Bots.
Feature Mapping
Core Features
| Intercom Feature | General Bots Equivalent |
|---|---|
| Messenger | Web chat widget |
| Inbox | Conversation management |
| Help Center | Knowledge base (.gbkb) |
| Bots | BASIC dialog scripts |
| Product Tours | Guided conversations |
| Outbound Messages | Automated messaging |
Bot Capabilities
| Intercom Bots | General Bots Equivalent |
|---|---|
| Custom Bots | BASIC scripts |
| Resolution Bot | LLM + USE KB |
| Task Bots | Automated workflows |
| Qualification Bots | HEAR AS + lead scoring |
| Article Suggestions | RAG responses |
Migration Process
Step 1: Export Intercom Data
Begin by exporting your data from Intercom. Navigate to Settings, then Data Management, and export conversations, contacts, and articles. Download your Help Center articles separately and export any custom attributes and tags you’ve configured.
Step 2: Migrate Knowledge Base
Convert your Help Center articles to a General Bots knowledge base structure:
my-bot.gbkb/
├── getting-started/
│ ├── quick-start.md
│ └── setup-guide.md
├── features/
│ ├── feature-overview.md
│ └── tutorials.md
├── troubleshooting/
│ ├── common-issues.md
│ └── faq.md
└── billing/
├── plans.md
└── payments.md
Step 3: Create Support Bot
' support-bot.bas
' Main customer support entry point
USE KB "getting-started"
USE KB "features"
USE KB "troubleshooting"
USE KB "billing"
SET CONTEXT "You are a friendly customer support assistant.
- Be helpful and concise
- If you cannot answer, offer to connect with a human
- Always maintain a professional, positive tone"
TALK "Hi! I'm here to help. What can I assist you with today?"
LOOP
HEAR question
' Check for handoff request
IF CONTAINS(LOWER(question), "human") OR CONTAINS(LOWER(question), "agent") OR CONTAINS(LOWER(question), "person") THEN
CALL REQUEST_HUMAN_HANDOFF()
EXIT LOOP
END IF
answer = LLM question
TALK answer
TALK "Is there anything else I can help you with?"
LOOP
Step 4: Implement Human Handoff
SUB REQUEST_HUMAN_HANDOFF()
TALK "I'll connect you with a support agent. Let me gather some information first."
TALK "What's your email address?"
HEAR email AS EMAIL
TALK "Please briefly describe your issue:"
HEAR issue_summary
' Create support ticket
ticket_id = INSERT "support_tickets", #{
customer_email: email,
summary: issue_summary,
conversation_id: session.id,
status: "pending",
created_at: NOW()
}
' Notify support team
SEND MAIL TO "support@company.com" SUBJECT "New Support Request #" + ticket_id BODY "Customer: " + email + "\n\nIssue: " + issue_summary
POST GET CONFIG "slack-support", #{
text: "New support request from " + email + ": " + issue_summary
}
TALK "Thanks! A support agent will reach out to you at " + email + " shortly. Your ticket number is #" + ticket_id
END SUB
Recreating Intercom Features
Messenger Widget
General Bots provides embeddable chat widgets that you can add to your website:
<!-- Embed in your website -->
<script src="https://your-bot-server/widget.js"></script>
<script>
GeneralBots.init({
botId: 'your-bot-id',
position: 'bottom-right',
greeting: 'Hi! How can we help?'
});
</script>
Qualification Bot
Where Intercom uses a qualification workflow, General Bots achieves the same result through BASIC scripts:
' lead-qualification.bas
PARAM source AS string
DESCRIPTION "Qualify incoming leads"
TALK "Welcome! I'd love to learn more about you."
TALK "What's your name?"
HEAR name AS NAME
TALK "And your work email?"
HEAR email AS EMAIL
TALK "What company are you with?"
HEAR company
TALK "What's your role?"
HEAR role AS "Executive", "Manager", "Individual Contributor", "Student", "Other"
TALK "What brings you here today?"
HEAR interest AS "Product Demo", "Pricing", "Support", "Partnership", "Just Exploring"
' Score the lead
WITH lead_data
.name = name
.email = email
.company = company
.role = role
.interest = interest
.source = source
END WITH
score = SCORE LEAD lead_data
' Route based on qualification
IF score.status = "hot" OR interest = "Product Demo" THEN
TALK "Great! Let me schedule a demo for you."
TALK "When works best?"
HEAR preferred_time
SEND MAIL TO "sales@company.com" SUBJECT "Hot Lead - Demo Request" BODY lead_data
CREATE TASK "Demo call with " + name DUE DATEADD(NOW(), 1, "day")
TALK "Our team will reach out within 24 hours to confirm your demo!"
ELSEIF interest = "Pricing" THEN
USE KB "pricing"
pricing_info = LLM "Provide a brief pricing overview"
TALK pricing_info
TALK "Would you like to speak with someone about your specific needs?"
ELSE
USE KB "getting-started"
TALK "Here's what you can do to get started..."
answer = LLM "Give a brief getting started guide"
TALK answer
END IF
INSERT "leads", lead_data
Proactive Messages
Intercom’s outbound messages translate to scheduled BASIC scripts in General Bots:
' proactive-engagement.bas
SET SCHEDULE "every hour"
' Find users who might need help
inactive_sessions = FIND "sessions", "last_activity < DATEADD(NOW(), -5, 'minute') AND page_views > 3 AND not contacted"
FOR EACH session IN inactive_sessions
' Send proactive message
SEND TO session.id MESSAGE "Need any help? I'm here if you have questions!"
UPDATE "sessions", "id = '" + session.id + "'", #{contacted: true}
NEXT session
Resolution Bot (AI Answers)
While Intercom’s Fin charges $0.99 per resolution, General Bots provides the same capability at no additional cost:
' ai-resolution.bas
USE KB "help-center"
USE KB "product-docs"
USE KB "faq"
SET CONTEXT "You are a helpful support assistant. Answer questions accurately based on the knowledge base. If you're not confident in the answer, say so and offer to connect with a human."
TALK "How can I help you today?"
HEAR question
answer = LLM question
' Check confidence (you can implement confidence scoring)
IF CONTAINS(answer, "I'm not sure") OR CONTAINS(answer, "I don't have") THEN
TALK answer
TALK "Would you like me to connect you with a support agent?"
HEAR wants_human AS BOOLEAN
IF wants_human THEN
CALL REQUEST_HUMAN_HANDOFF()
END IF
ELSE
TALK answer
' Track resolution
INSERT "resolutions", #{
question: question,
answer: answer,
resolved: true,
timestamp: NOW()
}
END IF
Customer Segments
Intercom’s user segments become database queries and scheduled scripts in General Bots:
' segment-customers.bas
SET SCHEDULE "every day at 6am"
customers = FIND "customers", "1=1"
FOR EACH customer IN customers
segment = "standard"
IF customer.total_spent > 10000 THEN
segment = "enterprise"
ELSEIF customer.total_spent > 1000 THEN
segment = "premium"
ELSEIF customer.signup_date > DATEADD(NOW(), -30, "day") THEN
segment = "new"
ELSEIF customer.last_activity < DATEADD(NOW(), -90, "day") THEN
segment = "at-risk"
END IF
UPDATE "customers", "id = '" + customer.id + "'", #{segment: segment}
NEXT customer
Targeted Campaigns
' win-back-campaign.bas
SET SCHEDULE "every monday at 10am"
' Find at-risk customers
at_risk = FIND "customers", "segment = 'at-risk' AND not win_back_sent"
FOR EACH customer IN at_risk
USE KB "product-updates"
personalized_message = LLM "Write a brief, friendly win-back message for " + customer.name + " who hasn't used our product in 3 months. Mention recent improvements."
SEND MAIL TO customer.email SUBJECT "We miss you, " + customer.name + "!" BODY personalized_message
UPDATE "customers", "id = '" + customer.id + "'", #{win_back_sent: true, win_back_date: NOW()}
NEXT customer
Multi-Channel Support
Intercom Channels
Intercom supports Web Messenger, Mobile SDK, Email, and SMS as an add-on.
General Bots Channels
All channels use the same BASIC scripts, making development and maintenance simpler:
' Same bot works everywhere
USE KB "support"
TALK "How can I help?"
HEAR question
answer = LLM question
TALK answer
' Channel-specific handling if needed
IF channel = "whatsapp" THEN
' WhatsApp-specific features
ELSEIF channel = "email" THEN
' Email formatting
END IF
General Bots supports web chat, WhatsApp Business, Teams, Slack, Telegram, SMS, Email, and voice through LiveKit.
Reporting and Analytics
Conversation Metrics
' daily-metrics.bas
SET SCHEDULE "every day at 11pm"
today = FORMAT(NOW(), "yyyy-MM-dd")
conversations = AGGREGATE "conversations", "COUNT", "id", "DATE(created_at) = '" + today + "'"
resolutions = AGGREGATE "resolutions", "COUNT", "id", "DATE(timestamp) = '" + today + "' AND resolved = true"
avg_response_time = AGGREGATE "conversations", "AVG", "first_response_seconds", "DATE(created_at) = '" + today + "'"
WITH daily_report
.date = today
.total_conversations = conversations
.ai_resolutions = resolutions
.resolution_rate = ROUND(resolutions / conversations * 100, 1)
.avg_response_time = ROUND(avg_response_time / 60, 1)
END WITH
INSERT "daily_metrics", daily_report
SEND MAIL TO "support-lead@company.com" SUBJECT "Daily Support Metrics - " + today BODY daily_report
Migration Checklist
Pre-Migration
Before beginning the migration, export all Intercom data including conversations, contacts, and articles. Document your custom bot workflows so you can recreate them in BASIC. List all integrations that connect to Intercom. Note any custom attributes and tags you use. Set up your General Bots environment with the necessary infrastructure.
Migration
During the migration phase, convert your Help Center content to the .gbkb structure. Create support bot scripts that replicate your Intercom workflows. Implement the human handoff flow for seamless escalation. Set up notification channels for your support team. Configure the chat widget for your website. Import customer data from your Intercom export.
Post-Migration
After migration, test all conversation flows to ensure they work correctly. Verify knowledge base accuracy by asking common questions. Train your support team on the new interface. Run parallel support briefly by keeping both systems active. Once validated, redirect the widget embed code to General Bots and cancel your Intercom subscription.
What You Gain
Migrating to General Bots provides several significant advantages. There is no per-seat pricing, so you can add unlimited agents without increasing costs. Native AI comes without per-resolution fees since you can use any LLM provider. Full customization is possible because you have complete source access to modify any aspect of the system. Data ownership means all conversations stay on your infrastructure. Automation power lets you go beyond simple workflows with full BASIC scripting. Multi-channel support is native, meaning the same bot works across all channels without add-ons.
See Also
- Projects - Organizing support queues
- HEAR Validation - Input validation
- Lead Scoring - Qualification
- Platform Comparison - Full feature comparison
Dialogflow Migration Guide
Migrating chatbots and conversational agents from Dialogflow to General Bots.
Overview
Dialogflow is Google’s conversational AI platform for building chatbots with intent-based NLU. General Bots provides a simpler, more powerful approach using LLM-based understanding and BASIC scripting—without cloud lock-in or complex intent management.
Why Migrate
| Aspect | Dialogflow | General Bots |
|---|---|---|
| Hosting | Google Cloud only | Self-hosted |
| Pricing | Per-request fees | No per-request costs |
| NLU Approach | Intent + entity training | LLM-based (zero training) |
| Fulfillment | Cloud Functions/webhooks | Native BASIC scripts |
| Knowledge Base | Limited connector | Full RAG system |
| Channels | Via integrations | Native multi-channel |
| Customization | Limited | Full source access |
| Maintenance | Intent training required | LLM handles variations |
Cost Comparison
Dialogflow Pricing
| Edition | Cost |
|---|---|
| ES (Standard) | Free tier + $0.002/request |
| CX | $0.007/request |
| Mega Agent | $0.06/request |
10,000 requests/month: $20-600/month
General Bots
| Component | Cost |
|---|---|
| Software | $0 |
| Infrastructure | $50-200/month |
| LLM API | Usage-based (typically lower) |
Architecture Comparison
Dialogflow Architecture
User → Dialogflow Agent → Intent Matching → Fulfillment Webhook → Response
↓
Entity Extraction
↓
Context Management
General Bots Architecture
User → BASIC Script → LLM Processing → Response
↓
Knowledge Base (RAG)
↓
Direct Actions (DB, API, etc.)
Concept Mapping
Intents to BASIC
| Dialogflow Concept | General Bots Equivalent |
|---|---|
| Intent | LLM understanding + conditions |
| Training Phrases | Not needed (LLM handles) |
| Entity | HEAR AS <type> |
| Context | SET CONTEXT / SET BOT MEMORY |
| Fulfillment | Direct BASIC code |
| Follow-up Intent | Conversation flow |
| Event | ON triggers |
| Knowledge Connector | USE KB |
Entity Types
| Dialogflow Entity | General Bots HEAR AS |
|---|---|
| @sys.date | HEAR AS DATE |
| @sys.time | HEAR AS HOUR |
| @sys.number | HEAR AS INTEGER / FLOAT |
| @sys.email | HEAR AS EMAIL |
| @sys.phone-number | HEAR AS MOBILE |
| @sys.currency-name | HEAR AS MONEY |
| @sys.person | HEAR AS NAME |
| Custom entity | Menu options or LLM extraction |
Migration Examples
Simple FAQ Bot
Dialogflow:
- Intent: “hours” with training phrases
- Response: “We’re open 9 AM to 5 PM”
General Bots:
USE KB "company-info"
SET CONTEXT "You are a helpful assistant for Acme Corp. Answer questions about our business."
TALK "Hi! How can I help you today?"
HEAR question
answer = LLM question
TALK answer
The LLM understands “hours”, “when are you open”, “opening times”, etc. without explicit training.
Order Status Bot
Dialogflow:
Intent: order.status
Training phrases: "where is my order", "track order", "order status"
Entity: @order_number
Fulfillment: Webhook to order API
General Bots:
' order-status.bas
SET CONTEXT "You help customers check their order status."
TALK "I can help you track your order. What's your order number?"
HEAR order_number
' Direct API call - no webhook needed
SET HEADER "Authorization", "Bearer " + GET CONFIG "orders-api-key"
order = GET "https://api.company.com/orders/" + order_number
IF order.error THEN
    TALK "I couldn't find that order. Please check the number and try again."
ELSE
    TALK "Your order #" + order_number + " is " + order.status + "."
    IF order.status = "shipped" THEN
        TALK "Tracking number: " + order.tracking
        TALK "Expected delivery: " + FORMAT(order.delivery_date, "MMMM d")
    END IF
END IF
TALK "Is there anything else I can help with?"
Appointment Booking
Dialogflow:
Intent: book.appointment
Entities: @sys.date, @sys.time, @service_type
Slot filling for required parameters
Fulfillment: Calendar API webhook
General Bots:
' appointment-booking.bas
SET CONTEXT "You help customers book appointments."
TALK "I'd be happy to help you book an appointment."
TALK "What type of service do you need?"
HEAR service AS "Consultation", "Follow-up", "New Patient", "Urgent Care"
TALK "What date works for you?"
HEAR appointment_date AS DATE
TALK "And what time?"
HEAR appointment_time AS HOUR
' Check availability
available = GET "https://api.calendar.com/check?date=" + appointment_date + "&time=" + appointment_time
IF available.open THEN
    ' Book directly
    BOOK service + " Appointment" AT appointment_date + " " + appointment_time
    TALK "Perfect! Your " + service + " appointment is confirmed for " + FORMAT(appointment_date, "MMMM d") + " at " + appointment_time
    ' Send confirmation
    TALK "What email should I send the confirmation to?"
    HEAR email AS EMAIL
    SEND MAIL TO email SUBJECT "Appointment Confirmation" BODY "Your " + service + " is scheduled for " + appointment_date
ELSE
    TALK "That time isn't available. How about " + available.next_slot + "?"
    HEAR confirm AS BOOLEAN
    ' ... continue flow
END IF
Multi-Turn Conversation
Dialogflow:
- Follow-up intents
- Context management
- Lifespan settings
General Bots:
' pizza-order.bas
SET CONTEXT "You help customers order pizza."
TALK "Welcome to Pizza Bot! What would you like to order?"
' Size
TALK "What size pizza?"
HEAR size AS "Small", "Medium", "Large", "Extra Large"
' Type
TALK "What type would you like?"
HEAR pizza_type AS "Pepperoni", "Margherita", "Supreme", "Hawaiian", "Custom"
IF pizza_type = "Custom" THEN
    TALK "What toppings would you like? (comma separated)"
    HEAR toppings
END IF
' Confirm
TALK "So that's a " + size + " " + pizza_type + " pizza. Is that correct?"
HEAR confirmed AS BOOLEAN
IF confirmed THEN
    ' Store order
    order_id = INSERT "orders", #{
        size: size,
        type: pizza_type,
        toppings: toppings,
        status: "pending",
        created_at: NOW()
    }
    TALK "Great! Your order #" + order_id + " has been placed."
    TALK "Would you like to add anything else?"
    HEAR add_more AS BOOLEAN
    IF add_more THEN
        ' Continue ordering
    ELSE
        TALK "What's your delivery address?"
        HEAR address
        ' ... complete order
    END IF
ELSE
    TALK "No problem, let's start over."
END IF
Migrating Fulfillment Code
Dialogflow Webhook
// Dialogflow fulfillment
exports.webhook = (req, res) => {
  const intent = req.body.queryResult.intent.displayName;
  const params = req.body.queryResult.parameters;

  if (intent === 'order.status') {
    const orderId = params.order_number;
    // Call API
    fetch(`https://api.example.com/orders/${orderId}`)
      .then(response => response.json())
      .then(order => {
        res.json({
          fulfillmentText: `Your order is ${order.status}`
        });
      });
  }
};
General Bots Equivalent
' The logic is inline - no separate webhook needed
order = GET "https://api.example.com/orders/" + order_id
TALK "Your order is " + order.status
Knowledge Base Migration
Dialogflow Knowledge Connector
Limited to FAQ format, requires Google Cloud.
General Bots Knowledge Base
Full document support with RAG:
my-bot.gbkb/
├── products/
│ ├── catalog.pdf
│ └── specifications.xlsx
├── support/
│ ├── faq.md
│ └── troubleshooting.md
└── policies/
├── returns.pdf
└── warranty.md
USE KB "products"
USE KB "support"
USE KB "policies"
answer = LLM customer_question
Context Migration
Dialogflow Contexts
// Setting context in fulfillment
outputContexts: [{
  name: `projects/.../contexts/order-context`,
  lifespanCount: 5,
  parameters: { orderId: '12345' }
}]
General Bots Memory
' Store context
SET BOT MEMORY "current_order_id", order_id
SET BOT MEMORY "customer_name", customer_name
' Retrieve context
order_id = GET BOT MEMORY "current_order_id"
Multi-Channel Deployment
Dialogflow Integrations
Requires separate configuration for each channel:
- Web: Dialogflow Messenger
- Telephony: CCAI
- Other: Custom integrations
General Bots
Same code works everywhere:
' Works on Web, WhatsApp, Teams, Slack, Telegram, SMS
TALK "How can I help?"
HEAR question
USE KB "support"
answer = LLM question
TALK answer
Advanced Features
Small Talk
Dialogflow: Enable small talk prebuilt agent
General Bots: LLM handles naturally
SET CONTEXT "You are a friendly assistant. Engage in casual conversation when appropriate while staying helpful."
' LLM naturally handles:
' - "Hello"
' - "How are you?"
' - "Thanks"
' - "Goodbye"
Sentiment Analysis
Dialogflow: Enable sentiment in settings
General Bots:
HEAR customer_message
sentiment = LLM "Analyze the sentiment of this message and respond with: positive, neutral, or negative. Message: " + customer_message
IF sentiment = "negative" THEN
    SET CONTEXT "The customer seems frustrated. Be extra helpful and empathetic."
    ' Or escalate
    CREATE TASK "Review negative sentiment conversation"
END IF
answer = LLM customer_message
TALK answer
Rich Responses
Dialogflow: Card, suggestion chips, etc.
General Bots:
' Suggestions
ADD SUGGESTION "Check Order"
ADD SUGGESTION "Track Shipment"
ADD SUGGESTION "Contact Support"
TALK "What would you like to do?"
' Images
TALK IMAGE "/products/featured.jpg"
' Files
TALK FILE "/documents/brochure.pdf"
Migration Checklist
Pre-Migration
- Export Dialogflow agent (JSON)
- Document all intents and training phrases
- List entities and their values
- Map fulfillment webhooks
- Identify knowledge connectors
- Note channel integrations
Migration
- Set up General Bots environment
- Create knowledge base from FAQs/docs
- Build BASIC scripts for main flows
- Implement entity validation with HEAR AS
- Convert fulfillment logic to BASIC
- Configure channels
Post-Migration
- Test all conversation flows
- Compare response quality
- Verify API integrations
- Train team on new system
- Redirect channel integrations
- Decommission Dialogflow agent
What You Gain
No Intent Training: LLM understands variations without explicit training phrases.
Simpler Architecture: Logic lives in BASIC scripts, not spread across intents and webhooks.
Self-Hosted: No Google Cloud dependency or per-request fees.
Native Integrations: Direct API calls and database access without webhook complexity.
Full RAG: Rich knowledge base support beyond simple FAQ.
Multi-Channel Native: Deploy everywhere with one codebase.
See Also
- HEAR Keyword - Input validation (replaces entities)
- SET CONTEXT - AI behavior configuration
- Knowledge Base - RAG setup
- Platform Comparison - Full feature comparison
Botpress Migration Guide
Migrating chatbots from Botpress to General Bots.
Overview
Botpress is an open-source chatbot platform with visual flow builder and NLU. General Bots provides a simpler approach using LLM-based understanding and BASIC scripting, with integrated productivity features and native multi-channel support.
Why Migrate
| Aspect | Botpress | General Bots |
|---|---|---|
| NLU Approach | Intent training required | LLM-based (no training) |
| Flow Building | Visual + code | BASIC scripts |
| Self-hosting | Available | Available |
| AI Integration | Via hooks | Native LLM keywords |
| Knowledge Base | Limited | Full RAG system |
| Productivity Suite | Not included | Email, calendar, files, tasks |
| Multi-channel | Via connectors | Native support |
| Learning Curve | Moderate | Simple BASIC |
Concept Mapping
| Botpress Concept | General Bots Equivalent |
|---|---|
| Flows | BASIC scripts |
| Nodes | BASIC statements |
| Intents | LLM understanding |
| Entities | HEAR AS <type> |
| Slots | Variables |
| Actions | BASIC keywords |
| Hooks | ON triggers |
| Content Types | TALK variations |
| Knowledge Base | .gbkb folders |
| Channels | Native multi-channel |
Flow Migration
Botpress Flow Structure
# Botpress flow (simplified)
nodes:
  - id: entry
    type: standard
    next: ask_name
  - id: ask_name
    type: say_something
    content: "What's your name?"
    next: capture_name
  - id: capture_name
    type: listen
    slot: name
    next: greet
  - id: greet
    type: say_something
    content: "Hello {{name}}!"
General Bots Equivalent
' Simple and readable
TALK "What's your name?"
HEAR name AS NAME
TALK "Hello " + name + "!"
Migration Examples
Simple Welcome Flow
Botpress:
- Entry node → Say “Welcome” → Listen for intent → Route to sub-flow
General Bots:
USE KB "help-docs"
SET CONTEXT "You are a friendly assistant for Acme Corp."
TALK "Welcome! How can I help you today?"
HEAR question
answer = LLM question
TALK answer
Lead Capture Flow
Botpress:
Entry → Ask Name → Capture Slot → Ask Email → Capture Slot →
Ask Company → Capture Slot → Save to CRM → Thank You
General Bots:
' lead-capture.bas
TALK "I'd love to learn more about you!"
TALK "What's your name?"
HEAR name AS NAME
TALK "And your work email?"
HEAR email AS EMAIL
TALK "What company are you with?"
HEAR company
' Save directly - no external action needed
INSERT "leads", #{
name: name,
email: email,
company: company,
source: "chatbot",
created_at: NOW()
}
' Score the lead
score = SCORE LEAD #{name: name, email: email, company: company}
IF score.status = "hot" THEN
SEND MAIL TO "sales@company.com" SUBJECT "Hot Lead" BODY "New lead: " + name + " from " + company
END IF
TALK "Thanks, " + name + "! Someone from our team will be in touch soon."
FAQ Bot with Fallback
Botpress:
- NLU intent matching
- Knowledge base query
- Fallback to human
General Bots:
USE KB "faq"
USE KB "product-docs"
SET CONTEXT "Answer customer questions helpfully. If you cannot answer confidently, offer to connect with a human."
TALK "What can I help you with?"
HEAR question
answer = LLM question
' Check if confident answer
IF CONTAINS(LOWER(answer), "i don't") OR CONTAINS(LOWER(answer), "not sure") THEN
    TALK "I'm not certain about that. Would you like to speak with someone?"
    HEAR wants_human AS BOOLEAN
    IF wants_human THEN
        CREATE TASK "Customer inquiry: " + question
        SEND MAIL TO "support@company.com" SUBJECT "Chat Handoff" BODY question
        TALK "I've notified our team. Someone will reach out shortly."
    END IF
ELSE
    TALK answer
END IF
Multi-Step Booking Flow
Botpress:
Select Service → Choose Date → Choose Time → Confirm → Book
(Multiple nodes with slot filling)
General Bots:
TALK "Let's book your appointment."
TALK "What service do you need?"
HEAR service AS "Consultation", "Checkup", "Follow-up", "Emergency"
TALK "What date works for you?"
HEAR appt_date AS DATE
TALK "What time?"
HEAR appt_time AS HOUR
' Check availability
available = GET "https://calendar.api/available?date=" + appt_date + "&time=" + appt_time
IF available THEN
    BOOK service AT appt_date + " " + appt_time
    TALK "Your " + service + " is confirmed for " + FORMAT(appt_date, "MMMM d") + " at " + appt_time
ELSE
    TALK "That slot isn't available. Would " + available.next + " work instead?"
END IF
NLU Migration
Botpress Intents
# Botpress intent definition
intents:
  - name: order_status
    utterances:
      - where is my order
      - track my order
      - order status
      - what happened to my order
General Bots Approach
No intent definition needed. The LLM understands naturally:
USE KB "order-help"
SET CONTEXT "Help customers with their orders."
TALK "How can I help with your order?"
HEAR question
' LLM understands "where is my order", "track order", etc.
' without explicit training
answer = LLM question
Entity Extraction
Botpress:
entities:
  - name: order_number
    type: pattern
    pattern: "ORD-[0-9]{6}"
General Bots:
TALK "What's your order number?"
HEAR order_number
' Or with validation pattern
IF NOT MATCH(order_number, "ORD-[0-9]{6}") THEN
    TALK "Please enter a valid order number (e.g., ORD-123456)"
    HEAR order_number
END IF
Actions Migration
Botpress Custom Actions
// Botpress action
const checkOrderStatus = async (orderId) => {
const response = await axios.get(`/api/orders/${orderId}`);
return response.data.status;
};
General Bots
' Direct API call - no separate action file
order = GET "https://api.company.com/orders/" + order_id
TALK "Your order status is: " + order.status
Hooks Migration
Botpress Hooks
// before_incoming_middleware hook
bp.events.on('before_incoming_middleware', async (event) => {
  // Custom logic
});
General Bots Triggers
' Event-driven triggers
ON "message:received"
' Log all messages
INSERT "message_log", #{
content: params.content,
user: params.user_id,
timestamp: NOW()
}
END ON
ON "session:started"
' Track new sessions
INSERT "sessions", #{
id: params.session_id,
started: NOW()
}
END ON
Content Types
Botpress Content
// Botpress content types
{
type: 'builtin_card',
title: 'Product',
image: 'product.jpg',
actions: [{ title: 'Buy', action: 'buy' }]
}
General Bots
' Text
TALK "Hello!"
' Image
TALK IMAGE "/products/featured.jpg"
' File
TALK FILE "/docs/brochure.pdf"
' Suggestions
ADD SUGGESTION "View Products"
ADD SUGGESTION "Contact Sales"
ADD SUGGESTION "Get Help"
TALK "What would you like to do?"
Knowledge Base Migration
Botpress Q&A
Limited to question-answer pairs.
General Bots RAG
Full document support:
my-bot.gbkb/
├── products/
│ ├── catalog.pdf
│ ├── specs.xlsx
│ └── pricing.md
├── support/
│ ├── faq.md
│ └── troubleshooting.md
└── company/
├── about.md
└── policies.pdf
USE KB "products"
USE KB "support"
USE KB "company"
answer = LLM customer_question
Channel Migration
Botpress Channels
Requires separate connector configuration for each channel.
General Bots
Native multi-channel with same code:
' Works everywhere: Web, WhatsApp, Teams, Slack, Telegram, SMS
TALK "How can I help?"
HEAR question
answer = LLM question
TALK answer
Database and State
Botpress State
// Botpress user state
event.state.user.name = 'John';
event.state.session.orderId = '12345';
General Bots
' Session/conversation memory
SET BOT MEMORY "customer_name", name
SET BOT MEMORY "current_order", order_id
' Retrieve
name = GET BOT MEMORY "customer_name"
' Persistent storage
INSERT "customers", #{name: name, email: email}
customer = FIND "customers", "email = '" + email + "'"
What You Gain
Simpler Development: BASIC scripts are more readable than visual flows with scattered code.
No NLU Training: LLM understands variations without explicit intent training.
Native AI: Full LLM integration without plugins.
Productivity Suite: Built-in email, calendar, files, and tasks.
Unified Platform: Chat, automation, and productivity in one system.
True Multi-Channel: Same code works everywhere without channel-specific configuration.
Migration Checklist
Pre-Migration
- Export Botpress flows and content
- Document intents and entities
- List custom actions
- Export Q&A/knowledge base
- Note channel configurations
Migration
- Set up General Bots environment
- Create BASIC scripts for main flows
- Build knowledge base structure
- Implement entity validation
- Configure channels
- Test all flows
Post-Migration
- Compare conversation quality
- Verify integrations
- Train team
- Redirect channel endpoints
- Decommission Botpress
See Also
- Dialog Basics - Script fundamentals
- HEAR Keyword - Input validation
- Knowledge Base - RAG configuration
- Platform Comparison - Full comparison
Automation Scripts
High-level approaches to automate migration from cloud services to self-hosted infrastructure.
Overview
Migration automation focuses on using existing tools and scripts to move data from cloud providers to your self-hosted stack.
Common Tools
File Transfer
- rclone: Universal cloud storage migration tool (see the sketch below)
- rsync: Traditional file synchronization
- wget/curl: API-based downloads
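For example, a typical rclone run looks like the sketch below; the remote names (`gdrive`, `minio`) and bucket paths are placeholders you would first define with `rclone config`:

```bash
# Preview what would transfer before committing
rclone copy gdrive:shared-docs minio:migrated-docs --dry-run

# Copy for real, then verify that source and destination match
rclone copy gdrive:shared-docs minio:migrated-docs --progress
rclone check gdrive:shared-docs minio:migrated-docs
```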
Email Migration
- imapsync: IMAP to IMAP migration (example below)
- offlineimap: Email backup and sync
- getmail: POP3/IMAP retrieval
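A minimal imapsync invocation migrates one mailbox at a time; the hosts, usernames, and passwords below are illustrative placeholders:

```bash
# Copy all folders and messages from the source IMAP account
# to the self-hosted destination account
imapsync \
  --host1 imap.old-provider.com --ssl1 \
  --user1 alice@example.com --password1 'source-password' \
  --host2 mail.selfhosted.example --ssl2 \
  --user2 alice --password2 'destination-password'
```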
Directory Services
- ldapsearch/ldapadd: LDAP export/import (example below)
- csvde: Active Directory CSV export
- PowerShell: AD automation scripts
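A hedged export/import sketch with ldapsearch and ldapadd; the hosts and base DNs are illustrative and must match your directory layout:

```bash
# Export people entries from the source directory to LDIF
ldapsearch -x -H ldap://old-directory.example.com \
  -D "cn=admin,dc=example,dc=com" -W \
  -b "ou=people,dc=example,dc=com" "(objectClass=inetOrgPerson)" > users.ldif

# Import the LDIF into the new directory
ldapadd -x -H ldap://new-directory.example.com \
  -D "cn=admin,dc=example,dc=com" -W -f users.ldif
```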
Migration Approach
1. Assessment
- List what needs migration
- Estimate data volumes
- Identify dependencies
2. Tool Selection
- Match tools to data types
- Consider API availability
- Evaluate bandwidth needs
3. Execution
- Start with test data
- Run in batches
- Monitor progress
4. Validation
- Compare source and destination
- Check data integrity
- Test functionality
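A quick source-versus-destination check along these lines compares file counts and spot-checks checksums (the mount paths are illustrative):

```bash
# File counts should match between source and destination
find /mnt/source -type f | wc -l
find /mnt/destination -type f | wc -l

# Spot-check a few important files by checksum
sha256sum /mnt/source/reports/q4.pdf /mnt/destination/reports/q4.pdf
```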
General Principles
- Start small, scale up
- Keep source data intact
- Document the process
- Have a rollback plan
Next Steps
- Common Concepts - Shared migration patterns
- Validation - Testing migrated systems
Validation
Post-migration testing and verification procedures.
Overview
Validation ensures that migrated systems and data are functioning correctly in the new self-hosted environment. A thorough validation process catches issues early and builds confidence in the new platform before full cutover.
Key Areas to Validate
User Access
The first priority is confirming that users can authenticate successfully. Verify that login works with the correct credentials and that permissions are properly assigned based on user roles. If single sign-on was configured, test the SSO flow to ensure tokens are being issued and validated correctly.
Data Integrity
Data integrity validation confirms that all files transferred completely and accurately. Compare file counts between source and destination systems, verify that file sizes match the originals, and check that timestamps were preserved during migration. Spot-check important documents by opening them to confirm content integrity.
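An rsync dry run is one quick way to surface files whose size or timestamp differs between the two sides (paths are illustrative):

```bash
# -a preserves attributes, -v lists files, -n makes it a dry run;
# any listed files indicate differences to investigate
rsync -avn /mnt/source/ /mnt/destination/ | head -40
```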
Email Functionality
Email validation requires testing both sending and receiving capabilities. Confirm that folder structures and existing messages transferred intact. Verify that email aliases and distribution lists function as expected, and test that mail routing delivers messages to the correct destinations.
Document Search
Search functionality depends on proper indexing of migrated content. Verify that searches return expected results for known documents. Confirm that all documents are accessible through search results, and check that indexing has completed for the full document corpus.
Testing Approach
Smoke Testing
Smoke testing provides quick verification of basic functionality before deeper testing begins. Run a login test to confirm authentication works, send a test email to verify mail flow, search for a known document to test the search index, and access several key files to confirm storage connectivity.
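A scripted smoke test might look like the following sketch; both endpoints are hypothetical and should be adjusted to your actual deployment:

```bash
# -f makes curl fail fast on non-2xx responses
curl -fsS https://chat.example.com/health

# Query the search index for a known document (hypothetical API path)
curl -fsS "https://chat.example.com/api/search?q=vacation+policy" | head
```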
User Acceptance Testing
User acceptance testing has actual users verify the system meets their needs. Users should confirm their data is present and accessible, verify that their daily workflows still function correctly, and assess whether performance is acceptable for their tasks.
Load Testing
Load testing validates system behavior under realistic usage conditions. Test concurrent user access to identify bottlenecks, transfer large files to verify storage performance, and run search queries under load to ensure the search infrastructure scales appropriately.
Common Issues
Authentication Problems
Authentication failures typically stem from incorrect credentials, certificate validation issues, or domain configuration problems. Check that usernames and passwords were migrated correctly, verify SSL certificates are valid and trusted, and confirm DNS records point to the correct servers.
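Two quick checks cover the DNS and certificate causes mentioned above (the hostname is illustrative):

```bash
# Confirm DNS resolves to the new server
dig +short chat.example.com

# Inspect the served certificate's subject and validity window
openssl s_client -connect chat.example.com:443 -servername chat.example.com \
  </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
```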
Missing Data
Missing data usually results from incomplete transfers, permission errors during migration, or format incompatibilities between systems. Re-run transfer jobs for missing items, check source system permissions, and verify file format support in the destination system.
Performance Issues
Performance problems often manifest as slow searches, network bottlenecks, or resource constraints. Review search index configuration, check network bandwidth between components, and monitor CPU, memory, and disk usage to identify resource limitations.
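Standard host-level checks help separate resource constraints from application-level issues:

```bash
top -bn1 | head -15   # CPU and memory snapshot
df -h                 # disk usage per filesystem
iostat -x 5 3         # disk I/O utilization (from the sysstat package)
```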
Validation Checklist
Before declaring migration complete, confirm that all users can authenticate successfully, email send and receive functionality works correctly, files are accessible with proper permissions, search returns accurate results, backup jobs are running successfully, and monitoring systems are actively tracking the new environment.
Next Steps
Once validation completes successfully, proceed to user communication and training. Review the migration overview for next steps, and consult the common concepts guide for ongoing maintenance procedures.
Testing
General Bots uses a comprehensive testing framework including unit tests, integration tests, and end-to-end (E2E) tests to ensure platform reliability and quality.
Overview
The testing strategy covers:
- Unit Tests - Individual component testing
- Integration Tests - Service interaction testing
- E2E Tests - Complete user journey validation
Test Structure
All tests are organized in the bottest package:
bottest/
├── src/ # Test utilities and harness
├── tests/
│ ├── unit/ # Unit tests
│ ├── integration/ # Integration tests
│ └── e2e/ # End-to-end tests
├── benches/ # Performance benchmarks
└── Cargo.toml
Running Tests
All Tests
cd gb/bottest
cargo test
Specific Test Types
# Unit tests
cargo test --lib
# Integration tests
cargo test --test integration
# E2E tests
cargo test --test e2e -- --nocapture
Test Harness
The test harness provides utilities for setting up test environments:
use bottest::prelude::*;

#[tokio::test]
async fn my_test() {
    let ctx = TestHarness::full().await.unwrap();
    // Test code here
    ctx.cleanup().await.unwrap();
}
Continuous Integration
Tests run automatically on:
- Pull requests
- Commits to main branch
- Pre-release checks
See the repository’s CI/CD configuration for details.
Next Steps
- End-to-End Testing - Browser automation and user flow testing
- Performance Testing - Benchmarking and profiling
- Test Architecture - Design patterns and best practices
End-to-End Testing
End-to-end (E2E) testing validates complete user workflows from platform loading through authentication, interaction, and logout.
Overview
E2E tests simulate real user interactions:
- Platform Loading - UI and API infrastructure operational
- BotServer Initialization - Backend service running and ready
- User Authentication - Login workflow functional
- Chat Interaction - Message sending and receiving
- Logout - Session management and access control
Complete Platform Flow Test
The main E2E test validates the entire user journey:
#[tokio::test]
async fn test_complete_platform_flow_login_chat_logout() {
    // Setup
    let ctx = E2ETestContext::setup_with_browser().await?;
    let browser = ctx.browser.as_ref().unwrap();

    // Phase 1: Platform Loading
    verify_platform_loading(&ctx).await?;

    // Phase 2: BotServer Running
    verify_botserver_running(&ctx).await?;

    // Phase 3: User Login
    test_user_login(browser, &ctx).await?;

    // Phase 4: Chat Interaction
    test_chat_interaction(browser, &ctx).await?;

    // Phase 5: Logout
    test_user_logout(browser, &ctx).await?;

    ctx.close().await;
}
Test Phases
Phase 1: Platform Loading
Verifies UI and API infrastructure:
verify_platform_loading(&ctx).await?;
Checks:
- Health endpoint responds with 2xx status
- API endpoints are accessible
- Database migrations completed
- Services are initialized
Phase 2: BotServer Initialization
Verifies the backend service is operational:
verify_botserver_running(&ctx).await?;
Checks:
- Process is alive and responding
- Configuration properly loaded
- Dependencies connected (DB, cache, storage)
- Health checks pass
Phase 3: User Authentication
Tests the login workflow:
test_user_login(browser, &ctx).await?;
Tests:
- Navigate to login page
- Form elements present and functional
- Accept valid test credentials (test@example.com / TestPassword123!)
- Create session and authentication token
- Redirect to dashboard/chat interface
Phase 4: Chat Interaction
Tests messaging functionality:
test_chat_interaction(browser, &ctx).await?;
Tests:
- Chat interface loads correctly
- User can type and send messages
- Bot responds with valid output
- Message history persists
- Multiple exchanges work correctly
Phase 5: Logout & Session Management
Tests secure session handling:
test_user_logout(browser, &ctx).await?;
Tests:
- Logout button/action works
- Session is invalidated
- User redirected to login page
- Protected routes block unauthenticated access
- Cannot access chat after logout
Running E2E Tests
HTTP-Only Tests (No Browser Required)
These tests verify API and infrastructure without browser automation:
cd gb/bottest
# Platform loading verification
cargo test --test e2e test_platform_loading_http_only -- --nocapture
# BotServer startup verification
cargo test --test e2e test_botserver_startup -- --nocapture
Execution time: ~2-5 seconds
Complete Flow Tests (Requires WebDriver)
Full browser-based tests with user interactions:
# Start WebDriver first
chromedriver --port=4444 &
# Run complete platform flow
cargo test --test e2e test_complete_platform_flow_login_chat_logout -- --nocapture
# Run simplified flow
cargo test --test e2e test_login_and_chat_flow -- --nocapture
Execution time: ~30-60 seconds
WebDriver Setup
Option 1: Local Installation
# Download chromedriver from https://chromedriver.chromium.org/
# Place in PATH, then start:
chromedriver --port=4444
Option 2: Docker
docker run -d -p 4444:4444 selenium/standalone-chrome
Option 3: Docker Compose
docker-compose up -d webdriver
Environment Variables
Control test behavior:
| Variable | Default | Purpose |
|---|---|---|
| HEADED | unset | Show browser window instead of headless |
| WEBDRIVER_URL | http://localhost:4444 | WebDriver server endpoint |
| SKIP_E2E_TESTS | unset | Skip E2E tests if set |
| RUST_LOG | info | Logging level: debug, info, warn, error |
| KEEP_TEMP_STACK_ON_ERROR | unset | Preserve temp directory on failure |
Examples
# Show browser UI for debugging
HEADED=1 cargo test --test e2e -- --nocapture
# Use custom WebDriver
WEBDRIVER_URL=http://localhost:4445 cargo test --test e2e -- --nocapture
# Verbose logging
RUST_LOG=debug cargo test --test e2e -- --nocapture
# Run single-threaded with output
cargo test --test e2e -- --nocapture --test-threads=1
Test Helpers
Reusable helper functions for custom tests:
// Verify platform is operational
verify_platform_loading(&ctx) -> Result<()>

// Verify BotServer is running
verify_botserver_running(&ctx) -> Result<()>

// Perform login with credentials
test_user_login(browser, &ctx) -> Result<()>

// Send message and wait for response
test_chat_interaction(browser, &ctx) -> Result<()>

// Logout and verify session invalidation
test_user_logout(browser, &ctx) -> Result<()>
Test Context
Setup a test context for E2E testing:
use bottest::prelude::*;
use bottest::web::{Browser, BrowserConfig};

// HTTP-only context
let ctx = E2ETestContext::setup().await?;

// With browser automation
let ctx = E2ETestContext::setup_with_browser().await?;
let browser = ctx.browser.as_ref().unwrap();

// Access base URL
let url = ctx.base_url();

// Access running server
let is_running = ctx.server.is_running();

// Cleanup
ctx.close().await;
Common Issues
WebDriver Not Available
Problem: Test fails with “WebDriver not available”
Solution:
# Start WebDriver
chromedriver --port=4444
# or
docker run -d -p 4444:4444 selenium/standalone-chrome
Port Already in Use
Problem: Services fail to start due to port conflicts
Solution:
# Kill existing services
pkill -f chromedriver
pkill -f botserver
pkill -f postgres
pkill -f redis-server
Test Hangs or Timeout
Problem: Test appears to hang or timeout
Solution:
# Run with timeout and verbose output
RUST_LOG=debug timeout 120s cargo test --test e2e test_name -- --nocapture --test-threads=1
Browser Connection Issues
Problem: Browser fails to connect to WebDriver
Solution:
# Use different WebDriver port
WEBDRIVER_URL=http://localhost:4445 cargo test --test e2e -- --nocapture
Debugging
View Test Output
# Show all output
cargo test --test e2e test_name -- --nocapture
# With timestamps
RUST_LOG=debug cargo test --test e2e test_name -- --nocapture
# Save to file
cargo test --test e2e test_name -- --nocapture 2>&1 | tee test.log
Watch Browser in Action
# Run with visible browser
HEADED=1 cargo test --test e2e test_name -- --nocapture --test-threads=1
Check Server Logs
# Monitor logs while tests run
tail -f /tmp/bottest-*/botserver.log
# In another terminal:
cargo test --test e2e test_name -- --nocapture
Performance
Typical execution times:
| Test | Time | Resources |
|---|---|---|
| Platform loading (HTTP-only) | ~2s | Minimal |
| BotServer startup (HTTP-only) | ~5s | Minimal |
| Login and chat flow | ~20s | Browser + Memory |
| Complete flow with all phases | ~45s | Browser + Memory |
| Full E2E test suite | ~2-3 min | High |
Use release mode for faster execution:
cargo test --test e2e --release -- --nocapture
CI/CD Integration
GitHub Actions Example
name: E2E Tests

on: [push, pull_request]

jobs:
  e2e:
    runs-on: ubuntu-latest
    services:
      chromedriver:
        image: selenium/standalone-chrome
        options: --shm-size=2gb
    steps:
      - uses: actions/checkout@v3
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - run: cd gb/bottest && cargo test --test e2e -- --nocapture
Temporary Stack Architecture (Future)
When BotServer implements --temp-stack, E2E tests will run in isolated environments:
botserver --temp-stack
# Creates: /tmp/botserver-test-{timestamp}-{random}/
# With isolated: PostgreSQL, Redis, MinIO, Mock LLM
# Auto-cleanup after test completion
Benefits:
- ✓ Isolation - Each test in separate environment
- ✓ Reproducibility - Consistent setup every time
- ✓ Automation - No manual configuration
- ✓ Safety - Won’t interfere with development
- ✓ Cleanup - Automatic resource management
- ✓ Parallel - Multiple tests simultaneously
- ✓ CI/CD Ready - Perfect for automated pipelines
Writing Custom E2E Tests
Create new test files in gb/bottest/tests/e2e/:
use std::time::Duration;

#[tokio::test]
async fn test_my_feature() {
    // Setup context
    let ctx = E2ETestContext::setup_with_browser().await?;
    let browser = ctx.browser.as_ref().unwrap();

    // Navigate to feature
    browser.navigate(&format!("{}/my-feature", ctx.base_url())).await?;

    // Interact with UI
    browser.click("button.action").await?;
    browser.wait_for_element(".result", Duration::from_secs(10)).await?;

    // Verify results
    let text = browser.get_text(".result").await?;
    assert_eq!(text, "Expected result");

    // Cleanup
    ctx.close().await;
}
Register in tests/e2e/mod.rs:
mod my_feature;
Best Practices
- Keep tests focused - Test one user workflow per test
- Use meaningful names - `test_complete_platform_flow`, not `test_1`
- Explicit waits - Use `wait_for_element` instead of `sleep`
- Test realistic flows - Use actual test credentials
- Verify results explicitly - Check status codes, UI elements, and state
- Clean up properly - Always call `ctx.close().await`
- Handle errors gracefully - Use the `?` operator for error propagation
- Make tests independent - Don't rely on test execution order
Test Success Criteria
- ✓ Platform fully loads without errors
- ✓ BotServer starts and becomes ready
- ✓ User can login with credentials
- ✓ Chat messages are sent and responses received
- ✓ User can logout and session is invalidated
- ✓ Protected routes block unauthenticated access
- ✓ Tests run consistently multiple times
- ✓ Tests complete within acceptable time (~60 seconds)
See Also
- Testing Overview - Testing strategy and structure
- Performance Testing - Benchmarks and load tests
- Test Architecture - Design patterns and best practices
- Integration Testing - Multi-component testing
Testing Architecture
Overview
The General Bots testing framework is designed with a multi-layered, isolated approach to ensure comprehensive coverage from individual components to complete user workflows.
Architecture Diagram
┌─────────────────────────────────────────────────────────────┐
│ Test Execution Layer │
│ (GitHub Actions, CI/CD, Local Development) │
└────────────────────┬────────────────────────────────────────┘
│
┌────────────┼────────────┐
│ │ │
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌──────────┐
│ Unit │ │ Integr. │ │ E2E │
│ Tests │ │ Tests │ │ Tests │
└────┬────┘ └────┬────┘ └─────┬────┘
│ │ │
└────────────┼────────────┘
│
┌────────────▼────────────┐
│ Test Harness Layer │
│ (Context, Utils, Mocks) │
└────────────┬────────────┘
│
┌────────────┼────────────┐
│ │ │
▼ ▼ ▼
┌─────────┐ ┌─────────┐ ┌──────────┐
│BotServer│ │ Browser│ │ Services │
│(Testing)│ │ (WebDrv)│ │(Mock/Iso)│
└─────────┘ └─────────┘ └──────────┘
│ │ │
└────────────┼────────────┘
│
┌────────────▼────────────┐
│ Temporary Stack Layer │
│ (Isolated Environments) │
└────────────┬────────────┘
│
┌────────────┴────────────┐
│ │
▼ ▼
┌─────────────┐ ┌──────────────┐
│ PostgreSQL │ │ Redis, MinIO │
│ (Isolated) │ │ (Isolated) │
└─────────────┘ └──────────────┘
Test Layers
1. Unit Tests
Purpose: Test individual components in isolation
Scope:
- Single functions or methods
- Mocked external dependencies
- No database or external services
Example:
#[test]
fn test_message_formatting() {
    let msg = format_message("Hello");
    assert_eq!(msg, "Hello!");
}
Location: bottest/tests/unit/
2. Integration Tests
Purpose: Test multiple components working together
Scope:
- Multi-component interactions
- Real database connections
- Service integration
- Error handling across components
Example:
#[tokio::test]
async fn test_message_storage_and_retrieval() {
    let db = setup_test_db().await;
    let msg = Message::new("Hello");

    db.save(&msg).await.unwrap();
    let retrieved = db.get(msg.id).await.unwrap();

    assert_eq!(retrieved.text, "Hello");
}
Location: bottest/tests/integration/
3. End-to-End Tests
Purpose: Test complete user workflows
Scope:
- Complete user journeys
- Browser interactions
- Multi-phase workflows
- Real-world scenarios
Phases:
- Platform Loading
- BotServer Initialization
- User Authentication
- Chat Interaction
- Logout & Session Management
Example:
#[tokio::test]
async fn test_complete_platform_flow_login_chat_logout() {
    let ctx = E2ETestContext::setup_with_browser().await?;

    verify_platform_loading(&ctx).await?;
    verify_botserver_running(&ctx).await?;
    test_user_login(browser, &ctx).await?;
    test_chat_interaction(browser, &ctx).await?;
    test_user_logout(browser, &ctx).await?;

    ctx.close().await;
}
Location: bottest/tests/e2e/
Test Harness
The test harness provides utilities for test setup and context management:
TestHarness
├── Setup utilities
│ ├── Create test database
│ ├── Start mock services
│ ├── Initialize configurations
│ └── Provision test data
├── Context management
│ ├── Resource tracking
│ ├── Cleanup coordination
│ └── Error handling
└── Helper functions
├── HTTP requests
├── Browser interactions
└── Service mocking
E2ETestContext
Provides complete environment for E2E testing:
pub struct E2ETestContext {
    pub ctx: TestContext,
    pub server: BotServerInstance,
    pub browser: Option<Browser>,
}

impl E2ETestContext {
    pub async fn setup() -> Result<Self>
    pub async fn setup_with_browser() -> Result<Self>
    pub fn base_url(&self) -> &str
    pub fn has_browser(&self) -> bool
    pub async fn close(self)
}
Temporary Stack Architecture
Isolated test environments for complete system integration:
/tmp/botserver-test-{timestamp}-{id}/
├── postgres/
│ ├── data/ ← PostgreSQL data files
│ ├── postgresql.log ← Database logs
│ └── postgresql.conf ← Configuration
├── redis/
│ ├── data/ ← Redis persistence
│ └── redis.log
├── minio/
│ ├── data/ ← S3-compatible storage
│ └── minio.log
├── botserver/
│ ├── config/
│ │ ├── config.toml ← Application config
│ │ └── .env ← Environment variables
│ ├── logs/
│ │ ├── botserver.log ← Main application logs
│ │ ├── api.log ← API logs
│ │ └── debug.log ← Debug logs
│ ├── cache/ ← Local cache
│ └── state.json ← Stack metadata
└── env.stack ← Connection strings for tests
Isolation Strategy
Service Isolation
Each test gets dedicated service instances:
- Database: Separate PostgreSQL cluster on port 5433
- Cache: Separate Redis instance on port 6380
- Storage: Separate MinIO instance on port 9001
- API: Separate BotServer on port 8000
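The generated env.stack file would then carry connection strings matching those ports, along the lines of this illustrative sketch (credentials and database name are assumptions):

```bash
# env.stack — consumed by tests to locate the isolated services
DATABASE_URL=postgres://bottest:bottest@127.0.0.1:5433/bottest
REDIS_URL=redis://127.0.0.1:6380
MINIO_ENDPOINT=http://127.0.0.1:9001
BOTSERVER_URL=http://127.0.0.1:8000
```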
Network Isolation
- All services on localhost (127.0.0.1)
- Non-standard ports to avoid conflicts
- Docker containers for complete OS-level isolation
Data Isolation
- Separate database schemas per test
- Temporary file systems for storage
- No shared configuration between tests
- Automatic cleanup on completion
Test Execution Flow
1. Test Initialization
├─ Parse environment variables
├─ Check prerequisites (WebDriver, services)
└─ Create test context
2. Stack Setup
├─ Create temporary directory
├─ Initialize databases
├─ Start services
└─ Wait for readiness
3. Test Execution
├─ Setup phase
├─ Action phase
├─ Verification phase
└─ Assertion phase
4. Cleanup
├─ Close browser connections
├─ Shutdown services gracefully
├─ Remove temporary directories
└─ Report results
Browser Automation
Uses WebDriver (Selenium) protocol for browser testing:
Test Code
↓
Reqwest HTTP Client
↓
WebDriver Protocol (JSON-RPC)
↓
chromedriver / Selenium Server
↓
Chrome/Chromium Browser
↓
Test Verification
WebDriver Commands
- Navigate to URL
- Find elements by selector
- Click buttons and links
- Fill form inputs
- Wait for elements
- Execute JavaScript
- Take screenshots
- Get element text
Error Handling
Comprehensive error handling at all levels:
Test Execution
│
├─ Setup Error
│ └─ Fail fast, preserve environment
│
├─ Execution Error
│ ├─ Log detailed context
│ ├─ Capture screenshots
│ └─ Optionally preserve stack
│
└─ Cleanup Error
└─ Log warning, continue cleanup
Performance Considerations
Test Execution Times
- Unit Tests: ~0.1-1 second
- Integration Tests: ~1-10 seconds
- E2E Tests: ~30-60 seconds
- Full Suite: ~2-3 minutes
Optimization Strategies
- Parallel Execution: Run independent tests simultaneously
- Caching: Reuse expensive resources
- Lazy Loading: Initialize only needed components
- Release Mode: Use `--release` for faster compilation
CI/CD Integration
GitHub Actions Workflow
Trigger (push/PR)
↓
Setup Environment
├─ Install Rust
├─ Start WebDriver
└─ Setup test infrastructure
↓
Run Tests
├─ Unit tests
├─ Integration tests
└─ E2E tests
↓
Collect Artifacts
├─ Test results
├─ Coverage reports
├─ Screenshots/logs
└─ Performance metrics
↓
Report Results
└─ Pass/fail status
Best Practices
1. Test Organization
- Keep tests focused and single-purpose
- Use descriptive names
- Group related tests
- Organize by layer (unit/integration/e2e)
2. Test Design
- Make tests independent
- Use realistic data
- Test both happy and error paths
- Avoid test interdependencies
3. Test Maintenance
- Keep tests up to date with code
- Remove obsolete tests
- Refactor test helpers
- Monitor test execution time
4. Test Documentation
- Document complex test logic
- Explain test prerequisites
- Document setup/teardown
- Include troubleshooting tips
Debugging
Debug Helpers
- `RUST_LOG=debug` - Verbose logging
- `HEADED=1` - Show browser UI
- `--nocapture` - Print test output
- `--test-threads=1` - Run sequentially
Debug Techniques
- Check server logs
- Review screenshots
- Inspect HTTP requests
- Step through code
- Use REPL for experimentation
Future Enhancements
- Load Testing - Concurrent user scenarios
- Visual Regression - Screenshot comparison
- Accessibility Testing - WCAG compliance
- Security Testing - Vulnerability scanning
- Performance Profiling - Memory and CPU analysis
- Multi-region - Test across deployments
- Snapshot Testing - Compare outputs over time
References
- Performance Testing
- Best Practices
- CI/CD Integration
Appendix I – Database Model
The core database schema for GeneralBots is defined in src/shared/models.rs. It uses Diesel with PostgreSQL and includes the following primary tables:
| Table | Description |
|---|---|
| users | Stores user accounts, authentication tokens, and profile data. |
| sessions | Tracks active BotSession instances, their start/end timestamps, and associated user. |
| knowledge_bases | Metadata for each .gbkb collection (name, vector store configuration, creation date). |
| messages | Individual chat messages (role = user/assistant, content, timestamp, linked to a session). |
| tools | Registered custom tools per session (name, definition JSON, activation status). |
| files | References to files managed by the .gbdrive package (path, size, MIME type, storage location). |
Relationships
- User ↔ Sessions – One‑to‑many: a user can have many sessions.
- Session ↔ Messages – One‑to‑many: each session contains a sequence of messages.
- Session ↔ KnowledgeBase – Many‑to‑one: a session uses a single knowledge base at a time.
- Session ↔ Tools – One‑to‑many: tools are scoped to the session that registers them.
- File ↔ KnowledgeBase – Optional link for documents stored in a knowledge base.
Key Tables
User Table
- id: Integer primary key
- username: String
- email: String
- password_hash: String
- created_at: Timestamp
Session Table
- id: Integer primary key
- user_id: Foreign key to User
- started_at: Timestamp
- last_active: Timestamp
- knowledge_base_id: Integer
Message Table
- id: Integer primary key
- session_id: Foreign key to Session
- role: String (“user” or “assistant”)
- content: Text
- timestamp: Timestamp
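As a minimal sketch of how one of these tables maps onto Diesel (the authoritative definitions live in src/shared/models.rs and may differ in detail):

```rust
use chrono::NaiveDateTime;
use diesel::prelude::*;

// Illustrative schema declaration; Diesel normally generates this in schema.rs
diesel::table! {
    messages (id) {
        id -> Integer,
        session_id -> Integer,
        role -> Text,
        content -> Text,
        timestamp -> Timestamp,
    }
}

// Queryable maps a SELECT row onto the struct, in column order
#[derive(Queryable)]
pub struct Message {
    pub id: i32,
    pub session_id: i32,
    pub role: String,
    pub content: String,
    pub timestamp: NaiveDateTime,
}
```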
The schema is automatically migrated when the server starts.
Database Schema Overview
General Bots uses PostgreSQL as its primary database with Diesel ORM for type-safe database operations. The schema is designed to support multi-tenant bot hosting with comprehensive session management, user authentication, and content storage.
Core Architecture
The database schema follows several key design principles. All tables use UUID primary keys for globally unique identifiers that work across distributed systems. Created and updated timestamps provide audit trails for tracking data changes. Foreign key relationships maintain referential integrity between related entities. JSON fields offer flexible storage for dynamic configuration and metadata that doesn’t fit rigid schema definitions.
Schema Categories
Organization & Bot Management
The organizations table provides multi-tenant organization support, isolating data between different customers or deployments. The bots table stores bot instances and their configurations. The bot_configuration table contains bot-specific settings and parameters. The bot_memories table provides persistent key-value storage for bots to maintain state across sessions.
User & Authentication
The users table stores user accounts with secure password storage using Argon2 hashing. The user_sessions table tracks active user sessions with authentication tokens. The user_login_tokens table manages authentication tokens for login flows. The user_preferences table contains user-specific settings and customizations.
Conversation & Messaging
The message_history table maintains complete conversation history between users and bots. The clicks table tracks user interaction events for analytics. The system_automations table stores scheduled tasks and automation rules that run without user intervention.
Knowledge Base
The kb_collections table defines knowledge base collection containers. The kb_documents table stores documents within those collections. The user_kb_associations table manages user access permissions to knowledge bases. The session_tool_associations table tracks which tools are available within specific sessions.
Tools & Integration
The basic_tools table stores BASIC script tool definitions compiled from .bas files. The user_email_accounts table manages email integration accounts for users. The email_drafts table stores draft emails being composed. The email_folders table organizes email folder structures.
Table Relationships
Primary Relationships
The bot hierarchy establishes that organizations contain multiple bots in a one-to-many relationship. Each bot has multiple configuration entries and memories associated with it.
User sessions connect users to bots through the session table. Users can have multiple sessions, and each session maintains its own message history. Bots also connect to sessions, enabling the many-to-many relationship between users and bots.
Knowledge management links bots to knowledge base collections, with each collection containing multiple documents. Sessions associate with knowledge bases through the user_kb_associations table.
Tool associations connect bots to their defined tools, and sessions link to available tools through the session_tool_associations junction table.
Data Types
The schema uses several PostgreSQL data types throughout. UUID fields serve as primary keys and foreign key references for globally unique identification. Text fields store variable-length string data without length constraints. Varchar fields hold fixed-length strings for codes and identifiers. Timestamptz fields store timestamps with timezone information for accurate time tracking across regions. Jsonb fields provide JSON storage with indexing capabilities for flexible schemas. Boolean fields represent binary flags and settings. Integer fields store counters and numeric values.
Indexing Strategy
Primary indexes exist on all id fields serving as primary keys. Foreign key relationships receive indexes for efficient joins. Timestamp fields are indexed to support time-based queries. Session tokens have indexes for fast authentication lookups.
Composite indexes optimize common query patterns. The combination of bot_id and user_id enables efficient session lookup. Collection_id with document_id accelerates knowledge retrieval. User_id paired with created_at supports history queries ordered by time.
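In SQL terms, those composite indexes would look roughly like the following (index names are illustrative):

```sql
CREATE INDEX idx_sessions_bot_user ON user_sessions (bot_id, user_id);
CREATE INDEX idx_kb_docs_collection ON kb_documents (collection_id, id);
CREATE INDEX idx_history_user_time ON message_history (user_id, created_at);
```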
Migration Management
Database migrations are managed through Diesel’s migration system. Migrations reside in the migrations/ directory with each migration containing both up.sql and down.sql files for applying and reverting changes. Version tracking occurs in the __diesel_schema_migrations table. The bootstrap process automatically applies pending migrations on startup.
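A migration pair might look like this sketch, using the bot_memories columns documented below (the actual files in migrations/ are authoritative):

```sql
-- up.sql: apply the change
CREATE TABLE bot_memories (
    id UUID PRIMARY KEY,
    bot_id UUID NOT NULL REFERENCES bots(id),
    key TEXT NOT NULL,
    value TEXT NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
    updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

-- down.sql: revert it
DROP TABLE bot_memories;
```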
Performance Considerations
Connection Pooling
The default connection pool maintains 10 connections to balance resource usage with concurrency. Pool size is configurable via environment variables for different deployment scales. Automatic connection recycling prevents stale connections from causing issues.
Query Optimization
Prepared statements cache query plans for repeated queries, improving performance. Batch operations handle bulk inserts efficiently rather than individual row insertions. Lazy loading defers loading of related entities until needed. Pagination limits result sets to manageable sizes for large tables.
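For instance, a paginated history query in Diesel might be sketched as follows; the schema declaration and column set are assumptions for illustration (requires the diesel `postgres` and `uuid` features):

```rust
use diesel::prelude::*;

diesel::table! {
    message_history (id) {
        id -> Uuid,
        user_id -> Uuid,
        message -> Text,
        created_at -> Timestamptz,
    }
}

// Load one 50-row page of a user's messages, newest first
fn load_history_page(
    conn: &mut PgConnection,
    uid: uuid::Uuid,
    page: i64,
) -> QueryResult<Vec<(uuid::Uuid, String)>> {
    use self::message_history::dsl::*;

    message_history
        .filter(user_id.eq(uid))
        .order(created_at.desc())
        .limit(50)
        .offset(page * 50)
        .select((id, message))
        .load(conn)
}
```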
Data Retention
Message history retention is configurable to balance storage costs with historical needs. Automatic cleanup removes expired sessions to free resources. An archival strategy moves old conversations to cold storage while maintaining accessibility.
Security Features
Data Protection
Password hashing uses the Argon2 algorithm for strong protection against brute-force attacks. AES-GCM encryption protects sensitive fields at rest. Secure random token generation creates unpredictable session identifiers. Diesel’s parameterized queries prevent SQL injection attacks.
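A minimal sketch of Argon2 hashing and verification with the argon2 crate; the platform's actual hashing helpers may differ:

```rust
use argon2::{
    password_hash::{rand_core::OsRng, PasswordHash, PasswordHasher, PasswordVerifier, SaltString},
    Argon2,
};

fn main() -> Result<(), argon2::password_hash::Error> {
    // Hash with a fresh random salt per password
    let salt = SaltString::generate(&mut OsRng);
    let hash = Argon2::default()
        .hash_password(b"correct horse battery staple", &salt)?
        .to_string();

    // Later: verify a login attempt against the stored hash string
    let parsed = PasswordHash::new(&hash)?;
    assert!(Argon2::default()
        .verify_password(b"correct horse battery staple", &parsed)
        .is_ok());
    Ok(())
}
```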
Access Control
Row-level security is implemented through application logic that filters queries by user context. User isolation ensures sessions only access their own data. Bot isolation separates data by organization to prevent cross-tenant access. Audit logging records sensitive operations for compliance and security review.
Backup Strategy
Backup Types
Full database dumps capture complete point-in-time snapshots. Incremental WAL archiving provides continuous backup with minimal storage overhead. Point-in-time recovery support enables restoration to any moment within the retention window. Cross-region replication offers disaster recovery capabilities for critical deployments.
Restore Procedures
Automated restore testing validates backup integrity on a regular schedule. Version compatibility checks ensure backups restore correctly to the current schema. Data integrity validation confirms restored data matches expected checksums. Zero-downtime migration support enables schema changes without service interruption.
Monitoring
Key Metrics
Connection pool usage indicates whether the pool size needs adjustment. Query execution time reveals slow queries requiring optimization. Table sizes and growth rates inform capacity planning. Index effectiveness metrics show whether indexes are being utilized. Lock contention monitoring identifies concurrency bottlenecks.
Health Checks
Database connectivity verification ensures the connection pool can reach PostgreSQL. Migration status checks confirm all migrations have been applied. Replication lag monitoring applies to deployments with read replicas. Storage usage tracking prevents disk space exhaustion.
Best Practices
Always use migrations for schema changes rather than manual DDL to maintain consistency across environments. Never modify production data directly through SQL clients to avoid bypassing application logic. Test migrations in development first to catch issues before they affect production. Monitor performance metrics regularly to identify degradation early. Plan capacity based on growth projections to avoid emergency scaling. Document changes in migration files with comments explaining the purpose of each change. Use transactions for data consistency when multiple tables must be updated together. Implement retry logic for transient failures like connection timeouts or deadlocks.
Future Considerations
Partitioning for large tables like message_history would improve query performance and enable efficient data archival. Read replicas could scale read-heavy workloads across multiple database instances. Time-series optimization for metrics data would support analytics features. Full-text search indexes would enable natural language queries against stored content. Graph relationships could support advanced queries for interconnected data like conversation flows.
Database Tables
This section documents all database tables in General Bots, their structures, and purposes.
Core Tables
organizations
Stores organization/tenant information for multi-tenant deployments.
| Column | Type | Description |
|---|---|---|
| org_id | UUID | Primary key |
| name | TEXT | Organization name |
| slug | TEXT | URL-friendly identifier |
| created_at | TIMESTAMPTZ | Creation timestamp |
bots
Bot instances and their basic configuration.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| name | TEXT | Bot name |
| org_id | UUID | Foreign key to organizations |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update timestamp |
bot_configuration
Stores bot-specific configuration parameters from config.csv.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| bot_id | UUID | Foreign key to bots |
| key | TEXT | Configuration key |
| value | TEXT | Configuration value |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update timestamp |
bot_memories
Persistent key-value storage for bots (used by GET BOT MEMORY/SET BOT MEMORY).
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| bot_id | UUID | Foreign key to bots |
| key | TEXT | Memory key |
| value | TEXT | Memory value |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update timestamp |
User Management Tables
users
User accounts with authentication credentials.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| username | TEXT | Unique username |
| email | TEXT | Email address |
| password_hash | TEXT | Argon2 hashed password |
| active | BOOLEAN | Account status |
| created_at | TIMESTAMPTZ | Registration timestamp |
| updated_at | TIMESTAMPTZ | Last update timestamp |
user_sessions
Active user sessions for authentication and state management.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| user_id | UUID | Foreign key to users |
| bot_id | UUID | Foreign key to bots |
| session_token | TEXT | Unique session identifier |
| expires_at | TIMESTAMPTZ | Session expiration |
| created_at | TIMESTAMPTZ | Session start |
| updated_at | TIMESTAMPTZ | Last activity |
user_login_tokens
Authentication tokens for login flows.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| user_id | UUID | Foreign key to users |
| token | TEXT | Login token |
| expires_at | TIMESTAMPTZ | Token expiration |
| used | BOOLEAN | Whether token was used |
| created_at | TIMESTAMPTZ | Token creation |
user_preferences
User-specific settings and preferences.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| user_id | UUID | Foreign key to users |
| preferences | JSONB | Preferences data |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update |
Conversation Tables
message_history
Complete conversation history between users and bots.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| session_id | UUID | Foreign key to user_sessions |
| user_id | UUID | Foreign key to users |
| bot_id | UUID | Foreign key to bots |
| message | TEXT | Message content |
| sender | TEXT | ‘user’ or ‘bot’ |
| created_at | TIMESTAMPTZ | Message timestamp |
clicks
Tracks user interactions with UI elements.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| session_id | UUID | Foreign key to user_sessions |
| element_id | TEXT | UI element identifier |
| timestamp | TIMESTAMPTZ | Click timestamp |
system_automations
Scheduled tasks and automation rules.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| bot_id | UUID | Foreign key to bots |
| name | TEXT | Automation name |
| schedule | TEXT | Cron expression |
| script | TEXT | BASIC script to execute |
| active | BOOLEAN | Whether automation is active |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update |
Knowledge Base Tables
kb_collections
Knowledge base collection definitions.
| Column | Type | Description |
|---|---|---|
| id | TEXT | Primary key (collection name) |
| bot_id | UUID | Foreign key to bots |
| name | TEXT | Collection display name |
| description | TEXT | Collection description |
| metadata | JSONB | Additional metadata |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update |
kb_documents
Documents stored in knowledge base collections.
| Column | Type | Description |
|---|---|---|
| id | TEXT | Primary key (document ID) |
| collection_id | TEXT | Foreign key to kb_collections |
| bot_id | UUID | Foreign key to bots |
| name | TEXT | Document name |
| content | TEXT | Document content |
| metadata | JSONB | Document metadata |
| embedding_id | TEXT | Vector embedding reference |
| indexed | BOOLEAN | Whether document is indexed |
| created_at | TIMESTAMPTZ | Upload timestamp |
| updated_at | TIMESTAMPTZ | Last update |
user_kb_associations
Links user sessions to available knowledge bases.
| Column | Type | Description |
|---|---|---|
| id | TEXT | Primary key |
| session_id | UUID | Foreign key to user_sessions |
| collection_id | TEXT | Foreign key to kb_collections |
| created_at | TIMESTAMPTZ | Association timestamp |
Tool Tables
basic_tools
BASIC script tool definitions.
| Column | Type | Description |
|---|---|---|
| id | TEXT | Primary key (tool name) |
| bot_id | UUID | Foreign key to bots |
| name | TEXT | Tool display name |
| description | TEXT | Tool description |
| parameters | JSONB | Parameter definitions |
| script | TEXT | BASIC script implementation |
| metadata | JSONB | Additional metadata |
| created_at | TIMESTAMPTZ | Creation timestamp |
| updated_at | TIMESTAMPTZ | Last update |
session_tool_associations
Links sessions to available tools.
| Column | Type | Description |
|---|---|---|
| id | TEXT | Primary key |
| session_id | UUID | Foreign key to user_sessions |
| tool_id | TEXT | Foreign key to basic_tools |
| created_at | TIMESTAMPTZ | Association timestamp |
Email Integration Tables
user_email_accounts
Email accounts configured for users.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| user_id | UUID | Foreign key to users |
| email_address | TEXT | Email address |
| imap_server | TEXT | IMAP server address |
| imap_port | INTEGER | IMAP port |
| smtp_server | TEXT | SMTP server address |
| smtp_port | INTEGER | SMTP port |
| encrypted_password | TEXT | Encrypted email password |
| active | BOOLEAN | Account status |
| created_at | TIMESTAMPTZ | Configuration timestamp |
| updated_at | TIMESTAMPTZ | Last update |
email_drafts
Draft emails created by users or bots.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| user_id | UUID | Foreign key to users |
| account_id | UUID | Foreign key to user_email_accounts |
| to_addresses | TEXT[] | Recipient addresses |
| cc_addresses | TEXT[] | CC addresses |
| bcc_addresses | TEXT[] | BCC addresses |
| subject | TEXT | Email subject |
| body | TEXT | Email body |
| attachments | JSONB | Attachment metadata |
| created_at | TIMESTAMPTZ | Draft creation |
| updated_at | TIMESTAMPTZ | Last edit |
email_folders
Email folder organization.
| Column | Type | Description |
|---|---|---|
| id | UUID | Primary key |
| account_id | UUID | Foreign key to user_email_accounts |
| name | TEXT | Folder name |
| path | TEXT | IMAP folder path |
| parent_id | UUID | Parent folder ID |
| message_count | INTEGER | Number of messages |
| unread_count | INTEGER | Unread messages |
| created_at | TIMESTAMPTZ | Folder creation |
| updated_at | TIMESTAMPTZ | Last sync |
Indexes
Primary Indexes
- All id columns have primary key indexes
- All foreign key columns have indexes for joins
Performance Indexes
- user_sessions.session_token - for session lookup
- message_history.created_at - for time-based queries
- kb_documents.collection_id - for collection queries
- bot_memories (bot_id, key) - composite for memory lookup
Full-Text Search Indexes
- kb_documents.content - for document search (when enabled)
- message_history.message - for conversation search (when enabled)
Database Relationships
This document describes the relationships between tables in the General Bots database schema.
Entity Relationship Overview
The database follows a hierarchical structure with organizations at the top, containing bots, which in turn manage users, sessions, and content.
Primary Relationships
Organization Hierarchy
organizations
  bots (1:N)
    bot_configuration (1:N)
    bot_memories (1:N)
    kb_collections (1:N)
      kb_documents (1:N)
    basic_tools (1:N)
    system_automations (1:N)
Each organization can have multiple bots, and each bot has its own configuration, memories, knowledge bases, tools, and automations. Cascade delete behavior means that deleting an organization removes all associated bots and their data.
User and Session Management
users
  user_sessions (1:N)
    message_history (1:N)
    clicks (1:N)
    user_kb_associations (1:N)
    session_tool_associations (1:N)
  user_login_tokens (1:N)
  user_preferences (1:1)
  user_email_accounts (1:N)
    email_drafts (1:N)
    email_folders (1:N)
      folder_messages (1:N)
Users can have multiple active sessions across different bots. Each session maintains its own message history and associations. Sessions link to both users and bots, forming a many-to-many relationship through the sessions table.
Bot-User Interaction
bots ←→ user_sessions ←→ users

user_sessions:
  message_history
  user_kb_associations → kb_collections
  session_tool_associations → basic_tools

bots:
  kb_collections
  basic_tools
Users interact with bots through sessions. Sessions dynamically associate with knowledge bases and tools as needed. Message history preserves the conversation context for continuity across interactions.
Foreign Key Constraints
Strong Relationships (CASCADE DELETE)
These relationships enforce referential integrity with cascade deletion.
The organizations to bots relationship means deleting an organization removes all its bots, with bots.org_id referencing organizations.org_id.
The bots to bot_configuration relationship means deleting a bot removes all its configuration, with bot_configuration.bot_id referencing bots.id.
The bots to bot_memories relationship means deleting a bot removes all its memories, with bot_memories.bot_id referencing bots.id.
The user_sessions to message_history relationship means deleting a session removes its message history, with message_history.session_id referencing user_sessions.id.
Weak Relationships (SET NULL/RESTRICT)
These relationships maintain data integrity without cascade deletion.
The users to user_sessions relationship sets session.user_id to NULL when a user is deleted, preserving conversation history for audit purposes while making the session anonymous.
The kb_collections to kb_documents relationship restricts deletion if documents exist, requiring explicit document deletion first to prevent accidental data loss.
The user_email_accounts to email_drafts relationship preserves drafts when an email account is deleted, allowing draft recovery or reassignment to other accounts.
Many-to-Many Relationships
Sessions ↔ Knowledge Bases
user_sessions ←→ user_kb_associations ←→ kb_collections
The user_kb_associations junction table allows dynamic KB activation per session. Multiple knowledge bases can be active simultaneously, enabling conversations that draw from several information sources.
Sessions ↔ Tools
user_sessions ←→ session_tool_associations ←→ basic_tools
The session_tool_associations junction table enables tools to be loaded per session as needed. This supports dynamic tool discovery where available capabilities vary based on context.
Relationship Cardinality
One-to-one relationships exist between users and user_preferences, where each user has exactly one preferences record.
One-to-many relationships include organizations to bots, bots to bot_configuration, bots to kb_collections, kb_collections to kb_documents, users to user_sessions, user_sessions to message_history, and user_email_accounts to email_drafts.
Many-to-many relationships exist between user_sessions and kb_collections through user_kb_associations, between user_sessions and basic_tools through session_tool_associations, and between users and bots through user_sessions.
Referential Integrity Rules
Insert Order
When inserting data, follow this sequence: organizations first, then bots, then bot_configuration. For user data, insert users first, then user_sessions, then message_history. Knowledge base data requires kb_collections before kb_documents. Tools require basic_tools before session_tool_associations.
Delete Order (reverse of insert)
When deleting data, reverse the insert order: message_history first, then user_sessions, then users. For tools, delete session_tool_associations before basic_tools. For knowledge bases, delete kb_documents before kb_collections. For organizational data, delete bot_configuration, then bots, then organizations.
Orphan Prevention
Automatic Cleanup
Sessions expire based on the expires_at timestamp. Orphaned associations are cleaned by background jobs that run periodically. Temporary data has TTL settings that trigger automatic removal.
Manual Cleanup Required
Some data requires manual cleanup. Unused kb_documents should be periodically reviewed and removed. Old message_history should be cleared based on retention policy. Expired user_login_tokens should be purged.
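The recurring cleanup can itself run as a scheduled BASIC automation. The sketch below is illustrative only: the DELETE keyword and the 90-day retention window are assumptions, not documented platform API.
' retention-cleanup.bas
' Illustrative sketch - DELETE and the retention window are assumptions
SET SCHEDULE "retention-cleanup", "0 3 * * 0"
expired = FIND "user_login_tokens", "expires_at < NOW()"
FOR EACH token IN expired
DELETE "user_login_tokens", "id=" + token.id
NEXT token
old_messages = FIND "message_history", "DATEDIFF(NOW(), created_at) > 90"
FOR EACH msg IN old_messages
DELETE "message_history", "id=" + msg.id
NEXT msg
PRINT "Purged " + UBOUND(expired) + " tokens and " + UBOUND(old_messages) + " messages"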
Performance Implications
Hot Paths
These relationships are frequently traversed and should be optimized.
The user_sessions to message_history path benefits from an index on (session_id, created_at DESC) and is used for conversation display.
The bots to bot_memories path benefits from an index on (bot_id, key) and is used by GET BOT MEMORY and SET BOT MEMORY operations.
The kb_collections to kb_documents path benefits from an index on (collection_id, indexed) and is used for semantic search.
Join Optimization
Common join patterns benefit from composite indexes.
User session context queries join user_sessions with users on user_sessions.user_id = users.id and with bots on user_sessions.bot_id = bots.id.
Knowledge base loading joins user_kb_associations with kb_collections on user_kb_associations.collection_id = kb_collections.id and kb_documents on kb_collections.id = kb_documents.collection_id.
Tool discovery joins session_tool_associations with basic_tools on session_tool_associations.tool_id = basic_tools.id filtered by session_id and bot_id.
Data Consistency Patterns
Transaction Boundaries
Certain operations must be atomic.
Session creation requires inserting the user_session record, initializing default associations, and creating the initial message all within a single transaction.
Tool registration requires inserting the basic_tool record, updating bot_configuration, and refreshing active sessions together.
Document upload requires inserting the kb_document record, triggering the indexing job, and updating collection metadata atomically.
Eventual Consistency
Some operations can be eventually consistent.
Vector embeddings allow document upload to complete first, with asynchronous indexing creating embeddings afterward. Search becomes available after processing completes.
Email synchronization saves account configuration immediately, then background sync fetches emails asynchronously. Folders and counts update as sync progresses.
Best Practices
Always use foreign keys for data integrity to catch relationship violations at the database level. Index foreign key columns for join performance to avoid full table scans on relationship traversals. Use transactions for related updates to maintain consistency across multiple tables.
Implement soft deletes for audit trails where regulations require historical data retention. Monitor constraint violations in logs to catch application bugs early. Plan cascade paths carefully to avoid unintended data deletion.
Document relationship changes in migrations so the team understands schema evolution over time.
Appendix B: External Services
This appendix catalogs all external services that General Bots integrates with, including their configuration requirements, associated BASIC keywords, and API endpoints.
Overview
General Bots connects to external services for extended functionality. All service credentials should be stored in config.csv within the bot’s .gbot folder - never hardcoded in scripts.
Infrastructure services (database, storage, cache) are automatically managed by the Directory service (Zitadel).
Service Categories
| Category | Services | Configuration Location |
|---|---|---|
| LLM Providers | OpenAI, Groq, Anthropic, Azure OpenAI | config.csv |
| Weather | OpenWeatherMap | config.csv |
| Messaging Channels | WhatsApp, Teams, Instagram, Telegram | config.csv |
| Storage | S3-Compatible (MinIO, etc.) | Vault (automatic) |
| Directory | Zitadel | DIRECTORY_* environment variables |
| Email | Stalwart / IMAP/SMTP | Vault (automatic) |
| Calendar | CalDAV servers | config.csv |
| Database | PostgreSQL | Vault (automatic) |
| Cache | Redis-compatible | Vault (automatic) |
Quick Reference
BASIC Keywords That Call External Services
| Keyword | Service | Config Key |
|---|---|---|
| LLM | LLM Provider | llm-provider, llm-api-key |
| WEATHER | OpenWeatherMap | weather-api-key |
| SEND MAIL | SMTP Server | Managed by Directory service |
| SEND WHATSAPP | WhatsApp Business API | whatsapp-api-key, whatsapp-phone-number-id |
| SEND TEAMS | Microsoft Teams | teams-app-id, teams-app-password |
| SEND INSTAGRAM | Instagram Graph API | instagram-access-token, instagram-page-id |
| GET (with http/https URL) | Any HTTP endpoint | N/A |
| IMAGE | BotModels (local) | botmodels-enabled, botmodels-url |
| VIDEO | BotModels (local) | botmodels-enabled, botmodels-url |
| AUDIO | BotModels (local) | botmodels-enabled, botmodels-url |
| SEE | BotModels (local) | botmodels-enabled, botmodels-url |
| FIND | Qdrant (local) | Internal service |
| USE WEBSITE | Web crawling | N/A |
Service Configuration Template
Add these to your config.csv:
name,value
llm-provider,openai
llm-api-key,YOUR_API_KEY
llm-model,gpt-5
weather-api-key,YOUR_OPENWEATHERMAP_KEY
whatsapp-api-key,YOUR_WHATSAPP_KEY
whatsapp-phone-number-id,YOUR_PHONE_ID
whatsapp-verify-token,YOUR_WEBHOOK_VERIFY_TOKEN
teams-app-id,YOUR_TEAMS_APP_ID
teams-app-password,YOUR_TEAMS_PASSWORD
instagram-access-token,YOUR_INSTAGRAM_TOKEN
instagram-page-id,YOUR_PAGE_ID
botmodels-enabled,true
botmodels-url,http://localhost:5000
# Human Handoff / CRM Features
crm-enabled,true
attendant-llm-tips,true
attendant-polish-message,true
attendant-smart-replies,true
attendant-auto-summary,true
attendant-sentiment-analysis,true
Auto-Managed Services
The following services are automatically configured by the Directory service (Zitadel):
| Service | What’s Managed |
|---|---|
| PostgreSQL | Connection credentials, database creation |
| S3-Compatible Storage | Access keys, bucket policies |
| Cache | Connection credentials |
| Stalwart Email | User accounts, SMTP/IMAP access |
You do not need to configure these services manually. The Directory service handles credential provisioning and rotation.
Security Notes
- Never hardcode credentials - Always use config.csv or GET BOT MEMORY
- Rotate keys regularly - Update config.csv and restart the bot
- Use least privilege - Only grant permissions needed by the bot
- Audit access - Monitor external API usage through logs
- Infrastructure credentials - Managed automatically by Directory service
See Also
- Service Catalog - Detailed service documentation
- LLM Providers - AI model configuration
- Weather API - Weather service setup
- Channel Integrations - Messaging platform setup
- Storage Services - S3-compatible storage
- Directory Services - User authentication
- Environment Variables - DIRECTORY_* configuration
Service Catalog
This catalog provides detailed information about every external service that General Bots integrates with.
LLM Providers
OpenAI
| Property | Value |
|---|---|
| Service URL | https://api.openai.com/v1 |
| Config Key | llm-provider=openai |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | platform.openai.com/docs |
| BASIC Keywords | LLM |
| Supported Models | gpt-5, gpt-oss-120b, gpt-oss-20b |
Groq
| Property | Value |
|---|---|
| Service URL | https://api.groq.com/openai/v1 |
| Config Key | llm-provider=groq |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | console.groq.com/docs |
| BASIC Keywords | LLM |
| Supported Models | llama-4-scout, llama-4-maverick, qwen3, mixtral-8x22b |
Anthropic
| Property | Value |
|---|---|
| Service URL | https://api.anthropic.com/v1 |
| Config Key | llm-provider=anthropic |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | docs.anthropic.com |
| BASIC Keywords | LLM |
| Supported Models | claude-opus-4.5, claude-sonnet-4.5 |
Azure OpenAI
| Property | Value |
|---|---|
| Service URL | https://{resource}.openai.azure.com/ |
| Config Key | llm-provider=azure |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | learn.microsoft.com/azure/ai-services/openai |
| BASIC Keywords | LLM |
Google (Gemini)
| Property | Value |
|---|---|
| Service URL | https://generativelanguage.googleapis.com/v1 |
| Config Key | llm-provider=google |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | ai.google.dev/docs |
| BASIC Keywords | LLM |
| Supported Models | gemini-3-pro, gemini-2.5-pro, gemini-2.5-flash |
xAI (Grok)
| Property | Value |
|---|---|
| Service URL | https://api.x.ai/v1 |
| Config Key | llm-provider=xai |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | docs.x.ai |
| BASIC Keywords | LLM |
| Supported Models | grok-4 |
DeepSeek
| Property | Value |
|---|---|
| Service URL | https://api.deepseek.com/v1 |
| Config Key | llm-provider=deepseek |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | platform.deepseek.com/docs |
| BASIC Keywords | LLM |
| Supported Models | deepseek-v3.1, deepseek-r3 |
Mistral AI
| Property | Value |
|---|---|
| Service URL | https://api.mistral.ai/v1 |
| Config Key | llm-provider=mistral |
| API Key Config | llm-api-key (stored in Vault) |
| Documentation | docs.mistral.ai |
| BASIC Keywords | LLM |
| Supported Models | mixtral-8x22b |
Weather Services
OpenWeatherMap
| Property | Value |
|---|---|
| Service URL | https://api.openweathermap.org/data/2.5 |
| Config Key | weather-api-key |
| Documentation | openweathermap.org/api |
| BASIC Keywords | WEATHER |
| Free Tier | 1,000 calls/day |
| Required Plan | Free or higher |
Example Usage:
weather = WEATHER "Seattle"
TALK weather
Messaging Channels
WhatsApp Business API
| Property | Value |
|---|---|
| Service URL | https://graph.facebook.com/v17.0 |
| Config Keys | whatsapp-api-key, whatsapp-phone-number-id, whatsapp-business-account-id |
| Documentation | developers.facebook.com/docs/whatsapp |
| BASIC Keywords | SEND WHATSAPP, SEND FILE (WhatsApp) |
| Webhook URL | /api/channels/whatsapp/webhook |
Microsoft Teams
| Property | Value |
|---|---|
| Service URL | https://smba.trafficmanager.net/apis |
| Config Keys | teams-app-id, teams-app-password, teams-tenant-id |
| Documentation | learn.microsoft.com/microsoftteams/platform |
| BASIC Keywords | SEND TEAMS, SEND FILE (Teams) |
| Webhook URL | /api/channels/teams/messages |
Instagram Messaging
| Property | Value |
|---|---|
| Service URL | https://graph.facebook.com/v17.0 |
| Config Keys | instagram-access-token, instagram-page-id, instagram-account-id |
| Documentation | developers.facebook.com/docs/instagram-api |
| BASIC Keywords | SEND INSTAGRAM |
| Webhook URL | /api/channels/instagram/webhook |
Telegram
| Property | Value |
|---|---|
| Service URL | https://api.telegram.org/bot{token} |
| Config Keys | telegram-bot-token |
| Documentation | core.telegram.org/bots/api |
| BASIC Keywords | SEND TELEGRAM |
| Webhook URL | /api/channels/telegram/webhook |
Storage Services
S3-Compatible Storage
General Bots uses S3-compatible object storage. Configuration is automatically managed by the Directory service (Zitadel).
| Property | Value |
|---|---|
| Local Default | MinIO on port 9000 |
| Management | Directory service (automatic) |
| Console Port | 9001 (when using MinIO) |
| BASIC Keywords | GET (file retrieval) |
Compatible Services:
- MinIO (default local installation)
- Backblaze B2
- Wasabi
- DigitalOcean Spaces
- Cloudflare R2
- Any S3-compatible provider
Storage credentials are provisioned and rotated automatically by the Directory service. No manual configuration required.
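In scripts, stored objects are read with the documented GET keyword. A minimal sketch; the file path is hypothetical:
' Retrieve a stored document by its path in the bot's drive (path is illustrative)
template = GET "contracts/template.pdf"
TALK "Contract template loaded."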
Directory Services
Zitadel (Identity Provider)
| Property | Value |
|---|---|
| Local Default | Port 8080 |
| Environment Variables | DIRECTORY_URL, DIRECTORY_CLIENT_ID, DIRECTORY_CLIENT_SECRET |
| Documentation | zitadel.com/docs |
| Purpose | User authentication, SSO, OAuth2/OIDC, service credential management |
The Directory service manages:
- User authentication
- Service credentials (database, storage, cache)
- OAuth applications
- Role-based access control
Email Services
Stalwart Mail Server
| Property | Value |
|---|---|
| Ports | 25 (SMTP), 993 (IMAPS), 587 (Submission) |
| Management | Directory service (automatic) |
| Documentation | stalw.art/docs |
| BASIC Keywords | SEND MAIL |
Email accounts are created and managed through the Directory service.
External IMAP/SMTP
| Property | Value |
|---|---|
| Config Keys | smtp-server, smtp-port, imap-server, imap-port, email-username, email-password |
| BASIC Keywords | SEND MAIL |
| Supported Providers | Gmail, Outlook, custom SMTP/IMAP |
Gmail Configuration Example (in config.csv):
name,value
smtp-server,smtp.gmail.com
smtp-port,587
imap-server,imap.gmail.com
imap-port,993
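Once these keys are set, SEND MAIL delivers through the external account. A minimal sketch with placeholder addresses:
' Verify external SMTP delivery
SEND MAIL "recipient@example.com", "SMTP test", "External SMTP is configured correctly."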
Local Services (BotModels)
Image Generation
| Property | Value |
|---|---|
| Service URL | http://localhost:5000 (default) |
| Config Keys | botmodels-enabled, botmodels-url |
| BASIC Keywords | IMAGE |
| Requires | BotModels service running |
Video Generation
| Property | Value |
|---|---|
| Service URL | http://localhost:5000 (default) |
| Config Keys | botmodels-enabled, botmodels-url |
| BASIC Keywords | VIDEO |
| Requires | BotModels service running, GPU recommended |
Audio Generation (TTS)
| Property | Value |
|---|---|
| Service URL | http://localhost:5000 (default) |
| Config Keys | botmodels-enabled, botmodels-url |
| BASIC Keywords | AUDIO |
| Requires | BotModels service running |
Vision/Captioning
| Property | Value |
|---|---|
| Service URL | http://localhost:5000 (default) |
| Config Keys | botmodels-enabled, botmodels-url |
| BASIC Keywords | SEE |
| Requires | BotModels service running |
Internal Services
These services are deployed locally as part of the General Bots stack. Credentials for most are provisioned through Vault; BotModels is configured via config.csv:
| Service | Default Port | Purpose | Management |
|---|---|---|---|
| PostgreSQL | 5432 | Primary database | Vault |
| Qdrant | 6333 | Vector storage for KB | Vault |
| Cache | 6379 | Caching | Vault |
| Stalwart | 25, 993 | Email server (optional) | Vault |
| BotModels | 5000 | AI model inference | config.csv |
Service Health Checks
All services can be checked via the monitoring API:
GET /api/monitoring/services
Response includes status for all configured external services.
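The same endpoint can be polled from BASIC with the documented GET keyword. A hedged sketch; the port, the alert address, and the "unavailable" marker in the response body are assumptions:
' health-check.bas
' Poll service health and alert on failures (host, address, and marker are illustrative)
SET SCHEDULE "health-check", "*/15 * * * *"
status = GET "http://localhost:8080/api/monitoring/services"
IF INSTR(status, "unavailable") > 0 THEN
SEND MAIL "ops@example.com", "Service health alert", status
END IF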
Troubleshooting
Common Issues
- API Key Invalid - Verify key in config.csv, ensure no trailing whitespace
- Rate Limited - Check service quotas, implement caching with SET BOT MEMORY
- Connection Timeout - Verify network access to external URLs
- Service Unavailable - Check service status pages
Debug Logging
Enable trace logging to see external API calls:
RUST_LOG=trace ./botserver
LLM Providers
General Bots supports multiple Large Language Model (LLM) providers, both cloud-based services and local deployments. This guide helps you choose the right provider for your use case.
Overview
LLMs are the intelligence behind General Bots’ conversational capabilities. You can configure:
- Cloud Providers — External APIs (OpenAI, Anthropic, Google, etc.)
- Local Models — Self-hosted models via llama.cpp
- Hybrid — Use local for simple tasks, cloud for complex reasoning
Cloud Providers
OpenAI (GPT Series)
The most widely known LLM provider, offering the GPT-5 flagship model.
| Model | Context | Best For | Speed |
|---|---|---|---|
| GPT-5 | 1M | All-in-one advanced reasoning | Medium |
| GPT-oss 120B | 128K | Open-weight, agent workflows | Medium |
| GPT-oss 20B | 128K | Cost-effective open-weight | Fast |
Configuration (config.csv):
name,value
llm-provider,openai
llm-model,gpt-5
Strengths:
- Most advanced all-in-one model
- Excellent general knowledge
- Strong code generation
- Good instruction following
Considerations:
- API costs can add up
- Data sent to external servers
- Rate limits apply
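With the provider configured, prompts flow through the documented LLM keyword. A minimal sketch:
' Ask the configured model a question
answer = LLM "Summarize our refund policy in two sentences."
TALK answer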
Anthropic (Claude Series)
Known for safety, helpfulness, and extended thinking capabilities.
| Model | Context | Best For | Speed |
|---|---|---|---|
| Claude Opus 4.5 | 200K | Most capable, complex reasoning | Slow |
| Claude Sonnet 4.5 | 200K | Best balance of capability/speed | Fast |
Configuration (config.csv):
name,value
llm-provider,anthropic
llm-model,claude-sonnet-4.5
Strengths:
- Extended thinking mode for multi-step tasks
- Excellent at following complex instructions
- Strong coding abilities
- Better at refusing harmful requests
Considerations:
- Premium pricing
- Newer provider, smaller ecosystem
Google (Gemini Series)
Google’s multimodal AI models with strong reasoning capabilities.
| Model | Context | Best For | Speed |
|---|---|---|---|
| Gemini Pro | 2M | Complex reasoning, benchmarks | Medium |
| Gemini Flash | 1M | Fast multimodal tasks | Fast |
Configuration (config.csv):
name,value
llm-provider,google
llm-model,gemini-pro
Strengths:
- Largest context window (2M tokens)
- Native multimodal (text, image, video, audio)
- Strong at structured data
- Good coding abilities
Considerations:
- Some features region-limited
- API changes more frequently
xAI (Grok Series)
Integration with real-time data from X platform.
| Model | Context | Best For | Speed |
|---|---|---|---|
| Grok 4 | 128K | Real-time research, analysis | Fast |
Configuration (config.csv):
name,value
llm-provider,xai
llm-model,grok-4
Strengths:
- Real-time data access from X
- Strong research and analysis
- Good for trend analysis
Considerations:
- Newer provider
- X platform integration focus
Groq
Ultra-fast inference using custom LPU hardware. Offers open-source models at high speed.
| Model | Context | Best For | Speed |
|---|---|---|---|
| Llama 4 Scout | 10M | Long context, multimodal | Very Fast |
| Llama 4 Maverick | 1M | Complex tasks | Very Fast |
| Qwen3 | 128K | Efficient MoE architecture | Extremely Fast |
Configuration (config.csv):
name,value
llm-provider,groq
llm-model,llama-4-scout
Strengths:
- Fastest inference speeds (500+ tokens/sec)
- Competitive pricing
- Open-source models
- Great for real-time applications
Considerations:
- Rate limits on free tier
- Models may be less capable than GPT-5/Claude
Mistral AI
European AI company offering efficient, open-weight models.
| Model | Context | Best For | Speed |
|---|---|---|---|
| Mixtral-8x22B | 64K | Multi-language, coding | Fast |
Configuration (config.csv):
name,value
llm-provider,mistral
llm-model,mixtral-8x22b
Strengths:
- European data sovereignty (GDPR)
- Excellent code generation
- Open-weight models available
- Competitive pricing
- Proficient in multiple languages
Considerations:
- Smaller context than competitors
- Less brand recognition
DeepSeek
Known for efficient, capable models with exceptional reasoning.
| Model | Context | Best For | Speed |
|---|---|---|---|
| DeepSeek-V3.1 | 128K | General purpose, optimized cost | Fast |
| DeepSeek-R3 | 128K | Reasoning, math, science | Medium |
Configuration (config.csv):
name,value
llm-provider,deepseek
llm-model,deepseek-r3
llm-server-url,https://api.deepseek.com
Strengths:
- Extremely cost-effective
- Strong reasoning (R3 model)
- Rivals proprietary leaders in performance
- Open-weight versions available (MIT/Apache 2.0)
Considerations:
- Data processed in China
- Newer provider
Local Models
Run models on your own hardware for privacy, cost control, and offline operation.
Setting Up Local LLM
General Bots uses llama.cpp server for local inference:
name,value
llm-provider,local
llm-server-url,http://localhost:8081
llm-model,DeepSeek-R3-Distill-Qwen-1.5B
Recommended Local Models
For High-End GPU (24GB+ VRAM)
| Model | Size | VRAM | Quality |
|---|---|---|---|
| Llama 4 Scout 17B Q8 | 18GB | 24GB | Excellent |
| Qwen3 72B Q4 | 42GB | 48GB+ | Excellent |
| DeepSeek-R3 32B Q4 | 20GB | 24GB | Very Good |
For Mid-Range GPU (12-16GB VRAM)
| Model | Size | VRAM | Quality |
|---|---|---|---|
| Qwen3 14B Q8 | 15GB | 16GB | Very Good |
| GPT-oss 20B Q4 | 12GB | 16GB | Very Good |
| DeepSeek-R3-Distill 14B Q4 | 8GB | 12GB | Good |
| Gemma 3 27B Q4 | 16GB | 16GB | Good |
For Small GPU or CPU (8GB VRAM or less)
| Model | Size | VRAM | Quality |
|---|---|---|---|
| DeepSeek-R3-Distill 1.5B Q4 | 1GB | 4GB | Basic |
| Gemma 2 9B Q4 | 5GB | 8GB | Acceptable |
| Gemma 3 27B Q2 | 10GB | 8GB | Acceptable |
Model Download URLs
Add models to installer.rs data_download_list:
// Qwen3 14B - Recommended for mid-range GPU
"https://huggingface.co/Qwen/Qwen3-14B-GGUF/resolve/main/qwen3-14b-q4_k_m.gguf"
// DeepSeek R3 Distill - For CPU or minimal GPU
"https://huggingface.co/unsloth/DeepSeek-R3-Distill-Qwen-1.5B-GGUF/resolve/main/DeepSeek-R3-Distill-Qwen-1.5B-Q4_K_M.gguf"
// GPT-oss 20B - Good balance for agents
"https://huggingface.co/openai/gpt-oss-20b-GGUF/resolve/main/gpt-oss-20b-q4_k_m.gguf"
// Gemma 3 27B - For quality local inference
"https://huggingface.co/google/gemma-3-27b-it-GGUF/resolve/main/gemma-3-27b-it-q4_k_m.gguf"
Embedding Models
For vector search, you need an embedding model:
name,value
embedding-provider,local
embedding-server-url,http://localhost:8082
embedding-model,bge-small-en-v1.5
Recommended embedding models:
| Model | Dimensions | Size | Quality |
|---|---|---|---|
| bge-small-en-v1.5 | 384 | 130MB | Good |
| bge-base-en-v1.5 | 768 | 440MB | Better |
| bge-large-en-v1.5 | 1024 | 1.3GB | Best |
| nomic-embed-text | 768 | 550MB | Good |
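After documents are embedded and indexed, the documented FIND keyword serves semantic queries against Qdrant. A hedged sketch; the collection name and the natural-language query form are assumptions:
' Search an indexed knowledge base collection (collection name is illustrative)
results = FIND "product-docs", "how do I reset my password"
TALK results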
Hybrid Configuration
Use different models for different tasks:
name,value
llm-provider,anthropic
llm-model,claude-sonnet-4.5
llm-fast-provider,groq
llm-fast-model,llama-3.3-70b
llm-fallback-provider,local
llm-fallback-model,DeepSeek-R3-Distill-Qwen-1.5B
embedding-provider,local
embedding-model,bge-small-en-v1.5
Model Selection Guide
By Use Case
| Use Case | Recommended | Why |
|---|---|---|
| Customer support | Claude Sonnet 4.5 | Best at following guidelines |
| Code generation | DeepSeek-R3, Claude Sonnet 4.5 | Specialized for code |
| Document analysis | Gemini Pro | 2M context window |
| Real-time chat | Groq Llama 3.3 | Fastest responses |
| Privacy-sensitive | Local DeepSeek-R3 | No external data transfer |
| Cost-sensitive | DeepSeek, Local models | Lowest cost per token |
| Complex reasoning | Claude Opus, Gemini Pro | Best reasoning ability |
| Real-time research | Grok | Live data access |
| Long context | Gemini Pro, Claude | Largest context windows |
By Budget
| Budget | Recommended Setup |
|---|---|
| Free | Local models only |
| Low ($10-50/mo) | Groq + Local fallback |
| Medium ($50-200/mo) | DeepSeek-V3.1 + Claude Sonnet 4.5 |
| High ($200+/mo) | GPT-5 + Claude Opus 4.5 |
| Enterprise | Private deployment + premium APIs |
Configuration Reference
config.csv Parameters
All LLM configuration belongs in config.csv, not environment variables:
| Parameter | Description | Example |
|---|---|---|
| llm-provider | Provider name | openai, anthropic, local |
| llm-model | Model identifier | gpt-5 |
| llm-server-url | API endpoint (local only) | http://localhost:8081 |
| llm-server-ctx-size | Context window size | 128000 |
| llm-temperature | Response randomness (0-2) | 0.7 |
| llm-max-tokens | Maximum response length | 4096 |
| llm-cache-enabled | Enable semantic caching | true |
| llm-cache-ttl | Cache time-to-live (seconds) | 3600 |
API Keys
API keys are stored in Vault, not in config files or environment variables:
# Store API key in Vault
vault kv put gbo/llm/openai api_key="sk-..."
vault kv put gbo/llm/anthropic api_key="sk-ant-..."
vault kv put gbo/llm/google api_key="AIza..."
Reference in config.csv:
name,value
llm-provider,openai
llm-model,gpt-5
llm-api-key,vault:gbo/llm/openai/api_key
Security Considerations
Cloud Providers
- API keys stored in Vault, never in config files
- Consider data residency requirements (EU: Mistral)
- Review provider data retention policies
- Use separate keys for production/development
Local Models
- All data stays on your infrastructure
- No internet required after model download
- Full control over model versions
- Consider GPU security for sensitive deployments
Performance Optimization
Caching
Enable semantic caching to reduce API calls:
name,value
llm-cache-enabled,true
llm-cache-ttl,3600
llm-cache-similarity-threshold,0.92
Batching
For bulk operations, use batch APIs when available:
name,value
llm-batch-enabled,true
llm-batch-size,10
Context Management
Optimize context window usage with episodic memory:
name,value
episodic-memory-enabled,true
episodic-memory-threshold,4
episodic-memory-history,2
episodic-memory-auto-summarize,true
See Episodic Memory for details.
Troubleshooting
Common Issues
API Key Invalid
- Verify key is stored correctly in Vault
- Check if key has required permissions
- Ensure billing is active on provider account
Model Not Found
- Check model name spelling
- Verify model is available in your region
- Some models require waitlist access
Rate Limits
- Implement exponential backoff
- Use caching to reduce calls
- Consider upgrading API tier
Local Model Slow
- Check GPU memory usage
- Reduce context size
- Use quantized models (Q4 instead of F16)
Logging
Enable LLM logging for debugging:
name,value
llm-log-requests,true
llm-log-responses,false
llm-log-timing,true
2025 Model Comparison
| Model | Creator | Type | Strengths |
|---|---|---|---|
| GPT-5 | OpenAI | Proprietary | Most advanced all-in-one |
| Claude Opus/Sonnet 4.5 | Anthropic | Proprietary | Extended thinking, complex reasoning |
| Gemini 3 Pro | Google | Proprietary | Benchmarks, reasoning |
| Grok 4 | xAI | Proprietary | Real-time X data |
| DeepSeek-V3.1/R3 | DeepSeek | Open (MIT/Apache) | Cost-optimized, reasoning |
| Llama 4 | Meta | Open-weight | 10M context, multimodal |
| Qwen3 | Alibaba | Open (Apache) | Efficient MoE |
| Mixtral-8x22B | Mistral | Open (Apache) | Multi-language, coding |
| GPT-oss | OpenAI | Open (Apache) | Agent workflows |
| Gemma 2/3 | Google | Open-weight | Lightweight, efficient |
Next Steps
- config.csv Reference — Complete configuration guide
- Secrets Management — Vault integration
- Semantic Caching — Cache configuration
- NVIDIA GPU Setup — GPU configuration for local models
Weather API Integration
The WEATHER and FORECAST keywords provide real-time weather information and multi-day forecasts using the OpenWeatherMap API.
Keywords Overview
| Keyword | Purpose |
|---|---|
| WEATHER | Get current weather conditions for a location |
| FORECAST | Get extended weather forecast for multiple days |
WEATHER
Retrieves current weather conditions for a specified location.
Syntax
result = WEATHER location
Parameters
| Parameter | Type | Description |
|---|---|---|
| location | String | City name, optionally with country code (e.g., “London” or “London,UK”) |
Return Value
Returns a formatted string containing:
- Temperature (current and feels-like)
- Weather conditions description
- Humidity percentage
- Wind speed and direction
- Visibility
- Atmospheric pressure
Example
' Get current weather for London
weather = WEATHER "London"
TALK weather
' Output:
' Current weather in London:
' 🌡️ Temperature: 15.2°C (feels like 14.5°C)
' ☁️ Conditions: Partly cloudy
' 💧 Humidity: 65%
' 💨 Wind: 3.5 m/s NE
' 🔍 Visibility: 10.0 km
' 📊 Pressure: 1013 hPa
FORECAST
Retrieves an extended weather forecast for multiple days.
Syntax
result = FORECAST location, days
Parameters
| Parameter | Type | Description |
|---|---|---|
| location | String | City name, optionally with country code |
| days | Integer | Number of days to forecast (1-5, default: 5) |
Example
' Get 5-day forecast for Paris
forecast = FORECAST "Paris,FR", 5
TALK forecast
' Output:
' Weather forecast for Paris:
'
' 📅 2024-03-15
' 🌡️ High: 18.5°C, Low: 12.3°C
' ☁️ Scattered clouds
' ☔ Rain chance: 20%
'
' 📅 2024-03-16
' 🌡️ High: 20.1°C, Low: 13.0°C
' ☁️ Clear sky
' ☔ Rain chance: 5%
' ...
Complete Example: Weather Bot
' weather-assistant.bas
' A conversational weather assistant
TALK "Hello! I can help you with weather information."
TALK "Which city would you like to know about?"
HEAR city
TALK "Would you like the current weather or a forecast?"
HEAR choice
IF INSTR(LOWER(choice), "forecast") > 0 THEN
TALK "How many days? (1-5)"
HEAR days
IF NOT IS_NUMERIC(days) THEN
days = 5
END IF
result = FORECAST city, days
TALK result
ELSE
result = WEATHER city
TALK result
END IF
TALK "Is there another city you'd like to check?"
Weather-Based Automation
' weather-alert.bas
' Send alerts based on weather conditions
cities = ["New York", "London", "Tokyo", "Sydney"]
FOR EACH city IN cities
weather = WEATHER city
' Check for extreme conditions
IF INSTR(weather, "storm") > 0 OR INSTR(weather, "heavy rain") > 0 THEN
SEND MAIL "alerts@company.com", "Weather Alert: " + city, weather
END IF
NEXT
Daily Weather Report
' daily-weather.bas
' Generate a daily weather report for multiple locations
locations = ["San Francisco,US", "Austin,US", "Seattle,US"]
report = "☀️ Daily Weather Report\n\n"
FOR EACH loc IN locations
weather = WEATHER loc
report = report + weather + "\n\n---\n\n"
NEXT
' Send the compiled report
SEND MAIL "team@company.com", "Daily Weather Update", report
Travel Planning Assistant
' travel-weather.bas
' Help users plan travel based on weather
TALK "Where are you planning to travel?"
HEAR destination
TALK "When are you planning to go? (Please provide a date)"
HEAR travel_date
' Get forecast for destination
forecast = FORECAST destination, 5
TALK "Here's the weather forecast for " + destination + ":"
TALK forecast
TALK "Based on the forecast, would you like packing suggestions?"
HEAR wants_suggestions
IF LOWER(wants_suggestions) = "yes" THEN
weather = WEATHER destination
IF INSTR(weather, "rain") > 0 THEN
TALK "🌂 Don't forget to pack an umbrella and rain jacket!"
END IF
IF INSTR(weather, "Temperature: 2") > 0 OR INSTR(weather, "Temperature: 3") > 0 THEN
TALK "🩳 It's warm! Pack light clothing and sunscreen."
ELSE IF INSTR(weather, "Temperature: 0") > 0 OR INSTR(weather, "Temperature: 1") > 0 THEN
TALK "🧥 It's cool. Bring a light jacket."
ELSE
TALK "🧣 It's cold! Pack warm layers and a coat."
END IF
END IF
Weather Data Structure
The WeatherData object returned internally contains:
| Field | Type | Description |
|---|---|---|
| location | String | Resolved location name |
| temperature | Float | Current temperature in Celsius |
| temperature_unit | String | Temperature unit (°C) |
| description | String | Weather condition description |
| humidity | Integer | Humidity percentage (0-100) |
| wind_speed | Float | Wind speed in m/s |
| wind_direction | String | Compass direction (N, NE, E, etc.) |
| feels_like | Float | “Feels like” temperature |
| pressure | Integer | Atmospheric pressure in hPa |
| visibility | Float | Visibility in kilometers |
| uv_index | Float (optional) | UV index if available |
| forecast | Array | Forecast data (for FORECAST keyword) |
Forecast Day Structure
Each forecast day contains:
| Field | Type | Description |
|---|---|---|
| date | String | Date in YYYY-MM-DD format |
| temp_high | Float | Maximum temperature |
| temp_low | Float | Minimum temperature |
| description | String | Weather conditions |
| rain_chance | Integer | Probability of precipitation (0-100%) |
Configuration
To use the weather keywords, configure your OpenWeatherMap API key in config.csv:
| Key | Description | Required |
|---|---|---|
| weather-api-key | OpenWeatherMap API key | Yes |
Getting an API Key
- Visit OpenWeatherMap
- Create a free account
- Navigate to “API Keys” in your dashboard
- Generate a new API key
- Add to your bot’s config.csv:
weather-api-key,your-api-key-here
Wind Direction Compass
Wind direction is converted from degrees to compass directions:
| Degrees | Direction |
|---|---|
| 0° | N (North) |
| 45° | NE (Northeast) |
| 90° | E (East) |
| 135° | SE (Southeast) |
| 180° | S (South) |
| 225° | SW (Southwest) |
| 270° | W (West) |
| 315° | NW (Northwest) |
Error Handling
' Handle weather API errors gracefully
ON ERROR GOTO weather_error
weather = WEATHER "Unknown City XYZ"
TALK weather
END
weather_error:
TALK "Sorry, I couldn't get weather information for that location."
TALK "Please check the city name and try again."
END
Rate Limits
The OpenWeatherMap free tier includes:
- 60 calls per minute
- 1,000,000 calls per month
For higher limits, consider upgrading to a paid plan.
Best Practices
- Use country codes: For accuracy, include country codes (e.g., “Paris,FR” instead of just “Paris”).
- Cache results: Weather data doesn’t change frequently; consider caching results for 10-15 minutes (see the sketch after this list).
- Handle timeouts: Weather API calls have a 10-second timeout. Handle failures gracefully.
- Validate locations: Check if the location is valid before making API calls.
- Localization: Consider user preferences for temperature units (Celsius vs Fahrenheit).
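As a sketch of practice 2, weather lookups can be memoized with the documented SET BOT MEMORY and GET BOT MEMORY keywords; the key naming scheme and argument forms are assumptions, and expiry is left to the memory's TTL:
' cached-weather.bas
' Reuse a stored result instead of calling the API again (key scheme is illustrative)
cached = GET BOT MEMORY "weather:" + city
IF cached <> "" THEN
TALK cached
ELSE
weather = WEATHER city
SET BOT MEMORY "weather:" + city, weather
TALK weather
END IF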
Fallback Behavior
If the OpenWeatherMap API is unavailable, the system will:
- Log the error
- Attempt a fallback weather service (if configured)
- Return a user-friendly error message
Related Keywords
- GET - Make custom HTTP requests to weather APIs
- SET SCHEDULE - Schedule regular weather checks
- SEND MAIL - Send weather alerts via email
- SEND SMS - Send weather alerts via SMS
See Also
- OpenWeatherMap API Documentation
- API Tool Generator - Create custom weather integrations
Attendance Queue Module
Human-attendant queue management for hybrid bot/human support workflows, plus CRM automations for follow-ups, collections, scheduling, and sales.
Overview
The attendance queue module manages handoffs from bot to human agents, tracking conversation queues, attendant availability, and real-time assignment. It also provides automated CRM workflows that run without human intervention.
Configuration
Create attendant.csv in your bot’s .gbai folder:
id,name,channel,preferences,department
att-001,John Smith,whatsapp,sales,commercial
att-002,Jane Doe,web,support,customer-service
att-003,Bob Wilson,all,technical,engineering
att-004,Maria Santos,whatsapp,collections,finance
Queue Status
| Status | Description |
|---|---|
| waiting | User waiting for attendant |
| assigned | Attendant assigned, not yet active |
| active | Conversation in progress |
| resolved | Conversation completed |
| abandoned | User left before assignment |
Attendant Status
| Status | Description |
|---|---|
| online | Available for new conversations |
| busy | Currently handling conversations |
| away | Temporarily unavailable |
| offline | Not working |
CRM Automations
The attendant module includes built-in CRM automations that handle common business workflows automatically.
Follow-Up Automation
Automated follow-up sequences for leads and customers.
' follow-up.bas
' Automated follow-up workflow
SET SCHEDULE "follow-ups", "0 9 * * 1-5"
' Find leads needing follow-up
leads_1_day = FIND "leads", "status='new' AND DATEDIFF(NOW(), last_contact) = 1"
leads_3_day = FIND "leads", "status='contacted' AND DATEDIFF(NOW(), last_contact) = 3"
leads_7_day = FIND "leads", "status='contacted' AND DATEDIFF(NOW(), last_contact) = 7"
' 1-day follow-up: Thank you message
FOR EACH lead IN leads_1_day
SEND TEMPLATE lead.phone, "follow_up_thanks", lead.name, lead.interest
UPDATE "leads", "id=" + lead.id, "contacted", NOW()
INSERT "activities", lead.id, "follow_up", "1-day thank you sent", NOW()
NEXT lead
' 3-day follow-up: Value proposition
FOR EACH lead IN leads_3_day
SEND TEMPLATE lead.phone, "follow_up_value", lead.name, lead.interest
UPDATE "leads", "id=" + lead.id, "nurturing", NOW()
INSERT "activities", lead.id, "follow_up", "3-day value prop sent", NOW()
NEXT lead
' 7-day follow-up: Special offer
FOR EACH lead IN leads_7_day
SEND TEMPLATE lead.phone, "follow_up_offer", lead.name, "10%"
UPDATE "leads", "id=" + lead.id, "offer_sent", NOW()
INSERT "activities", lead.id, "follow_up", "7-day offer sent", NOW()
' Alert sales team for hot leads
IF lead.score >= 70 THEN
attendant = FIND "attendants", "department='commercial' AND status='online'"
IF attendant THEN
SEND MAIL attendant.email, "Hot Lead Follow-up: " + lead.name, "Lead " + lead.name + " received 7-day offer. Score: " + lead.score
END IF
END IF
NEXT lead
PRINT "Follow-ups completed: " + UBOUND(leads_1_day) + " 1-day, " + UBOUND(leads_3_day) + " 3-day, " + UBOUND(leads_7_day) + " 7-day"
Collections Automation (Cobranças)
Automated payment reminders and collection workflow.
' collections.bas
' Automated payment collection workflow
SET SCHEDULE "collections", "0 8 * * 1-5"
' Find overdue invoices by age
due_today = FIND "invoices", "status='pending' AND due_date = CURDATE()"
overdue_3 = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) = 3"
overdue_7 = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) = 7"
overdue_15 = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) = 15"
overdue_30 = FIND "invoices", "status='pending' AND DATEDIFF(NOW(), due_date) >= 30"
' Due today: Friendly reminder via WhatsApp
FOR EACH invoice IN due_today
customer = FIND "customers", "id=" + invoice.customer_id
SEND TEMPLATE customer.phone, "payment_due_today", customer.name, invoice.id, invoice.amount
INSERT "collection_log", invoice.id, "reminder_due_today", NOW()
NEXT invoice
' 3 days overdue: First collection notice
FOR EACH invoice IN overdue_3
customer = FIND "customers", "id=" + invoice.customer_id
SEND TEMPLATE customer.phone, "payment_overdue_3", customer.name, invoice.id, invoice.amount
SEND MAIL customer.email, "Pagamento Pendente - Fatura #" + invoice.id, "Sua fatura está vencida há 3 dias. Valor: R$ " + invoice.amount
UPDATE "invoices", "id=" + invoice.id, "first_notice_sent", NOW()
INSERT "collection_log", invoice.id, "first_notice", NOW()
NEXT invoice
' 7 days overdue: Second notice with urgency
FOR EACH invoice IN overdue_7
customer = FIND "customers", "id=" + invoice.customer_id
SEND TEMPLATE customer.phone, "payment_overdue_7", customer.name, invoice.id, invoice.amount
UPDATE "invoices", "id=" + invoice.id, "second_notice_sent", NOW()
INSERT "collection_log", invoice.id, "second_notice", NOW()
' Notify collections team
SEND MAIL "cobranca@empresa.com", "Cobrança 7 dias: " + customer.name, "Cliente: " + customer.name + "\nFatura: " + invoice.id + "\nValor: R$ " + invoice.amount
NEXT invoice
' 15 days overdue: Final notice before action
FOR EACH invoice IN overdue_15
customer = FIND "customers", "id=" + invoice.customer_id
late_fee = invoice.amount * 0.02
interest = invoice.amount * 0.01 * 15
total_due = invoice.amount + late_fee + interest
SEND TEMPLATE customer.phone, "payment_final_notice", customer.name, invoice.id, total_due
UPDATE "invoices", "id=" + invoice.id, late_fee, interest, total_due, "final_notice_sent", NOW()
INSERT "collection_log", invoice.id, "final_notice", NOW()
' Assign to human attendant for follow-up call
attendant = FIND "attendants", "department='finance' AND status='online'"
IF attendant THEN
INSERT "queue", invoice.customer_id, attendant.id, "collection_call", "high", NOW()
END IF
NEXT invoice
' 30+ days overdue: Escalate to collections
FOR EACH invoice IN overdue_30
IF invoice.status <> "collections" THEN
customer = FIND "customers", "id=" + invoice.customer_id
UPDATE "invoices", "id=" + invoice.id, "collections", NOW()
UPDATE "customers", "id=" + customer.id, "suspended"
SEND MAIL "juridico@empresa.com", "Inadimplência 30+ dias: " + customer.name, "Cliente enviado para cobrança jurídica.\n\nCliente: " + customer.name + "\nFatura: " + invoice.id + "\nValor total: R$ " + invoice.total_due
INSERT "collection_log", invoice.id, "sent_to_collections", NOW()
END IF
NEXT invoice
PRINT "Collections processed: " + UBOUND(due_today) + " due today, " + UBOUND(overdue_30) + " sent to collections"
Scheduling Automation (Agendamentos)
Automated appointment scheduling and reminders.
' scheduling.bas
' Automated appointment scheduling and reminders
SET SCHEDULE "appointment-reminders", "0 7 * * *"
' Find appointments for today and tomorrow
today_appointments = FIND "appointments", "DATE(scheduled_at) = CURDATE() AND status='confirmed'"
tomorrow_appointments = FIND "appointments", "DATE(scheduled_at) = DATE_ADD(CURDATE(), INTERVAL 1 DAY) AND status='confirmed'"
' Send day-before reminders
FOR EACH appt IN tomorrow_appointments
customer = FIND "customers", "id=" + appt.customer_id
staff = FIND "staff", "id=" + appt.staff_id
appt_time = FORMAT(appt.scheduled_at, "HH:mm")
appt_date = FORMAT(appt.scheduled_at, "DD/MM/YYYY")
SEND TEMPLATE customer.phone, "appointment_reminder_24h", customer.name, appt.service, appt_date, appt_time, staff.name
UPDATE "appointments", "id=" + appt.id, "reminder_24h_sent", NOW()
NEXT appt
' Send same-day reminders (2 hours before)
FOR EACH appt IN today_appointments
IF DATEDIFF_HOURS(appt.scheduled_at, NOW()) <= 2 AND appt.reminder_2h_sent IS NULL THEN
customer = FIND "customers", "id=" + appt.customer_id
staff = FIND "staff", "id=" + appt.staff_id
appt_time = FORMAT(appt.scheduled_at, "HH:mm")
SEND TEMPLATE customer.phone, "appointment_reminder_2h", customer.name, appt.service, appt_time
UPDATE "appointments", "id=" + appt.id, "reminder_2h_sent", NOW()
' Notify staff
SEND TEMPLATE staff.phone, "staff_appointment_alert", staff.name, customer.name, appt.service, appt_time
END IF
NEXT appt
' Check for no-shows (30 min past appointment time)
past_appointments = FIND "appointments", "scheduled_at < DATE_SUB(NOW(), INTERVAL 30 MINUTE) AND status='confirmed'"
FOR EACH appt IN past_appointments
customer = FIND "customers", "id=" + appt.customer_id
UPDATE "appointments", "id=" + appt.id, "no_show"
INSERT "activities", appt.customer_id, "no_show", "Missed appointment: " + appt.service, NOW()
' Send reschedule offer
SEND TEMPLATE customer.phone, "missed_appointment", customer.name, appt.service
NEXT appt
PRINT "Reminders sent: " + UBOUND(tomorrow_appointments) + " for tomorrow, " + UBOUND(today_appointments) + " for today"
Sales Automation (Vendas)
Automated sales pipeline and lead scoring.
' sales-automation.bas
' Automated sales pipeline management
SET SCHEDULE "sales-automation", "0 8,14,18 * * 1-5"
' Score and prioritize leads
new_leads = FIND "leads", "score IS NULL OR score = 0"
FOR EACH lead IN new_leads
score = 0
' Score based on source
IF lead.source = "website" THEN score = score + 20
IF lead.source = "referral" THEN score = score + 30
IF lead.source = "campaign" THEN score = score + 15
' Score based on company size
IF lead.company_size = "enterprise" THEN score = score + 25
IF lead.company_size = "mid-market" THEN score = score + 20
IF lead.company_size = "small" THEN score = score + 10
' Score based on engagement
page_views = FIND "analytics", "lead_id=" + lead.id + " AND type='page_view'"
score = score + MIN(UBOUND(page_views) * 2, 20)
' Score based on email opens
email_opens = FIND "email_tracking", "lead_id=" + lead.id + " AND opened=true"
score = score + MIN(UBOUND(email_opens) * 5, 25)
UPDATE "leads", "id=" + lead.id, score, NOW()
NEXT lead
' Auto-assign hot leads to sales reps
hot_leads = FIND "leads", "score >= 70 AND assigned_to IS NULL"
FOR EACH lead IN hot_leads
' Round-robin assignment
available_reps = FIND "attendants", "department='commercial' AND status='online'"
IF UBOUND(available_reps) > 0 THEN
' Get rep with fewest active leads
rep = available_reps[0]
min_leads = 999
FOR EACH r IN available_reps
rep_leads = FIND "leads", "assigned_to='" + r.id + "' AND status NOT IN ('converted', 'lost')"
IF UBOUND(rep_leads) < min_leads THEN
min_leads = UBOUND(rep_leads)
rep = r
END IF
NEXT r
UPDATE "leads", "id=" + lead.id, rep.id, NOW()
' Notify sales rep via WhatsApp
SEND TEMPLATE rep.phone, "new_hot_lead", rep.name, lead.name, lead.company, lead.score
' Create follow-up task
CREATE TASK "Contact hot lead: " + lead.name, rep.email, NOW()
END IF
NEXT lead
' Move stale opportunities
stale_opportunities = FIND "opportunities", "DATEDIFF(NOW(), last_activity) > 14 AND stage NOT IN ('closed_won', 'closed_lost')"
FOR EACH opp IN stale_opportunities
owner = FIND "attendants", "id=" + opp.owner_id
' Send reminder to owner
SEND TEMPLATE owner.phone, "stale_opportunity", owner.name, opp.name, opp.amount, DATEDIFF(NOW(), opp.last_activity)
' Create urgent task
CREATE TASK "URGENT: Update stale opportunity - " + opp.name, owner.email, NOW()
INSERT "activities", opp.id, "stale_alert", "Opportunity marked as stale", NOW()
NEXT opp
' Generate daily pipeline report
pipeline = FIND "opportunities", "stage NOT IN ('closed_won', 'closed_lost')"
total_value = AGGREGATE "SUM", pipeline, "amount"
weighted_value = 0
FOR EACH opp IN pipeline
weighted_value = weighted_value + (opp.amount * opp.probability / 100)
NEXT opp
report = "📊 Pipeline Diário\n\n"
report = report + "Total Pipeline: R$ " + FORMAT(total_value, "#,##0.00") + "\n"
report = report + "Valor Ponderado: R$ " + FORMAT(weighted_value, "#,##0.00") + "\n"
report = report + "Oportunidades Ativas: " + UBOUND(pipeline) + "\n"
report = report + "Leads Quentes: " + UBOUND(hot_leads)
SEND MAIL "vendas@empresa.com", "Pipeline Diário - " + FORMAT(NOW(), "DD/MM/YYYY"), report
PRINT "Sales automation completed. Hot leads assigned: " + UBOUND(hot_leads)
REST API Endpoints
Queue Management
GET /api/queue
List conversations in queue.
POST /api/queue/assign
Assign conversation to attendant.
{
"session_id": "uuid",
"attendant_id": "uuid"
}
POST /api/queue/transfer
Transfer conversation between attendants.
{
"session_id": "uuid",
"from_attendant_id": "uuid",
"to_attendant_id": "uuid",
"reason": "Specialist needed"
}
Attendant Management
GET /api/attendants
List all attendants with stats.
PUT /api/attendants/{id}/status
Update attendant status.
{
"status": "online"
}
CRM Automation
GET /api/automation/status
Check automation job status.
POST /api/automation/trigger/
Manually trigger an automation job.
BASIC Keywords
Transfer to Human
' Transfer to any available human
TRANSFER TO HUMAN
' Transfer to specific department
TRANSFER TO HUMAN "sales"
' Transfer with priority
TRANSFER TO HUMAN "support", "high"
' Transfer with context
TRANSFER TO HUMAN "technical", "normal", "Customer needs help with API integration"
Create Lead
' Create lead from conversation
CREATE LEAD name, email, phone, source
' Create lead with company info
CREATE LEAD name, email, phone, "website", company, "enterprise"
Schedule Appointment
' Schedule appointment
BOOK customer_email, service, date, time, staff_id
' Schedule with duration
BOOK customer_email, "Consultation", "2025-01-20", "14:00", staff_id, 60
WhatsApp Templates
Configure these templates in your WhatsApp Business account:
| Template Name | Purpose | Variables |
|---|---|---|
| follow_up_thanks | 1-day follow-up | name, interest |
| follow_up_value | 3-day value proposition | name, interest |
| follow_up_offer | 7-day special offer | name, discount |
| payment_due_today | Payment due reminder | name, invoice_id, amount |
| payment_overdue_3 | 3-day overdue notice | name, invoice_id, amount |
| payment_overdue_7 | 7-day overdue notice | name, invoice_id, amount |
| payment_final_notice | 15-day final notice | name, invoice_id, total |
| appointment_reminder_24h | Day-before reminder | name, service, date, time, staff |
| appointment_reminder_2h | 2-hour reminder | name, service, time |
| missed_appointment | No-show reschedule | name, service |
| new_hot_lead | Hot lead alert for sales | rep_name, lead_name, company, score |
| stale_opportunity | Stale deal reminder | rep_name, deal_name, amount, days |
Hosting, DNS, and MDA Integration
General Bots integrates with hosting providers, DNS services, and Mail Delivery Agents (MDA) for complete platform deployment.
Overview
A complete General Bots deployment typically includes:
| Component | Purpose | Providers Supported |
|---|---|---|
| Hosting | Run BotServer | Any VPS, LXC, bare metal |
| DNS | Domain management | Namecheap, Cloudflare, Route53 |
| MDA | Email delivery | Stalwart, Postfix, external SMTP |
| AI/LLM | Language models | OpenAI, Anthropic, local models |
Namecheap Integration
General Bots can automatically manage DNS records via the Namecheap API.
Configuration
Add to your bot’s config.csv:
name,value
namecheap-api-user,your-username
namecheap-api-key,stored-in-vault
namecheap-username,your-username
namecheap-client-ip,your-server-ip
Note: API key is stored in Vault, not in config.csv. Only reference it by name.
Automatic DNS Setup
When deploying a new bot instance, General Bots can:
- Create A record pointing to your server
- Create MX records for email
- Create TXT records for SPF/DKIM/DMARC
- Create CNAME for www subdomain
BASIC Keywords for DNS
' Create DNS record
DNS SET "bot.example.com", "A", server_ip
' Create MX record for email
DNS SET "example.com", "MX", "mail.example.com", 10
' Create SPF record
DNS SET "example.com", "TXT", "v=spf1 mx a ip4:" + server_ip + " -all"
' List current records
records = DNS LIST "example.com"
Supported DNS Providers
| Provider | API Support | Auto-SSL |
|---|---|---|
| Namecheap | ✅ Full | ✅ Let’s Encrypt |
| Cloudflare | ✅ Full | ✅ Native |
| Route53 | ✅ Full | ✅ ACM |
| DigitalOcean | ✅ Full | ✅ Let’s Encrypt |
| Manual | Via config | Manual |
Hosting Options
VPS Providers
General Bots runs on any Linux VPS:
| Provider | Minimum Spec | Recommended |
|---|---|---|
| DigitalOcean | 2GB RAM, 1 vCPU | 4GB RAM, 2 vCPU |
| Linode | 2GB RAM, 1 vCPU | 4GB RAM, 2 vCPU |
| Vultr | 2GB RAM, 1 vCPU | 4GB RAM, 2 vCPU |
| Hetzner | 2GB RAM, 2 vCPU | 4GB RAM, 2 vCPU |
| AWS EC2 | t3.small | t3.medium |
| GCP | e2-small | e2-medium |
LXC Container Deployment
Recommended for production isolation:
# Create container
lxc launch ubuntu:22.04 botserver
# Configure resources
lxc config set botserver limits.memory 4GB
lxc config set botserver limits.cpu 2
# Forward ports
lxc config device add botserver http proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:8080
lxc config device add botserver https proxy listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:8443
# Set environment for Vault
lxc config set botserver environment.VAULT_ADDR="http://vault:8200"
# Deploy
lxc exec botserver -- ./botserver
Docker Deployment
version: '3.8'
services:
  botserver:
    image: generalbots/botserver:latest
    ports:
      - "8080:8080"
    environment:
      - VAULT_ADDR=http://vault:8200
    volumes:
      - ./bots:/app/bots
      - ./botserver-stack:/app/botserver-stack
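Start the stack in the background with the standard Compose command:
docker compose up -d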
MDA (Mail Delivery Agent) Integration
General Bots includes Stalwart mail server for complete email functionality.
Built-in Stalwart
Stalwart is automatically configured during bootstrap:
| Feature | Status |
|---|---|
| IMAP | ✅ Enabled |
| SMTP | ✅ Enabled |
| JMAP | ✅ Enabled |
| Spam filtering | ✅ SpamAssassin |
| Virus scanning | ✅ ClamAV |
| DKIM signing | ✅ Auto-configured |
Email Configuration
In config.csv:
name,value
email-domain,example.com
email-dkim-selector,mail
email-spam-threshold,5.0
email-max-size-mb,25
DNS Records for Email
Required DNS records (auto-created with Namecheap integration):
| Record | Type | Value |
|---|---|---|
| mail.example.com | A | Your server IP |
| example.com | MX | mail.example.com (priority 10) |
| example.com | TXT | v=spf1 mx a -all |
| mail._domainkey.example.com | TXT | DKIM public key |
| _dmarc.example.com | TXT | v=DMARC1; p=quarantine |
External SMTP
To use external email providers instead:
name,value
smtp-host,smtp.sendgrid.net
smtp-port,587
smtp-user,apikey
smtp-secure,tls
Credentials stored in Vault:
vault kv put secret/botserver/smtp password="your-api-key"
AI/LLM Integration
Supported Providers
| Provider | Models | Config Key |
|---|---|---|
| OpenAI | GPT-5, o3 | llm-url=https://api.openai.com/v1 |
| Anthropic | Claude Sonnet 4.5, Opus 4.5 | llm-url=https://api.anthropic.com |
| Groq | Llama 3.3, Mixtral | llm-url=https://api.groq.com/openai/v1 |
| DeepSeek | DeepSeek-V3, R1 | llm-url=https://api.deepseek.com |
| Local | Any GGUF | llm-url=http://localhost:8081 |
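For a hosted provider, set llm-url from the table above and keep the API key in Vault. A sketch for OpenAI (the model name and secret path are assumptions):
name,value
llm-url,https://api.openai.com/v1
llm-model,gpt-5
vault kv put secret/botserver/llm api_key="your-api-key"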
Local LLM Setup
Run local models with BotModels:
# Install BotModels
./botserver install llm
# Download a model
./botserver model download llama-3-8b
# Configure in config.csv
name,value
llm-url,http://localhost:8081
llm-model,llama-3-8b.gguf
llm-context-size,8192
llm-gpu-layers,35
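To confirm the local server is responding, query its OpenAI-compatible models endpoint (assuming a llama.cpp-style server as configured above):
curl http://localhost:8081/v1/models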
AI Features
| Feature | Description |
|---|---|
| Conversation | Natural language chat |
| RAG | Knowledge base search |
| Tool Calling | Automatic BASIC tool invocation |
| Embeddings | Document vectorization |
| Vision | Image analysis (multimodal models) |
| Voice | Speech-to-text, text-to-speech |
Complete Deployment Example
1. Provision Server
# On your VPS
wget https://github.com/GeneralBots/botserver/releases/latest/download/botserver
chmod +x botserver
2. Configure DNS (Namecheap)
' setup-dns.bas
domain = "mybot.example.com"
server_ip = "203.0.113.50"
DNS SET domain, "A", server_ip
DNS SET "mail." + domain, "A", server_ip
DNS SET domain, "MX", "mail." + domain, 10
DNS SET domain, "TXT", "v=spf1 mx a ip4:" + server_ip + " -all"
PRINT "DNS configured for " + domain
3. Start BotServer
./botserver
4. Configure SSL
# Auto-configured with Let's Encrypt
./botserver ssl enable mybot.example.com
5. Verify Email
' test-email.bas
SEND MAIL "test@gmail.com", "Test from General Bots", "Email is working!"
PRINT "Email sent successfully"
Troubleshooting
DNS Not Propagating
- Check Namecheap API credentials
- Verify client IP is whitelisted
- Wait up to 48 hours for propagation
- Use dig or nslookup to verify
Email Marked as Spam
- Verify SPF record is correct
- Check DKIM signature is valid
- Ensure DMARC policy is set
- Check IP reputation at mxtoolbox.com
SSL Certificate Errors
- Verify DNS A record points to server
- Check port 80 is accessible for ACME challenge
- Review Let’s Encrypt rate limits
- Check certificate expiry
LLM Connection Failed
- Verify llm-url in config.csv
- Check API key in Vault
- Test endpoint with curl
- Review BotServer logs
See Also
- LLM Providers — Detailed LLM configuration
- Storage — S3-compatible storage setup
- Directory — User authentication
- Channels — WhatsApp, Telegram, etc.
- Installation — Full installation guide
Time-Series Database Module
InfluxDB 3 integration for metrics, analytics, and operational data.
Overview
High-performance time-series storage supporting 2.5M+ points/sec ingestion with async batching.
Configuration
Add to config.csv:
influxdb-url,http://localhost:8086
influxdb-token,your-token
influxdb-org,pragmatismo
influxdb-bucket,metrics
Or environment variables:
INFLUXDB_URL=http://localhost:8086
INFLUXDB_TOKEN=your-token
INFLUXDB_ORG=pragmatismo
INFLUXDB_BUCKET=metrics
Metric Points
Structure:
| Field | Description |
|---|---|
| measurement | Metric name (e.g., “messages”, “response_time”) |
| tags | Indexed key-value pairs for filtering |
| fields | Actual metric values |
| timestamp | When the metric was recorded |
Built-in Metrics
| Measurement | Tags | Fields |
|---|---|---|
| messages | bot, channel, user | count |
| response_time | bot, endpoint | duration_ms |
| llm_tokens | bot, model, type | input, output, total |
| kb_queries | bot, collection | count, latency_ms |
| errors | bot, type, severity | count |
Usage in Rust
let client = TimeSeriesClient::new(config).await?;

client.write_point(
    MetricPoint::new("messages")
        .tag("bot", "sales-bot")
        .tag("channel", "whatsapp")
        .field_i64("count", 1)
).await?;
Querying
REST endpoint for analytics:
GET /api/analytics/timeseries/messages?range=24h
GET /api/analytics/timeseries/response_time?range=7d
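For example, from the command line (host and port assume a default single-server deployment):
curl "http://localhost:8080/api/analytics/timeseries/messages?range=24h"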
Installation
The timeseries_db component is installed via the package manager:
gb install timeseries_db
Ports: 8086 (HTTP API), 8083 (RPC)
NVIDIA GPU Module
System monitoring for NVIDIA GPU utilization and performance metrics.
Overview
This module provides GPU monitoring capabilities when NVIDIA hardware is available, useful for tracking resource usage during LLM inference and multimodal generation tasks.
Feature Flag
Enabled via Cargo feature:
[features]
nvidia = []
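Build BotServer with the feature enabled:
cargo build --release --features nvidia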
Functions
has_nvidia_gpu()
Check if NVIDIA GPU is available:
if nvidia::has_nvidia_gpu() {
    // GPU acceleration available
}
Returns true if nvidia-smi command succeeds.
get_gpu_utilization()
Get current GPU and memory utilization:
let util = nvidia::get_gpu_utilization()?;
let gpu_percent = util.get("gpu");    // GPU compute utilization %
let mem_percent = util.get("memory"); // GPU memory utilization %
get_system_metrics()
Get combined CPU and GPU metrics:
let metrics = nvidia::get_system_metrics()?;
println!("CPU: {}%", metrics.cpu_usage);
if let Some(gpu) = metrics.gpu_usage {
    println!("GPU: {}%", gpu);
}
SystemMetrics Struct
| Field | Type | Description |
|---|---|---|
| cpu_usage | f32 | CPU utilization percentage |
| gpu_usage | Option<f32> | GPU utilization (None if no NVIDIA GPU) |
Requirements
- NVIDIA GPU with driver installed
- nvidia-smi command available in PATH
Use Cases
- Monitor GPU during image/video generation
- Track resource usage for LLM inference
- Capacity planning for bot deployments
- Performance dashboards
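As an example of the last two use cases, here is a minimal sketch that feeds a GPU reading into the time-series module documented earlier; the measurement name, host tag, and field_f64 helper are assumptions:
// Record current GPU utilization as a metric point.
// `client` is the TimeSeriesClient from the Time-Series Database chapter.
let metrics = nvidia::get_system_metrics()?;
if let Some(gpu) = metrics.gpu_usage {
    client.write_point(
        MetricPoint::new("gpu_utilization")
            .tag("host", "botserver-1")
            .field_f64("percent", gpu as f64),
    ).await?;
}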
See Also
- Multimodal Module
- Time-Series Database - Store GPU metrics over time
Multimodal Module
Image, video, and audio generation with vision/captioning capabilities.
Overview
The multimodal module connects to BotModels server for AI-powered media generation and analysis.
BASIC Keywords
| Keyword | Purpose |
|---|---|
| IMAGE | Generate image from text prompt |
| VIDEO | Generate video from text prompt |
| AUDIO | Generate speech audio from text |
| SEE | Describe/caption an image or video |
IMAGE
Generate an image from a text prompt:
url = IMAGE "A sunset over mountains with a lake"
TALK "Here's your image: " + url
Timeout: 300 seconds (5 minutes)
VIDEO
Generate a video from a text prompt:
url = VIDEO "A cat playing with a ball of yarn"
TALK "Here's your video: " + url
Timeout: 600 seconds (10 minutes)
AUDIO
Generate speech audio from text:
url = AUDIO "Welcome to our service. How can I help you today?"
PLAY url
SEE
Get a description of an image or video:
description = SEE "path/to/image.jpg"
TALK "I see: " + description
Configuration
Add to config.csv:
botmodels-enabled,true
botmodels-host,localhost
botmodels-port,5000
botmodels-api-key,your-api-key
botmodels-use-https,false
Image Generation Config
botmodels-image-model,stable-diffusion
botmodels-image-steps,20
botmodels-image-width,512
botmodels-image-height,512
Video Generation Config
botmodels-video-model,text2video
botmodels-video-frames,16
botmodels-video-fps,8
BotModels Client
Rust API for direct integration:
let client = BotModelsClient::from_state(&state, &bot_id);

if client.is_enabled() {
    let image_url = client.generate_image("A beautiful garden").await?;
    let description = client.describe_image("path/to/photo.jpg").await?;
}
Available Methods
| Method | Description |
|---|---|
| generate_image(prompt) | Create image from text |
| generate_video(prompt) | Create video from text |
| generate_audio(text) | Create speech audio |
| describe_image(path) | Get image caption |
| describe_video(path) | Get video description |
| speech_to_text(audio_path) | Transcribe audio |
| health_check() | Check BotModels server status |
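A short sketch exercising two more methods from the table; the exact return types are assumptions, so check the crate for the real signatures:
// Verify the BotModels server is reachable before transcribing audio.
let client = BotModelsClient::from_state(&state, &bot_id);
if client.health_check().await.is_ok() {
    let transcript = client.speech_to_text("path/to/audio.wav").await?;
    println!("Transcript: {}", transcript);
}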
Response Structures
GenerationResponse
{
"status": "success",
"file_path": "/path/to/generated/file.png",
"generation_time": 12.5,
"error": null
}
DescribeResponse
{
"description": "A golden retriever playing fetch in a park",
"confidence": 0.92
}
Requirements
- BotModels server running (separate service)
- GPU recommended for generation tasks
- Sufficient disk space for generated media
See Also
- NVIDIA Module - GPU monitoring
- PLAY Keyword - Play generated audio
Console Module (XtreeUI)
Terminal-based admin interface for managing General Bots instances.
Overview
XtreeUI is a TUI (Terminal User Interface) for administering bots directly from the command line. It provides file browsing, log viewing, chat testing, and status monitoring in a single terminal window.
Feature Flag
Enabled via Cargo feature:
[features]
console = []
Panels
| Panel | Key | Description |
|---|---|---|
| File Tree | 1 | Browse bot files and packages |
| Editor | 2 | View/edit configuration files |
| Status | 3 | System status and metrics |
| Logs | 4 | Real-time log viewer |
| Chat | 5 | Test bot conversations |
Keyboard Navigation
| Key | Action |
|---|---|
| 1-5 | Switch between panels |
| Tab | Cycle panels |
| ↑/↓ | Navigate within panel |
| Enter | Select/open item |
| q | Quit console |
| ? | Show help |
Components
File Tree
Browse .gbai folder structure:
- View packages (.gbkb, .gbdialog, .gbtheme)
- Open config.csv for editing
- Navigate bot resources
Status Panel
Real-time system metrics:
- CPU/memory usage
- Active connections
- Bot status
- Database connectivity
Log Panel
Live log streaming with filtering:
- Error highlighting
- Log level filtering
- Search functionality
Chat Panel
Interactive bot testing:
- Send messages to bot
- View responses
- Debug conversation flow
Editor
Basic file editing:
- Syntax highlighting
- Save/reload files
- Config validation
Starting the Console
./botserver --console
Or programmatically:
let mut ui = XtreeUI::new();
ui.set_app_state(app_state);
ui.start_ui()?;
Progress Channel
Monitor background tasks:
let (tx, rx) = tokio::sync::mpsc::channel(100);
ui.set_progress_channel(rx);

// Send progress updates
tx.send(ProgressUpdate::new("Loading KB...", 50)).await;
Use Cases
- Server administration without web UI
- SSH-based remote management
- Development and debugging
- Headless server deployments
- Quick configuration changes
Conversation Examples Style Guide
Standard format for displaying bot-user conversations in documentation
Overview
All conversation examples in General Bots documentation use a WhatsApp-style chat format. This provides a consistent, familiar, and readable way to show bot interactions.
CSS Include
The styling is defined in /assets/wa-chat.css. Include it in your mdBook or HTML output.
Basic Structure
<div class="wa-chat">
<div class="wa-message bot">
<div class="wa-bubble">
<p>Bot message here</p>
<div class="wa-time">10:30</div>
</div>
</div>
<div class="wa-message user">
<div class="wa-bubble">
<p>User message here</p>
<div class="wa-time">10:31</div>
</div>
</div>
</div>
Message Types
Bot Message
<div class="wa-message bot">
<div class="wa-bubble">
<p>Hello! How can I help you today?</p>
<div class="wa-time">10:30</div>
</div>
</div>
User Message
<div class="wa-message user">
<div class="wa-bubble">
<p>What meetings do I have today?</p>
<div class="wa-time">10:31</div>
</div>
</div>
Formatting Within Messages
Multiple Paragraphs
<div class="wa-bubble">
<p>You have 2 meetings scheduled:</p>
<p>• 2:00 PM - Team Standup (30 min)</p>
<p>• 4:00 PM - Project Review (1 hour)</p>
<div class="wa-time">10:31</div>
</div>
Bold Text
<p><strong>Name:</strong> John Smith</p>
<p><strong>Email:</strong> john@example.com</p>
Emoji Usage
Emojis are encouraged to make conversations more expressive:
| Purpose | Emoji Examples |
|---|---|
| Success | ✅ ✓ 🎉 |
| Warning | ⚠️ ⚡ |
| Error | ❌ 🔴 |
| Info | ℹ️ 📋 |
| File | 📄 📁 📎 |
| Calendar | 📅 🗓️ |
| Email | 📧 ✉️ |
| Person | 👤 👥 |
| Time | 🕐 ⏱️ |
File Attachments
<div class="wa-message user">
<div class="wa-bubble">
<p>Here's the report</p>
<p>📎 quarterly-report.pdf</p>
<div class="wa-time">10:32</div>
</div>
</div>
Action Buttons (visual representation)
<p>[📧 Send] [✏️ Edit] [🗑 Discard]</p>
Complete Example
<div class="wa-chat">
<div class="wa-message user">
<div class="wa-bubble">
<p>Schedule a meeting with Sarah tomorrow at 2pm</p>
<div class="wa-time">10:30</div>
</div>
</div>
<div class="wa-message bot">
<div class="wa-bubble">
<p>✅ Meeting scheduled!</p>
<p>👥 Meeting with Sarah</p>
<p>📅 Tomorrow at 2:00 PM</p>
<p>⏱️ Duration: 1 hour</p>
<p>Invitation sent to Sarah.</p>
<div class="wa-time">10:30</div>
</div>
</div>
</div>
Variants
Full Width
Add wa-full-width class for wider conversations:
<div class="wa-chat wa-full-width">
...
</div>
Compact
Add wa-compact class for tighter spacing:
<div class="wa-chat wa-compact">
...
</div>
Hide Timestamps
Add wa-no-time class to hide timestamps:
<div class="wa-chat wa-no-time">
...
</div>
Best Practices
- Keep messages concise - Break long bot responses into multiple paragraphs
- Use consistent timestamps - Use realistic times (10:30, 10:31, etc.)
- Start with user context - Show what the user asked before the bot response
- Include visual feedback - Use emojis for status (✅, ❌, 📋)
- Show realistic flows - Include multi-turn conversations when appropriate
- Use semantic formatting - Bold for labels, lists for options
Files Using This Format
This format is used throughout the documentation:
- chapter-02/template-crm-contacts.md
- chapter-04-gbui/apps/*.md
- chapter-06-gbdialog/basic-vs-automation-tools.md
- And many more…
Documentation Style Standards
All interface layouts in this documentation use SVG-based wireframe representations for screenshots and diagrams. Conversation examples use the WhatsApp-style HTML format for consistent, visually appealing rendering.
Interface Wireframes (SVG)
All interface screenshots and layouts should use SVG wireframes located in /assets/.
Directory Structure
assets/
├── suite/
│ ├── chat-screen.svg
│ ├── drive-screen.svg
│ ├── calendar-screen.svg
│ ├── mail-screen.svg
│ ├── tasks-screen.svg
│ ├── meet-screen.svg
│ ├── live-monitoring-organism.svg
│ └── ...
├── chapter-01/
│ ├── bootstrap-process.svg
│ └── session-states.svg
└── chapter-04/
└── analytics-interface.svg
Referencing SVG Wireframes
Use standard HTML image syntax with responsive styling:
<img src="../assets/suite/chat-screen.svg" alt="Chat Interface" style="max-width: 100%; height: auto;">
Conversation Examples (WhatsApp Style)
All conversation examples throughout the book use the WhatsApp-style HTML format. This provides a familiar, visually consistent representation of bot interactions.
Standard Format
<div class="wa-chat">
<div class="wa-message user">
<div class="wa-bubble">
<p>User message goes here</p>
<div class="wa-time">10:30</div>
</div>
</div>
<div class="wa-message bot">
<div class="wa-bubble">
<p>Bot response goes here</p>
<div class="wa-time">10:30</div>
</div>
</div>
</div>
Complete Example
<div class="wa-chat">
<div class="wa-message bot">
<div class="wa-bubble">
<p>Hello! How can I help you today?</p>
<div class="wa-time">10:30</div>
</div>
</div>
<div class="wa-message user">
<div class="wa-bubble">
<p>I want to enroll in computer science</p>
<div class="wa-time">10:31</div>
</div>
</div>
<div class="wa-message bot">
<div class="wa-bubble">
<p>I'll help you enroll! What's your name?</p>
<div class="wa-time">10:31</div>
</div>
</div>
<div class="wa-message user">
<div class="wa-bubble">
<p>Sarah Chen</p>
<div class="wa-time">10:31</div>
</div>
</div>
<div class="wa-message bot">
<div class="wa-bubble">
<p>✅ Welcome to Computer Science, Sarah!</p>
<p>Your enrollment ID is: ENR-2025-0142</p>
<div class="wa-time">10:31</div>
</div>
</div>
</div>
Message Classes
| Class | Usage |
|---|---|
| wa-chat | Container for the conversation |
| wa-message | Individual message wrapper |
| wa-message user | User message (right-aligned, colored) |
| wa-message bot | Bot message (left-aligned) |
| wa-bubble | Message bubble with styling |
| wa-time | Timestamp display |
Formatting Guidelines
- User messages — Use the wa-message user class
- Bot messages — Use the wa-message bot class
- Timestamps — Include a wa-time div with realistic times
- Multi-line responses — Use separate <p> tags for each line
- Emojis — Use sparingly for status indicators (✅, ❌, 📧, 📅, 📁)
- Bold text — Use <strong> for emphasis
- Attachments — Indicate with 📎 emoji and filename
File Attachments Example
<div class="wa-chat">
<div class="wa-message user">
<div class="wa-bubble">
<p>Upload the quarterly report</p>
<p>📎 Q4-Report.pdf</p>
<div class="wa-time">10:30</div>
</div>
</div>
<div class="wa-message bot">
<div class="wa-bubble">
<p>✅ File uploaded successfully!</p>
<p>📄 Q4-Report.pdf (2.4 MB)</p>
<p>📁 Saved to: My Drive</p>
<div class="wa-time">10:30</div>
</div>
</div>
</div>
CSS Styling
The WhatsApp chat styling is defined in /assets/wa-chat.css and automatically included in the book build. The styles provide:
- Familiar messaging app appearance
- Proper alignment (user right, bot left)
- Bubble styling with shadows
- Responsive layout
- Timestamp formatting
When to Use Each Format
| Content Type | Format |
|---|---|
| Interface screenshots | SVG wireframe |
| System architecture | SVG diagram |
| Data flow diagrams | SVG diagram |
| Bot conversations | WhatsApp HTML |
| API examples | Code blocks |
| Configuration | Code blocks |
Global Conversation Style Reference
For all conversation examples throughout the book, follow the format established in chapter-06-gbdialog/basic-vs-automation-tools.md.
That document serves as the canonical reference for:
- Conversation formatting
- Multi-channel message representation
- Bot response styling
- User input examples
See Also
- Conversation Examples — Example patterns
- BASIC vs Automation Tools — Canonical conversation style
- Chapter 04 Apps — Interface documentation
Glossary
Quick lookup for General Bots terms. If you’re lost, start here.
A
A2A Protocol - Agent-to-Agent Protocol. Enables bots to communicate and delegate tasks to each other in multi-agent systems. Messages include request, response, broadcast, and delegate types.
ADD BOT - BASIC keyword to add a bot to the current session with triggers, tools, or schedules.
Argon2 - Password hashing algorithm used for secure credential storage. Makes brute-force attacks computationally infeasible.
Auto-Bootstrap - The automatic first-run process that installs and configures all dependencies: PostgreSQL, cache, storage, and LLM servers.
B
BASIC - The scripting language for General Bots dialogs. Inspired by the 1964 language, simplified for conversational AI. Powers all .bas scripts with keywords like TALK, HEAR, and LLM.
BM25 - Best Match 25. Sparse retrieval algorithm for keyword-based search. Used in hybrid RAG alongside dense (semantic) search.
BOOK - BASIC keyword to schedule calendar appointments.
Bot Memory - Persistent storage scoped to a single bot, shared across all users. Access with SET BOT MEMORY and GET BOT MEMORY.
Bot Package - A folder ending in .gbai containing everything to run a bot: scripts, documents, and configuration.
BotSession - The active conversation between user and bot. Tracks state, history, and context. Persists to database, cached for speed.
Bootstrap - Initial setup process that installs all dependencies automatically on first launch.
BROADCAST TO BOTS - BASIC keyword to send a message to all bots in the current session.
C
Cache - In-memory storage component for sessions, temporary data, and semantic caching. Provides sub-millisecond access times.
Collection - A folder of documents in .gbkb/ that becomes searchable knowledge. Each subfolder is a separate collection.
Chunking - The process of splitting documents into smaller pieces for embedding and retrieval. Default chunk size is optimized for context windows.
config.csv - The configuration file for each bot. Simple key-value pairs in CSV format. Lives in the .gbot/ folder.
Context - Information available to the LLM during a conversation. Includes history, knowledge base results, and user-provided context via SET CONTEXT.
Context Compaction - Automatic summarization of older conversation history to fit within token limits while preserving important information.
CREATE DRAFT - BASIC keyword to compose and save an email draft to the user’s mailbox.
CREATE TASK - BASIC keyword to create a task with assignee and due date.
D
DELEGATE TO BOT - BASIC keyword to send a task to another bot and optionally wait for a response.
Dense Search - Semantic search using vector embeddings. Finds content by meaning rather than exact keywords.
Dialog - A .bas script defining conversation flow. Contains BASIC code with keywords like TALK and HEAR.
Drive - Built-in S3-compatible object storage. Stores documents, templates, and uploads. Auto-installed during bootstrap.
E
Embedding - Text converted to numerical vectors for similarity search. Similar meanings produce similar vectors.
Embedding Model - Neural network that generates embeddings. Default is BGE, replaceable with any GGUF-compatible model.
Episodic Memory - Summaries of past conversations stored for long-term context. Automatically generated when conversations end.
Event Handler - BASIC code triggered by events. Use ON keyword with triggers like "login", "email", or cron expressions.
F
FIND - BASIC keyword to search database tables with filter criteria. Returns matching records.
FOR EACH - BASIC keyword for iterating over collections and query results.
G
.gbai - “General Bot AI” package folder. Contains the entire bot. Example: support.gbai/ becomes the bot at /support.
.gbdialog - Subfolder containing BASIC scripts. Must include start.bas as the entry point. Tools go in tools/ subdirectory.
.gbdrive - File storage configuration subfolder. Maps to Drive buckets for document management.
.gbkb - “Knowledge Base” subfolder. Each subdirectory becomes a searchable collection with automatic indexing.
.gbot - Configuration subfolder containing config.csv with bot settings.
.gbtheme - Optional UI customization subfolder for CSS, images, and HTML templates.
General Bots - Open-source enterprise conversational AI platform. Combines LLMs with structured dialogs, knowledge bases, and multi-channel support.
GET - BASIC keyword to retrieve data from APIs, files, or session variables.
GET BOT MEMORY - BASIC keyword to retrieve persistent bot-level data.
GET USER MEMORY - BASIC keyword to retrieve cross-session user data accessible from any bot.
GraphQL - Query language for APIs. Supported via the GRAPHQL keyword for complex data retrieval.
H
HEAR - BASIC keyword to wait for and capture user input. name = HEAR stores the response in a variable.
Hot Reload - Automatic reloading of BASIC scripts when files change. No restart needed.
Hybrid Search - RAG approach combining dense (semantic) and sparse (keyword) retrieval using Reciprocal Rank Fusion.
HTMX - Frontend library used for dynamic UI updates without full page reloads.
I
INSERT - BASIC keyword to add records to database tables.
Intent - What the user wants to accomplish. Detected from natural language via LLM classification.
K
Keyword - A BASIC command like TALK, HEAR, or LLM. More than 50 are available. Written in uppercase by convention.
Knowledge Base (KB) - Documents searchable by the bot. Organized in folders under .gbkb/. Activate with USE KB "foldername".
L
LiveKit - WebRTC platform used for video meetings in General Bots.
LLM - Large Language Model. The AI that powers natural conversation. Supports OpenAI, Anthropic, Groq, and local models via llama.cpp.
llama.cpp - C++ library for running LLM inference locally. Used for self-hosted model deployment.
Local-First - Architecture principle where everything runs locally by default. No cloud dependencies required.
M
MCP - Model Context Protocol. Standard format for defining tools that LLMs can call. Supported alongside OpenAI function format.
Memory - Data persistence system with four scopes: User Memory (cross-bot), Bot Memory (per-bot), Session Memory (temporary), and Episodic Memory (conversation summaries).
Model Routing - Dynamic selection of LLM models based on task requirements. Use USE MODEL "fast", "quality", "code", or "auto".
Multi-Agent - Architecture where multiple specialized bots collaborate on complex tasks.
Multi-Channel - Same bot works across WhatsApp, Telegram, Teams, Web, and other channels without modification.
N
No Forms - General Bots philosophy since 2017: people should converse, not fill forms. Conversations replace traditional UI forms.
O
ON - BASIC keyword to define event handlers for triggers, schedules, or webhooks.
OIDC - OpenID Connect. Authentication protocol handled by the Directory service (Zitadel).
P
Package Manager - Built-in system that installs bot packages. Drop a .gbai folder and it’s automatically loaded.
PARAM - Declares tool parameters. PARAM name, email means the tool needs these inputs. LLM collects them automatically.
PostgreSQL - The database for General Bots. Stores users, sessions, messages, and bot configuration. Auto-installed and auto-configured.
POST - BASIC keyword to make HTTP POST requests to external APIs.
Pragmatismo - Brazilian software company that created and maintains General Bots.
Q
Qdrant - Vector database for semantic search at scale. Optional component for large knowledge bases.
R
RAG - Retrieval-Augmented Generation. Pattern where relevant documents are retrieved and provided to the LLM as context.
Reranking - Optional LLM-based scoring of search results for improved relevance. Adds latency but improves quality.
Rhai - Rust scripting engine that powers the BASIC interpreter. Sandboxed and safe.
RRF - Reciprocal Rank Fusion. Algorithm for combining rankings from multiple search methods in hybrid RAG.
RUN PYTHON / JAVASCRIPT / BASH - BASIC keywords to execute code in sandboxed environments.
S
SAVE - BASIC keyword to write data to CSV files or database tables.
Script - A .bas file with BASIC code. start.bas is the entry point; other scripts are tools or utilities.
Semantic Cache - Caching system that matches similar (not just identical) queries to reuse LLM responses.
Semantic Search - Finding content by meaning rather than exact keywords. Powered by embeddings and vector similarity.
SEND MAIL - BASIC keyword to send emails with optional HTML and attachments.
Session - Active conversation state between user and bot. Expires after inactivity (default 30 minutes).
Session Memory - Temporary storage for the current conversation. Access with SET and GET.
SET - BASIC keyword to store values in session variables or update database records.
SET BOT MEMORY - BASIC keyword to store persistent bot-level data.
SET CONTEXT - BASIC keyword to add information to the LLM context. Influences all subsequent responses.
SET SCHEDULE - BASIC keyword for cron-based task scheduling. Accepts natural language like "every monday at 9am".
SET USER MEMORY - BASIC keyword to store cross-session user data accessible from any bot.
Sparse Search - Keyword-based search using algorithms like BM25. Excels at exact matches and rare terms.
SSE - Server-Sent Events. Used for real-time streaming of LLM responses.
Stalwart - Email server component providing IMAP/SMTP/JMAP support.
Suite - The complete General Bots workspace application with Chat, Drive, Tasks, Mail, Calendar, and other apps.
SWITCH - BASIC keyword for multi-way conditional branching.
T
TALK - BASIC keyword to send messages to the user. Supports text, markdown, and multimedia.
Template - Pre-built bot configuration in the templates/ folder. Copy and modify to create new bots.
Token - Unit of text for LLMs. Roughly 4 characters. Context windows are measured in tokens.
Tool - A .bas file the LLM can call automatically. Define with PARAM declarations and a DESCRIPTION. Place in the tools/ folder.
TRANSFER CONVERSATION - BASIC keyword to hand off the entire conversation to another bot.
U
UPDATE - BASIC keyword to modify existing database records.
USE KB - BASIC keyword to activate a knowledge base for semantic search. USE KB "policies" makes the policies collection searchable.
USE MODEL - BASIC keyword to switch LLM models. Options: "fast", "quality", "code", or "auto".
USE TOOL - BASIC keyword to enable a tool for LLM use. The AI determines when to call it.
User Memory - Persistent storage scoped to a user, accessible across all bots and sessions.
V
Vault - HashiCorp Vault. Secrets management service for storing credentials securely. Only VAULT_* environment variables are used.
Vector - Mathematical representation of meaning. Similar meanings produce similar vectors.
Vector Database - Database optimized for storing and searching embeddings. Qdrant is the default option.
W
WAIT - BASIC keyword to pause execution for a specified duration.
WEBHOOK - BASIC keyword to create HTTP endpoints that trigger bot actions.
WebSocket - Real-time connection for chat. Enables instant messaging without polling. Path: /ws.
Z
Zitadel - Identity and access management service. Handles authentication, users, and permissions.
Package Extensions
| Extension | Purpose |
|---|---|
| .gbai | Complete bot package |
| .gbdialog | BASIC scripts |
| .gbkb | Knowledge base documents |
| .gbot | Bot configuration |
| .gbtheme | UI customization |
| .gbdrive | File storage mapping |
| .bas | BASIC script file |
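A typical package layout combining these extensions (folder names are illustrative; see the .gbai entry above):
support.gbai/
├── support.gbdialog/
│   ├── start.bas
│   └── tools/
├── support.gbkb/
│   └── policies/
├── support.gbot/
│   └── config.csv
└── support.gbtheme/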
Common Confusions
“Do I need containers?” - No. BotServer installs everything directly or in optional LXC containers.
“What database?” - PostgreSQL, automatically installed and configured.
“What about scaling?” - Single server handles 1000+ concurrent users. Scale by running multiple instances.
“Is BASIC really BASIC?” - Inspired by BASIC, not strict implementation. Simplified and focused on conversations.
“Can I use TypeScript/Python/etc?” - BASIC handles conversation logic. Use RUN PYTHON/JAVASCRIPT for code execution, or integrate via REST API.
“Is it production-ready?” - Yes. Used in production since 2016, current Rust version since 2023.
Contact
Get in Touch
General Bots has been powering conversational AI solutions since 2018. We’re here to help you build intelligent automation and enhance customer engagement.
Support
For technical support and documentation:
- Documentation: https://pragmatismo.com.br
- GitHub: https://github.com/GeneralBots
- Email: support@pragmatismo.com.br
Business Inquiries
Interested in implementing General Bots for your organization?
Pragmatismo Inovações Ltda.
Avenida Rio Branco, 177, Sala 201 a 2201
Rio de Janeiro - Brasil
CNPJ: 40.293.841/0001-59
📞 Phone: +55 21 4040-2160 (24h)
📧 Email: info@pragmatismo.com.br
🌐 Website: pragmatismo.com.br
Schedule a Demo
Want to see General Bots in action? Our team can demonstrate:
- Custom bot development
- LLM integration
- Knowledge base setup
- Multi-channel deployment
- Enterprise automation
Schedule a meeting to explore how General Bots can transform your business operations.
Data Protection
Data Protection Officer (DPO)
Rodrigo Rodriguez
📧 security@pragmatismo.com.br
Open Source Community
General Bots is open source and we welcome contributions:
- Report issues on GitHub
- Submit pull requests
- Join discussions
- Share your use cases
Legal
- Terms of Service
- Privacy Policy
- DUNS Number: 926754884
General Bots® - Enterprise conversational AI platform
Copyright © 2016-2025 Pragmatismo Inovações Ltda.