Boltbase

Lightweight workflow engine with distributed execution

Define workflows in YAML. Execute with a single binary. No database or message broker required. Ideal for VMs, containers, and bare metal.

Demo

CLI: Execute workflows from the command line.


Web UI: Monitor, control, and debug workflows visually.


Try It Live

Explore without installing: Live Demo

Credentials: demouser / demouser

Quick Start

Install

Linux / macOS (installer script):

bash
curl -L https://raw.githubusercontent.com/dagu-org/dagu/main/scripts/installer.sh | bash

Windows (PowerShell):

powershell
irm https://raw.githubusercontent.com/dagu-org/dagu/main/scripts/installer.ps1 | iex

Windows (cmd):

cmd
curl -fsSL https://raw.githubusercontent.com/dagu-org/dagu/main/scripts/installer.cmd -o installer.cmd && .\installer.cmd && del installer.cmd

Docker:

bash
docker run --rm -v ~/.boltbase:/var/lib/boltbase -p 8080:8080 ghcr.io/dagu-org/boltbase:latest boltbase start-all

Homebrew:

bash
brew install boltbase

npm:

bash
npm install -g --ignore-scripts=false @dagu-org/boltbase

Create a Workflow

bash
cat > hello.yaml << 'EOF'
steps:
  - command: echo "Hello from Boltbase!"
  - command: echo "Step 2"
EOF
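
Steps run in order by default. As a workflow grows, steps can be named and given explicit dependencies; a minimal sketch using the `name` and `depends` keys that also appear in the larger example further down (the commands themselves are illustrative):

yaml
steps:
  - name: fetch
    command: curl -fsSL https://example.com/data.json -o data.json
  - name: report
    command: wc -c data.json
    depends: fetch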

Run

bash
boltbase start hello.yaml

Start Web UI

bash
boltbase start-all

Visit http://localhost:8080

Key Capabilities

Capability            | Description
----------------------|------------
Nested Workflows      | Reusable sub-DAGs with full execution lineage tracking
Distributed Execution | Label-based worker routing with automatic service discovery
Error Handling        | Exponential backoff retries, lifecycle hooks, continue-on-failure
Step Types            | Shell, Docker, SSH, HTTP, JQ, Mail, and more
Observability         | Live logs, Gantt charts, Prometheus metrics, OpenTelemetry
Security              | Built-in RBAC with admin, manager, operator, and viewer roles
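
As a sketch of the error-handling row: per-step retries use the `retry_policy` keys shown in the example further down, while the `continue_on` key name for continue-on-failure is an assumption, not confirmed by this page:

yaml
steps:
  - name: flaky-fetch
    command: ./fetch.sh
    retry_policy:
      limit: 5          # retry up to 5 times
      interval_sec: 30  # seconds between attempts
  - name: optional-cleanup
    command: ./cleanup.sh
    continue_on:        # hypothetical key name for continue-on-failure
      failure: true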

Example

A data pipeline with scheduling, parallel execution, sub-workflows, and retry logic:

yaml
schedule: "0 2 * * *"  # every day at 02:00
type: graph

steps:
  - name: extract
    command: python extract.py --date=${DATE}
    output: RAW_DATA

  - name: transform
    call: transform-workflow
    params: "INPUT=${RAW_DATA}"
    depends: extract
    parallel:
      items: [customers, orders, products]

  - name: load
    command: python load.py
    depends: transform
    retry_policy:
      limit: 3
      interval_sec: 10

handler_on:
  success:
    command: notify.sh "Pipeline succeeded"
  failure:
    command: alert.sh "Pipeline failed"

See Examples for more patterns.

Use Cases

  • Data Pipelines - ETL/ELT with complex dependencies and parallel processing
  • ML Workflows - GPU/CPU worker routing for training and inference
  • Deployment Automation - Multi-environment rollouts with approval gates
  • Legacy Migration - Wrap existing scripts without rewriting them
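
For the legacy-migration case, an existing script is wrapped as a step without modification. A minimal sketch using only keys from the example above; the script path and schedule are illustrative:

yaml
schedule: "0 3 * * 0"  # weekly, Sunday 03:00
steps:
  - name: weekly-report
    command: /opt/legacy/run_report.sh  # existing script, unchanged
    retry_policy:
      limit: 2
      interval_sec: 60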

Quick Links: Overview | CLI | Web UI | API | Architecture

Learn More

  • Overview - What Boltbase is and how it works
  • Getting Started - Installation and your first workflow
  • Writing Workflows - Complete workflow authoring guide
  • YAML Reference - All configuration options
  • Features - Scheduling, queues, distributed execution
  • Configuration - Server, authentication, operations

Community

Released under the MIT License.