Creating a Berserk Cluster
Deploy Berserk to Kubernetes, configure ingest, and run your first query
In this guide you'll deploy Berserk to Kubernetes, ingest telemetry, and run your first query. You'll need helm, kubectl, and the bzrk CLI:
# Install bzrk
curl -fsSL https://go.bzrk.dev | bash
# Install helm and kubectl
brew install helm kubectl # macOS
pacman -S helm kubectl # Arch
apt install helm kubectl # Debian/Ubuntu

Don't have a Kubernetes cluster? We recommend k3d for running a local dev cluster.
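If you go the k3d route, a minimal local cluster can look like this (the cluster name and agent count here are arbitrary choices, not requirements):

```shell
# Create a small local cluster with one server and two agent nodes
k3d cluster create berserk-dev --agents 2

# k3d switches your kubectl context automatically; confirm the nodes are ready
kubectl get nodes
```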
Installation
Berserk needs a PostgreSQL instance (version 18+) and access to S3-compatible object storage.
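The PostgreSQL instance is passed to the chart as a single connection URL during the install step. A sketch of assembling it, with hypothetical host and credentials:

```shell
# Hypothetical values -- substitute your own
PGUSER=berserk
PGPASSWORD='s3cret'
PGHOST=postgres.internal
PGDATABASE=berserk

# Berserk takes a standard postgres:// connection URL
DATABASE_URL="postgres://${PGUSER}:${PGPASSWORD}@${PGHOST}:5432/${PGDATABASE}"
echo "$DATABASE_URL"
```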
Install with Helm
helm repo add berserk https://berserkdb.github.io/helm-charts
helm repo update

Download quick-start-values.yaml and update it with your S3 bucket:
curl -o quick-start-values.yaml https://raw.githubusercontent.com/berserkdb/helm-charts/refs/heads/main/examples/quick-start-values.yaml

The Helm chart can create the required Kubernetes secrets for you. Installing Berserk requires a GitHub token from the Berserk team, so reach out to get yours. Pass your S3, PostgreSQL, and image pull credentials inline:
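The exact keys in quick-start-values.yaml depend on the chart version, but the S3 section you edit typically looks something like the sketch below (the key names, bucket, region, and endpoint are all placeholders, not real chart values):

```yaml
# Sketch only -- check the downloaded quick-start-values.yaml for the real keys
global:
  s3:
    bucket: my-berserk-bucket
    region: us-east-1
    endpoint: https://s3.us-east-1.amazonaws.com
```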
helm install berserk berserk/berserk \
  --namespace bzrk \
  --create-namespace \
  --set global.s3Credentials.accessKeyId=YOUR_ACCESS_KEY \
  --set global.s3Credentials.secretAccessKey=YOUR_SECRET_KEY \
  --set global.postgresCredentials.databaseUrl="postgres://<user>:<password>@<host>:5432/<database>" \
  --set global.imageCredentials.password=ghp_xxxx \
  -f quick-start-values.yaml

Getting Data Into Berserk
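Before sending any telemetry, it's worth confirming the release came up cleanly (namespace and release name taken from the install command above):

```shell
# Check that the Berserk pods are running in the bzrk namespace
kubectl get pods --namespace bzrk

# Inspect the Helm release if anything looks off
helm status berserk --namespace bzrk
```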
Berserk receives telemetry from any OpenTelemetry Collector.
Add Berserk as an OTLP gRPC exporter in your OpenTelemetry Collector configuration:
exporters:
  otlp/berserk:
    endpoint: "http://<ingest-bzrk-service>:4317"
    tls:
      insecure: true

service:
  pipelines:
    traces:
      exporters: [otlp/berserk]
    logs:
      exporters: [otlp/berserk]
    metrics:
      exporters: [otlp/berserk]

Replace <ingest-bzrk-service> with the address of your Berserk ingest service.
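One way to find that address is to list the services the chart created (assuming the bzrk namespace from the install step; the exact service name depends on your release):

```shell
# List services and note the ingest service's name and OTLP gRPC port (4317)
kubectl get svc --namespace bzrk
```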
For our recommended configuration with disk-backed queues, retry policies, and tuned timeouts, see the OpenTelemetry Collector Configuration.
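As a rough sketch of what such a configuration involves, the stock OpenTelemetry Collector exporter settings below enable retries and a disk-backed queue via the file_storage extension; the numbers are illustrative defaults, not Berserk's recommended values:

```yaml
# Illustrative only; see the linked page for the recommended configuration
extensions:
  file_storage:
    directory: /var/lib/otelcol/queue

exporters:
  otlp/berserk:
    endpoint: "http://<ingest-bzrk-service>:4317"
    tls:
      insecure: true
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
    sending_queue:
      enabled: true
      storage: file_storage   # disk-backed queue survives collector restarts

service:
  extensions: [file_storage]
```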
Query Your Data
Once data is flowing, query it using the Berserk CLI:
bzrk search "default | take 10" --since "1h ago"

The dataset name is the first part of the query; here, default. From there you can use KQL to filter, project, and aggregate your data. See the Query Reference for the full query language documentation.
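A couple of further examples against the same default dataset. The operators are standard KQL; the field names (service.name, timestamp, duration) are placeholders for whatever your telemetry actually contains:

```shell
# Filter to one service, then keep only a few columns
bzrk search 'default | where ["service.name"] == "checkout" | project timestamp, duration | take 20' --since "1h ago"

# Count records per service over the window
bzrk search 'default | summarize count() by ["service.name"]' --since "24h ago"
```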
To organize data by service or environment, you can create additional datasets.
More configuration
See the helm-charts examples for ready-to-use configurations (minimal and production-ready) and for how to adjust the resource specs for the services.
If you manage secrets externally (e.g. via Vault, External Secrets Operator, or GitOps), see Managing Secrets for the expected secret formats.
For more configuration options and service details, see Cluster Admin.