Install a MySQL Cluster with Helm
Deploy MySQL replication quickly using Helm charts.
Helm packages complex YAML into charts, which is ideal for quick database deployments. The Bitnami MySQL chart gives you a ready-made replication topology with sane defaults, so you can stand up a primary and replicas without writing dozens of manifests by hand.
This quick start adds the practical steps most teams need: preparing a namespace, setting credentials, customizing values, verifying replication, and testing access from a temporary client Pod.
Prerequisites
- A Kubernetes cluster with a default StorageClass (verified below).
- helm and kubectl installed locally.
- At least 2-4 GB of memory available for a simple primary + replica setup.
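Before installing, confirm that a default StorageClass exists; without one, the chart's PersistentVolumeClaims will stay Pending:
kubectl get storageclass
The default class is marked (default) in the output.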
Create a namespace
Keep databases isolated from app workloads:
kubectl create namespace db
Install the chart (replication)
Add the Bitnami repo and install with replication enabled:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install mysql bitnami/mysql \
  --namespace db \
  --set architecture=replication \
  --set auth.rootPassword='ChangeMe123!' \
  --set primary.persistence.size=10Gi \
  --set secondary.replicaCount=1
If you prefer a values file, create values.yaml:
architecture: replication
auth:
  rootPassword: "ChangeMe123!"
  username: "app"
  password: "AppPassword123!"
  database: "appdb"
primary:
  persistence:
    size: 10Gi
secondary:
  replicaCount: 1
Apply it:
helm install mysql bitnami/mysql -n db -f values.yaml
Useful Helm commands
helm list -n db
helm status mysql -n db
helm get values mysql -n db
helm upgrade mysql bitnami/mysql -n db -f values.yaml
helm rollback mysql -n db 1
Verify the Pods and Services
kubectl get pods -n db
kubectl get svc -n db
You should see a primary and at least one secondary Pod. The chart typically exposes a primary service for writes and a read-only service for replicas.
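To narrow the output to this release, you can filter by the standard labels Bitnami charts apply (the instance label matches the release name):
kubectl get pods,svc -n db -l app.kubernetes.io/instance=mysql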
Connect with a temporary client
Launch a short-lived MySQL client Pod to validate connectivity:
kubectl run -it --rm mysql-client \
  --namespace db \
  --image=bitnami/mysql:8.0 \
  --command -- bash
Inside the Pod:
mysql -h mysql-primary.db.svc.cluster.local -u root -p
Create a table and insert a row:
CREATE DATABASE IF NOT EXISTS demo;
USE demo;
CREATE TABLE items (id INT PRIMARY KEY, name VARCHAR(64));
INSERT INTO items VALUES (1, 'hello');
Now query the replica service to confirm replication:
mysql -h mysql-secondary.db.svc.cluster.local -u root -p -e "SELECT * FROM demo.items;"
Service exposure
For production, you typically keep MySQL internal and only allow app Pods to access it. If you must access it from outside the cluster, use a port-forward first:
kubectl port-forward -n db svc/mysql-primary 3306:3306
Then connect locally with your MySQL client:
mysql -h 127.0.0.1 -P 3306 -u root -p
Read/write separation
The chart exposes separate Services for primary and secondary Pods. Use the primary for writes and the secondary for read-heavy queries. A typical pattern is to set two connection strings in your app:
MYSQL_WRITE_HOST=mysql-primary.db.svc.cluster.local
MYSQL_READ_HOST=mysql-secondary.db.svc.cluster.local
This lets you scale read replicas later without changing application logic.
If you are unsure, start with a single write host and keep reads on the primary. Once your app is stable, split reads and verify that read-only traffic is hitting the secondary Service.
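As a minimal sketch, here is how those two hosts might be injected into an app Deployment; the Deployment name, namespace, and image are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # placeholder name
  namespace: app          # placeholder namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: my-app:latest   # placeholder image
          env:
            - name: MYSQL_WRITE_HOST
              value: mysql-primary.db.svc.cluster.local
            - name: MYSQL_READ_HOST
              value: mysql-secondary.db.svc.cluster.local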
Resource requests and limits
Databases are sensitive to CPU throttling and IO latency. Start with explicit requests and limits so the scheduler places the Pods on nodes with enough capacity. Add this to values.yaml:
primary:
  resources:
    requests:
      cpu: "500m"
      memory: "1Gi"
    limits:
      cpu: "2"
      memory: "2Gi"
secondary:
  resources:
    requests:
      cpu: "250m"
      memory: "512Mi"
    limits:
      cpu: "1"
      memory: "1Gi"
Tune these based on your workload and the node size.
Init scripts and schema bootstrap
If you want to create tables or seed data on first start, use initdbScripts:
initdbScripts:
  00-schema.sql: |
    CREATE DATABASE IF NOT EXISTS appdb;
    USE appdb;
    CREATE TABLE users (
      id INT PRIMARY KEY,
      email VARCHAR(128) NOT NULL
    );
These scripts run only on first initialization, when the data directory is empty, so restarts and upgrades will not re-run them.
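After the first start, you can confirm the script ran:
kubectl exec -it -n db mysql-primary-0 -- mysql -u root -p -e "SHOW TABLES IN appdb;"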
Configuration overrides
MySQL needs tuning for production. You can override defaults via chart values:
primary:
  configuration: |-
    [mysqld]
    max_connections=500
    slow_query_log=1
    long_query_time=1
Apply changes with helm upgrade and watch the Pods roll. Note that primary.configuration replaces the chart's default my.cnf wholesale rather than merging with it, so carry over any defaults you still rely on.
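After the rollout, confirm the override took effect:
kubectl exec -it -n db mysql-primary-0 -- mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"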
Scaling replicas
If you need more read capacity, scale the secondaries:
helm upgrade mysql bitnami/mysql -n db --set secondary.replicaCount=3
Check replication health after scaling.
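One way to check health non-interactively is to read the root password from the chart's default Secret (named after the release, with a mysql-root-password key; adjust if you use an existing secret) and query a replica:
ROOT_PW=$(kubectl get secret -n db mysql -o jsonpath='{.data.mysql-root-password}' | base64 -d)
kubectl exec -n db mysql-secondary-0 -- \
  mysql -u root -p"$ROOT_PW" -e "SHOW SLAVE STATUS\G" | grep -E 'Running|Seconds_Behind'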
Monitoring basics
Enable metrics if you have Prometheus:
helm upgrade mysql bitnami/mysql -n db --set metrics.enabled=true
Track query latency, replication lag, and disk usage. Alerts on disk fullness are especially important for stateful workloads.
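If you run the Prometheus Operator, a minimal PrometheusRule for disk fullness might look like this; the threshold and label matching are assumptions to adapt to your setup:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: mysql-disk-alerts
  namespace: db
spec:
  groups:
    - name: mysql.rules
      rules:
        - alert: MySQLVolumeAlmostFull
          expr: kubelet_volume_stats_available_bytes{namespace="db"} / kubelet_volume_stats_capacity_bytes{namespace="db"} < 0.10
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: "A MySQL PVC in namespace db has less than 10% free space"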
Secret management
Avoid putting passwords on the command line in shared environments. You can pre-create a Kubernetes Secret and point the chart at it. For a replication setup, the chart expects the keys mysql-root-password, mysql-password, and mysql-replication-password:
kubectl -n db create secret generic mysql-auth \
  --from-literal=mysql-root-password='ChangeMe123!' \
  --from-literal=mysql-password='AppPassword123!' \
  --from-literal=mysql-replication-password='ReplPassword123!'
Then reference it in values.yaml:
auth:
  existingSecret: mysql-auth
  username: app
  database: appdb
This keeps sensitive values out of your Helm history.
User and privilege design
Avoid using root credentials in applications. Create a dedicated user with minimal grants, and scope it to the database it needs. If you set auth.username, auth.password, and auth.database in values, the chart will bootstrap that user for you.
For tighter security, restrict network access to the MySQL Pods, and rotate passwords regularly. Small habits here prevent large incidents later.
Review user grants periodically to ensure they still match your application scope. Log connection errors early; they often reveal mis-scoped privileges.
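As a minimal sketch, a scoped read-only user might look like this; the reporting user and password are placeholders:
CREATE USER IF NOT EXISTS 'reporting'@'%' IDENTIFIED BY 'ReportPassword123!';
GRANT SELECT ON appdb.* TO 'reporting'@'%';
FLUSH PRIVILEGES;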
Network policy (optional)
If your cluster enforces NetworkPolicy, restrict database access to app namespaces. A simple example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app
  namespace: db
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: mysql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: app
      ports:
        - protocol: TCP
          port: 3306
Adjust labels to match your namespaces and policies. On Kubernetes 1.21+, every namespace automatically carries the kubernetes.io/metadata.name label, which makes a reliable namespaceSelector.
Backup strategy
For early testing, a logical dump is enough:
kubectl exec -n db mysql-primary-0 -- \
  bash -c 'mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" appdb' > backup.sql
The Bitnami image exposes the root password through the MYSQL_ROOT_PASSWORD environment variable; a bare -p would hang waiting for an interactive prompt that this kubectl exec cannot provide.
In production, use scheduled backups and store them outside the cluster. Volume snapshots are useful, but verify restore procedures before you rely on them.
Backups and upgrades
Even for a quick cluster, define a backup path early. At minimum, decide how you will export data and where it will be stored. The Bitnami chart supports backup jobs and external object storage in advanced setups, but you can start with logical dumps and later move to a managed backup tool.
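When manual dumps become a chore, a CronJob can schedule them. This is a minimal sketch, assuming the chart's default Secret (named mysql, key mysql-root-password) and a PVC named mysql-backups that you create separately:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mysql-backup
  namespace: db
spec:
  schedule: "0 3 * * *"    # daily at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: dump
              image: bitnami/mysql:8.0
              command:
                - /bin/bash
                - -c
                - mysqldump -h mysql-primary.db.svc.cluster.local -u root -p"$MYSQL_ROOT_PASSWORD" appdb > /backups/appdb-$(date +%F).sql
              env:
                - name: MYSQL_ROOT_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: mysql              # assumes the default release Secret
                      key: mysql-root-password
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: mysql-backups       # assumed pre-created PVC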
When upgrading the chart, read the release notes and test in a staging namespace. Database upgrades can be disruptive if the chart changes default settings.
Operational workflow
Keep a simple runbook:
- Check Pods and PVCs daily for capacity and restarts.
- Test a restore in staging at least once per quarter.
- Pin chart versions in production to avoid surprise changes.
When you change configuration, prefer helm upgrade with a versioned values.yaml committed to git. It makes rollback predictable and audit-friendly.
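For example, pinning the chart version makes each upgrade explicit (the version number here is illustrative; list available versions with helm search repo bitnami/mysql --versions):
helm upgrade mysql bitnami/mysql -n db -f values.yaml --version 9.14.4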
Maintenance windows help. Schedule disruptive changes such as chart upgrades or storage migrations when your traffic is low. If you must change a setting that triggers a restart, warn application owners and validate query performance after the rollout.
Replication troubleshooting
If replicas fall behind, look at CPU and disk pressure first. You can also inspect replication status from the primary and secondary:
kubectl exec -it -n db mysql-primary-0 -- mysql -u root -p -e "SHOW MASTER STATUS\G"
kubectl exec -it -n db mysql-secondary-0 -- mysql -u root -p -e "SHOW SLAVE STATUS\G"
The -it flags are needed so the password prompt works. On MySQL 8.0.22 and later, SHOW REPLICA STATUS is the preferred spelling.
Check for large transactions, slow disk, or network issues between nodes.
Common issues
- Pods Pending: storage class missing or PVC cannot be bound.
- CrashLoopBackOff: invalid credentials or insufficient memory.
- Replication lag: node CPU or disk IO is saturated.
Quick diagnostics:
kubectl describe pod -n db <pod-name>
kubectl logs -n db <pod-name>
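For Pending Pods specifically, also inspect the PVCs and recent events:
kubectl get pvc -n db
kubectl get events -n db --sort-by=.lastTimestamp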
Practical notes
- Start with a quick inventory: kubectl get nodes, kubectl get pods -A, and kubectl get events -A.
- Compare desired vs. observed state; kubectl describe usually explains drift or failed controllers.
- Keep names, labels, and selectors consistent so Services and controllers can find Pods.
Quick checklist
- The resource matches the intent you described in YAML.
- Namespaces, RBAC, and images are correct for the target environment.
- Health checks and logs are in place before promotion.
- Backup and restore procedures are documented.