This directory contains a Helm chart to deploy a three-node TimescaleDB cluster in a High Availability (HA) configuration on Kubernetes. This chart will do the following:
When deploying on AWS EKS:
When configured for Backups to S3:
To install the chart with the release name my-release, you first need to create a set of Kubernetes Secret objects that will contain:
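If you prefer not to use the helper script, the secrets can also be created by hand. Below is a minimal, hypothetical sketch for the superuser credential only; the secret name (my-release-credentials) and the PATRONI_SUPERUSER_PASSWORD key match what is used later in this guide, but verify the full set of required secrets and keys against the Administration Guide before applying.

```shell
# Hedged sketch: write a Secret manifest for the superuser password only.
# The secret name and key are assumptions based on what this guide uses
# elsewhere; the chart may require additional secrets.
RELEASE=my-release
PASSWORD=$(head -c 16 /dev/urandom | base64)
cat > "${RELEASE}-credentials.yaml" <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ${RELEASE}-credentials
type: Opaque
stringData:
  PATRONI_SUPERUSER_PASSWORD: "${PASSWORD}"
EOF
# Apply with: kubectl apply -f ${RELEASE}-credentials.yaml
```

The manifest approach keeps the secret definition reviewable before anything touches the cluster.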
This repo has a simple script that uses Kustomize to help you with this (See the Administration Guide for more details):
./generate_kustomization.sh my-release
Then you can install the chart with:
helm install --name my-release charts/timescaledb-single
You can override parameters using the --set key=value[,key=value] argument to helm install, e.g., to install the chart with backup enabled:
helm install --name my-release charts/timescaledb-single --set backup.enabled=true
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
helm install --name my-release -f myvalues.yaml charts/timescaledb-single
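As a minimal illustration, such a values file could look like the following. backup.enabled is the only parameter shown in this guide; treat any other keys you add as chart-specific and check them against the Administrator Guide.

```shell
# Sketch: create a myvalues.yaml that enables backups, equivalent to
# passing --set backup.enabled=true on the command line.
cat > myvalues.yaml <<'EOF'
backup:
  enabled: true
EOF
# Then install with:
# helm install --name my-release -f myvalues.yaml charts/timescaledb-single
```

A values file is easier to review and version-control than a long chain of --set flags.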
For details about the parameters you can set, have a look at the Administrator Guide.
We have a Helm Repository that you can use instead of cloning this Git repo.
First add the repository with:
helm repo add timescale 'https://charts.timescale.com'
NOTICE: Before installing the chart, you need to make sure that the required Kubernetes Secrets are created. You can do this with our helper script. Look at the Administrator Guide for more details.
The fastest way is to use the helper script packaged with the chart itself.
First pull the chart and unpack it:
helm pull timescale/timescaledb-single --untar
Then run the generate_kustomization.sh script:
cd ./timescaledb-single
bash ./generate_kustomization.sh my-release
The script will generate the configuration for the required secrets. It will prompt whether you want it to install the secrets directly, or to print out how to do it yourself after (p)reviewing the generated files.
And install the chart:
helm install --name my-release .
To keep the repo up to date with new versions, you can run:
helm repo update
To connect to the TimescaleDB instance, we first need to know which host to connect to. Use kubectl to get that information:
kubectl get service/my-release
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-release LoadBalancer 10.100.149.189 verylongname.example.com 5432:31294/TCP 27s
Using the External IP of the service (which routes through the LoadBalancer to the master), you can connect via psql as the superuser postgres:
PGPOSTGRESPASSWORD=$(kubectl get secret --namespace default my-release-credentials -o jsonpath="{.data.PATRONI_SUPERUSER_PASSWORD}" | base64 --decode)
PGPASSWORD=$PGPOSTGRESPASSWORD psql -h verylongname.example.com -U postgres
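The jsonpath output is piped through base64 --decode because Kubernetes stores Secret data base64-encoded. The round trip can be illustrated locally (the sample value here is made up):

```shell
# Illustration only: encoding then decoding round-trips the original value,
# which is what the kubectl get secret | base64 --decode pipeline relies on.
PASS='s3cr3t'
ENCODED=$(printf '%s' "$PASS" | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$DECODED"   # prints s3cr3t
```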
NOTICE: You may have to wait a few minutes before the DNS record can be resolved.
From here, you can start creating users and databases, for example, using the psql session from above:
CREATE USER example WITH PASSWORD 'thisIsInsecure';
CREATE DATABASE example OWNER example;
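If you prefer to script this bootstrap rather than typing it interactively, the same statements can be kept in a file and replayed (a sketch; the host name is the one used above):

```shell
# Sketch: store the bootstrap SQL in a file so it can be replayed with
#   psql -h verylongname.example.com -U postgres -f bootstrap.sql
cat > bootstrap.sql <<'EOF'
CREATE USER example WITH PASSWORD 'thisIsInsecure';
CREATE DATABASE example OWNER example;
EOF
```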
Connect to the example database with the example user:
psql -h verylongname.example.com -U example -d example
This should get you into the example database; from here on, you can follow our TimescaleDB > Getting Started guide to create hypertables and start using TimescaleDB.
To access the database from inside the cluster, you can run psql inside the Pod that contains the primary:
RELEASE=my-release
kubectl exec -ti $(kubectl get pod -o name -l role=master,release=$RELEASE) -- psql
Backups are disabled by default; see the Administrator Guide for how to configure the backup location, credentials, schedules, etc.
To remove the spawned pods, you can simply run:
helm delete my-release
Some items (PVCs and S3 backups, for example) are not immediately removed. To purge these items as well, have a look at the Administrator Guide.