Redis is a powerful in-memory data store commonly used for caching, session storage, and as a message broker. However, in some scenarios, using a single-node Redis instance on Kubernetes makes sense—especially when budget constraints and non-critical use cases are involved. In this blog post, we’ll explore how to set up a single-node Redis instance on Kubernetes using Helm, discuss its use cases, and highlight some of its pitfalls and optimizations.
Single-node Redis instances are ideal for:

- **Ephemeral Use Cases:** If the cache is cleared (e.g., during pod restarts), it won’t break the application, since the data can be regenerated or is non-critical.
- **Budget-Conscious Applications:** Without the overhead of clustering or persistence, it’s a cost-effective option for projects with tight budgets, especially compared to managed Redis solutions.
- **Non-Critical Caching Needs:** Suitable for caching tasks where data loss (due to pod restarts or evictions) doesn’t disrupt application functionality.
- **Web Applications:** Perfect for object caching or session caching where persistence isn’t required, such as caching HTML fragments, API responses, or user sessions.
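The caching use cases above all follow the same cache-aside flow: check the cache, fall back to the source on a miss, then store the result with a TTL. Here is a minimal Python sketch of that flow; it uses a small in-memory stand-in so it runs without a server, but with the real `redis` package the `get`/`setex` calls work the same way (the `get_html_fragment` helper and key naming are illustrative, not part of this chart):

```python
import time

class FakeRedis:
    """In-memory stand-in for a Redis client: key -> (value, expiry)."""
    def __init__(self):
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # TTL elapsed -> treat as a miss
            del self._store[key]
            return None
        return value

    def setex(self, key, ttl_seconds, value):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

def get_html_fragment(cache, page_id, render, ttl=60):
    """Cache-aside: return the cached fragment, or render and cache it."""
    key = f"fragment:{page_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    fragment = render(page_id)       # expensive work only on a miss
    cache.setex(key, ttl, fragment)  # safe to lose: regenerated on next miss
    return fragment

cache = FakeRedis()
calls = []
def render(page_id):
    calls.append(page_id)
    return f"<div>page {page_id}</div>"

first = get_html_fragment(cache, "home", render)
second = get_html_fragment(cache, "home", render)  # served from cache
print(first == second, len(calls))  # -> True 1
```

Because the cache is treated as disposable, losing the pod only costs a round of cache misses, which is exactly the ephemeral behavior this setup relies on.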
We will deploy a Redis instance on Kubernetes using Helm. This setup consists of:

- a ConfigMap that holds the `redis.conf` configuration,
- a Deployment that runs a single Redis pod with that config mounted,
- a Service that exposes Redis inside the cluster,
- and a `values.yaml` with the tunable defaults.
By default:

- Redis sets no memory limit (`maxmemory 0`), so it keeps allocating until the container hits its memory limit and gets OOM-killed.
- The default eviction policy is `noeviction`, so once memory is full, writes fail with errors instead of evicting old keys.

These issues can lead to instability, particularly in resource-constrained environments like Kubernetes pods.
To avoid these pitfalls, we set two parameters in `redis.conf`:

- `maxmemory`: Limits the memory Redis can use (e.g., 80% of the pod’s memory limit, leaving the rest as overhead).
- `maxmemory-policy`: Defines how Redis evicts data when the memory limit is reached.

Both settings are read from the mounted `redis.conf` by `redis-server` at startup. The `maxmemory-policy` parameter specifies how Redis handles memory exhaustion. For this setup, we use `allkeys-lfu` (Least Frequently Used), which evicts the least accessed keys first when memory is full. This ensures that frequently accessed data remains in the cache.
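To build intuition for what `allkeys-lfu` does, here is a toy Python model of an LFU cache. (Redis’s real implementation uses an approximate, probabilistic access counter rather than exact counts, but the eviction outcome is the same in spirit: the least-accessed key goes first.)

```python
class LFUModel:
    """Toy model of allkeys-lfu: on overflow, evict the key with the
    fewest recorded accesses. Not Redis's actual (probabilistic) LFU."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.hits = {}  # access count per key

    def get(self, key):
        if key in self.data:
            self.hits[key] += 1
            return self.data[key]
        return None

    def set(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.hits, key=self.hits.get)  # least-used key
            del self.data[victim]
            del self.hits[victim]
        self.data[key] = value
        self.hits.setdefault(key, 0)

cache = LFUModel(capacity=2)
cache.set("hot", 1)
cache.set("cold", 2)
for _ in range(5):
    cache.get("hot")   # "hot" is accessed frequently
cache.set("new", 3)    # cache full -> "cold" (least used) is evicted
print(cache.get("hot"), cache.get("cold"), cache.get("new"))  # -> 1 None 3
```

This is why `allkeys-lfu` suits read-heavy caches: popular fragments and sessions survive, while rarely touched entries are the first to go.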
For more information on Redis eviction policies, check out the Redis Documentation on maxmemory-policy.
Here’s an example Helm configuration for deploying a single-node Redis instance.
You can find all files on GitHub: https://github.com/odileon-net/examples/tree/main/kubernetes/redis/helm/helm-redis-in-memory
configmap.yaml

```yaml
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{.Release.Name}}-redis-cm
  labels:
    app: {{.Release.Name}}
data:
  redis.conf: |
    {{- with .Values.config }}
    {{- range $key, $value := . }}
    {{ $key }} {{ $value }}
    {{- end }}
    {{- end }}
```
deployment.yaml

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{.Release.Name}}
  labels:
    app: {{.Release.Name}}
spec:
  replicas: {{.Values.replicaCount | default 1}}
  revisionHistoryLimit: {{ .Values.revisionHistoryLimit | default 2 }}
  selector:
    matchLabels:
      app: {{.Release.Name}}
  template:
    metadata:
      labels:
        app: {{.Release.Name}}
    spec:
      containers:
        - name: {{.Release.Name}}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["redis-server", "/etc/redis/redis.conf"]
          volumeMounts:
            - name: redis-config-volume
              mountPath: /etc/redis
          imagePullPolicy: {{ .Values.containers.imagePullPolicy | default "IfNotPresent" }}
          ports:
            - containerPort: {{ .Values.containers.containerPort }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
      volumes:
        - name: redis-config-volume
          configMap:
            name: {{.Release.Name}}-redis-cm
```
service.yaml

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: {{.Release.Name}}
  labels:
    app: {{.Release.Name}}
spec:
  ports:
    - port: {{.Values.service.port}}
      targetPort: {{.Values.service.targetPort}}
  selector:
    app: {{.Release.Name}}
```
values.yaml

```yaml
---
replicaCount: 1
image:
  repository: redis
  tag: latest
service:
  port: 6379
  targetPort: 6379
containers:
  containerPort: 6379
  imagePullPolicy: "Always"
resources:
  limits:
    cpu: 10m
    memory: 75Mi
  requests:
    cpu: 3m
    memory: 25Mi
config:
  maxmemory: 50mb
  maxmemory-policy: allkeys-lfu
```
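One thing worth double-checking in `values.yaml` is the headroom between `maxmemory` and the pod’s memory limit: `maxmemory` only caps the dataset, while the limit must also cover Redis’s own overhead (the process, client buffers, allocator fragmentation). A quick Python sanity check, using the 50mb/75Mi figures from the values above (note that Redis treats the `mb` suffix as 1024², the same as Kubernetes’ `Mi`; the helper below is an illustrative sketch, not part of the chart):

```python
# Redis's "mb"/"gb" suffixes and Kubernetes' "Mi"/"Gi" both mean powers of 1024.
UNITS = {"mb": 1024**2, "mi": 1024**2, "gb": 1024**3, "gi": 1024**3}

def to_bytes(size):
    """Parse sizes like '50mb' (redis.conf) or '75Mi' (Kubernetes) into bytes."""
    size = size.strip().lower()
    for suffix, factor in UNITS.items():
        if size.endswith(suffix):
            return int(float(size[: -len(suffix)]) * factor)
    raise ValueError(f"unknown unit in {size!r}")

maxmemory = to_bytes("50mb")  # config.maxmemory from values.yaml
pod_limit = to_bytes("75Mi")  # resources.limits.memory from values.yaml

headroom = 1 - maxmemory / pod_limit
print(f"dataset may use {maxmemory / pod_limit:.0%} of the pod limit")  # -> 67%
# Keep a comfortable margin free for Redis overhead to avoid OOM kills.
assert headroom > 0.2, "maxmemory is too close to the pod's memory limit"
```

With these values the dataset can use about two thirds of the pod limit, leaving roughly a third for overhead, which is in line with the ~80%-of-limit guideline mentioned earlier.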
Keep in mind that while this is a simple and effective solution for its use case, there are important considerations to address. Your application must be capable of handling connection losses to the Redis instance.
Using a single-pod Redis setup means there are no replicas or Pod Disruption Budgets in place. As a result, if the node hosting the Redis pod is replaced or undergoes maintenance, the pod could be evicted without warning. This would cause a temporary loss of connection to Redis, potentially impacting your application’s functionality during that time.
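In practice, tolerating such outages can be as simple as wrapping every cache access so that a cache failure degrades to recomputing the value. The sketch below shows the pattern with a stub client so it runs without a server; in real code the exception to catch would be `redis.exceptions.ConnectionError`, and the helper name is illustrative:

```python
class CacheUnavailable(Exception):
    """Stands in for redis.exceptions.ConnectionError in this sketch."""

class DownRedis:
    """Simulates a Redis pod that was just evicted: every call fails."""
    def get(self, key):
        raise CacheUnavailable("connection refused")
    def setex(self, key, ttl, value):
        raise CacheUnavailable("connection refused")

def cached_or_compute(client, key, compute, ttl=300):
    """Treat the cache as optional: any cache failure falls back to
    computing the value directly, so the application keeps working
    while the Redis pod is being rescheduled."""
    try:
        value = client.get(key)
        if value is not None:
            return value
    except CacheUnavailable:
        pass  # cache read failed -> just recompute
    value = compute()
    try:
        client.setex(key, ttl, value)
    except CacheUnavailable:
        pass  # cache write failed -> nothing is lost, only slower
    return value

result = cached_or_compute(DownRedis(), "user:42", lambda: "fresh value")
print(result)  # -> fresh value (the app still works while Redis is down)
```

The trade-off is higher latency and load on the backing source during the outage, which is acceptable for the non-critical use cases this post targets.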
Deploying a single-node Redis instance on Kubernetes is a simple and cost-effective solution for many caching scenarios. By configuring memory limits and eviction policies, you can ensure stability and efficient resource usage, making it a great choice for non-critical, ephemeral caching needs. However, always evaluate your application’s requirements before deciding on this setup.
If you need high availability or persistence, consider alternative setups such as Redis replication or clustering, persistent volumes, or a managed Redis service from your favorite public cloud provider.
https://redis.io/docs/latest/operate/rs/databases/memory-performance/eviction-policy
https://github.com/odileon-net/examples/tree/main/kubernetes/redis/helm/helm-redis-in-memory