VANDEKERCKHOVE

A journey of ideas, insights, and inspiration.

Setting Up Single-Node In-Memory Redis on Kubernetes (Using Helm)

Redis is a powerful in-memory data store commonly used for caching, session storage, and as a message broker. However, in some scenarios, using a single-node Redis instance on Kubernetes makes sense—especially when budget constraints and non-critical use cases are involved. In this blog post, we’ll explore how to set up a single-node Redis instance on Kubernetes using Helm, discuss its use cases, and highlight some of its pitfalls and optimizations.

Why Single-Node Redis?

Single-node Redis instances are ideal for:

Ephemeral Use Cases: If the cache is cleared (e.g., during pod restarts), it won’t break the application, since the data can be regenerated or is non-critical.

Budget-Conscious Applications: Without the overhead of clustering or persistence, it’s a cost-effective option for projects with tight budgets, especially compared to managed Redis solutions.

Non-Critical Caching Needs: Suitable for caching tasks where data loss (due to pod restarts or evictions) doesn’t disrupt application functionality.

Web Applications: Perfect for object caching or session caching where persistence isn’t required, such as caching HTML fragments, API responses, or user sessions.

The Deployment Plan

We will deploy a Redis instance on Kubernetes using Helm. This setup consists of:

  1. A Deployment: Runs the Redis container.
  2. A Service: Exposes Redis within the cluster.
  3. A ConfigMap: Configures Redis settings, including limiting its memory usage and defining eviction policies.

The Problem with Default Redis Configuration

By default:

  • Redis does not limit memory usage, which can cause the container to exceed the pod’s memory limits and crash due to Out of Memory (OOM) errors.
  • Redis lacks an eviction policy out of the box, meaning old data won’t be removed when the memory limit is reached.

These issues can lead to instability, particularly in resource-constrained environments like Kubernetes pods.

Optimizing Redis Configuration

To avoid these pitfalls:

  1. Use a ConfigMap to define Redis settings, including:
    • maxmemory: Limits the memory Redis can use (e.g., ~80% of the pod’s memory limit, leaving headroom for Redis overhead).
    • maxmemory-policy: Defines how Redis evicts data when memory limits are reached.
  2. Mount the ConfigMap as a volume in the Redis container and pass it to redis-server at startup.

Redis Eviction Policy

The maxmemory-policy parameter specifies how Redis handles memory exhaustion. For this setup, we use allkeys-lfu (Least Frequently Used), which evicts the least accessed keys first when memory is full. This ensures that frequently accessed data remains in the cache.
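To make the idea behind allkeys-lfu concrete, here is a toy LFU cache in Python: when capacity is reached, the key with the fewest accesses is evicted. Note that this is only an illustration of the concept; Redis itself uses an approximated LFU with frequency counters and decay, not this exact bookkeeping.

```python
# Toy illustration of LFU eviction (the idea behind allkeys-lfu):
# when the cache is full, the least frequently accessed key is dropped.

class TinyLFU:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.hits = {}  # access count per key

    def get(self, key):
        if key in self.data:
            self.hits[key] += 1
            return self.data[key]
        return None

    def set(self, key, value):
        if key not in self.data and len(self.data) >= self.capacity:
            victim = min(self.hits, key=self.hits.get)  # least used key
            del self.data[victim], self.hits[victim]
        self.data[key] = value
        self.hits.setdefault(key, 0)

cache = TinyLFU(capacity=2)
cache.set("hot", 1)
cache.set("cold", 2)
cache.get("hot")           # "hot" now has one access, "cold" has none
cache.set("new", 3)        # cache full: "cold" is evicted
print(sorted(cache.data))  # ['hot', 'new']
```

This is exactly the property we want for a cache: the keys your application actually reads stay resident, while rarely touched entries make room for new data.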

For more information on Redis eviction policies, check out the Redis Documentation on maxmemory-policy.

The Helm Chart

Here’s an example Helm configuration for deploying a single-node Redis instance.
You can find all files on GitHub: https://github.com/odileon-net/examples/tree/main/kubernetes/redis/helm/helm-redis-in-memory

configmap.yaml
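A minimal sketch of the ConfigMap (the resource name and value paths are illustrative; refer to the linked repository for the actual file). It embeds a redis.conf with the two settings discussed above:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  redis.conf: |
    maxmemory {{ .Values.redis.maxmemory }}
    maxmemory-policy allkeys-lfu
```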

deployment.yaml
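A sketch of the Deployment (names, labels, and value paths are illustrative). The key parts are the single replica, the memory limit on the container, and the ConfigMap mounted as a volume so redis-server can be started with our config file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: redis
          image: "redis:{{ .Values.redis.tag }}"
          # Start Redis with the mounted config instead of the defaults
          command: ["redis-server", "/etc/redis/redis.conf"]
          ports:
            - containerPort: 6379
          resources:
            limits:
              memory: {{ .Values.resources.limits.memory }}
          volumeMounts:
            - name: config
              mountPath: /etc/redis
      volumes:
        - name: config
          configMap:
            name: {{ .Release.Name }}-config
```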

service.yaml
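A sketch of the Service, exposing Redis on its default port 6379 to other workloads inside the cluster (again, names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    app: {{ .Release.Name }}
  ports:
    - port: 6379
      targetPort: 6379
```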

values.yaml
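A sketch of the values file (the specific numbers are examples, not recommendations). Note how maxmemory is set to roughly 80% of the pod’s memory limit, leaving headroom for Redis overhead:

```yaml
redis:
  tag: "7.4"
  maxmemory: 400mb   # ~80% of the 512Mi pod memory limit below
resources:
  limits:
    memory: 512Mi
```

Once the files are in place, the chart can be installed as usual with helm install, pointing at the chart directory.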

Caution!

Keep in mind that while this is a simple and effective solution for its use case, there are important considerations to address. Your application must be capable of handling connection losses to the Redis instance.

Using a single-pod Redis setup means there are no replicas or Pod Disruption Budgets in place. As a result, if the node hosting the Redis pod is replaced or undergoes maintenance, the pod could be evicted without warning. This would cause a temporary loss of connection to Redis, potentially impacting your application’s functionality during that time.
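What “handling connection losses” can look like in practice is a cache-aside pattern that degrades gracefully: on a cache outage, the application serves the value from its source instead of failing. Here is a minimal Python sketch; FlakyCache is a toy stand-in for a real client (redis-py would raise its own ConnectionError instead):

```python
# Cache-aside pattern that tolerates an unavailable cache.

class CacheUnavailable(Exception):
    """Raised by the stand-in client when the cache cannot be reached."""

class FlakyCache:
    """Toy in-memory stand-in for a Redis client that can simulate an outage."""
    def __init__(self):
        self.store = {}
        self.down = False
    def get(self, key):
        if self.down:
            raise CacheUnavailable()
        return self.store.get(key)
    def set(self, key, value):
        if self.down:
            raise CacheUnavailable()
        self.store[key] = value

def get_with_fallback(cache, key, compute):
    """Try the cache first; on a miss or an outage, recompute the value."""
    try:
        value = cache.get(key)
        if value is not None:
            return value
    except CacheUnavailable:
        return compute()          # cache down: serve from the source
    value = compute()
    try:
        cache.set(key, value)     # best-effort write-back
    except CacheUnavailable:
        pass
    return value

cache = FlakyCache()
print(get_with_fallback(cache, "page", lambda: "rendered"))  # miss -> compute
cache.down = True
print(get_with_fallback(cache, "page", lambda: "rendered"))  # outage -> compute
```

With this pattern, an evicted Redis pod costs you latency while the cache is gone, not correctness.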

Conclusion

Deploying a single-node Redis instance on Kubernetes is a simple and cost-effective solution for many caching scenarios. By configuring memory limits and eviction policies, you can ensure stability and efficient resource usage, making it a great choice for non-critical, ephemeral caching needs. However, always evaluate your application’s requirements before deciding on this setup.

If you need high availability or persistence, consider alternatives such as Redis clustering, persistent volumes, or a managed Redis service from your favorite public cloud provider.

Sources

https://redis.io/docs/latest/operate/rs/databases/memory-performance/eviction-policy

https://github.com/odileon-net/examples/tree/main/kubernetes/redis/helm/helm-redis-in-memory
