Do a little work with Redis Clusters and you will see developers in multiple places trying to get password-protected Redis node instances to work in a cluster environment. The fact is, it's not supported, and it's not an option for good reason: there is a second back-channel data port that essentially makes the password AUTH meaningless. On top of that, a password on an in-memory key store means little to a capable hacker. If you can throw thousands of passwords at the instance in ONE SECOND, a brute-force attack is pretty easy. Maybe future versions of Redis will start to take password retries into consideration.

On the other hand, there are good reasons to put a password on any service. A few come to mind:

  1. You simply want to stop inadvertent prying eyes, such as an employee within the company who has access to the machine and the redis-cli command tool.
  2. Maybe you post your passwords on a sticky note next to the computer-room monitor, so the password itself is not the concern; you just want the person who has access to the machine, but not to the computer room, to stay out of the data.
  3. Multiple people work on the machine and you want to protect your instance so a co-developer doesn't accidentally access and delete your Redis node.

The list could go on much longer. One thing is for sure: even if you assign a password to your Redis instance, opening the port to the public opens you up to a hack. On the other hand, if it has a password and you are only using it for testing and development, maybe it's not a big deal. The better option is to tunnel to your Redis server over the internet with SSH, but that has its own issues.

One reason I chose to set up a cluster with AUTH passwords is that I wanted to build apps on my laptop, running locally on my laptop's node.js server, while connecting to my remote MySQL development/production database and to the remote memory key store in the same way. That way, in theory, you can develop and test a version on the laptop and then push it to the development EC2 server without any code changes, because both environments use the same MySQL database (connecting to the hostname mysql.mydomain.com) and the same Redis Cluster (connecting to the hostname redis.mydomain.com). There is no local cluster or node on my laptop during development.
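As a rough sketch of that idea (the environment variable names below are my own placeholders, not something from the actual project), the connection details can live in one small config module so the exact same application code runs on the laptop and on the EC2 instance:

    // config.js - shared connection settings (hypothetical env var names)
    // The same code runs on the laptop and on the EC2 box because both point
    // at the remote development hostnames; only the secrets come from the environment.
    module.exports = {
      mysql: {
        host: process.env.MYSQL_HOST || "mysql.mydomain.com",
        user: process.env.MYSQL_USER,
        password: process.env.MYSQL_PASSWORD, // never hard-code secrets
        database: process.env.MYSQL_DB
      },
      redis: {
        // any one (or more) of the six cluster nodes works as a seed node
        nodes: [{ host: process.env.REDIS_HOST || "redis.mydomain.com", port: 6101 }],
        password: process.env.REDIS_PASSWORD
      }
    };

Only the hostnames and secrets live here; everything else in the app stays identical between environments.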

With a Redis cluster environment, each Redis node instance has an additional back-channel communications port for the cluster bus. That port is the node's data port plus 10000, so a node sitting on port 6101 also has a cluster communications port of 16101. We never use it directly; it's only used by the Redis servers to talk to each other. So in my situation above, I will not open the communications ports to the public.

[Diagram: six-node Redis cluster]

Furthermore, why go to all this trouble if you are just working on a development application? Well, in theory, your development application will soon be a minimum viable product (MVP), and that won't happen, or will be much more difficult, if you build the application against a single memory-store instance and then have to transition to a whole new client API for production. It's better to develop the application the right way once. If you are building an application that will see widespread use, you know the cluster environment will be needed. It may come down to your development process, and some won't want to take this approach, but if you develop against a single node you should expect a multi-stage redevelopment when clusters are needed down the road, and that adds a few steps.

In my development scenario, I am running all six nodes on the same server, with three masters and three slaves. Again, that capacity is not needed until you start moving these nodes to additional EC2 instances or your application usage grows enough to demand it. With this design you can always add nodes later without changing application code.

So here are the steps to create the cluster:

  1. create a minimum of six Redis node instances on different hosts or ports, using the following changes to the Redis conf file. To do this I created a /redis/data directory and copied the initial install's 6379.conf file to a new file named after each port in the /etc/redis directory. Then change each file with the following settings (a node.js sketch that writes all six files appears after these steps):
    port 6101
    
    pidfile /redis/data/redis_pid_6101.pid
    logfile /redis/data/redis_log_6101.log
    
    dbfilename dump_6101.rdb
    appendfilename "appendonly_6101.aof"
    cluster-config-file nodes_6101.conf
    
    requirepass myWickedLong256CharacterHashPassword
    
    dir /redis/data
    
    protected-mode no
    appendonly yes
    cluster-enabled yes
    
    # USE CLUSTER SYS INSTALL DEFAULTS BELOW
    cluster-node-timeout 15000
    cluster-slave-validity-factor 10
    cluster-migration-barrier 1
    cluster-require-full-coverage yes
    
    # USE OTHER SYS INSTALL DEFAULTS

    * create a .conf file for each port 6101 – 6106

  2. start each node with the redis-server command
    /usr/local/bin/redis-server /etc/redis/6101.conf

    * start each port 6101 – 6106

  3. Now we need to hack the redis-trib.rb program with the following changes:
    This code change starts around line 57 and goes to line 125 of the script. You can cut, copy, and paste it as long as you replace the exact same section of code (I am using Redis version 3.2.6), or simply scan through my code for the added and changed lines, which are tagged with # tlr start/end comments.

    class ClusterNode
        def initialize(addr)
            s = addr.split(":")
            if s.length < 2
                puts "Invalid IP or Port (given as #{addr}) - use IP:Port format"
                exit 1
            end

            # tlr start
            pwd = nil
            if s.length == 3
                pwd = s.pop
            end
            # tlr end

            port = s.pop # removes port from split array
            ip = s.join(":") # if s.length > 1 here, it's IPv6, so restore address
            @r = nil
            @info = {}
            @info[:host] = ip
            @info[:port] = port
            @info[:slots] = {}
            @info[:migrating] = {}
            @info[:importing] = {}
            @info[:replicate] = false

            # tlr start
            @info[:password] = pwd
            # tlr end

            @dirty = false # True if we need to flush slots info into node.
            @friends = []
        end

        def friends
            @friends
        end

        def slots
            @info[:slots]
        end

        def has_flag?(flag)
            @info[:flags].index(flag)
        end

        def to_s
            "#{@info[:host]}:#{@info[:port]}"
        end

        def connect(o={})
            return if @r
            print "Connecting to node #{self}: " if $verbose
            STDOUT.flush
            begin
                # tlr start
                if @info[:password] != nil
                    @r = Redis.new(:host => @info[:host], :port => @info[:port], :timeout => 60, :password => @info[:password])
                    @r.ping
                else
                    @r = Redis.new(:host => @info[:host], :port => @info[:port], :timeout => 60)
                    @r.ping
                end
                # tlr end (the 2 lines in the else section are not changed from the original)
            rescue
                xputs "[ERR] Sorry, can't connect to node #{self}"
                exit 1 if o[:abort]
                @r = nil
            end
            xputs "OK" if $verbose
        end
  4. next run the redis-trib.rb program to combine your nodes into one cluster. This may be a super-long command, especially if you use 256-character passwords, but it works. (Enter it all on one line.)
    /redis/redis-3.2.6/src/redis-trib.rb create --replicas 1 
    127.0.0.1:6101:my256charPassword 127.0.0.1:6102:my256charPassword 
    127.0.0.1:6103:my256charPassword 127.0.0.1:6104:my256charPassword 
    127.0.0.1:6105:my256charPassword 127.0.0.1:6106:my256charPassword

    * I did notice this produced a few errors, as shown below, but they are simply errors from the script's verification steps; the nodes themselves are working fine.

    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    127.0.0.1:6101
    127.0.0.1:6102
    127.0.0.1:6103
    Adding replica 127.0.0.1:6104 to 127.0.0.1:6101
    Adding replica 127.0.0.1:6105 to 127.0.0.1:6102
    Adding replica 127.0.0.1:6106 to 127.0.0.1:6103
    
    (slot master/slave identifiers)
    
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join.....
    [ERR] Sorry, can't connect to node 127.0.0.1:6105
    [ERR] Sorry, can't connect to node 127.0.0.1:6106
    [ERR] Sorry, can't connect to node 127.0.0.1:6103
    [ERR] Sorry, can't connect to node 127.0.0.1:6102
    [ERR] Sorry, can't connect to node 127.0.0.1:6104
    >>> Performing Cluster Check (using node 127.0.0.1:6101)
    M: 4f531ed4bcfd058b688a8692138fbdcc01a9dc7e 127.0.0.1:6101
     slots:0-5460 (5461 slots) master
     0 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [ERR] Not all 16384 slots are covered by nodes.

    A few more edits would fix those warning errors. 🙂 Since this is a one-time command to initially set up your cluster, it's not an issue. To add nodes to your existing cluster in the future, you will use the redis-cli command-line tool with the CLUSTER MEET command.

  5. Confirm the cluster is working by setting a value with the redis-cli command.
    /usr/local/bin/redis-cli -c -p 6101 -a my256CharPassword
    SET foo bar
    GET foo
    CLUSTER SLOTS
    CLUSTER INFO

    You might notice foo gets pushed to a slot on the second master. Try SET a a and then SET z z. You can also connect to any of the six nodes (6101 – 6106) and verify the sets with a GET command (GET foo).
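Since the six conf files in step 1 differ only by port number, here is the node.js sketch mentioned in step 1. It is only a sketch under my assumptions (writable /etc/redis and /redis/data directories, and a placeholder password passed in through an environment variable); it writes just the settings listed above, so merge in any other settings you keep from the installed 6379.conf by hand, then run the commands it prints for step 2.

    // make-cluster-confs.js - writes the per-port conf files from step 1
    // and prints the matching redis-server start commands from step 2.
    const fs = require("fs");

    const PASSWORD = process.env.REDIS_PASSWORD || "myWickedLong256CharacterHashPassword";
    const PORTS = [6101, 6102, 6103, 6104, 6105, 6106];

    for (const port of PORTS) {
      // only the settings listed in step 1; all other settings fall back to
      // the Redis defaults unless you merge them in from your installed conf
      const conf = [
        `port ${port}`,
        `pidfile /redis/data/redis_pid_${port}.pid`,
        `logfile /redis/data/redis_log_${port}.log`,
        `dbfilename dump_${port}.rdb`,
        `appendfilename "appendonly_${port}.aof"`,
        `cluster-config-file nodes_${port}.conf`,
        `requirepass ${PASSWORD}`,
        `dir /redis/data`,
        `protected-mode no`,
        `appendonly yes`,
        `cluster-enabled yes`,
        `cluster-node-timeout 15000`,
        `cluster-slave-validity-factor 10`,
        `cluster-migration-barrier 1`,
        `cluster-require-full-coverage yes`,
        ``
      ].join("\n");

      fs.writeFileSync(`/etc/redis/${port}.conf`, conf);
      console.log(`/usr/local/bin/redis-server /etc/redis/${port}.conf`);
    }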

That’s all there is to it. You can open ports 6101 – 6106 to your local laptop and start developing on your local machine using the node.js ioredis client package on NPM at https://www.npmjs.com/package/ioredis
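Here is a minimal ioredis sketch of that client connection. The hostname is the placeholder from earlier, the password comes from an environment variable, and only one seed node is listed because ioredis discovers the rest of the cluster on its own:

    // app.js - connect to the password-protected cluster with ioredis
    const Redis = require("ioredis");

    const cluster = new Redis.Cluster(
      [{ host: process.env.REDIS_HOST || "redis.mydomain.com", port: 6101 }],
      {
        // redisOptions are applied to every node connection ioredis opens,
        // so AUTH happens against all six nodes, not just the seed node
        redisOptions: { password: process.env.REDIS_PASSWORD }
      }
    );

    cluster.on("error", (err) => console.error("cluster error:", err.message));

    async function main() {
      // the same checks as step 5, but from node.js
      await cluster.set("foo", "bar");        // lands on whichever master owns the slot
      console.log(await cluster.get("foo"));  // "bar", following a redirect if needed
      await cluster.quit();
    }

    main().catch(console.error);

The important detail is redisOptions.password: ioredis uses it for every node connection it opens, which is exactly what a password-protected cluster needs.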

P.S. Of course that’s not all! 🙂 Additional changes would still be needed, for example for the slave-to-master login with AUTH (each replica needs the masterauth setting in its conf file so it can authenticate with its master).
