Todd Rodzen

Agile Application Development, DevOps, Cloud Architecture Engineering

Yet another UBUNTU EC2 server config

Here is another Linux (Ubuntu) AWS EC2 config with multiple nginx virtual servers and SSL using Let’s Encrypt free certificates and Certbot. For this, I spin up an Amazon Ubuntu 16.04 LTS AMI and use a PuTTY terminal to SSH into the system as the ubuntu user with the private key.

sudo -i
apt update -y
apt upgrade
# install nginx
apt install -y nginx
# auto start nginx
systemctl enable nginx.service
# add ubuntu to the root and www-data groups
usermod -a -G root ubuntu
usermod -a -G www-data ubuntu
# give root group write permissions to nginx conf file
chmod 664 /etc/nginx/nginx.conf
# create directory for nginx domain configs
mkdir -p /www/nginx-conf/domains
chmod -R 2775 /www

Modify the /etc/nginx/nginx.conf file to include the following line at the bottom of the http section. The default server block can also be removed.

# add to the bottom of the http section of the /etc/nginx/nginx.conf file
include /www/nginx-conf/domains/*.conf;

Now create an example_com.conf file in /www/nginx-conf/domains/ for each domain. (Replace example.com with your domain.)

# save as /www/nginx-conf/domains/example_com.conf file
server {
 listen 80;
 listen 443 ssl;
 server_name www.example.com;

 ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
 ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

 return 302 $scheme://example.com$request_uri;
}

server {
 listen 80;
 listen 443 ssl;
 server_name example.com;
 root /www/html/example.com;
 include /www/nginx-conf/global_restrictions.conf;

 ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
 ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

 location / {
  try_files $uri $uri/ /index.php?$args;
  index index.php index.html index.htm;
 }

 location ~ \.php$ {
  fastcgi_pass unix:/run/php/php7.0-fpm.sock;
  include snippets/fastcgi-php.conf;
 }
}

INSTALL SSL/TLS

Create a CertBot conf file for each domain.

# save as /www/letsencrypt/example_com.conf file

# domains to retrieve certificate
domains = example.com,www.example.com

# increase key size
rsa-key-size = 4096

# the CA endpoint server
server = https://acme-v01.api.letsencrypt.org/directory

# the email to receive renewal reminders
email = letsencrypt@example.com

# turn off the ncurses UI, we want this to be run as a cronjob
text = True

Now install Certbot for Let’s Encrypt certificates from the Certbot PPA.

# install certbot
add-apt-repository ppa:certbot/certbot
apt update
apt upgrade
apt install python-certbot-nginx
# run certbot for each domain
certbot --standalone --config /www/letsencrypt/example_com.conf certonly

# allow write to cron file
chmod 664 /etc/crontab

Add the following line to the /etc/crontab file. This will run the Certbot certificate renewal every day at 8 AM. By default, Let’s Encrypt certificates last 90 days and must be renewed.

0 8 * * * root certbot renew --no-self-upgrade
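Before relying on the cron job, you can sanity-check renewal with Certbot’s dry run, which exercises the full renewal path against the staging endpoint without saving certificates:

certbot renew --dry-run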

INSTALL PHP and MYSQL/MariaDB

Since this server is only serving nginx and Node, I will not install the Apache httpd server. These packages install from the standard Ubuntu repositories.

# install php-fpm (for nginx) and mysql/mariadb
apt install -y php-fpm mariadb-server php-mysql
# php modules (gd needed for WordPress, zip needed for phplist plugins)
apt install php-pear php-gd php-mbstring php-zip
# modify mariaDB config file to allow remote bind
chmod 0664 /etc/mysql/mariadb.conf.d/50-server.cnf
# change the following line in the 50-server.cnf file
bind-address=0.0.0.0
# restart the mariadb mysql server
systemctl restart mysql.service
# follow the prompts to create a root password and remove anon access
mysql_secure_installation
# set MariaDB to autostart
systemctl enable mysql.service
# create a test php script
mkdir -p /www/html/example.com
echo "<?php phpinfo(); ?>" > /www/html/example.com/phpinfo.php

Next, log in locally to the MySQL server to create a remote access user.

# login locally to the mysql server
mysql -u root -p mysql
# enter the root password and the following commands to create a remote access user and password
CREATE USER 'remoteuser'@'localhost' IDENTIFIED BY 'remotepassword';
CREATE USER 'remoteuser'@'%' IDENTIFIED BY 'remotepassword';
GRANT ALL PRIVILEGES ON *.* to remoteuser@localhost IDENTIFIED BY 'remotepassword' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* to remoteuser@'%' IDENTIFIED BY 'remotepassword' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EXIT;
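To verify remote access works, try connecting from another machine (the hostname is a placeholder for your server’s public DNS; note that port 3306 must also be open in the EC2 security group):

mysql -h my.public.dns.amazonaws.com -u remoteuser -p -e "SELECT VERSION();"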

# Finally start the nginx server
systemctl start nginx.service

That’s All!

Just another WordPress site on EC2

The basic process to create a WordPress site on an Amazon Linux 2 instance. Before running the WordPress install script, I create a database in MariaDB and create a user, using Database Workbench.

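If you would rather do the database setup from the mysql command line instead of Database Workbench, a minimal sketch looks like this (the database name, user, and password are placeholders):

CREATE DATABASE wp;
CREATE USER 'wpuser'@'localhost' IDENTIFIED BY 'wppassword';
GRANT ALL PRIVILEGES ON wp.* TO 'wpuser'@'localhost';
FLUSH PRIVILEGES;

Then download and unpack WordPress: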
cd /www/nginx/example.com
wget https://wordpress.org/latest.zip
unzip latest.zip
mv /www/nginx/example.com/wordpress/* /www/nginx/example.com/
# allow owner upload for adding themes and plugins
sudo chown -R nginx:nginx /www/nginx

After running the install, you may need to manually create the wp-config.php file (from the configuration generated during the install process) and add the following line to the bottom of the /www/nginx/example.com/wp-config.php file:

define('FS_METHOD', 'direct');

That’s All!

BONUS: if you need to back up a MariaDB database directly on the Linux server without other tools, use mysqldump. You can then move the .sql file and restore it on another machine with these commands.

# dump the wp database to a file
mysqldump --user=root --password --lock-tables --databases wp > /www/wp.sql

# restore it on the target machine
mysql -p wp < wp.sql > output.txt

# or backup the files to a zip and MariaDB to an sql
zip -r www-20180411-1532.zip /www
mysqldump --user=root --password --databases databasename > /backup/wp-20180411-1544.sql

Yet another Linux EC2 server config

Here is another Amazon Linux 2 (Fedora-based) AWS EC2 config with multiple nginx virtual servers and SSL using Let’s Encrypt free certificates and Certbot. For this, I spin up an Amazon Linux 2 AMI and use a PuTTY terminal to SSH into the system as the ec2-user user with the private key.

sudo -i
yum update -y 
# install nginx from the amazon linux extras repo
amazon-linux-extras install nginx1.12
# auto start nginx
chkconfig nginx on
# add ec2-user to the root and nginx groups
usermod -a -G root ec2-user
usermod -a -G nginx ec2-user
# give root group write permissions to nginx conf file
chmod 664 /etc/nginx/nginx.conf
# create directory for nginx domain configs
mkdir -p /www/nginx-conf/domains
chmod -R 2775 /www

Modify the /etc/nginx/nginx.conf file to include the following line at the bottom of the http section. The default server block can also be removed.

# add to the bottom of the http section of the /etc/nginx/nginx.conf file
include /www/nginx-conf/domains/*.conf;

Now create an example_com.conf file in /www/nginx-conf/domains/ for each domain. (Replace example.com with your domain.)

# save as /www/nginx-conf/domains/example_com.conf file
server {
 listen 80;
 listen 443 ssl;
 server_name www.example.com;

 ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
 ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

 return 302 $scheme://example.com$request_uri;
}

server {
 listen 80;
 listen 443 ssl;
 server_name example.com;
 root /www/html/example.com;
 include /www/nginx-conf/global_restrictions.conf;

 ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
 ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

 location / {
  try_files $uri $uri/ /index.php?$args;
  index index.php index.html index.htm;
 }

 location ~ \.php$ {
  fastcgi_pass unix:/var/run/php-fpm/www.sock;
  fastcgi_index index.php;
  fastcgi_param SCRIPT_FILENAME /www/html/example.com$fastcgi_script_name;
  include fastcgi_params;
 }
}

INSTALL SSL/TLS

Create a CertBot conf file for each domain.

# save as /www/letsencrypt/example_com.conf file

# domains to retrieve certificate
domains = example.com,www.example.com

# increase key size
rsa-key-size = 4096

# the CA endpoint server
server = https://acme-v01.api.letsencrypt.org/directory

# the email to receive renewal reminders
email = letsencrypt@example.com

# turn off the ncurses UI, we want this to be run as a cronjob
text = True

Now install Certbot for Let’s Encrypt certificates from EPEL.

# download, install, and Enable EPEL
wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/
rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm
yum-config-manager --enable epel*
yum repolist all

# install certbot
yum install certbot

# run certbot for each domain
certbot --standalone --config /www/letsencrypt/example_com.conf certonly

# allow write to cron file
chmod 664 /etc/crontab

Add the following line to the /etc/crontab file. This will run the Certbot certificate renewal every day at 8 AM. By default, Let’s Encrypt certificates last 90 days and must be renewed.

0 8 * * * root certbot renew --no-self-upgrade

INSTALL PHP and MYSQL/MariaDB

Since this server is only serving nginx and Node, I will not install the Apache httpd server. These packages install from the Amazon Linux extras repo.

# install php and mysql/mariadb
amazon-linux-extras install lamp-mariadb10.2-php7.2
yum install -y php mariadb-server php-mysqlnd
# php modules (gd needed for WordPress)
sudo yum install php-pear php-gd php-mbstring
# modify mariaDB config file to allow remote bind
chmod 0664 /etc/my.cnf.d/mariadb-server.cnf
# uncomment the following line in the mariadb-server.cnf file
bind-address=0.0.0.0
# start the mariadb server
systemctl start mariadb
# follow the prompts to create a root password and remove anon access
mysql_secure_installation
# set MariaDB to autostart
systemctl enable mariadb
# create a test php script
mkdir -p /www/html/example.com
echo "<?php phpinfo(); ?>" > /www/html/example.com/phpinfo.php

# modify php config file
chmod 0664 /etc/php-fpm.d/www.conf

Modify the following lines within the /etc/php-fpm.d/www.conf file.

user = nginx
group = nginx
listen.owner = nginx
listen.group = nginx
listen.mode = 0664
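After changing the socket ownership, enable and restart PHP-FPM so nginx can reach the socket (on Amazon Linux 2 the unit name is php-fpm):

systemctl enable php-fpm
systemctl restart php-fpm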

Next, log in locally to the MySQL server to create a remote access user.

# login locally to the mysql server
mysql -u root -p mysql
# enter the root password and the following commands to create a remote access user and password
CREATE USER 'remoteuser'@'localhost' IDENTIFIED BY 'remotepassword';
CREATE USER 'remoteuser'@'%' IDENTIFIED BY 'remotepassword';
GRANT ALL PRIVILEGES ON *.* to remoteuser@localhost IDENTIFIED BY 'remotepassword' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* to remoteuser@'%' IDENTIFIED BY 'remotepassword' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EXIT;

# Finally start the nginx server
service nginx start

That’s All!

to Containerize

Moving from a monolithic application to container-based microservices is a large and wide topic. I like to focus on small topics within the greater goal when looking at cloud architecture. The first is the decision to containerize and how best to achieve a manageable cloud. Of course, the desired end services and application needs will drive the ultimate cloud architecture model. First, let’s look at the basic design possibilities.

The first option is a monolithic application. The problems are well known: the size and complexity of the application soon become overwhelming for the developer. We will take it for granted that this is not desirable.

Next is the possibility of creating a sort of hybrid cluster. This still leaves the developer with an application design that becomes too complex. I reviewed the design of a sort of “roll-your-own” cluster with MySQL and node.js in a prior post, “to Cluster or not to Cluster,” which described using the Cluster function within node.js. This is a quick and dirty solution to programmatically rolling your own cluster for node.js. There is also the PM2 solution to manage a cluster application on node.js. This maximizes resources by utilizing processing power on multiple threads but does not provide any of the solutions for a scalable containerized application.

Next, the type (or flavor) of scalable container service is open for debate. There is Docker Swarm, Amazon AWS Elastic Container Service, build-your-own Kubernetes, as well as many others. One promising managed service from AWS is EKS (Elastic Kubernetes Service). I won’t try to review these options here; in a future post I will simply provide a step-by-step basic process to look at cloud architecture for a homogeneous Kubernetes solution. This allows decisions based on cloud or service needs without fixating on AWS solutions. As with any development, the team of developers can drive the solution to some extent, as there are many flavors of best practices for cloud containerized services.

There are also many other cloud-based services, including GCP (Google Cloud Platform), IBM, etc., but Kubernetes on AWS has become a leader. In one recently published report, Kubernetes on AWS holds 63% market share among all container services used today.

My future Kubernetes posts will focus on container imaging from Docker Cloud, though Google Cloud’s container registry is also a good option. There are other runtimes like rkt, but of course Docker is the most widely used and supported.

AWS Architecture

A while ago I did some application design that discussed the specific need to utilize pooled connections and clusters when using node.js and MySQL. These ideas, along with many others like Kubernetes containers and continuous deployment, are small steps toward the long-term goal of a well-architected framework.

As a sort of exercise let’s define the terminology that is important in the planned architecture of a cloud-based system.

  • Responsiveness: reacting quickly and positively
  • Resiliency: the capacity to recover quickly from failures and spikes in demand
  • Elasticity: the ability to stretch or duplicate to adapt to high demand and return to normal when demand no longer exists.
  • Availability: the state of maintaining uptime

Some other pillars of a well-architected framework include:

  • Security: ability to protect information, systems, and assets while delivering business needs
  • Performance Efficiency: the ability to use computer resources efficiently to meet system requirements
  • Cost Optimization: the ability to avoid or eliminate unneeded cost

While these terms at times overlap and intersect to different degrees, for the purpose of creating a well-designed cloud infrastructure it is always important to consider these pillars in determining best practices. If your best practices include AWS, they probably include EC2, S3, and RDS. They must also account for budget, cost optimization, and return on investment (ROI); by utilizing tools to track costs, such as tagging and IAM controls, you can innovate without overspending.

Additionally, a good architecture will utilize performance tools, such as Amazon CloudWatch or third-party monitors, to track and maintain top efficiency within the cloud resources. Finally, audits and tools are also needed to maintain data integrity and security.

Regardless of the underlying technology, there are many best practices. Some might think they vary as much as individual opinions; everyone has one. What is important for an organization is to select best practices that work for the entire organization, from the users to the operators to the developers and designers. The success of any architecture depends on a cohesive operation and sound principles.

This AWS Architecture blog has some great insight into AWS design and architecture for achieving responsiveness, resiliency, and elasticity.

I particularly like the discussion of scaling your application one-step at a time.

SSL/TLS on Apache on AWS EC2 Linux 2

SSL is the historical term used to describe HTTPS (secure) websites, but what we are really enabling is TLS.

sudo yum install -y mod_ssl

Then restart Apache

sudo systemctl restart httpd

Then, on the EC2 system console, ensure that port 443 is open in the instance’s security group.
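If you prefer the AWS CLI over the console, a rule like this opens port 443 (the security group ID is a placeholder):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0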

Also modify the /etc/httpd/conf.d/ssl.conf file to match your virtual hosts, like this:

<VirtualHost *:443>
 ServerName mydomain.com:443
 ServerAlias www.mydomain.com
 DocumentRoot "/var/www/html/mydomain.com"
 SSLEngine on
 SSLCertificateFile /etc/pki/tls/certs/localhost.crt
 SSLCertificateKeyFile /etc/pki/tls/private/localhost.key
</VirtualHost>

Next, set up Certbot to use the free Let’s Encrypt certificate service.

sudo wget -r --no-parent -A 'epel-release-*.rpm' http://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/

sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm

sudo yum-config-manager --enable epel*

sudo yum install -y certbot python2-certbot-apache

Then run certbot and follow the prompts.
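With the Apache plugin installed above, the usual invocation is simply:

sudo certbot --apache

Certbot should detect the virtual hosts in ssl.conf and swap in the Let’s Encrypt certificate paths for you.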

Next, set up a renewal cron job.

Edit /etc/crontab with the following

39 1,13 * * * root certbot renew --no-self-upgrade

Then restart cron:

sudo systemctl restart crond

LAMP, Node, and Nginx on AWS AMI Linux 2

In the past I posted install directions and details for installing LAMP on AWS Linux, and another post on installing Node.js. Here is a quick update on the basics of installing LAMP and Node.js on the newest Amazon Linux 2 AMI.

The basic steps to set up your Linux instance on AWS are the same; just select the Linux 2 AMI on EC2. Be sure to generate your private key and use a tool like PuTTY to log in as the ec2-user user. Don’t forget you will need to convert your .pem private key to .ppk format using a tool like PuTTYgen before using it in PuTTY.

The LAMP install is described in more detail here:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-lamp-amazon-linux-2.html

Basically, do the following:

sudo yum update -y
sudo amazon-linux-extras install lamp-mariadb10.2-php7.2
sudo yum install -y httpd php mariadb-server php-mysqlnd
sudo systemctl enable httpd
sudo usermod -a -G apache ec2-user
sudo chown -R ec2-user:apache /var/www
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
find /var/www -type f -exec sudo chmod 0664 {} \;
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php

You should now be able to view the Apache default web page at your root and the PHP test page at http://my.public.dns.amazonaws.com/phpinfo.php.

You still need to do the database install.

For the Node.js install we use nvm, with the following:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 9.3.0

I choose to create a node directory for programming in the root and then apply the same or similar group security as above for the Apache /var/www/html directory; a sketch follows.
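A minimal sketch of that setup, assuming /node as the programming directory:

sudo mkdir /node
sudo chown -R ec2-user:apache /node
sudo chmod 2775 /node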

Finally, I will set up the Nginx reverse proxy as described in a prior post; with Linux 2, Nginx is now available as an AWS “extra,” so use the following command.

sudo amazon-linux-extras install nginx1.12

Follow my prior post to configure Nginx.

Recruiters, please note…

Hi,

I am available for phone interviews for Boston based positions. I split my time between Boston and North Carolina. If I am not in Boston, I am available to return on request for onsite interviews with actual direct hiring managers. I am available for immediate hire.

{UPDATE 3/5/2018 I am in the Boston area and available for onsite interview}

Todd

my Resume:
https://www.dropbox.com/s/teuwj86cdn7d7ap/Todd%20Rodzen%20-%20resume%20-%203-5-2018.docx?dl=0

my Coding and AWS systems design blog:
https://trodzen.wordpress.com

my Personal Portfolio:
http://todd.rodzen.com

moving jsfiddle Vue.js code to a single html page

If you are new to JS, simple test code is oftentimes shown in an online code editor like jsfiddle.net. It’s a great way to quickly get up and running on a desired task, like the classic Hello World example. But then how do you move that code to a served web page?

Of course there are many ways. You could use Node and npm; there are better methods for the long term if you are building, say, a full app. But if you want to drop your test code on your own server because you need access to other local resources, like your images directory, then this is one method to copy the Vue.js code.

When using simple Vue.js code with a CDN to load Vue, you must do a couple of things in a single HTML page:

  1. load the Vue.js CDN
  2. load the new Vue function when the page loads (window.onload is one method)
  3. It’s good practice to also hide all the elements while loading, with the v-cloak directive. This way your raw template doesn’t splash on the browser while Vue is doing its work.

Here’s that great fiddle.

jsfiddle

https://jsfiddle.net/trodzen/pz3gbmjm/

Here’s the code when moved to a single HTML page run on an AWS Linux Apache server.

(The full single-page HTML is in the Gist on GitHub.)
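If the Gist is unavailable, here is a minimal sketch of such a page, written straight to the Apache web root from the shell (the CDN version, file path, and message text are assumptions):

cat > /var/www/html/vue-hello.html <<'EOF'
<!DOCTYPE html>
<html>
<head>
  <!-- 1. load the Vue.js CDN -->
  <script src="https://cdn.jsdelivr.net/npm/vue@2.5.16/dist/vue.js"></script>
  <!-- 3. hide uncompiled templates until Vue is ready -->
  <style>[v-cloak] { display: none; }</style>
</head>
<body>
  <div id="app" v-cloak>{{ message }}</div>
  <script>
    // 2. start Vue once the page has loaded
    window.onload = function () {
      new Vue({ el: '#app', data: { message: 'Hello World' } });
    };
  </script>
</body>
</html>
EOF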

Obviously, this is a very simple task. It shows a quick way to move a section of code to your own server for further development. The window.onload handler runs your Vue code, and the v-cloak directive hides your markup until Vue is done loading.

The original purpose of the Hello World jsfiddle was to load “Hello World” into the message variable in data. Vue then replaces code in the DOM inside {{ }} (“mustache”) variables with the data. More importantly, this creates a reactive link to that Vue message variable, so the data can be changed.

AWS create new default su

The steps to create a user with the same rights as ec2-user are:

create the account

useradd NEWACCOUNT

set a password for the account

passwd NEWACCOUNT

add the account to the sudo group

usermod -aG sudo NEWACCOUNT

log in with the account

su - NEWACCOUNT

create a .ssh directory

mkdir .ssh

log out of NEWACCOUNT

exit

now you are back in root; copy the authorized_keys file, set ownership, and set security on .ssh

cp /home/ec2-user/.ssh/authorized_keys /home/NEWACCOUNT/.ssh/authorized_keys

chown -R NEWACCOUNT:NEWACCOUNT /home/NEWACCOUNT/.ssh

chmod 700 /home/NEWACCOUNT/.ssh
chmod 600 /home/NEWACCOUNT/.ssh/authorized_keys

Log all the way out of the system, and try to log in with the NEWACCOUNT.

Once logged in, invoke sudo su to check its rights. At this point you should still get an error message; the next step fixes that.

The last step is to replace ec2-user with NEWACCOUNT in the file:

/etc/sudoers.d/cloud-init

The easiest way is with the nano editor, but there are many other editors in Linux.
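If you prefer a one-liner to an editor, a sed replacement works too; back up the file first and validate the result with visudo:

cp /etc/sudoers.d/cloud-init /root/cloud-init.bak
sed -i 's/ec2-user/NEWACCOUNT/g' /etc/sudoers.d/cloud-init
visudo -c -f /etc/sudoers.d/cloud-init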

Redis on Production

Let’s do some fine-tuning for Redis in production.

  1. Turn on vm overcommit memory. Edit the /etc/sysctl.conf file:

sudo chmod 664 /etc/sysctl.conf

Then use the editor to add this to the bottom of the file.

vm.overcommit_memory = 1

Without getting into the pros and cons, vm overcommit is fully explained here: https://redis.io/topics/faq
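To apply the setting immediately without a reboot, sysctl can also set it at runtime:

sudo sysctl vm.overcommit_memory=1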

Next, we setup the system services to start the Redis node(s). It’s common to have at least a master and slave on one server so let’s allow multiple services to run on the same server and have them auto-start. And while we’re at it, let’s make the service start/stop commands work for Redis nodes with AUTH passwords. Oh, and let’s fix the transparent_hugepage default (set it to never, as recommended by Redis.) This is all explained in my last post Redis Linux Service

Also don’t forget to turn off the debug logging used for development.

UUID vs Auto Increment

What is the best method to create a key in today’s advanced JavaScript node.js-style applications? Do you rely on the old tried-and-true method of auto-increment on the database primary key, or is a UUID better? One thought to help answer that: is a sequential key even useful? Especially where the unique key may start out only in the application, or only in the client session store (i.e., a Redis key memory store), a sequential key is not useful, and creating the auto-increment key takes an additional step, using INCR on Redis or INSERT on MySQL, which can also add an unnecessary round trip to your database.

On the other hand, the UUID v4 implementation, which creates a unique randomized UUID, may appear to be a CPU-time-consuming operation, but one Stack Overflow user did some testing, as posted here:

(Chart: key-generation latency as connections increase, auto-increment vs. UUID.)

You can see from the green and yellow lines that as connections increase, the AUTO INCR method shows increasing latency, while UUID holds a steady, same-or-lower processing time.

I didn’t come up with this one, but I found what may be the smallest UUID v4 generator code:

exports.uuid = function b(a){return a?(a^Math.random()*16>>a/4).
   toString(16):([1e7]+-1e3+-4e3+-8e3+-1e11).replace(/[018]/g,b)}
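A quick way to try it from the shell, assuming node is on the path (the function is pasted inline instead of exported):

node -e 'function b(a){return a?(a^Math.random()*16>>a/4).toString(16):([1e7]+-1e3+-4e3+-8e3+-1e11).replace(/[018]/g,b)};console.log(b())'

Each run prints a fresh v4-style UUID, such as 416ac246-e7ac-49ff-93b4-f7e94d997e6b.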

There is no one answer. It’s always good to have multiple available methods but be sure to consider the uses and weigh the options.

Redis Linux Service

Redis ships with a service install script, install_server.sh. It takes a few prompts and adds a new Redis service on Linux. One issue I found when working with Redis passwords: the service script doesn’t handle start/stop correctly. In a prior post, I detail the installation process for Redis on Amazon Linux. Here’s a fix to the service script. The issue is with the stop: no password is passed to the service script on shutdown. With passwords and no modification, you are limited to doing the following:

service redis_6379 start
redis-cli -p 6379 -a YourPassword shutdown

The standard “service stop” command doesn’t work, but here’s an update to the service script that could be implemented in the install_server.sh script by the Redis team. My edits are tagged with #tlr and should be made in the redis_6379 file in the /etc/init.d directory (or in the original install_server.sh script, if you want to get even fancier; install_server.sh is what generates the redis_6379 start/stop script).

#!/bin/sh
#Configurations injected by install_server below....

NAME=`basename ${0}` #tlr

EXEC=/usr/local/bin/redis-server
CLIEXEC=/usr/local/bin/redis-cli

#PIDFILE=/var/run/redis_6379.pid #tlr
#CONF="/etc/redis/6379.conf" #tlr
#REDISPORT="6379" #tlr

PIDFILE=/var/run/${NAME}.pid #tlr
CONF="/etc/redis/${NAME#*_}.conf" #tlr
REDISPORT="${NAME#*_}" #tlr

PassVar=$(grep "requirepass " $CONF | cut -d' ' -f1 | tr -d '\012\015') #tlr
# PassVar is the requirepass variable name (with or without the # comment) #tlr
if [ $PassVar = "requirepass" ] #tlr
then #tlr
 requirepass=$(grep "requirepass " $CONF | cut -d' ' -f2 | tr -d '\012\015') #tlr
else #tlr
 # password commented out #tlr
 requirepass="" #tlr
fi #tlr

###############
# SysV Init Information
# chkconfig: - 58 74
# description: redis_6379 is the redis daemon.
### BEGIN INIT INFO
# Provides: redis_6379
# Required-Start: $network $local_fs $remote_fs
# Required-Stop: $network $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Should-Start: $syslog $named
# Should-Stop: $syslog $named
# Short-Description: start and stop redis_6379
# Description: Redis daemon
### END INIT INFO

case "$1" in
 start)
 if [ -f $PIDFILE ]
 then
 echo "$PIDFILE exists, process is already running or crashed"
 else

echo "never" > /sys/kernel/mm/transparent_hugepage/enabled #tlr
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag #tlr
# adding hugepage here overrides other places set on reboot (i.e. on AWS)

echo "Starting Redis server..."
 $EXEC $CONF
 fi
 ;;
 stop)
 if [ ! -f $PIDFILE ]
 then
 echo "$PIDFILE does not exist, process is not running"
 else
 PID=$(cat $PIDFILE)
 echo "Stopping ..."

# $CLIEXEC -p $REDISPORT shutdown #tlr

if [ -z $requirepass ] #tlr
then #tlr
 $CLIEXEC -p $REDISPORT shutdown #tlr
else #tlr
 echo "Using .conf File Password AUTH" #tlr
 $CLIEXEC -p $REDISPORT -a $requirepass shutdown #tlr
fi #tlr

while [ -x /proc/${PID} ]
 do
 echo "Waiting for Redis to shutdown ..."
 sleep 1
 done
 echo "Redis stopped"
 fi
 ;;
 status)
 PID=$(cat $PIDFILE)
 if [ ! -x /proc/${PID} ]
 then
 echo 'Redis is not running'
 else
 echo "Redis is running ($PID)"
 fi
 ;;
 restart)
 $0 stop
 $0 start
 ;;
 *)
 echo "Please use start, stop, restart or status as first argument"
 ;;
esac

These changes allow for multiple services by creating symbolic links for additional nodes to the original redis_6379 service start/stop script. Additional Redis node services can be added by doing the following:

ln -s /etc/init.d/redis_6379 /etc/init.d/redis_6101

This creates a start / stop script for the additional node by linking to the original 6379 service start/stop script. You will need an additional service for each node (master or slave) that you are running on the instance. Also, duplicate and modify your /etc/redis/xxxx.conf file as needed. In a prior post, I detail the configuration for a Redis Cluster with Passwords. Finally, issue a command to add the service:

chkconfig --add redis_6101

You can now reboot the Linux instance and your additional services will still be running. You can also use the command $ service redis_6101 restart without an error due to passwords.

Redis Cluster with Passwords

Do a little work with Redis Clusters and you will see, in multiple places, developers trying to get Redis node instances with passwords to work in a Redis cluster environment. The fact is, it’s not supported, and it’s not an option for good reason. There is a second back-channel data path that essentially makes the password AUTH meaningless. On top of that, passwords on a memory key store are, well, meaningless to a good hacker: if you can throw thousands of passwords at the instance in ONE SECOND, the brute-force hack is pretty easy. Maybe future versions of Redis will take password retries into consideration.

On the other hand, there are good reasons for a password on any service. A few come to mind: 1. You simply want to stop inadvertent prying eyes, such as an employee within the company who has access to the machine and the redis-cli command tool. 2. Maybe you post your passwords on a sticky note next to the computer room monitor, so the password itself is not a concern, but the person who has access to the machine (and not the computer room) should stay out of the data. 3. You have multiple people working on the machine and you want to protect your instance so a co-developer doesn’t accidentally access and delete your Redis node. The list could go on much longer. One thing is for sure: even if you assign a password to your Redis instance, if you open the port up to the public you are opening yourself up to a hack. On the other hand, if it has a password and you are only using it for testing and development, maybe it’s not a big deal. The better option is to use SSH to tunnel to your Redis server over the internet, though that has its own issues.

One reason I chose to set up a cluster with AUTH passwords is that I wanted to build apps running locally on my laptop’s node.js server. I want the app to connect to my remote MySQL development/production database, and the same goes for the memory key store. That way, in theory, you can develop and test a version on the laptop, then push it to the development EC2 server without any code changes, and it should still work because it uses the same MySQL database (connecting to hostname mysql.mydomain.com) and the same Redis Cluster connection to hostname redis.mydomain.com. (It won’t be using a local cluster or node on my laptop during development.)

With a Redis cluster environment, there is a back-channel communications port for the cluster for each Redis node instance. The communications port is the node’s port plus 10000. So if you have a node sitting on port 6101, there is also a back-channel cluster communications port of 16101. We don’t use it; it’s only used by the Redis server. So in my situation above, I will not open the communications port to the public.

(Diagram: a six-node Redis cluster with three masters and three slaves.)

Furthermore, why go to all this trouble if you are just working on a development application? Well, in theory, your development application will soon be a minimum viable product (MVP), and that won’t happen (or will be much more difficult) if you develop against a single memory-store environment and then have to transition to a whole new API client for production. It’s better to develop an application once, the right way. If you are developing an application that will have widespread use, you know the cluster environment is needed. It may be a question of your development process, and some won’t want to take this approach, but if you develop with a single node you should expect a multi-stage redevelopment as clusters are needed down the road, and that adds a few steps.

In my development scenario, I am using the same server for all six nodes, with 3 masters and 3 slaves. Again, that’s more than needed until you start moving these nodes to additional EC2 instances or your application usage grows to handle larger demand. With this design you can always add nodes later without application coding changes.

So here are the steps to create the cluster:

  1. create a minimum of 6 Redis node instances on different hosts or ports using the following changes to the Redis conf file. To do this I created a /redis/data directory and copied the initial install 6379.conf file to the new port name in the /etc/redis directory. Then change each copy as follows:
    port 6101
    
    pidfile /redis/data/redis_pid_6101.pid
    logfile /redis/data/redis_log_6101.log
    
    dbfilename dump_6101.rdb
    appendfilename "appendonly_6101.aof"
    cluster-config-file nodes_6101.conf
    
    requirepass myWickedLong256CharacterHashPassword
    
    dir /redis/data
    
    protected-mode no
    appendonly yes
    cluster-enabled yes
    
    # USE CLUSTER SYS INSTALL DEFAULTS BELOW
    cluster-node-timeout 15000
    cluster-slave-validity-factor 10
    cluster-migration-barrier 1
    cluster-require-full-coverage yes
    
    # USE OTHER SYS INSTALL DEFAULTS

    * create a .conf file for each port 6101 – 6106

  2. start each node with the redis-server command
    /usr/local/bin/redis-server /etc/redis/6101.conf

    * start each port 6101 – 6106

  3. Now we need to hack the redis-trib.rb program with the following changes.
    This code change starts around line 57 and goes to line 125. You can cut, copy, and paste as long as you get the exact same section of code (using Redis version 3.2.6), or simply scan through my code for the added and changed lines tagged with # tlr start/end comments.

    class ClusterNode
     def initialize(addr)
      s = addr.split(":")
      if s.length < 2
       puts "Invalid IP or Port (given as #{addr}) - use IP:Port format"
       exit 1
      end
      # tlr start
      pwd = nil
      if s.length == 3
       pwd = s.pop
      end
      # tlr end
      port = s.pop # removes port from split array
      ip = s.join(":") # if s.length > 1 here, it's IPv6, so restore address
     @r = nil
     @info = {}
     @info[:host] = ip
     @info[:port] = port
     @info[:slots] = {}
     @info[:migrating] = {}
     @info[:importing] = {}
     @info[:replicate] = false
    
    # tlr start
    @info[:password] = pwd 
    # tlr end
    
    @dirty = false # True if we need to flush slots info into node.
     @friends = []
     end
    
    def friends
     @friends
     end
    
    def slots
     @info[:slots]
     end
    
    def has_flag?(flag)
     @info[:flags].index(flag)
     end
    
    def to_s
     "#{@info[:host]}:#{@info[:port]}"
     end
    
    def connect(o={})
     return if @r
     print "Connecting to node #{self}: " if $verbose
     STDOUT.flush
     begin
    
    # tlr start
     if @info[:password] != nil
     @r = Redis.new(:host => @info[:host], :port => @info[:port], :timeout => 60, :password=>@info[:password])
     @r.ping
     else
     @r = Redis.new(:host => @info[:host], :port => @info[:port], :timeout => 60)
     @r.ping
     end
    # tlr end (the 2 lines in the else section are not changed from original)
    
    rescue
     xputs "[ERR] Sorry, can't connect to node #{self}"
     exit 1 if o[:abort]
     @r = nil
     end
     xputs "OK" if $verbose
     end
  4. next run the redis-trib.rb program to combine your nodes into one cluster. This may be a super long command, especially with 256-character passwords, but it works. (Do it all on one line.)
    /redis/redis-3.2.6/src/redis-trib.rb create --replicas 1 
    127.0.0.1:6101:my256charPassword 127.0.0.1:6102:my256charPassword 
    127.0.0.1:6103:my256charPassword 127.0.0.1:6104:my256charPassword 
    127.0.0.1:6105:my256charPassword 127.0.0.1:6106:my256charPassword

    * I did notice this produced a few errors, as shown below, but they are simply process verification errors and the nodes are working fine.

    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    127.0.0.1:6101
    127.0.0.1:6102
    127.0.0.1:6103
    Adding replica 127.0.0.1:6104 to 127.0.0.1:6101
    Adding replica 127.0.0.1:6105 to 127.0.0.1:6102
    Adding replica 127.0.0.1:6106 to 127.0.0.1:6103
    
    (slot master/slave identifiers)
    
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join.....
    [ERR] Sorry, can't connect to node 127.0.0.1:6105
    [ERR] Sorry, can't connect to node 127.0.0.1:6106
    [ERR] Sorry, can't connect to node 127.0.0.1:6103
    [ERR] Sorry, can't connect to node 127.0.0.1:6102
    [ERR] Sorry, can't connect to node 127.0.0.1:6104
    >>> Performing Cluster Check (using node 127.0.0.1:6101)
    M: 4f531ed4bcfd058b688a8692138fbdcc01a9dc7e 127.0.0.1:6101
     slots:0-5460 (5461 slots) master
     0 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [ERR] Not all 16384 slots are covered by nodes.

     A few more edits would fix the warning errors. 🙂 Since this is a one-time command to initially set up your cluster, it’s not an issue. To add nodes to your existing cluster in the future, you will use the redis-cli command line tool with the CLUSTER MEET command.

  5. Confirm the cluster is working with the redis-cli command by setting a value.
    /usr/local/bin/redis-cli -c -p 6101 -a my256CharPassword
    SET foo bar
    GET foo
    CLUSTER SLOTS
    CLUSTER INFO

     You might notice foo gets pushed to a slot on the 2nd master. Try SET a a and then SET z z. You can also connect to any of the six nodes (6101 – 6106) to verify the sets with a GET command (GET foo).

That’s all there is to it. You can open ports 6101 – 6106 to your local laptop and start developing on your local machine using the node.js ioredis client package on NPM at https://www.npmjs.com/package/ioredis
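As a minimal sketch of that local development setup (the hostname and password are placeholders; new Redis.Cluster with redisOptions is the documented ioredis cluster interface):

cat > cluster-test.js <<'EOF'
// connect to one seed node; ioredis discovers the rest of the cluster
const Redis = require('ioredis');
const cluster = new Redis.Cluster(
  [{ host: 'redis.mydomain.com', port: 6101 }],
  { redisOptions: { password: 'my256charPassword' } }
);
cluster.set('foo', 'bar')
  .then(() => cluster.get('foo'))
  .then((val) => { console.log(val); cluster.quit(); });
EOF
npm install ioredis
node cluster-test.js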

P.S. Of course that’s not all! 🙂 Additional code changes would be needed, for example the slave-to-master login with AUTH.

Redis Session and MySQL Login

The following implements a Redis session store and MySQL user registration and login, as well as a simple message post. It runs on a node.js server.

package.json

{
 "name": "users",
 "version": "1.0.0",
 "description": "Register User",
 "main": "app.js",
 "script": "./app.js",
 "watch": true,
 "ignore_watch": ["node_modules"],
 "keywords": [
"login"
 ],
 "author": "Todd Rodzen",
 "license": "MIT",
 "dependencies": {
 "async": "^1.2.1",
 "body-parser": "^1.13.0",
 "connect-redis": "^2.3.0",
 "cookie-parser": "^1.3.5",
 "ejs": "^2.3.1",
 "express": "^4.14.0",
 "express-session": "^1.11.3",
 "mysql": "^2.7.0",
 "redis": "^0.12.1"
 }
}

app.js

/**
 Loading all dependencies.
**/
var express = require("express");
var redis = require("redis");
var mysql = require("mysql");
var session = require('express-session');
var redisStore = require('connect-redis')(session);
var bodyParser = require('body-parser');
var cookieParser = require('cookie-parser');
var path = require("path");
var async = require("async");
var client = redis.createClient();
var app = express();
var router = express.Router();

// Always use MySQL pooling.
// Helpful for multiple connections.

var pool = mysql.createPool({
 connectionLimit : 100,
 host : 'hmmmmm',
 user : 'you',
 password : 'ssshhhhh',
 database : 'hmmmm',
 debug : false
});

app.set('views', 'view');
app.engine('html', require('ejs').renderFile);

// IMPORTANT
// Here we tell Express to use Redis as session store.
// We pass Redis credentials and port information.
// And express does the rest ! 

app.use(session({
 secret: 'topics-session',
 store: new redisStore({ host: 'localhost', port: 6379, client: client,ttl : 260}),
 saveUninitialized: false,
 resave: false
}));
app.use(cookieParser("secretSign#143_!223"));
app.use(bodyParser.urlencoded({extended: false}));
app.use(bodyParser.json());

// This is an important function.
// This function does the database handling task.
// We also use async here for control flow.

function handle_database(req,type,callback) {
 async.waterfall([
 function(callback) {
 pool.getConnection(function(err,connection){
 if(err) {
 // if there is error, stop right away.
 // This will stop the async code execution and goes to last function.
 callback(true);
 } else {
 callback(null,connection);
 }
 });
 },
 function(connection,callback) {
 var SQLquery;
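 // NOTE: these queries concatenate user input straight into SQL, which is
 // vulnerable to SQL injection; in real code use placeholders instead, e.g.
 // connection.query("SELECT * FROM user_login WHERE user_email = ?", [req.body.user_email])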
 switch(type) {
 case "login" :
 SQLquery = "SELECT * from user_login WHERE user_email='"+req.body.user_email+"' AND `user_password`='"+req.body.user_password+"'";
 break;
 case "checkEmail" :
 SQLquery = "SELECT * from user_login WHERE user_email='"+req.body.user_email+"'";
 break;
 case "register" :
 SQLquery = "INSERT into user_login(user_email,user_password,user_name) VALUES ('"+req.body.user_email+"','"+req.body.user_password+"','"+req.body.user_name+"')";
 break;
 case "addStatus" :
 SQLquery = "INSERT into msg_text(user_id,msg_text) VALUES ("+req.session.key["user_id"]+",'"+req.body.status+"')";
 break;
 case "getStatus" :
 SQLquery = "SELECT * FROM msg_text WHERE user_id="+req.session.key["user_id"];
 break;
 default :
 break;
 }
 callback(null,connection,SQLquery);
 },
 function(connection,SQLquery,callback) {
 connection.query(SQLquery,function(err,rows){
 connection.release();
 if(!err) {
 if(type === "login") {
 callback(rows.length === 0 ? false : rows[0]);
 } else if(type === "getStatus") {
 callback(rows.length === 0 ? false : rows);
 } else if(type === "checkEmail") {
 callback(rows.length === 0 ? false : true);
 } else {
 callback(false);
 }
 } else {
 // if there is error, stop right away.
 // This will stop the async code execution and goes to last function.
 callback(true);
 }
 });
 }],
 function(result){
 // This function gets call after every async task finished.
 if(typeof(result) === "boolean" && result === true) {
 callback(null);
 } else {
 callback(result);
 }
 });
}

/**
 --- Router Code begins here.
**/

router.get('/',function(req,res){
 res.render('index.html');
});

router.post('/login',function(req,res){
 handle_database(req,"login",function(response){
 if(response === null) {
 res.json({"error" : "true","message" : "Database error occured"});
 } else {
 if(!response) {
 res.json({
 "error" : "true",
 "message" : "Login failed ! Please register"
 });
 } else {
 req.session.key = response;
 res.json({"error" : false,"message" : "Login success."});
 }
 }
 });
});

router.get('/home',function(req,res){
 if(req.session.key) {
 res.render("home.html",{ email : req.session.key["user_name"]});
 } else {
 res.redirect("/");
 }
});

router.get("/fetchStatus",function(req,res){
 if(req.session.key) {
 handle_database(req,"getStatus",function(response){
 if(!response) {
 res.json({"error" : false, "message" : "There is no status to show."});
 } else {
 res.json({"error" : false, "message" : response});
 }
 });
 } else {
 res.json({"error" : true, "message" : "Please login first."});
 }
});

router.post("/addStatus",function(req,res){
 if(req.session.key) {
 handle_database(req,"addStatus",function(response){
 if(!response) {
 res.json({"error" : false, "message" : "Status is added."});
 } else {
 res.json({"error" : false, "message" : "Error while adding Status"});
 }
 });
 } else {
 res.json({"error" : true, "message" : "Please login first."});
 }
});

router.post("/register",function(req,res){
 handle_database(req,"checkEmail",function(response){
 if(response === null) {
 res.json({"error" : true, "message" : "This email is already present"});
 } else {
 handle_database(req,"register",function(response){
 if(response === null) {
 res.json({"error" : true , "message" : "Error while adding user."});
 } else {
 req.session.key = response;
 res.json({"error" : false, "message" : "Registered successfully."});
 }
 });
 }
 });
});

router.get('/logout',function(req,res){
 if(req.session.key) {
 req.session.destroy(function(){
 res.redirect('/');
 });
 } else {
 res.redirect('/');
 }
});

app.use('/',router);

app.listen(4201,function(){
 console.log("I am running at 4201");
});

view/index.html (code)
https://github.com/trodzen/MySQL-Redis-Session-Register/blob/master/view/index.html

view/home.html (code)
https://github.com/trodzen/MySQL-Redis-Session-Register/blob/master/view/home.html

You will need a working Redis instance and a MySQL database with the two tables used in the SQL statements above (user_login and msg_text).
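The table layout isn’t shown here, but a minimal sketch inferred from the queries above would look like this (column types and sizes are assumptions):

CREATE TABLE user_login (
  user_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_email VARCHAR(255) NOT NULL,
  user_password VARCHAR(255) NOT NULL,
  user_name VARCHAR(255) NOT NULL
);

CREATE TABLE msg_text (
  msg_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  user_id INT NOT NULL,
  msg_text VARCHAR(255) NOT NULL
);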

Try it, it’s easy.
That’s All Folks!
