
Todd Rodzen

Agile Application Development, DevOps, Cloud Architecture Engineering

to Containerize

Containerizing an application, that is, moving from a monolith to container-based microservices, is a large and wide topic. I like to focus on small topics within the greater goal when looking at cloud architecture. The first is the decision to containerize and how best to achieve a manageable cloud. Of course, the desired end services and application needs will drive the ultimate cloud architecture model. First, let’s look at the basic design possibilities.

The first option is a monolithic application. Its problems are well known: the size and complexity of the application soon become overwhelming for the developer. We will take it for granted that this is not desirable.

Next is the possibility of creating a sort of hybrid cluster. This still has the problem that the application design becomes too complex for the developer. I reviewed the design of a sort of “roll-your-own” cluster with MySQL and node.js in a prior post, “to Cluster or not to Cluster”. That post described using the Cluster function within node.js, a quick-and-dirty way to programmatically roll your own node.js cluster. There is also the PM2 solution to manage a clustered application on node.js. This maximizes resources by utilizing processing power on multiple threads, but it does not provide any of the solutions for a scalable containerized application.

Next, the type (or flavor) of scalable container service is open for debate. There is Docker Swarm, Amazon AWS Elastic Container Service, build-your-own Kubernetes, as well as many others. One promising managed service from AWS is EKS (Elastic Kubernetes Service). I won’t try to review these options here; in a future post I will simply provide a step-by-step basic process to look at cloud architecture for a homogeneous Kubernetes solution. This allows decisions based on cloud or service needs without fixating on AWS solutions. As with any development, the team of developers can drive the solution to some extent, as there are many flavors of best practices for cloud containerized services.

There are also many other cloud-based services, including GCS (Google Container Service), IBM, etc., but Kubernetes on AWS has become a leader. In one recently published report, Kubernetes on AWS holds 63% market share of all container services used today.

My future posts focused on Kubernetes will utilize container operations from Docker Cloud. There are other options like rkt, but of course, Docker is the most widely supported.


AWS Architecture

This AWS Architecture blog has some great insight into AWS design and architecture to achieve responsiveness, resiliency, and elasticity.

I particularly like the discussion of scaling your application one step at a time. A while ago I did some application design that discussed the specific need of utilizing pooled connections and clusters when using node.js and MySQL. These were small steps toward the long-term solution of a well-architected framework.

As a sort of exercise let’s define the terminology that is important in the planned architecture of a cloud-based system.

  • Responsiveness: reacting quickly and positively
  • Resiliency: capacity to recover quickly from failures or excess demand
  • Elasticity: the ability to stretch or duplicate to adapt to high demand and return to normal when demand no longer exists.
  • Availability: the state of maintaining uptime

Some other pillars of a well-architected framework include:

  • Security: ability to protect information, systems, and assets while delivering business needs
  • Performance Efficiency: the ability to use computer resources efficiently to meet system requirements
  • Cost Optimization: the ability to avoid or eliminate unneeded cost

While these terms at times overlap and intersect to different degrees, for the purpose of creating a well-designed cloud infrastructure it is always important to consider these pillars when determining best practices. If your best practices include AWS, they probably include EC2, S3, and RDS. They must also include budget, cost optimization, and return on investment (ROI): by utilizing tools to track costs, such as tagging and IAM controls, you can innovate without overspending. Additionally, a good architecture will utilize performance tools, such as Amazon CloudWatch or 3rd-party monitors, to track and maintain top efficiency within the cloud resources. Finally, audits and tools are also needed to maintain data integrity and security.

Regardless of the underlying technology, there are many best practices. Some might think they vary as much as individual opinions; everyone has an opinion. What is important for an organization is to select best practices that work for the entire organization, from the users to the operators to the developers and designers. The success of any architecture depends on cohesive operation and sound principles.

LAMP, Node, and Nginx on AWS AMI Linux 2

I have in the past done install directions and details of installing LAMP on AWS Linux, and another post on installing Node.js. Here is a quick update on the basics of installing LAMP and Node.js on the newest Amazon Linux 2 AMI.

The basic steps to set up your Linux instance on AWS are the same; just select the Linux 2 AMI on EC2. Be sure to generate your private key and use a tool like PuTTY to log in as the user ec2-user. Don’t forget you will need to convert your .pem private key to .ppk format using a tool like PuTTYgen before using it in PuTTY.

The LAMP install is described in more detail here:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-lamp-amazon-linux-2.html

Basically, do the following:

sudo yum update -y
sudo amazon-linux-extras install lamp-mariadb10.2-php7.2
sudo yum install -y httpd php mariadb-server php-mysqlnd
sudo systemctl enable httpd
sudo usermod -a -G apache ec2-user
sudo chown -R ec2-user:apache /var/www
sudo chmod 2775 /var/www && find /var/www -type d -exec sudo chmod 2775 {} \;
find /var/www -type f -exec sudo chmod 0664 {} \;
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php

You should now be able to view your Apache default web root and the PHP info page at http://my.public.dns.amazonaws.com/phpinfo.php

You still need to do the database install: start MariaDB with sudo systemctl start mariadb (and sudo systemctl enable mariadb for auto-start), then run sudo mysql_secure_installation to set the root password.

For the Node.js install we use nvm, with the following:

curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.8/install.sh | bash
. ~/.nvm/nvm.sh
nvm install 9.3.0

I chose to create a node directory for programming in the root and then apply the same or similar group permissions as above for the Apache /var/www/html directory.

Finally, I will set up an Nginx reverse proxy as described in a prior post. With Linux 2, Nginx is now available as an AWS “extra”; use the following command.

sudo amazon-linux-extras install nginx1.12

Follow my prior post to configure Nginx.

Recruiters please note..

Hi,

I am available for phone interviews for Boston based positions. I split my time between Boston and North Carolina. If I am not in Boston, I am available to return on request for onsite interviews with actual direct hiring managers. I am available for immediate hire.

{UPDATE 2/15/2018 I am in the Boston area and available for onsite interview}

Todd

my Resume:
https://www.dropbox.com/s/5sl1r565v5yg4v9/Todd%20Rodzen%20-%20resume%20-%202-15-2018.docx?dl=0

my Coding and AWS systems design blog:
https://trodzen.wordpress.com

my Personal Portfolio:
http://todd.rodzen.com

Redis on Production

Let’s do some fine-tuning for Redis in production.

  1. Turn on vm overcommit memory. Edit the /etc/sysctl.conf file:

sudo chmod 664 /etc/sysctl.conf

Then use the editor to add this to the bottom of the file.

vm.overcommit_memory = 1

Without getting into the pros and cons, vm overcommit is fully explained here: https://redis.io/topics/faq. To apply the change without a reboot, run sudo sysctl -w vm.overcommit_memory=1.

Next, we set up the system services to start the Redis node(s). It’s common to have at least a master and a slave on one server, so let’s allow multiple services to run on the same server and have them auto-start. And while we’re at it, let’s make the service start/stop commands work for Redis nodes with AUTH passwords. Oh, and let’s fix the transparent_hugepage default (set it to never, as recommended by Redis). This is all explained in my last post, Redis Linux Service.

Also don’t forget to turn off the debug logging used for development.

UUID vs Auto Increment

What is the best method to create a key in today’s advanced JavaScript node.js-style applications? Do you rely on the old tried-and-true method of auto-increment on the database primary key, or is a UUID better? One thought to help answer that: is a sequential key useful? Especially in the situation where the unique key may start out only in the application or only in the client session store (i.e. a Redis key memory store). In this situation, a sequential key is not useful, and creating the auto-increment key takes an additional step using INCR on Redis or INSERT on MySQL, which could also create an unnecessary round trip to your database.

On the other hand, the UUID v4 implementation, which creates a unique randomized UUID, may appear to be a CPU-time-consuming operation, but one Stack Overflow user did some testing, as posted here:

[Figure: benchmark chart of UUID v4 generation versus auto-increment as connection count grows]

You can see the green and yellow lines as connections increase. As connections increase, the AUTO INCR method shows increasing latency, while the UUID holds a steady, same-or-lower processing time.

I didn’t come up with this one, but it may well be the smallest UUID v4 generator code.

exports.uuid = function b(a){return a?(a^Math.random()*16>>a/4).toString(16):([1e7]+-1e3+-4e3+-8e3+-1e11).replace(/[018]/g,b)}
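A usage sketch (same algorithm, inlined as a standalone function so it can run by itself): each call returns a 36-character id in 8-4-4-4-12 hex groups with the version digit fixed at 4.

```javascript
// Standalone version of the compact UUID v4 generator above.
function uuid(a) {
  return a
    ? (a ^ Math.random() * 16 >> a / 4).toString(16)
    : ([1e7] + -1e3 + -4e3 + -8e3 + -1e11).replace(/[018]/g, uuid);
}

const id = uuid(); // 36 chars, e.g. matching xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx
```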

There is no one answer. It’s always good to have multiple available methods but be sure to consider the uses and weigh the options.

Redis Linux Service

Redis has a service install script, install_server.sh. It takes a few prompts and adds a new Redis service on Linux. One issue I found when working with Redis passwords is that the service script doesn’t handle start/stop correctly. In a prior post, I detail the installation process for Redis on Amazon Linux. Here’s a fix to the service script. The issue is with the stop: no password is passed to the service script on shutdown. With passwords and no modification you are limited to doing the following:

service redis_6379 start
redis-cli -p 6379 -a YourPassword shutdown

The standard “service stop” command doesn’t work, but here’s an update to the service script that could be implemented in the install_server.sh script by the Redis team. My edits are tagged with #tlr and should be made in the redis_6379 file in the /etc/init.d directory (or in the original install_server.sh script, if you want to get even fancier; install_server.sh is used to create the redis_6379 start/stop script).

#!/bin/sh
#Configurations injected by install_server below....

NAME=`basename ${0}` #tlr

EXEC=/usr/local/bin/redis-server
CLIEXEC=/usr/local/bin/redis-cli

#PIDFILE=/var/run/redis_6379.pid #tlr
#CONF="/etc/redis/6379.conf" #tlr
#REDISPORT="6379" #tlr

PIDFILE=/var/run/${NAME}.pid #tlr
CONF="/etc/redis/${NAME#*_}.conf" #tlr
REDISPORT="${NAME#*_}" #tlr

PassVar=$(grep "requirepass " $CONF | cut -d' ' -f1 | tr -d '\012\015') #tlr
# PassVar is the requirepass variable name (with or without the # comment) #tlr
if [ "$PassVar" = "requirepass" ] #tlr
then #tlr
 requirepass=$(grep "requirepass " $CONF | cut -d' ' -f2 | tr -d '\012\015') #tlr
else #tlr
 # password commented output #tlr
 requirepass="" #tlr
fi #tlr

###############
# SysV Init Information
# chkconfig: - 58 74
# description: redis_6379 is the redis daemon.
### BEGIN INIT INFO
# Provides: redis_6379
# Required-Start: $network $local_fs $remote_fs
# Required-Stop: $network $local_fs $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Should-Start: $syslog $named
# Should-Stop: $syslog $named
# Short-Description: start and stop redis_6379
# Description: Redis daemon
### END INIT INFO

case "$1" in
 start)
 if [ -f $PIDFILE ]
 then
 echo "$PIDFILE exists, process is already running or crashed"
 else

echo "never" > /sys/kernel/mm/transparent_hugepage/enabled #tlr
echo "never" > /sys/kernel/mm/transparent_hugepage/defrag #tlr
# by adding hugepage here it overrides other places set on reboot (i.e. on AWS)

echo "Starting Redis server..."
 $EXEC $CONF
 fi
 ;;
 stop)
 if [ ! -f $PIDFILE ]
 then
 echo "$PIDFILE does not exist, process is not running"
 else
 PID=$(cat $PIDFILE)
 echo "Stopping ..."

# $CLIEXEC -p $REDISPORT shutdown #tlr

if [ -z "$requirepass" ] #tlr
then #tlr
 $CLIEXEC -p $REDISPORT shutdown #tlr
else #tlr
 echo "Using .conf File Password AUTH" #tlr
 $CLIEXEC -p $REDISPORT -a $requirepass shutdown #tlr
fi #tlr

while [ -x /proc/${PID} ]
 do
 echo "Waiting for Redis to shutdown ..."
 sleep 1
 done
 echo "Redis stopped"
 fi
 ;;
 status)
 PID=$(cat $PIDFILE)
 if [ ! -x /proc/${PID} ]
 then
 echo 'Redis is not running'
 else
 echo "Redis is running ($PID)"
 fi
 ;;
 restart)
 $0 stop
 $0 start
 ;;
 *)
 echo "Please use start, stop, restart or status as first argument"
 ;;
esac

These changes simply allow for multiple services by creating symbolic links for additional nodes to the original redis_6379 service start/stop script. Additional Redis node services could be added by doing the following:

ln -s /etc/init.d/redis_6379 /etc/init.d/redis_6101

This creates a start / stop script for the additional node by linking to the original 6379 service start/stop script. You will need an additional service for each node (master or slave) that you are running on the instance. Also, duplicate and modify your /etc/redis/xxxx.conf file as needed. In a prior post, I detail the configuration for a Redis Cluster with Passwords. Finally, issue a command to add the service:

chkconfig --add redis_6101

You can now reboot the Linux instance and your additional services will still be running. You can also use the command $ service redis_6101 restart without an error due to passwords.

Redis Cluster with Passwords

Do a little work with Redis Clusters and you will see, in multiple places, developers trying to get Redis node instances with passwords to work in a Redis cluster environment. The fact is, it’s not supported, and it’s not an option for good reason. There is a second back data channel that essentially makes the password AUTH meaningless. On top of that, a password on a memory key store is, well, meaningless to a good hacker: if you can throw thousands of passwords at the instance in ONE SECOND, the brute-force hack is pretty easy. Maybe future versions of Redis will take password retries into consideration.

On the other hand, there are good reasons for a password on any service. A couple of reasons come to mind: 1. You simply want to stop inadvertent prying eyes, such as an employee within the company who has access to the machine and the redis-cli command tool. 2. Maybe you post your passwords on a sticky note next to the computer-room monitor, so the password itself is not a concern, but the person who has access to the machine but not the computer room should be staying out of the data. 3. You have multiple people working on the machine and you want to protect your instance so a co-developer doesn’t accidentally access and delete your Redis node. The list could probably go on much longer. One thing is for sure: even if you assign a password to your Redis instance, if you open the port up to the public you are opening yourself up to a hack. On the other hand, if it has a password and you are only using it for testing and development, maybe it’s not a big deal. The better option is to use SSH to tunnel to your Redis server over the internet, but that has its own issues.

One reason I chose to set up a cluster with AUTH passwords is that I wanted to build apps running locally on my laptop’s node.js server. I want the app to connect to my remote MySQL development/production database, and the same for the memory key store. That way, in theory, you can develop and test a version on the laptop, then push it to the development EC2 server without any code changes, and it should still work because it would be using the same MySQL database (connecting to hostname mysql.mydomain.com) and the same Redis Cluster connection to hostname redis.mydomain.com. (It won’t be using a local cluster or node on my laptop during development.)

With a Redis cluster environment, there is a back-channel communications port for each Redis node instance. The communications port is the node’s port with a 1 in front of the number: if you have a node sitting on port 6101, there is also a back-channel cluster communications port of 16101. We don’t use it; it’s only used by the Redis server. So in my situation above, I will not open the communications port to the public.
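In other words, the cluster bus port is just the client port plus 10000, which reads as “a 1 in front” for four-digit ports. A quick illustration:

```javascript
// Redis Cluster bus port = client port + 10000.
const busPort = (port) => port + 10000;

console.log(busPort(6101)); // 16101
console.log(busPort(6379)); // 16379
```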

[Figure: six-node Redis cluster, 3 masters and 3 slaves]

Furthermore, why go to all this trouble if you are just working on a development application? Well, in theory, your development application will soon be a minimum viable product (MVP), and that won’t happen, or will be much more difficult later, if you develop the application against a single memory-store environment and then have to transition to a whole new API client for production. It’s better to develop an application once, the right way. If you are developing an application that will have widespread use, you know the cluster environment is needed. It may be a question of your development process, and some won’t want to take this approach, but if you develop with a single node you might expect a multi-stage redevelopment as clusters are needed down the road, and that adds a few steps.

With my development scenario, I am using the same server for all six nodes, with 3 masters and 3 slaves. Again, this isn’t needed until you start moving these nodes to additional EC2 instances or your application usage grows to handle the larger demand. With this design you can always add additional nodes later without application code changes.

So here are the steps to create the cluster:

  1. Create a minimum of 6 Redis node instances with different hosts or ports using the following changes to the Redis conf file. To do this I created a /redis/data directory and copied the initial install 6379.conf file to the new port name in the /etc/redis directory. Then change each with the following:
    port 6101
    
    pidfile /redis/data/redis_pid_6101.pid
    logfile /redis/data/redis_log_6101.log
    
    dbfilename dump_6101.rdb
    appendfilename "appendonly_6101.aof"
    cluster-config-file nodes_6101.conf
    
    requirepass myWickedLong256CharacterHashPassword
    
    dir /redis/data
    
    protected-mode no
    appendonly yes
    cluster-enabled yes
    
    # USE CLUSTER SYS INSTALL DEFAULTS BELOW
    cluster-node-timeout 15000
    cluster-slave-validity-factor 10
    cluster-migration-barrier 1
    cluster-require-full-coverage yes
    
    # USE OTHER SYS INSTALL DEFAULTS

    * create a .conf file for each port 6101 – 6106

  2. start each node with the redis-server command
    /usr/local/bin/redis-server /etc/redis/6101.conf

    * start each port 6101 – 6106

  3. Now we need to hack the redis-trib.rb program with the following changes:
    This code change starts around line 57 and goes to line 125. You can cut, copy, and paste, as long as you get the exact same section of code (using Redis version 3.2.6), or simply scan through my code for the added and changed lines, which are tagged with # tlr start/end comments

    class ClusterNode
     def initialize(addr)
     s = addr.split(":")
     if s.length < 2
       puts "Invalid IP or Port (given as #{addr}) - use IP:Port format"
       exit 1
     end
     # tlr start
     pwd = nil
     if s.length == 3
       pwd = s.pop
     end
     # tlr end
     port = s.pop # removes port from split array
     ip = s.join(":") # if s.length > 1 here, it's IPv6, so restore address
     @r = nil
     @info = {}
     @info[:host] = ip
     @info[:port] = port
     @info[:slots] = {}
     @info[:migrating] = {}
     @info[:importing] = {}
     @info[:replicate] = false
    
    # tlr start
    @info[:password] = pwd 
    # tlr end
    
    @dirty = false # True if we need to flush slots info into node.
     @friends = []
     end
    
    def friends
     @friends
     end
    
    def slots
     @info[:slots]
     end
    
    def has_flag?(flag)
     @info[:flags].index(flag)
     end
    
    def to_s
     "#{@info[:host]}:#{@info[:port]}"
     end
    
    def connect(o={})
     return if @r
     print "Connecting to node #{self}: " if $verbose
     STDOUT.flush
     begin
    
    # tlr start
     if @info[:password] != nil
     @r = Redis.new(:host => @info[:host], :port => @info[:port], :timeout => 60, :password=>@info[:password])
     @r.ping
     else
     @r = Redis.new(:host => @info[:host], :port => @info[:port], :timeout => 60)
     @r.ping
     end
    # tlr end (the 2 lines in the else section are not changed from original)
    
    rescue
     xputs "[ERR] Sorry, can't connect to node #{self}"
     exit 1 if o[:abort]
     @r = nil
     end
     xputs "OK" if $verbose
     end
  4. Next, run the redis-trib.rb program to combine your nodes into one cluster. This may be a super-long command, especially if you have 256-character passwords, but it works. (Do it all on one line.)
    /redis/redis-3.2.6/src/redis-trib.rb create --replicas 1 
    127.0.0.1:6101:my256charPassword 127.0.0.1:6102:my256charPassword 
    127.0.0.1:6103:my256charPassword 127.0.0.1:6104:my256charPassword 
    127.0.0.1:6105:my256charPassword 127.0.0.1:6106:my256charPassword

    * I did notice this produced a few errors, as shown below, but they are simply process-verification errors and the nodes are working fine.

    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    127.0.0.1:6101
    127.0.0.1:6102
    127.0.0.1:6103
    Adding replica 127.0.0.1:6104 to 127.0.0.1:6101
    Adding replica 127.0.0.1:6105 to 127.0.0.1:6102
    Adding replica 127.0.0.1:6106 to 127.0.0.1:6103
    
    (slot master/slave identifiers)
    
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join.....
    [ERR] Sorry, can't connect to node 127.0.0.1:6105
    [ERR] Sorry, can't connect to node 127.0.0.1:6106
    [ERR] Sorry, can't connect to node 127.0.0.1:6103
    [ERR] Sorry, can't connect to node 127.0.0.1:6102
    [ERR] Sorry, can't connect to node 127.0.0.1:6104
    >>> Performing Cluster Check (using node 127.0.0.1:6101)
    M: 4f531ed4bcfd058b688a8692138fbdcc01a9dc7e 127.0.0.1:6101
     slots:0-5460 (5461 slots) master
     0 additional replica(s)
    [OK] All nodes agree about slots configuration.
    >>> Check for open slots...
    >>> Check slots coverage...
    [ERR] Not all 16384 slots are covered by nodes.

    A few more edits would fix the warning errors. 🙂 Since this is a one-time command to initially set up your cluster, it’s not an issue. To add nodes to your existing cluster in the future you will use the redis-cli command-line tool with the CLUSTER MEET command.

  5. Confirm the cluster is working with the redis-cli command by setting a value.
    /usr/local/bin/redis-cli -c -p 6101 -a my256CharPassword
    SET foo bar
    GET foo
    CLUSTER SLOTS
    CLUSTER INFO

    You might notice foo gets pushed to a slot on another master. Try SET a a and then try SET z z. You can also connect to any of the six nodes (6101 – 6106) to verify the sets with a GET command (GET foo).

That’s all there is to it. You can open the 6101 – 6106 ports to your local laptop and start developing on your local machine using the node.js ioredis client package on NPM at https://www.npmjs.com/package/ioredis
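The slot a key lands in is deterministic: HASH_SLOT = CRC16(key) mod 16384, with CRC16-CCITT (XMODEM) as used by Redis. A sketch of computing it client-side (hash tags like {user}:x, which pin related keys to one slot, are ignored here):

```javascript
// Compute a key's Redis Cluster hash slot client-side (CRC16-XMODEM mod 16384).
function crc16(key) {
  let crc = 0;
  for (let i = 0; i < key.length; i++) {
    crc ^= key.charCodeAt(i) << 8;
    for (let bit = 0; bit < 8; bit++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xffff : (crc << 1) & 0xffff;
    }
  }
  return crc;
}

const slot = (key) => crc16(key) % 16384;

console.log(slot('foo')); // 12182, matching CLUSTER KEYSLOT foo
```

With the 3-master split shown above (0-5460, 5461-10922, 10923-16383), the computed slot tells you which master answers SET foo bar.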

P.S. Of course that’s not all! 🙂 Additional code changes would be needed, for example for the slave-to-master login with AUTH.

Redis Session and MySQL Login

The following does a Redis session store and MySQL user registration and login, as well as a simple message post. This runs on a node.js server.

package.json

{
 "name": "users",
 "version": "1.0.0",
 "description": "Register User",
 "main": "app.js",
 "script": "./app.js",
 "watch": true,
 "ignore_watch": ["node_modules"],
 "keywords": [
"login"
 ],
 "author": "Todd Rodzen",
 "license": "MIT",
 "dependencies": {
 "async": "^1.2.1",
 "body-parser": "^1.13.0",
 "connect-redis": "^2.3.0",
 "cookie-parser": "^1.3.5",
 "ejs": "^2.3.1",
 "express": "^4.14.0",
 "express-session": "^1.11.3",
 "mysql": "^2.7.0",
 "redis": "^0.12.1"
 }
}

app.js

/**
 Loading all dependencies.
**/
var express = require("express");
var redis = require("redis");
var mysql = require("mysql");
var session = require('express-session');
var redisStore = require('connect-redis')(session);
var bodyParser = require('body-parser');
var cookieParser = require('cookie-parser');
var path = require("path");
var async = require("async");
var client = redis.createClient();
var app = express();
var router = express.Router();

// Always use MySQL pooling.
// Helpful for multiple connections.

var pool = mysql.createPool({
 connectionLimit : 100,
 host : 'hmmmmm',
 user : 'you',
 password : 'ssshhhhh',
 database : 'hmmmm',
 debug : false
});

app.set('views', 'view');
app.engine('html', require('ejs').renderFile);

// IMPORTANT
// Here we tell Express to use Redis as session store.
// We pass Redis credentials and port information.
// And express does the rest ! 

app.use(session({
 secret: 'topics-session',
 store: new redisStore({ host: 'localhost', port: 6379, client: client,ttl : 260}),
 saveUninitialized: false,
 resave: false
}));
app.use(cookieParser("secretSign#143_!223"));
app.use(bodyParser.urlencoded({extended: false}));
app.use(bodyParser.json());

// This is an important function.
// This function does the database handling task.
// We also use async here for control flow.

function handle_database(req,type,callback) {
 async.waterfall([
 function(callback) {
 pool.getConnection(function(err,connection){
 if(err) {
 // if there is error, stop right away.
 // This will stop the async code execution and goes to last function.
 callback(true);
 } else {
 callback(null,connection);
 }
 });
 },
 function(connection,callback) {
 var SQLquery;
 switch(type) {
 case "login" :
 SQLquery = "SELECT * from user_login WHERE user_email='"+req.body.user_email+"' AND `user_password`='"+req.body.user_password+"'";
 break;
 case "checkEmail" :
 SQLquery = "SELECT * from user_login WHERE user_email='"+req.body.user_email+"'";
 break;
 case "register" :
 SQLquery = "INSERT into user_login(user_email,user_password,user_name) VALUES ('"+req.body.user_email+"','"+req.body.user_password+"','"+req.body.user_name+"')";
 break;
 case "addStatus" :
 SQLquery = "INSERT into msg_text(user_id,msg_text) VALUES ("+req.session.key["user_id"]+",'"+req.body.status+"')";
 break;
 case "getStatus" :
 SQLquery = "SELECT * FROM msg_text WHERE user_id="+req.session.key["user_id"];
 break;
 default :
 break;
 }
 callback(null,connection,SQLquery);
 },
 function(connection,SQLquery,callback) {
 connection.query(SQLquery,function(err,rows){
 connection.release();
 if(!err) {
 if(type === "login") {
 callback(rows.length === 0 ? false : rows[0]);
 } else if(type === "getStatus") {
 callback(rows.length === 0 ? false : rows);
 } else if(type === "checkEmail") {
 callback(rows.length === 0 ? false : true);
 } else {
 callback(false);
 }
 } else {
 // if there is error, stop right away.
 // This will stop the async code execution and goes to last function.
 callback(true);
 }
 });
 }],
 function(result){
 // This function gets call after every async task finished.
 if(typeof(result) === "boolean" && result === true) {
 callback(null);
 } else {
 callback(result);
 }
 });
}

/**
 --- Router Code begins here.
**/

router.get('/',function(req,res){
 res.render('index.html');
});

router.post('/login',function(req,res){
 handle_database(req,"login",function(response){
 if(response === null) {
 res.json({"error" : "true","message" : "Database error occurred"});
 } else {
 if(!response) {
 res.json({
 "error" : "true",
 "message" : "Login failed ! Please register"
 });
 } else {
 req.session.key = response;
 res.json({"error" : false,"message" : "Login success."});
 }
 }
 });
});

router.get('/home',function(req,res){
 if(req.session.key) {
 res.render("home.html",{ email : req.session.key["user_name"]});
 } else {
 res.redirect("/");
 }
});

router.get("/fetchStatus",function(req,res){
 if(req.session.key) {
 handle_database(req,"getStatus",function(response){
 if(!response) {
 res.json({"error" : false, "message" : "There is no status to show."});
 } else {
 res.json({"error" : false, "message" : response});
 }
 });
 } else {
 res.json({"error" : true, "message" : "Please login first."});
 }
});

router.post("/addStatus",function(req,res){
 if(req.session.key) {
 handle_database(req,"addStatus",function(response){
 if(!response) {
 res.json({"error" : false, "message" : "Status is added."});
 } else {
 res.json({"error" : false, "message" : "Error while adding Status"});
 }
 });
 } else {
 res.json({"error" : true, "message" : "Please login first."});
 }
});

router.post("/register",function(req,res){
 handle_database(req,"checkEmail",function(response){
 if(response === null) {
 res.json({"error" : true, "message" : "This email is already present"});
 } else {
 handle_database(req,"register",function(response){
 if(response === null) {
 res.json({"error" : true , "message" : "Error while adding user."});
 } else {
 req.session.key = response;
 res.json({"error" : false, "message" : "Registered successfully."});
 }
 });
 }
 });
});

router.get('/logout',function(req,res){
 if(req.session.key) {
 req.session.destroy(function(){
 res.redirect('/');
 });
 } else {
 res.redirect('/');
 }
});

app.use('/',router);

app.listen(4201,function(){
 console.log("I am running at 4201");
});

view/index.html (code)
https://github.com/trodzen/MySQL-Redis-Session-Register/blob/master/view/index.html

view/home.html (code)
https://github.com/trodzen/MySQL-Redis-Session-Register/blob/master/view/home.html

You will need a working Redis db structure and MySQL, plus the two tables (user_login and msg_text) used in the SELECT/INSERT SQL statements.
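One caution about the handler code above: the SQL strings concatenate request values straight into the query, which invites SQL injection. The mysql package on npm supports ? placeholders (pool.query(sql, values, callback)) that escape values for you. A dependency-free sketch of why it matters, where escapeSql is a simplified stand-in for that escaping, not the driver's actual implementation:

```javascript
// Why concatenated SQL is injectable, and the shape of the parameterized fix.
// escapeSql is a simplified stand-in for what mysql's '?' placeholders do.
function escapeSql(value) {
  return "'" + String(value).replace(/\\/g, '\\\\').replace(/'/g, "\\'") + "'";
}

function format(sql, params) {
  const queue = params.slice();
  return sql.replace(/\?/g, () => escapeSql(queue.shift()));
}

// Concatenation: a crafted email rewrites the WHERE clause into a tautology.
const email = "x' OR '1'='1";
const bad = "SELECT * FROM user_login WHERE user_email='" + email + "'";

// Placeholder: the quote is escaped, so the payload stays inside the literal.
const good = format('SELECT * FROM user_login WHERE user_email = ?', [email]);
```

With the real driver you would write pool.query('SELECT * FROM user_login WHERE user_email = ?', [req.body.user_email], callback) instead of building SQLquery by hand.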

Try it, it’s easy.
That’s All Folks!

Redis on Amazon Linux

The YUM-installed Redis version on Amazon Linux is an older version, so we will go through the steps to install Redis 3.2.6.


  1. sudo -i
  2. yum update
  3. yum install -y gcc*
  4. yum install -y tcl
  5. mkdir /redis
  6. sudo chmod 2775 /redis
  7. cd /redis
  8. wget http://download.redis.io/releases/redis-3.2.6.tar.gz
  9. tar xzf redis-3.2.6.tar.gz
  10. cd redis-3.2.6
  11. make
  12. make test
  13. make install
  14. cd utils
  15. chmod +x install_server.sh
  16. ./install_server.sh
    install with the following values:

    Welcome to the redis service installer
    This script will help you easily set up a running redis server
    
    Please select the redis port for this instance: [6379]
    Selecting default: 6379
    Please select the redis config file name [/etc/redis/6379.conf]
    Selected default - /etc/redis/6379.conf
    Please select the redis log file name [/var/log/redis_6379.log]
    Selected default - /var/log/redis_6379.log
    Please select the data directory for this instance [/var/lib/redis/6379]
    Selected default - /var/lib/redis/6379
    Please select the redis executable path [] /usr/local/bin/redis-server
    Selected config:
    Port : 6379
    Config file : /etc/redis/6379.conf
    Log file : /var/log/redis_6379.log
    Data dir : /var/lib/redis/6379
    Executable : /usr/local/bin/redis-server
    Cli Executable : /usr/local/bin/redis-cli
  17. chkconfig --level 2345 redis_6379 on
  18. chmod 2775 /etc/redis
  19. chmod 664 /etc/redis/6379.conf
  20. edit the /etc/redis/6379.conf file to set a password

That’s All!

p.s. Want to set a password on the Redis store? It’s not generally recommended, because a password does little to secure a fast, memory-based key store when thousands of password auth attempts can be thrown at it PER SECOND! On the other hand, you may want a password to deter simple prying eyes, such as in-house staff who won’t go to the trouble of building a password-cracking program. Or maybe you think the added password might prevent an inadvertent command against your Redis database, like an accidental delete. Whatever the reason, here’s what you need to do:

edit the /etc/redis/6379.conf file:
1. Add your password to the requirepass line and uncomment it. Make it really long.
2. Turn protected mode off in the same file by setting protected-mode to no.
3. (Optional) Comment out the bind statement to allow connections from any interface. If you do this, you had better control that port somewhere else, e.g. with an AWS security group.
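Taken together, the three edits would leave /etc/redis/6379.conf looking something like this (the password is a placeholder; pick your own long value):

```
# /etc/redis/6379.conf (illustrative excerpt)

# 1. requirepass uncommented, with a long password
requirepass mywickedLong256character?Password

# 2. protected mode turned off
protected-mode no

# 3. (optional) bind commented out so Redis listens on all interfaces
# bind 127.0.0.1
```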

edit the /etc/init.d/redis_6379 file and add the following lines in the start and stop cases of the script:

echo "Using Auth Password"
CLIEXEC="/usr/local/bin/redis-cli -a mywickedLong256character?Password"

Now you can do a sudo service redis_6379 restart command.

MongoDB on Amazon Linux

Here are the steps to install MongoDB on an Amazon Linux EC2 server instance. FYI: the prepackaged Amazon YUM package does not work, so don’t install without adding a new repo file first.


Run the following commands in a PuTTY terminal.

sudo chmod 2775 /etc/yum.repos.d

Using Sublime, create a text file called mongodb-org-3.4.repo with the following contents, and upload it to the /etc/yum.repos.d directory using Filezilla.

[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc

Run the following commands:

sudo yum install -y mongodb-org
sudo service mongod start
sudo chkconfig mongod on # this turns on auto start on reboot

This process is further explained at https://docs.mongodb.com/manual/tutorial/install-mongodb-on-amazon/

Note:

On a small system you will also need to do the following:

sudo chmod 664 /etc/mongod.conf

Then edit the /etc/mongod.conf file and add the following lines to the storage: block to enable small files.

storage:
   mmapv1:
      smallFiles: true

Then start the service with:

mongod -f /etc/mongod.conf # start mongod with the edited config file

AMI Build All-in-One

Full build process

  1. Create an EC2 Linux Instance base – Amazon Linux AMI 2016.09.1 (HVM), SSD Volume Type – ami-0b33d91d
  2. Install the LAMP stack with the default Apache port set to 8080, as it will sit behind an Nginx reverse proxy server on the same instance (Apache 2.4, MySQL, PHP)
  3. Install the MEAN Stack
  4. Install Nginx Reverse Proxy Server
  5. Install ColdFusion 2016 update 3 Server

The server is set up and available for free with a service contract from GTK Solutions.

Building a public Amazon AMI

A frustrating issue with building a public Amazon AMI is that the authorized_keys file you used to build and modify the instance must be removed before the image is shared, which also removes your own access to the instance.


It’s the old chicken-and-egg problem: you remove the key, but now you can’t log in to your own system; you can only rebuild an EC2 instance from the AMI image. Beyond that, if you only do an rm to delete the file, the blocks of key data are still there in the EBS disk image. Someone could easily unpack the blocks, undelete the file to restore the authorized_keys file, connect to your private instances, and run up your AWS bill or worse.

What’s the solution? Use additional EBS volumes to create the image. Here is the procedure:

  1. Create a new 1gb EBS volume, attach it, and mount it on the running instance, say under /keys. (Use the Amazon EBS guide to format and attach the EBS volume.)
  2. Copy your authorized_keys to /keys on the new EBS volume.
  3. Delete all sensitive files and all authorized_keys files from the primary EBS volume. Also delete the .bash_history file and any other logs or passwords.
     sudo chmod 660 /root/.bash_history

  4. Exit the PuTTY terminal windows and, using Filezilla, save empty history files to /root/.bash_history and /home/ec2-user/.bash_history
  5. Delete /tmp files
  6. Do not snapshot the live EBS volume as it still contains the deleted files and you don’t want to make them public in the new AMI. Instead,
  7. Create a new EBS volume, attach, and mount it on the running instance, say under /ebsimage
  8. Copy the root file system over to the new EBS volume. This only copies the current view of the undeleted files and does not copy the blocks containing the deleted files or any other modified file information. The command might look something like:
    rsync -axvSHAX --exclude 'ebsimage' / /ebsimage/
    
  9. Copy your authorized_keys back to the primary EBS volume.
  10. Unmount and detach the new EBS volume.
  11. Create an EBS snapshot of the new EBS volume.
  12. Register the EBS snapshot as a new AMI.
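The reason step 8 works: a file-level copy of the live tree carries over only files that still exist, while a block-level snapshot would also carry the blocks of deleted files. A minimal local simulation of that idea (using plain cp in temporary directories; the real procedure uses the rsync command shown above):

```shell
#!/bin/sh
# Simulate: delete a sensitive file, then copy the *current view* of the
# tree. The copy contains only files that still exist, unlike a raw
# block-level snapshot, which would carry the deleted blocks along.
set -e
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "ssh-rsa AAAA..." > "$SRC/authorized_keys"   # sensitive file
echo "app data"        > "$SRC/app.txt"           # ordinary file
rm "$SRC/authorized_keys"                         # delete before copying
cp -a "$SRC/." "$DST/"                            # file-level copy of the live tree
[ ! -e "$DST/authorized_keys" ] && echo "key absent from image"
[ -f "$DST/app.txt" ] && echo "app data present"
```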

Lets Encrypt

The following is a re-post excerpt from Brennen Bearnes at https://www.digitalocean.com/community/tutorials/how-to-set-up-a-node-js-application-for-production-on-ubuntu-16-04 with great thanks!

Install Let’s Encrypt and Dependencies

Let’s Encrypt is a new Certificate Authority that provides an easy way to obtain free TLS/SSL certificates.

You must own or control the registered domain name that you wish to use the certificate with. If you do not already have a registered domain name, you may register one with one of the many domain name registrars out there (e.g. Namecheap, GoDaddy, etc.).

If you haven’t already, be sure to create an A Record that points your domain to the public IP address of your server. This is required because of how Let’s Encrypt validates that you own the domain it is issuing a certificate for. For example, if you want to obtain a certificate for example.com, that domain must resolve to your server for the validation process to work.

For more detail on this process, see How To Set Up a Host Name with DigitalOcean and How To Point to DigitalOcean Nameservers from Common Domain Registrars.

Although the Let’s Encrypt project has renamed their client to certbot, the name of the package in the Ubuntu 16.04 repositories is simply letsencrypt. This package will be completely adequate for our needs.

To install the package, type:

  • sudo apt-get install letsencrypt

The letsencrypt client should now be ready to use on your server.

Retrieve Initial Certificate

Since nginx is already running on port 80, and the Let’s Encrypt client needs this port in order to verify ownership of your domain, stop nginx temporarily:

  • sudo systemctl stop nginx

Run letsencrypt with the Standalone plugin:

  • sudo letsencrypt certonly --standalone

You’ll be prompted to answer several questions, including your email address, agreement to a Terms of Service, and the domain name(s) for the certificate. Once finished, you’ll receive notes much like the following:

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/your_domain_name/fullchain.pem. Your cert will expire
   on 2016-08-10. To obtain a new version of the certificate in the
   future, simply run Let's Encrypt again.
 - If you like Let's Encrypt, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

Note the path and expiration date of your certificate in the example output. Your certificate files should now be available in /etc/letsencrypt/live/your_domain_name/.

Configure Nginx for HTTPS

You’ll need to add some details to your Nginx configuration. Open /etc/nginx/sites-enabled/default in nano (or your editor of choice):

  • sudo nano /etc/nginx/sites-enabled/default

Replace its contents with the following:

/etc/nginx/sites-enabled/default
# HTTP - redirect all requests to HTTPS:
server {
        listen 80;
        listen [::]:80 default_server ipv6only=on;
        return 301 https://$host$request_uri;
}

# HTTPS - proxy requests on to local Node.js app:
server {
        listen 443;
        server_name your_domain_name;

        ssl on;
        # Use certificate and key provided by Let's Encrypt:
        ssl_certificate /etc/letsencrypt/live/your_domain_name/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/your_domain_name/privkey.pem;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

        # Pass requests for / to localhost:8080:
        location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-NginX-Proxy true;
                proxy_pass http://localhost:8080/;
                proxy_ssl_session_reuse off;
                proxy_set_header Host $http_host;
                proxy_cache_bypass $http_upgrade;
                proxy_redirect off;
        }
}

Exit the editor and save the file.

Check the configuration for syntax errors by typing:

  • sudo nginx -t

When no errors are detected, start Nginx again:

  • sudo systemctl start nginx

You can test your new certificate and Nginx configuration by visiting http://your_domain_name/ in your browser. You should be redirected to https://your_domain_name/, without any security errors, and see the “Hello World” printed by your Node.js app.

Set Up Let’s Encrypt Auto Renewal

Warning: You can safely complete this guide without worrying about certificate renewal, but you will need to address it for any long-lived production environment.

You may have noticed that your Let’s Encrypt certificate is due to expire in 90 days. This is a deliberate feature of the Let’s Encrypt approach, intended to minimize the amount of time that a compromised certificate can exist in the wild if something goes wrong.

The Let’s Encrypt client can automatically renew your certificate, but in the meanwhile you will either have to repeat the certificate retrieval process by hand, or use a scheduled script to handle it for you. The details of automating this process are covered in How To Secure Nginx with Let’s Encrypt on Ubuntu 16.04, particularly the section on setting up auto renewal.
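As a sketch of the scheduled-script approach (the schedule, log path, and reload step here are assumptions; the linked tutorial covers this properly), a system crontab entry might look like:

```
# /etc/crontab (illustrative): attempt renewal twice daily; the client
# only renews certificates that are close to expiry, so this is cheap
0 0,12 * * * root letsencrypt renew >> /var/log/le-renewal.log 2>&1 && systemctl reload nginx
```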

Reverse proxy on node.js

A reverse proxy is an important part of the puzzle in a production application. The idea is to create a reverse proxy Nginx server that interacts with the world and distributes user requests to a farm of back-end node.js or Apache application servers. The actual back-end application server can be restricted to communicate only with the reverse proxy server, limiting its exposure to attacks.

The good thing about this agile application design is that you don’t have to modify your code for the reverse proxy, except to recognize that different processes may be broken out onto different servers or server farms. Creating small, single-purpose back-end applications is therefore preferred over a single, larger, more complex back-end server design that does everything in one process. For example, serving email and user logins are certainly better handled by separate application processes.

Another benefit of running an Nginx reverse proxy is that a single reverse proxy can serve applications and websites from both Apache servers and node.js servers; therefore, mydomain.com might be served by the Apache server while mydomain.com/node might be served by the node.js server.

An Nginx-based reverse proxy server is installed with the following:

sudo yum install nginx
sudo chmod 664 /etc/nginx/nginx.conf

Then add the following location blocks inside the server {} directive of the /etc/nginx/nginx.conf file (using Filezilla or your editor of choice):

location / {
 proxy_pass http://localhost:8080;
 proxy_http_version 1.1;
 proxy_set_header Upgrade $http_upgrade;
 proxy_set_header Connection 'upgrade';
 proxy_set_header Host $host;
 proxy_cache_bypass $http_upgrade;
 }

location /node {
 proxy_pass http://localhost:4200;
 proxy_http_version 1.1;
 proxy_set_header Upgrade $http_upgrade;
 proxy_set_header Connection 'upgrade';
 proxy_set_header Host $host;
 proxy_cache_bypass $http_upgrade;
 }

Restart the Nginx reverse proxy server:

sudo service nginx restart

Add auto start to the nginx service with

sudo chkconfig nginx on

I prefer to start Nginx as the reverse proxy on port 80 and change the default port in httpd.conf to 8080. Then, unless a request matches a specifically defined location route, it proxies through Nginx to the Apache server by default.
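The Apache half of that arrangement is a one-line change in httpd.conf (the path shown is the Amazon Linux default):

```
# /etc/httpd/conf/httpd.conf
# Apache answers on 8080; Nginx owns port 80 and proxies selected routes to it
Listen 8080
```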

That’s All.
