Todd Rodzen

Agile Application Development, DevOps, Cloud Architecture Engineering

to Cluster or not to Cluster

This post reviews setting up a cluster environment to run multiple node.js service instances across processor threads. In our last post, we reviewed pooled connections for the MySQL connector in node.js on a Linux server running on an Amazon AWS EC2 instance. The conclusion was that back-end JavaScript server code should use MySQL pooling methods from the beginning of the application development process, because the pooled API is a significant change from the non-pooled connection API. Understand the techniques you intend to use before sitting down to write production code for a back-end application server: write your application once and do it right the first time.

Now let's take a look at clusters, a built-in node.js module. (We are not talking about the add-on module from npmjs.com that goes by the same name.) The node.js documentation and several blog posts describe setting up a cluster.js that runs a master process and forks worker processes, each running your app.js code, so the application uses multiple processor threads. This increases throughput and creates a multi-threaded environment. A simplified node.js cluster setup is shown here.

// cluster.js
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // fork one worker per reported CPU thread:
  // for (var i = 0; i < numCPUs; i++) {
  // here we force 10 workers instead (see the note below):
  for (var i = 0; i < 10; i++) {
    cluster.fork();
  }

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  // change this line to your Node.js app entry point
  require('./app.js');
}

In simplified terms, the code above uses a master process to fork multiple instances (multi-threaded) of your node.js application. We tested this on an Amazon AWS EC2 t2.micro (free tier) with the async-test.js script from our prior blog post, using pooled MySQL connections to an external RDS MySQL database. We used Siege with up to 500 concurrent connections, hitting the server constantly for one minute and producing thousands of hits. A single-process environment produced database connection errors, while a 10-process cluster (multi-threaded) running the same async-test produced no connection errors. That result makes the case for a clustered, multi-threaded environment.

When you look at my code above you can see I altered the loop to force 10 process forks instead of using numCPUs. Remember, I am running this on a virtual shared server at Amazon AWS. os.cpus().length reports back 1, the number of process threads AWS provides for this instance type. But a little digging into the full array returned by os.cpus() reveals my AWS t2.micro is actually running on a Xeon E5-2670 2.5GHz processor with 10 cores and 20 threads; on a dedicated machine with that processor, os.cpus().length would return 20. Amazon does a lot of behind-the-scenes work to throttle processes, but getting more than what you pay for is not the issue here (even though you may be getting it all for free through the free tier). The issue is a working production application design that doesn't fail at critical points like peak database traffic. What we found is that even the AWS t2.micro (free tier) handled traffic better with multiple clustered processes, and it could handle a huge amount of it: one Siege test completed over 8000 hits in 30 seconds with no connection errors. Since it is a shared server, it is doubtful all 20 threads would be useful. It's interesting to note Amazon defines the t2.micro as 1 vCPU with 6 CPU credits/hour, a throttling mechanism distinct from actual core processing threads; Amazon uses processor credits to ensure you get what you pay for, or to throttle your application as needed. As in life, you only get what you pay for! 🙂
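To see what your own instance reports, a short sketch like this prints both the thread count and the underlying processor model (which is how the Xeon showed through on our t2.micro):

// cpu-info.js - inspect what os.cpus() reports on this instance
var os = require('os');

var cpus = os.cpus();
console.log('logical CPU threads reported: ' + cpus.length);
console.log('processor model: ' + cpus[0].model + ' @ ' + cpus[0].speed + ' MHz');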

But do we write code and change our application design to handle the cluster methods? No! There is a better way, and it's all handled for us by the PM2 module from npmjs.com; there is no need to write code like the cluster.js sample above. Multi-threaded clustering, base node process control, monitoring, and performance optimization are areas where you don't need to reinvent the wheel: well-designed products already handle these functions. To start, just install the PM2 module with

npm install -g pm2

A useful additional tool in connection with PM2 is the https://app.keymetrics.io dashboard monitor. You can go there and create a bucket and a server connection to your PM2 server to collect metrics data and enable external monitoring and control. See the PM2 documentation for the full command set; some of the most useful commands are

pm2 start app.js

pm2 stop all

pm2 start app.js -i 4  # start 4 cluster instances of your app.js

pm2 list  # list all running process threads
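If you would rather keep the cluster settings in a file than on the command line, PM2 also accepts a process file. Here is a minimal sketch, assuming your entry point is app.js (the app name is just a label):

// ecosystem.config.js - a minimal PM2 process file
module.exports = {
  apps: [{
    name      : 'async-test',  // any label you like
    script    : './app.js',    // your Node.js entry point
    instances : 4,             // or 'max' for one per reported CPU thread
    exec_mode : 'cluster'      // PM2 handles the cluster forking
  }]
};

Start it with pm2 start ecosystem.config.js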

There is another module called forever, but we found PM2 to be more advanced, more robust, and better supported.

One final item is to set up PM2 to run on startup. The following command prints a line of code for you to copy and paste into your terminal window; that one line configures PM2 to start automatically on reboot.

pm2 startup
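Once your applications are running, also run pm2 save so the startup script knows which process list to resurrect at boot:

pm2 save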

What we do need to do in our application design is write strong, well-designed code that handles multiple instances running at once, keeping session and environment state in a store that every cluster process can reach. In a later post we will cover using Redis as a global external store for process variables; codeforgeek also provides a great tutorial. This is the really important part of application development. The two areas that must be designed for multi-threaded clusters and multiple instances are session-like variables and database connection transactions: for example, three related SQL inserts (or Redis store SETs) must complete before another process tries to select that same related set of data.
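As a preview, here is a minimal sketch using the redis client from npmjs.com (the key name and stored value are only illustrations); because the store lives outside the node processes, every clustered worker sees the same data:

// shared-state.js - a cross-worker store sketch (key and value are placeholders)
var redis = require('redis');
var client = redis.createClient(); // defaults to localhost:6379

// any worker can write a value...
client.set('session:abc123', JSON.stringify({ user: 'mazu' }), function (err) {
  if (err) return console.log('redis set failed', err);

  // ...and any other worker in the cluster can read it back
  client.get('session:abc123', function (err, reply) {
    if (err) return console.log('redis get failed', err);
    console.log('shared value: ' + reply);
    client.quit();
  });
});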

In conclusion, I recommend installing PM2 and using it from the start of the agile application development process. An added benefit of using PM2 in development is its logging and debugging tooling.
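Two of those built-in tools worth calling out are

pm2 logs  # tail the combined log output of every instance

pm2 monit  # a live CPU and memory monitor for each process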


to Pool or not to Pool

Using the Node.js MySQL module, I have done some testing. Under real-world stress, multiple simultaneous connections can result in database connection failures. The answer is pooled connections. Here is my test code.

Non-pooled Connection

// test-sync.js
var express = require('express');
var mysql = require('mysql');
var app = express();

app.get('/test-sync', function (req, res) {
  // console.log('1. Received Get')
  res.send('Hello World!');

  // a new connection is created (and torn down) on every request
  var connection = mysql.createConnection({
    host     : 'localhost',
    user     : 'mazu',
    password : '',
    database : 'mazudb'
  });

  connection.connect(function (err) {
    if (!err) {
      // console.log('2. Database Connected');
    } else {
      console.log('Error connecting database ...\n\n');
    }
  });

  var sql = "SELECT * FROM `test-mysql-Customers` LIMIT 2";
  connection.query(sql, function (err, rows, fields) {
    connection.end();
    if (!err) {
      // console.log('3. SQL Completed');
    } else {
      console.log('Error while performing Query.');
    }
  });
});

app.listen(4200, function () {
 console.log('Example app listening on port 4200!')
})

Pooled Connection

var express = require('express');
var mysql = require('mysql');
var app = express();

var pool = mysql.createPool({
  connectionLimit : 100, // important
  host     : 'localhost',
  user     : 'mazu',
  password : '',
  database : 'mazudb',
  debug    : false
});

function handle_database(req, res) {

  pool.getConnection(function (err, connection) {
    if (err) {
      res.json({"code" : 100, "status" : "Error in connection database"});
      console.log(JSON.stringify({"code" : 100, "status" : "Error in connection database"}));
      return;
    }

    connection.setMaxListeners(0);

    // console.log('connected as id ' + connection.threadId);

    var sql = "SELECT * FROM `test-mysql-Customers` LIMIT 2";

    connection.query(sql, function (err, rows) {
      connection.release(); // return this connection to the pool
      if (!err) {
        res.json(rows);
      }
    });

    connection.on('error', function (err) {
      console.log(JSON.stringify({"code" : 200, "status" : "Error in connection database"}));
      res.json({"code" : 200, "status" : "Error in connection database"});
      return;
    });
  });
}

app.get("/test-async", function (req, res) {
  handle_database(req, res);
});

app.listen(4200);

The primary difference between the two methods: the first does a createConnection, connect, query, and end on every request, while the pooled method calls createPool once, which maintains a queue of reusable connections for the queries. When a get request is received, getConnection takes one connection from the pool, the query runs on it, and a release finally returns that connection to the pool.
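Worth noting: when a request only needs a single statement, the mysql module also provides the pool.query shorthand, which checks a connection out, runs the query, and releases the connection back to the pool in one call. A minimal sketch against the pool defined above:

// pool.query wraps getConnection, query, and release in one call
pool.query('SELECT * FROM `test-mysql-Customers` LIMIT 2', function (err, rows) {
  if (err) {
    console.log('Error while performing Query.');
    return;
  }
  console.log(rows);
});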

A stress test on these methods shows similar throughput: essentially the same number of transactions get through, but with non-pooled connections the likelihood of database connection errors is much higher. I ran a Siege stress test with the following

siege -c200 -t60s -d3 http://localhost:4200/test-sync

It produced about 8000 hits; the synchronous non-pooled method returned about 16 database connection errors, while the pooled method returned none. This test used an Amazon EC2 t2.micro running Linux with an external RDS MySQL database. Obviously, database connection errors are bad! Pooled connections are the way to go.

Express npm module

The Express module, available from http://npmjs.com, is a common tool for quickly building applications and works well for back-end node.js APIs. Let's get started with Express on an Amazon Linux EC2 node.js server; run the following commands

mkdir -p -v /node/async-test && cd $_
npm init  # and answer the questions

npm install mysql --save
npm install express --save

Create the helloworld.js file in the /node/async-test directory with the following Express get and listen calls:

var express = require('express')
var app = express()

app.get('/helloworld', function (req, res) {
 res.send('Hello World!')
})

app.listen(4200, function () {
 console.log('Example app listening on port 4200!')
})

Run the node.js application with:

node helloworld.js

Now test at a web browser with:

http://myhost.amazonaws.com:4200/helloworld

The result in the web browser is Hello World! Go to http://expressjs.com/ for complete documentation on the Express module.

Linux Siege package

This describes the steps to install Siege on the Amazon AWS EC2 Linux server.

We will be using a tool called Siege to test the server with hundreds or thousands of simultaneous calls. Let's first turn on the Extra Packages for Enterprise Linux (EPEL) repository from the Fedora project. By default, this repository is present in /etc/yum.repos.d on Amazon Linux instances, but it is not enabled.

sudo yum-config-manager --enable epel

Note: For information on enabling the EPEL repository on other distributions, such as Red Hat and CentOS, see the EPEL documentation at https://fedoraproject.org/wiki/EPEL.

Now install Siege with the following command and answer Y to the two prompts.

sudo yum install siege

Now you can lay Siege to a server with the following command

siege -c25 -t20s -d3 http://localhost:4200

-c25 = 25 simultaneous users
-t20s = do it continuously for 20 seconds
-d3 (default) = a random time delay for each request of 1 to 3 seconds.
url request = http://localhost:4200

The full documentation on Siege is available at https://www.joedog.org/siege-manual/

 

node.js to MySQL Connector

Here we will set up the node.js server to connect to a MySQL database.

Using a Putty/SSH terminal window on the Linux node.js server, first go to the working directory for your node.js application, then install the mysql module using npm as shown below. Details of the mysql package we are using are found at https://www.npmjs.com/package/mysql

It is important to note that npm install just installs the package to a /node_modules/ directory within your current working directory. Since we don't want a package that may be modified or removed elsewhere to adversely affect our application, we install the package directly under the /node application working directory.

cd /node
npm install mysql

Then, using Sublime or another text editor, create a test-mysql.js file with the code below. Next, using Filezilla or an SSH FTP client, upload the file to the /node directory.

const readline = require('readline');
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

// prompt for the connection details, then run the test
rl.question('mysql host name? ', (hostanswer) => {
  rl.question('mysql user name? ', (useranswer) => {
    rl.question('mysql password? ', (passwordanswer) => {
      rl.question('mysql database? ', (dbanswer) => {
        rl.close();

        var mysql = require('mysql');
        var connection = mysql.createConnection({
          host : hostanswer,
          user : useranswer,
          password : passwordanswer,
          database : dbanswer,
          multipleStatements: true
        });

        connection.connect();

        var sql = "DROP TABLE IF EXISTS `test-mysql-Customers`; CREATE TABLE `test-mysql-Customers` (`CompanyName` varchar(45) NOT NULL,`City` varchar(45) DEFAULT NULL, `Country` varchar(45) DEFAULT NULL,PRIMARY KEY (`CompanyName`)); INSERT INTO `test-mysql-Customers` VALUES ('Company1','City1','Country1'),('Company2','City2','Country2'); SELECT * from `test-mysql-Customers`";

        connection.query(sql, function(err, rows, fields) {
          if (!err) {
            console.log('The mysql return data: ', rows);
            console.log('The mysql test completed successfully. test-mysql-Customers table was created.');
          } else {
            console.log('Error while performing Query.');
          }
        });

        connection.end();
      });
    });
  });
});

Next, go back to the terminal window, still in the /node working directory, and run the test-mysql.js server application. When prompted, enter the details of a valid mysql server. In a previous post I created a mysql database on an Amazon RDS server; an RDS or any valid mysql server will work. The test simply creates a database table on the server called test-mysql-Customers with 3 fields and 2 records.

node test-mysql.js

The results will show the returned SQL data, and the final line will read:

The mysql test completed successfully. test-mysql-Customers table was created.

That’s all folks!

Next we will explore creating an asynchronous real-world server.

Why node

One of the greatest advantages of node is its non-blocking environment, described well by the node.js wiki:

Node.js is primarily used to build network programs such as Web servers.[43] The biggest difference between Node.js and PHP is that most functions in PHP block until completion (commands execute only after previous commands have completed), while functions in Node.js are designed to be non-blocking (commands execute in parallel, and use callbacks to signal completion or failure).[43]

This non-blocking feature is ideal for real-time updates like chat rooms and social media.
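A quick sketch makes the execution order concrete: the file read is handed off to the system, the script keeps going, and the callback fires when the read completes.

// non-blocking.js - the read is handed off; execution continues immediately
var fs = require('fs');

fs.readFile('/etc/hosts', 'utf8', function (err, data) {
  // this callback runs later, once the read completes
  console.log(err ? 'read failed' : 'file length: ' + data.length);
});

console.log('this line prints first - nothing blocked waiting on the file');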

node.js HelloWorld port listener

This first node.js server runs an http port listener; you can connect to it from a web browser to test your node.js server.

Create a hello-server.js file with the Sublime text editor containing the following

// hello-server.js

const http = require('http')
const port = 4200

// log each requested url and answer with a fixed string
const requestHandler = (request, response) => {
  console.log(request.url)
  response.end('Hello Node.js Server!')
}

const server = http.createServer(requestHandler)

server.listen(port, (err) => {
  if (err) {
    return console.log('something bad happened', err)
  }

  console.log(`server is listening on ${port}`)
})

Upload the file to the node.js server then go to the same directory in a terminal window. Run the server with the command

node hello-server.js

You can then point a web browser to http://aws.mydomain.com:4200/
Your browser will receive a page

Hello Node.js Server!

Your terminal window will respond with the following results

server is listening on 4200
/
/favicon.ico

The / is the path of the page requested, and the /favicon.ico is requested automatically by the browser. To stop the server in the terminal window, be sure to use CTRL-C. If you accidentally close the terminal without stopping the process, it will keep running and lock the port, and a future node hello-server.js command will fail with an EADDRINUSE error. To release the processes, do the following

ps aux | grep node

This gives you a list of active processes; the PID is the number in the second column. You can then kill each PID with the following command

kill -9 PID

In a later post we will create a port listener that can respond to a real-world situation with multiple requests: an asynchronous port listener.

Node.js Server on Linux EC2

The following steps are used to install a node.js backend server on a Linux EC2 Instance.

NOTE: with AWS EC2 Linux I prefer to do the install as root, so first do a sudo -i to get to root user mode. After the install is done you can copy the nvm initialization lines from the root user's shell profile to the ec2-user's profile to make the nvm and node commands available in both environments.

  1. Install the Node Version Manager (nvm) from https://github.com/creationix/nvm.
    curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.0/install.sh | bash
  2. To download, compile, and install the latest release of node, do this:
    nvm install node

    Then in any new shell just use the installed version:

    nvm use node
  3. Now go to the Sublime text editor and create a hello.js file with:
    // hello.js
    console.log('hello world from Node.js')
  4. Upload the Hello World server node.js to a directory on the server. I created a directory called /node
  5. Now you can run your node.js Hello World test
    [ec2-user@ip-172-31-24-194 node.js]$ cd /node
    [ec2-user@ip-172-31-24-194 node.js]$ node hello.js
    hello world from Node.js
  6. Finally, you should update npm (Node Package Manager), which comes with node, to the latest version. Node Package Manager works with packages available at http://npmjs.com. More on NPM in a later post.
    npm install npm@latest -g
    # FYI - use the next command to check the current version of npm
    npm -v

That's all! The node.js server is set up.

You can use the which node command to see which node.js version is active (the version appears in the path)

[ec2-user@ip-172-31-59-183 node.js]$ which node
~/.nvm/versions/node/v7.4.0/bin/node

I also install the latest nightly build of node with the following command. You will then have the latest LTS version plus the nightly build and can switch between them with the nvm use command.

NVM_NODEJS_ORG_MIRROR=https://nodejs.org/download/nightly nvm install 8

In a later post we will setup a MySQL to node database connector.

ColdFusion on EC2 Server

Here we will install ColdFusion 2016 on an Amazon EC2 Linux server with Apache 2.4.

Get the ColdFusion installation .bin file from Adobe at https://www.adobe.com/products/coldfusion/download-trial/try.html. Be sure to select Linux 64-bit on the download.

  1. Adjust the permissions of the /opt directory to give the ec2-user (group www) access
    sudo mkdir /opt/cf
    sudo chown -R root:www /opt
    sudo chmod 2775 /opt
    find /opt -type d -exec sudo chmod 2775 {} \;
    find /opt -type f -exec sudo chmod 0664 {} \;
  2. Then send the ColdFusion installation .bin to /opt/cf directory on the EC2 server with Filezilla.
  3. Install the JDBC mysql database driver
    sudo yum install mysql-connector-java.noarch

    Enter Y at the prompt to complete the driver installation

    Is this ok [y/d/N]: y
  4. Go to the /opt/cf directory, set execute authority on the .bin file, and start the ColdFusion .bin installation
    cd /opt/cf
    chmod 777 ColdFusion_2016_WWEJ_linux64.bin
    sudo ./ColdFusion_2016_WWEJ_linux64.bin
  5. Follow the prompts and take the defaults on the ColdFusion installation.
    1. hit [enter] 31 times
    2. Y [enter] to Accept the terms of the license
    3. 3 [enter] to select Developer edition
    4. default 1 [enter] to select server configuration
    5. 3 [enter] to select development profile
    6. default 5 [enter] to continue installation
    7. [enter] default remote component admin profile
    8. [type a remote component administrator password] [enter]
    9. re-enter the remote component administrator password [enter]
    10. default n [enter] to access add-on services remotely
    11. /opt/cf absolute path install folder
    12. Y [enter] the path is correct
    13. 1 [enter] add web server configuration
    14. 1 [enter] Apache
    15. /etc/httpd/conf [enter] Apache configuration directory
    16. /usr/sbin/httpd [enter] Apache program binary file
    17. /etc/init.d/httpd [enter] control file used to start and stop apache server
    18. N [enter] configure advanced settings
    19. 4 [enter] Continue with installation
    20. N [enter] configure websocket proxy
    21. ec2-user [enter] runtime user name
    22. 2 [enter] skip the local open office installation
    23. [type an administrator password] [enter]
    24. re-enter the administrator password [enter]
    25. Y [enter] enable RDS
    26. [type an RDS administrator password] [enter]
    27. re-enter the RDS administrator password [enter]
    28. Y [enter] automatically check for updates
      Pre-installation Summary
      ------------------------
      Installation Type:
       Server configuration
      
      Licensing:
       Developer Edition
      
      Installation Directories:
       Product: /opt/cf
      
      Server Information:
       Admin Port: 8500
       Web Server: Apache (/etc/httpd/conf)
       Connector Port: 80
       ColdFusion Server Profile: Development Profile
       ColdFusion Solr Search Services: installed
       ColdFusion Pdfg Services: installed
       RDS: enabled
      
      Disk Space Information (for Installation Target):
       Required: 1,325 MB
       Available: 5,045 MB
    29. [enter] to continue
  6. Start the ColdFusion server
    cd /opt/cf/cfusion/bin
    sudo ./coldfusion start
  7. Using a web browser navigate to the admin page using your host domain
    http://aws-ec2instance-public-dns:8500/CFIDE/administrator/index.cfm
  8. enter the admin password
  9. hit OK button to complete the installation
  10. Download the mysql JDBC connector .tar file from http://dev.mysql.com/downloads/connector/j/ and unpack it on your local machine. Then use Filezilla to copy the mysql-connector-java-5.1.40-bin.jar file to the /opt/cf/cfusion/lib directory
  11. restart the ColdFusion server
    cd /opt/cf/cfusion/bin
    sudo ./coldfusion restart

    The MySQL 5 JDBC database driver will now work. You can now configure a database under Data Sources in the ColdFusion Administrator.

This completes the ColdFusion server installation.

LAMP on Linux Amazon EC2

As you recall from the recent post Create an EC2 Linux Server, I used the standard Amazon Linux AMI image to create an EC2 server. Now let's get it to host some things.

  1. The first step is to update the initial set of packages
     sudo yum update
  2. at the prompt, enter y
    Is this ok [y/d/N]: y
  3. sudo is the command to “run as root user.” It gets old typing sudo all the time, so let's switch to root with the following command
    sudo -i
  4. install the http server, php, and mysql driver
    yum install -y httpd24 php70 mysql56-server php70-mysqlnd
  5. Use the chkconfig command to configure the Apache web server to start at each system boot.
    chkconfig httpd on

    Tip

    The chkconfig command does not provide any confirmation message when you successfully enable a service. You can verify that httpd is on by running the following command.

    chkconfig --list httpd
    httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

    Here, httpd is on in runlevels 2, 3, 4, and 5 (which is what you want to see).

  6. To allow ec2-user to manipulate all files, add the ec2-user to the root group. As write ability is needed in the future, you can simply add group write permission.
    sudo usermod -a -G root ec2-user
  7. Amazon uses a different method, creating a www group. My method is simpler, using only the root group, but maybe not as secure. The www group method, which comes from Amazon's documentation, is defined below. To allow ec2-user to manipulate files in the /var/www directory, you need to modify the ownership and permissions of the directory and files. There are many ways to accomplish this task; in this tutorial, you add a www group to your instance, give that group ownership of the /var/www directory, and add write permissions for the group. Any members of that group will then be able to add, delete, and modify files for the web server.

    To set file permissions

    1. Add the www group to your instance.
      [ec2-user ~]$ sudo groupadd www
    2. Add your user (in this case, ec2-user) to the www group.
      [ec2-user ~]$ sudo usermod -a -G www ec2-user

      Important

      You need to log out and log back in to pick up the new group. You can use the exit command, or close the terminal window.

    3. Log out and then log back in again, and verify your membership in the www group.
      1. Log out.
        [ec2-user ~]$ exit
      2. Reconnect to your instance, and then run the following command to verify your membership in the www group.
        [ec2-user ~]$ groups
        ec2-user wheel www
    4. Change the group ownership of /var/www and its contents to the www group.
      [ec2-user ~]$ sudo chown -R root:www /var/www
    5. Change the directory permissions of /var/www and its subdirectories to add group write permissions and to set the group ID on future subdirectories.
      [ec2-user ~]$ sudo chmod 2775 /var/www
      [ec2-user ~]$ find /var/www -type d -exec sudo chmod 2775 {} \;
    6. Recursively change the file permissions of /var/www and its subdirectories to add group write permissions.
      [ec2-user ~]$ find /var/www -type f -exec sudo chmod 0664 {} \;

    Now ec2-user (and any future members of the www group) can add, delete, and edit files in the Apache document root. Now you are ready to add content, such as a static website or a PHP application.

  8. Create a PHP test file in the www server document root
    echo "" > /var/www/html/phpinfo.php
  9. Change the permissions of the Apache server config file to allow editing
    chmod 664 /etc/httpd/conf/httpd.conf

    or using Amazon’s method:

  10. sudo chown -R root:www /etc/httpd/conf
    sudo chmod 2775 /etc/httpd/conf
    find /etc/httpd/conf -type d -exec sudo chmod 2775 {} \;
    find /etc/httpd/conf -type f -exec sudo chmod 0664 {} \;
  11. Using Filezilla or WinSCP, make an SCP connection to the server, navigate to the Apache config directory /etc/httpd/conf, and edit the httpd.conf file. Add the virtual host directives to the bottom of the file.
    <VirtualHost *:80>
    # This first-listed virtual host is also the default for *:80
    #ServerName www.mydomain.com
    DocumentRoot "/var/www/html"
    </VirtualHost>

    #<VirtualHost *:80>
    # ServerName www.mydomain2.com
    # DocumentRoot "/var/www/html/mydomain2"
    #</VirtualHost>

    #<VirtualHost *:80>
    # ServerName www.mydomain3.com
    # ServerAlias www.mydomain4alias.com
    # DocumentRoot "/var/www/html/mydomain3"
    #</VirtualHost>

    Uncomment the ServerName line and enter your own domain. The 2nd and 3rd virtual hosts are fully commented out with #'s; just remove the #'s to enable a virtual host, set its ServerName to your additional domain name, and set its root directory. Add as many virtual hosts as needed. When done editing, upload the file back to /etc/httpd/conf

  12. BONUS: Do you need the GD image library and OPcache, both used by Drupal? Do the following:
    sudo yum install php70-gd
    sudo yum install php70-opcache
    
    
    # Install additional commonly used php packages
    sudo yum install php70-imap
    sudo yum install php70-mbstring
    sudo yum install php70-pdo
    sudo yum install php70-pecl-apcu
  13. Start the Apache web server.
    [ec2-user ~]$ sudo service httpd start
    Starting httpd: [ OK ]
  14. Use a browser to navigate to the server root and to the phpinfo.php page; you should get the Amazon Linux test page and the PHP information page.

The Apache 2.4 web server is now running with PHP and virtual hosts. If you wish to install mysql and phpMyAdmin on this server, follow Amazon's directions at http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html. I will use the RDS server and MySQL Workbench instead.

Security Group for EC2

Now let's manage the security group for the Amazon Linux EC2 instance.

Notice all traffic is allowed with sg-e70fdxxxx; this is the security group of the RDS server. A reciprocal allow-all-traffic rule is set on the RDS server so it can communicate with the EC2 server.

Here's a list of the ports and their usage.

Port 80 http html server; port 443 https secure html server

Port 8575 and 8577 Adobe ColdFusion license manager

Port 22 SSH, allowing terminal and file transfer connections

Port 20-21 FTP server

Port 1024-1048 ?

Port 5005 ColdFusion Line Debugger

Port 8500 the standard ColdFusion administrator using the CF built-in web server

Port 4200 user defined, used by the node.js server. Ports 4200-4220 are unassigned with no official use and a good option for node.js.

Port 80 (tcp out, push metrics) and 43554 (tcp in/out, reverse interaction) used by the keymetrics.io dashboard monitor

Port 7101-7106 Redis server nodes in cluster mode (7101-7103 masters, 7104-7106 corresponding slaves)

Port 17101-17106 corresponding control channels for the Redis server nodes

Create an Elastic IP

Before pointing your own domain at the assigned Amazon EC2 server IP address, you may want to assign a static IP using an Amazon Elastic IP. This lets you create additional EC2 servers later and reassign the IP to the new server without touching DNS entries.

  1. click Elastic IPs under Network Security on the right side of the Amazon EC2 dashboard
  2. click Allocate new Address then select the instance ID to associate with this Elastic IP address

When you return to the EC2 dashboard instances list you will notice the instance is now associated with the new IP address. This address may now be given a host A record in your domain DNS (example: mazu.gtk.link)

Now I can go to my domain registrar to manage the DNS. For a .link domain I use https://uniregistry.com/; for most other domains I use https://www.dynadot.com/

  1. Create the domain with the registry handling the nameservers (example gtk.link)
  2. Then navigate to managing DNS records and create an A (host) record for the EC2 server. (example mazu.gtk.link)
  3. The RDS server can be assigned a CNAME record pointing to the RDS amazon host name.  (example rds.gtk.link)

Create a mysql RDS server

One quick and easily managed service from Amazon is RDS for a database server. The RDS service is straightforward and makes future development easy. Here I will create the first mysql database.

  1. Go to https://console.aws.amazon.com/rds/home?region=us-east-1#
  2. select RDS Instances
  3. select Launch DB Instance button
  4. select the mysql tab (it is free tier eligible) and hit the select button
  5. select mysql production environment and select the next button
  6. select the latest DB engine version from the drop down selection
  7. enter 20 for allocated storage
  8. enter a DB instance identifier. I tend to use non-meaningful names here, for example the name of a Greek goddess. (example: mazudb)
  9. enter a master username and master password (user: mazu) and select the next step button
  10. on the Configure Advanced Settings form you will see VPC security group with Create a new Security Group selected. (more on this later)
  11. enter a database name (example mazudb) and select the Launch DB Instance button

In a later post I will discuss allowing an Amazon EC2 virtual host to connect to the server by changing the security group.

Now I can connect to the RDS mysql server from my laptop with MySQL Workbench.

Create an EC2 Linux Server

This process will create an Amazon EC2 Linux Server.

  1. Go to the EC2 Dashboard https://console.aws.amazon.com/ec2/v2/home?region=us-east-1# and click Launch Instance button
  2. Select the Free Tier checkbox and select the first AMI (Amazon Linux AMI xxxx.xxx.xx (HVM), SSD Volume Type)
  3. Select t2.micro. The default values for the Instance Details, Storage, Tags, and Security Groups are sufficient. Click the Review and Launch button.
  4. Next you need to create a key pair. This is the key you will use to connect to the server, either through an SSH terminal connection or an SCP file transfer connection using WinSCP or Filezilla.
  5. download your key pair to your laptop and keep it secure and safe; maybe back it up to an online service like Dropbox

Amazon uses key pairs to connect to your server instead of passwords. Since the user always connects with a key, no actual administrator password is needed. The neat thing about key pairs is that they can be used on multiple EC2 instances, so feel free to select an existing key pair if you already created one.

When you return to the EC2 instance dashboard you will see your instance complete its initialization.


On your Windows laptop install Putty, a free SSH terminal client, from http://www.chiark.greenend.org.uk/~sgtatham/putty/

To connect to EC2 with Putty you will need to convert your key to PPK format (in WinSCP: Tools, Run PuTTYgen, then Conversions, Import key, select the .pem file, and Save private key). Then in Putty enter the host name found on the EC2 instance dashboard and select the SSH connection type. In the Category tree on the left, scroll down to Connection / SSH / Auth and browse to select your PPK private key file.

When you correctly connect to the EC2 instance with Putty you will see a login as prompt in your terminal window. You can log in as ec2-user (just type ec2-user and hit enter).

In the next post I detail the use of Elastic IPs for the EC2 server.

In a later post I will detail the steps to set up LAMP, Apache 2.4, virtual hosts, ColdFusion, and node.js on a Linux EC2 server.

A new wordpress blog

This new blog was started to record some of the procedures I use. I hope to take a few extra minutes during the development process to record best practices.
