These CentOS articles will take you from a 'barebones' CentOS 5.1 Cloud to a secured and up-to-date Cloud ready for your server software (or whatever you use the Cloud for).

Not only that, you will have a better understanding of what is going on and, more importantly, why it's going on.

Log in

On your LOCAL computer, edit the SSH known_hosts file and remove any entries that point to your Cloud IP address. If this is a brand new Cloud then you will not need to do this, but a reinstall will result in a different signature.

nano ~/.ssh/known_hosts

If you are not using Linux or a Mac on your LOCAL computer, the location of the known_hosts file will differ. Please refer to your own OS for details of where this file is kept.
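Alternatively, ssh-keygen can remove the stale entry for you. In this sketch, 203.0.113.10 is a documentation address standing in for your Cloud's IP, and a scratch known_hosts file is built first so the example is self-contained; against your real file you would simply run ssh-keygen -R with your Cloud's IP and no -f option.

```shell
# Start clean, then build a scratch known_hosts entry so the sketch is
# self-contained (in practice you would point -f at ~/.ssh/known_hosts)
rm -f /tmp/scratch_key /tmp/scratch_key.pub
ssh-keygen -t rsa -N '' -f /tmp/scratch_key -q
printf '203.0.113.10 %s\n' "$(cut -d' ' -f1-2 /tmp/scratch_key.pub)" > /tmp/known_hosts.demo

# Remove every entry recorded for that host
ssh-keygen -R 203.0.113.10 -f /tmp/known_hosts.demo
```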

As soon as you have the IP address and password for your VPS, log in via SSH:

ssh root@<your Cloud IP>

User administration

Now that we're logged in to the VPS, immediately change your root password:

passwd

Add an admin user (We've used the name demo here but any name will do).

adduser demo

You will need to specifically set the password for your new user:

passwd demo

As you know, we never log in as the root user (this initial setup is the only time you would need to log in as root). As such, the main administration user (demo) needs sudo (Super User) privileges so they can, with a password, complete administrative tasks.

To do this, we're going to add the main user to the 'wheel' group. Once that is done, we need to edit the 'sudoers' file, using visudo, and ensure the 'wheel' group has the correct privileges.

So firstly, add the user to the wheel group:

usermod -a -G wheel demo

Next, give the 'visudo' command:

visudo

Near the bottom of the file you will see this line:

## Allows people in group wheel to run all commands
# %wheel  ALL=(ALL)       ALL

Simply uncomment (remove the '#') so it looks like this:

## Allows people in group wheel to run all commands
%wheel  ALL=(ALL)       ALL

Now members of the 'wheel' group have full sudo privileges.
SSH keygen

One effective way of securing SSH access to your Cloud is to use a public/private key pair: the 'public' key is placed on the server and the 'private' key stays on your local workstation. Once password authentication is disabled (which we do below), it becomes impossible for someone to log in using just a password - they must hold the private key.

The first step is to create a folder to hold your keys. On your LOCAL workstation:

mkdir ~/.ssh

That's assuming you use Linux or a Mac and the folder does not exist. For Windows users we have a separate article for key generation using Puttygen.
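If you are not sure whether the folder already exists, the -p flag (a small variation on the command above) makes the step safe to re-run:

```shell
# -p: create ~/.ssh only if it is missing, with no error if it already exists
mkdir -p ~/.ssh

# 700 is the permission sshd expects on this directory
chmod 700 ~/.ssh
```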

To create the ssh keys, on your LOCAL workstation enter:

ssh-keygen -t rsa

If you do not want a passphrase then just press enter when prompted.

That created two files in the .ssh directory: id_rsa and id_rsa.pub. The .pub file holds the public key. This is the file that is placed on the Cloud.

The other file is your private key. Never show, give away or keep this file on a public computer.
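ssh-keygen can also run non-interactively, which is handy for scripted setups. The /tmp/demo_key path below is a hypothetical stand-in; by default the pair is written to ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub.

```shell
# Start clean so ssh-keygen does not prompt about overwriting
rm -f /tmp/demo_key /tmp/demo_key.pub

# -N '' sets an empty passphrase, -f names the output file, -q keeps it quiet
ssh-keygen -t rsa -N '' -f /tmp/demo_key -q

# Two files now exist: the private key and its public counterpart
ls /tmp/demo_key /tmp/demo_key.pub
```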
SSH copy

Now we need to get the public key file onto the Cloud.

We'll use the 'scp' command for this as it is an easy and secure means of transferring files.

Still on your LOCAL workstation enter this command:

scp ~/.ssh/id_rsa.pub demo@<your Cloud IP>:/home/demo/

When prompted, enter the demo user password.

Change the IP address to your Cloud and the location to your admin user's home directory (remember the admin user in this example is called demo).
SSH Permissions

OK, so now we've created the public/private keys and we've copied the public key onto the Cloud.

Now we need to sort out a few permissions for the ssh key.

On your Cloud, create a directory called .ssh in your home folder and move the pub key into it.

mkdir /home/demo/.ssh
mv /home/demo/id_rsa.pub /home/demo/.ssh/authorized_keys

Now we can set the correct permissions on the key:

chown -R demo:demo /home/demo/.ssh
chmod 700 /home/demo/.ssh
chmod 600 /home/demo/.ssh/authorized_keys

Again, change the 'demo' user and group to your admin user and group.

Done. It may seem a long set of steps, but once you have been through it you can see the order of things: create the key on your local workstation, copy the public key to the Cloud and set the correct permissions for the key.
SSH config

Next we'll change the default SSH configuration to make it more secure:

nano /etc/ssh/sshd_config

You can use this ssh configuration as an example.

The main things to change (or check) are:

Port 30000                           <--- change to a port of your choosing
Protocol 2
PermitRootLogin no
PasswordAuthentication no
X11Forwarding no
UsePAM no
UseDNS no
AllowUsers demo

The main things are to move SSH from the default port of 22 to one of your choosing, turn off root logins and define which users can log in.

PasswordAuthentication has been turned off as we set up the public/private key earlier. Do note that if you intend to access your Cloud from different computers, you may want to leave PasswordAuthentication set to yes. Only use the private key if the local computer is secure.

Right, now we have the basics of logging in and securing SSH done.
iptables

Next thing is to set up our iptables so we have a more secure installation. To start with, we're going to have three ports open: ssh, http and https.

Let's have a look at the default rules:

iptables -L

This will output something similar to this:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination        
RH-Firewall-1-INPUT  all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination        
RH-Firewall-1-INPUT  all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination        

Chain RH-Firewall-1-INPUT (2 references)
target     prot opt source               destination        
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     icmp --  anywhere             anywhere            icmp any
ACCEPT     esp  --  anywhere             anywhere            
ACCEPT     ah   --  anywhere             anywhere            
ACCEPT     udp  --  anywhere             224.0.0.251          udp dpt:mdns
ACCEPT     udp  --  anywhere             anywhere            udp dpt:ipp
ACCEPT     tcp  --  anywhere             anywhere            tcp dpt:ipp
ACCEPT     all  --  anywhere             anywhere            state RELATED,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere            state NEW tcp dpt:ssh
REJECT     all  --  anywhere             anywhere            reject-with icmp-host-prohibited

While we won't go into detail about each rule shown here, we like to use our own default iptable rules for the sake of consistency between the articles.

Let's go ahead and remove the current ruleset:

iptables -F

We can then input several commands that allow local connections and keep established connections (which is how the root SSH connection on port 22 will still work when we apply the SSH port change and the new iptables rules).

We'll also open port 80 and port 443 (the normal HTTP and HTTPS ports) and, of course, allow connections to our custom SSH port (30000).

Then we'll allow pings to the Cloud and, effectively, reject all other attempts to connect to a port.

So, on the command line, enter these:

iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT
iptables -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT
iptables -A INPUT -j REJECT
iptables -A FORWARD -j REJECT

If you are new to iptables then that may seem a little intimidating but, as we recommend with most things, take it one step at a time.

Have a look at each line and see what it does. I actually think they are all pretty self-explanatory, but don't be afraid to do some research if you are not sure.

Anyway, let's have a look at what rules are in place now:

iptables -L

See the difference between the first time we entered that command and now? Again, have a look at each line of the output and see where it marries up with the rules we entered earlier.

As previously mentioned, if you are unhappy or have made a mistake, you can flush the rules and start again with:

iptables -F


Although the rules are up and running, they are only active for the current session. If the Cloud was rebooted they would be lost.

As such, let's ensure they are restarted on a Cloud reboot:

service iptables save

The output confirms the rules were added to the correct file:

Saving firewall rules to /etc/sysconfig/iptables:          [  OK  ]

Feel free to have a look at that file and familiarize yourself with the syntax.
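That file uses iptables-save syntax rather than the interactive command form. As a rough sketch (assuming the ruleset above; exact flags and ordering vary by iptables version), it might look something like this:

```
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j REJECT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW --dport 30000 -j ACCEPT
-A INPUT -p icmp --icmp-type 8 -j ACCEPT
-A INPUT -j REJECT
-A FORWARD -j REJECT
-A OUTPUT -j ACCEPT
COMMIT
```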
Logging in with the new user

Now that we have our basic firewall humming along and the ssh configuration set, we need to test it. Reload ssh so it uses the new port and configuration:

/etc/init.d/sshd reload

Don't logout as root yet...

On your LOCAL computer, open a new terminal and log in using the administration user (in this case, demo) to the port number you configured in the sshd_config file:

ssh -p 30000 demo@<your Cloud IP>

The reason we use a new terminal is that if you can't log in, you will still have the working connection to try and fix any errors.

Cloudhost also has the excellent ajax console so if it all goes horribly wrong, you can log into your Cloud from the Cloudhost management area.

You will be greeted with a plain terminal prompt like this:

[demo@yourvpsname ~]$


We now know that the firewall and sshd_config work and we can log in.

Let's move on to part 2 which includes updating the install and installing some base programmes.