Thursday, March 31, 2022

Linux User Management



User management is a fundamental concept in any operating system, and in information systems in general. The Linux operating system organizes its users in a hierarchy of groups and permissions. Those permissions fall into two categories. The first is file permissions, which is quite natural because Linux is a "file-centric" operating system. The second is the sudo permission, which will be analyzed further below. At the top of this hierarchy we find the user named "root". The root user is the absolute administrator of the operating system, with read, write and execute permissions on every single file and control over every single service/process. But don't forget that great power always comes with great responsibility, so basic security guidelines recommend restricting the root user as much as possible: usually by preventing direct ssh access as root, avoiding running potentially insecure services as root, and not working on the command line as root.


Users and Groups

Starting with the basics, let's see how to add and remove users and how groups work. You can add a user named penguin by giving the following command:

#useradd penguin

Of course we'll need a password for this user as well so:

#passwd penguin

This command will prompt you to enter the desired password twice for confirmation. 

Then we may want to add this user to a specific group named operators: 

#usermod -a -G operators penguin
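
Note that usermod only adds the user to a group that already exists. If the operators group is not present yet, a minimal sketch to create it first (groupadd ships with practically every distribution):

#groupadd operators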

Now you can open the file /etc/passwd and observe that a line has been appended at the end, containing the username we've just created followed by a couple of numbers representing the user id and the group id of this user. By default, when you create a user, a group with the same name is created as well; you can confirm this by inspecting the file /etc/group, where you'll find a corresponding line containing the group name and the group id.
Another interesting file to observe is /etc/shadow. Here you'll find all the usernames of the system, each followed by a specific string. This string contains a cryptographic hash of the password of the respective username.
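
The exact values differ per system, but as an illustration, the /etc/passwd entry for our penguin user might look like the line below (the uid/gid 1001 and the bash shell are just placeholders):

penguin:x:1001:1001::/home/penguin:/bin/bash

and the matching /etc/group entry for the operators group, once penguin has been added to it, would look something like:

operators:x:1002:penguin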


To know thyself

A phrase that puzzled a lot of philosophers and thinkers dating back to ancient Greece. In our case, in a Linux environment, things are much simpler: the command 

#whoami 

returns the name of the user we're currently logged in as, while

#who 

returns the names of all currently logged-in users.


The magic word

The magic word was mentioned earlier in the prologue, and it is sudo. The "sudo" command allows a non-root user to perform privileged operations such as restarting system services or reading files owned by root. You do this by simply prefixing any command with sudo, e.g.:

 #sudo systemctl restart sshd

 Of course, this command alone will not produce any results if the user has not been declared a privileged user, commonly known as a sudoer. So how do you give a user the power to be a sudoer? Most systems have a sudo (or wheel) group by default, so if you have root privileges you can simply add the user to that group as described above. However, if you're on a distro or a system that does not have such a group, you can create a group as described above and configure it as a sudo group. To turn a plain group into a sudo group you can give:

 #visudo

This command will open the /etc/sudoers file which keeps information about sudo access. There you can append the following line:

%operators ALL=(ALL) ALL

and in that way you transform the operators group we created before into a sudo group.
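
Alternatively, on systems that already ship an administrative group, it is usually enough to drop the user into it. A minimal sketch (the group is typically called sudo on Debian/Ubuntu and wheel on CentOS/RHEL/Fedora, so adjust to your distro):

#usermod -a -G wheel penguin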


Navigating between users.

Finally, if you have more than one user you can switch between them by giving 

 #su - username

and of course you'll be asked for the target user's password.
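
If your account is a sudoer as described in the previous section, you can also get a root shell without knowing the root password. A quick sketch:

 #sudo -i

which asks for your own password and drops you into an interactive root session.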











Tuesday, October 19, 2021

Find. The art of searching in the Linux filesystem







There's no doubt that searching for files is a primary need on every system, whether it is a website, a storage system, a database or the entire internet. Linux provides several very useful tools for the user to perform detailed and effective search operations against the entire filesystem and beyond.


Linux Find

"find" command is the main Linux filesystem search tool. The structure of the command is the following: 

#find [directory] [-option1] [-option2] ... [-optionN]

So the first part is just the command find, the second part declares the directory where the search will be performed, and the rest consists of one or more options expressing the search terms, such as filename, file type, modification time etc. As the last option you can even ask find to execute a command over the search results (-exec).

A few examples

#find /home/user1/bucket -name tool 

This is a simple search over the folder bucket under user1's home directory, looking for an item (file or directory) named "tool".

#find /home/user1/bucket -name tool -type f

Now the system will search only for files named tool. If we change "-type f" to "-type d" then it will search only for directories named tool. 

#find /home/user1/bucket -name "*.conf" -type f

Of course there's also a wildcard option; in this case we search for all the .conf files in the bucket directory. Note that the pattern is quoted so the shell passes it to find unexpanded.

#find /home/user1/bucket -name "*.conf" -type f -mtime 7

Here again the system will search only for .conf files, but only those that were modified exactly 7 days ago.

#find /home/user1/bucket -name "*.conf" -type f -mtime 7 -exec cp {} /home/user2/ \;

Finally, we can take the above search and copy the detected .conf files (modified 7 days ago) to the home directory of the user2 user.
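
In practice -mtime is used more often with signed values than with an exact match. A quick sketch on the same hypothetical bucket directory:

#find /home/user1/bucket -name "*.conf" -type f -mtime -7

finds .conf files modified within the last 7 days, while -mtime +7 would match files modified more than 7 days ago.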


Locate

Mlocate is an ultra fast utility which helps the user easily find any file on the system without even having a clue in which directory it may reside. It achieves that by indexing all files, with their corresponding paths, in a single database. Although it lacks the capabilities of the find command, it surpasses it in speed and simplicity. So let's start by installing this utility.

#yum install mlocate      (CentOS, RHEL, Amazon Linux)

#apt install mlocate      (Ubuntu and Debian based)

#dnf install mlocate      (Fedora)


After the installation we need to force a database indexing by giving:

#updatedb

On most distributions the mlocate package also installs a daily cron job that refreshes the database automatically. Run updatedb manually whenever you need files created since the last refresh to show up in the results.


Now let's say that I need to change my DNS resolver. I remember that the name of the conf file is resolv.conf, but I don't have a clue where this file lives, so I can just give:

#locate resolv.conf

and within a fraction of a second I get the result, which of course is: /etc/resolv.conf


Now let's assume that I've installed apache, but this is my first time with this program and I don't even know where the related directories have been installed. After updating the database with the updatedb command I can give 

#locate -i apache 

and I get all the files and directories, with full paths, containing the word "apache". The -i switch ignores case. 


Grep

Grep is a very powerful command for searching inside files. It is very useful for reading logs, scripting, and even manipulating the contents of a file. The basic syntax is the following:

#grep nameserver resolv.conf

This command will search inside the resolv.conf file for the word nameserver, and it will return the entire line of each match, e.g. the output will be something like:

nameserver 1.1.1.1

nameserver 8.8.8.8


#grep -v nameserver resolv.conf  

Will return the exact opposite of the match, so it will hide the above result and display all the rest of the file.

#grep -o nameserver resolv.conf 

Will return only the matching words so in our case the output result will be:

nameserver 

nameserver 

#grep -A 1 1.1.1.1 resolv.conf 

will return the matching line plus one line after it (so "nameserver 8.8.8.8" follows the match), and

#grep -B 1 8.8.8.8 resolv.conf

will return the matching line plus one line before it (so "nameserver 1.1.1.1" precedes the match).
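
The -C switch combines the two, printing context on both sides. A small sketch on the same hypothetical resolv.conf:

#grep -C 1 8.8.8.8 resolv.conf

prints one line before and one line after every match, along with the match itself.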


Last but not least, a very powerful option is:

#grep -r nameserver /etc

This command will search the entire /etc directory recursively for the string "nameserver", and it will return the full path of each matching file along with the line containing that string, e.g. in the resolv.conf case the output will be: 

/etc/resolv.conf:nameserver 1.1.1.1

/etc/resolv.conf:nameserver 8.8.8.8 
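
Two more standard switches worth knowing, sketched here against the same file: -n prints the line number of each match and -c just counts the matches.

#grep -n nameserver /etc/resolv.conf

#grep -c nameserver /etc/resolv.conf

The first prefixes every matching line with its line number, the second simply prints the number of matching lines (2 in our example).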










Tuesday, July 23, 2019

IPtables , The Legendary Firewall

A Brief History

Iptables, based on the netfilter framework, has been the default firewall software in Linux for nearly two decades. The netfilter/iptables framework is a kernel module supported since kernel 2.3, developed by Rusty Russell back in 1999. Here you can check his personal blog: https://rusty.ozlabs.org


A Strong Security Solution

Iptables is very reliable and secure software, and remarkably it is not used only as a local machine firewall. Linux together with iptables, installed on a dedicated machine, can also serve as a hardware router/firewall solution. In addition, there are even open source software appliances used as routers/firewalls based on Linux and iptables.


Netfilter Architecture

The netfilter architecture is divided into the following three layers: tables, chains and rules, as shown in the picture below.

 

At the lowest level we have the tables, which represent the type of packet processing happening in the firewall. The basic tables that are frequently used are the following:

- Filter: Table for packet filtering.
- NAT: Table for NAT rules.
- Mangle: Table for mangling packets.

At the next layer there are the Chains, which are simply lists of rules associated with each particular table. And finally, at the top there are the actual firewall rules controlling the access to the system.


Iptables in use


So let's try to see if we have any iptables rules loaded in our system. Give

#iptables -L

and you'll get something like this:


Chain INPUT (policy ACCEPT)
target              prot opt source       destination
KUBE-FIREWALL       all  --  anywhere     anywhere

Chain FORWARD (policy DROP)
target              prot opt source       destination
DOCKER-ISOLATION    all  --  anywhere     anywhere
DOCKER              all  --  anywhere     anywhere
ACCEPT              all  --  anywhere     anywhere     ctstate RELATED,ESTABLISHED
ACCEPT              all  --  anywhere     anywhere
ACCEPT              all  --  anywhere     anywhere

Chain OUTPUT (policy ACCEPT)
target              prot opt source       destination
KUBE-FIREWALL       all  --  anywhere     anywhere

Chain DOCKER (1 references)
target              prot opt source       destination

Chain DOCKER-ISOLATION (1 references)
target              prot opt source       destination
RETURN              all  --  anywhere     anywhere

Chain KUBE-FIREWALL (2 references)
target              prot opt source       destination
DROP                all  --  anywhere     anywhere




On this particular host there are some rules generated from Docker and Kubernetes deployment.
Each block is a chain. Within it, the first column (target) is the action taken or the chain jumped to, the second column (prot) is the protocol involved in the rule, the third column (opt) holds IP options, and finally source and destination represent the source/destination IP or subnet involved in the rule.

Now let's say we need to blacklist an IP address, so that our host blocks every incoming packet from and every outgoing packet to this particular IP.


#iptables -A INPUT -s <ipaddress to block> -j DROP


With that rule the firewall simply drops every incoming packet from the blacklisted IP address. The rule lives in the filter table (the default), under the INPUT chain. The -s switch specifies the source IP address.


#iptables -A OUTPUT -d <ipaddress to block> -j DROP


With that rule the firewall drops every outgoing packet whose destination is the blacklisted IP address. This rule also lives in the filter table, but this time under the OUTPUT chain. Now instead of a source we match a destination IP address, thus the -d switch.


On some other occasion we may need to allow ssh connections to our host:

#iptables -A INPUT  -p tcp --dport 22 -j ACCEPT


or block ping requests

#iptables -A INPUT -p icmp --icmp-type echo-request -j DROP

So you get the idea of how this works.
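
A couple of housekeeping options worth knowing before moving on; these are standard iptables switches, sketched here on the INPUT chain:

#iptables -L --line-numbers

lists the rules with their position in each chain, so a specific rule can be deleted by number, e.g. the second INPUT rule:

#iptables -D INPUT 2

And since rules added this way live only in memory, iptables-save prints the current ruleset so it can be stored and restored later (the file path here is just an example; the persistence mechanism differs per distribution):

#iptables-save > /etc/iptables.rules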


Iptables as a router.

In that case it is possible to use a Linux box as a gateway routing LAN traffic to the internet. For this operation you can utilize the PREROUTING and POSTROUTING chains of the nat table.


The PREROUTING chain processes packets as they arrive on an interface, before the routing decision is made, so in practice you use PREROUTING mostly for port forwarding (DNAT). For example:


#iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to 192.168.1.10:80
 

this rule allows a web server inside the LAN to operate, by simply redirecting HTTP traffic arriving from outside the LAN (the internet) to port 80 of the web server's IP (192.168.1.10).

The POSTROUTING chain processes packets as they leave the Linux box towards the internet, and it is the chain which rewrites all the LAN traffic heading outside. So here we're talking about the NAT (masquerading) process, and the rule goes:

#iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
 

Based on that you can create your own firewall appliance just by using a single PC with a couple of network interfaces.
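
One caveat: none of this will route anything unless the kernel is allowed to forward packets between interfaces. A minimal sketch to switch that on at runtime, plus the line to add to /etc/sysctl.conf so it survives a reboot:

#sysctl -w net.ipv4.ip_forward=1

net.ipv4.ip_forward = 1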








Thursday, June 7, 2018

Logical Volume Manager LVM

To LVM or not to LVM?

The Logical Volume Manager, aka LVM, at first view is a simple storage management tool. However, it differs from other disk management tools (e.g. parted, fdisk) because it operates in a layer between the actual storage hardware and the filesystem. So the concept becomes Volume Management rather than disk management.

  With LVM you can do magic: you can extend any partition by adding one or more hard disks, without rebooting or interrupting any service on the running operating system. You can add more disk space literally "on the fly". Imagine you're facing a situation where a production server, or a VM, suddenly runs out of space for some reason (logs, database data, fileserver storage etc.) and you need to save the day while keeping the system running; LVM can do the job. In addition, LVM gives you the capability to take snapshots of logical volumes, which is very useful for backups, and it can also be used to create software RAID on your storage.

All of the above sounds amazing indeed, but beware: there are some pitfalls you ought to consider before getting involved in this sorcery.

LVM, as we mentioned before, is an extra layer between your physical partitions and the actual filesystem. That extra layer requires some extra kernel resources and adds a small performance overhead. The trickier part is that LVM increases complexity, and that can make data recovery very hard or even impossible. For example, imagine losing one hard disk of a volume group whose logical volumes map different folders across different disks and filesystems: yes, that will cause a great mess.


LVM Anatomy






As we can see from the image, the whole architecture consists of 3 layers:

1. The Physical Volume layer.
 Simply a physical partition (or whole disk) with added LVM metadata.

2. The Volume Group layer.
 The pool formed by the physical volumes, from which space is allocated to the Logical Volume layer.

3. The Logical Volume layer.
 Here we have the logical partitions. A logical volume is perceived by the Linux operating system as a normal hard disk, but in fact it is a virtual device (not to be confused with a Virtual Machine hard drive), backed by one or more physical volumes in the layer below.


LVM hands on

And now, having acquired a basic understanding of the architecture, we're ready to play with the spells (commands) that create the LVM magic.

So, taking the above diagram as an example, let's assume we have a system with 2 hard drives, each split into two partitions.

disk1: /dev/sda1 and /dev/sda2
disk2: /dev/sdb1 and /dev/sdb2

Let's start from the bottom up by creating the physical volumes. Give:

# pvcreate /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2

You can confirm the new physical volumes by giving:

# pvdisplay

which prints detailed information about each physical volume.


Proceeding to the upper layer enter the command:

# vgcreate volgroup /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2

to create the volume group.
And again, to confirm the results, enter:

# vgs

to examine the volume group just created.

Now for the main course, where we create the logical volumes. Let's assume the volume group adds up to 500 GB in total:

# lvcreate -L 120G -n lvhome volgroup

creates a 120 GB logical volume from the volgroup pool.

# lvcreate -L 380G -n lvstorage volgroup

or

# lvcreate -l 100%FREE -n lvstorage volgroup

which is more accurate because it simply uses all the remaining free space of the volume group to create the logical volume.

Again, give

# lvs

to check the newly created logical volumes.


LVM layer is ready, the only thing left now is to create the filesystem on top.

# mkfs.ext4 /dev/volgroup/lvhome

and

# mkfs.ext4 /dev/volgroup/lvstorage

finally we need to mount those volumes to the desirable folders

#mount /dev/volgroup/lvhome /root
#mount /dev/volgroup/lvstorage /storage

and don't forget to add them to /etc/fstab so the mounts persist after a reboot.
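
The fstab entries would look roughly like the sketch below (the mount points follow the mount commands above and ext4 with default options matches the mkfs commands; adjust to your own layout):

/dev/volgroup/lvhome     /root      ext4    defaults    0 2
/dev/volgroup/lvstorage  /storage   ext4    defaults    0 2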


LVM Magic

As mentioned before, a very strong advantage of LVM is adding more disk space without interrupting the system. This goes as follows:
let's assume we need to add another 500GB hard drive (/dev/sdc) to expand the /storage folder, which sits on the lvstorage logical volume. 

After creating the partition /dev/sdc1, you also need to create the corresponding physical volume:

# pvcreate /dev/sdc1

and then add the physical volume to the volume group:

# vgextend volgroup /dev/sdc1

and continue by expanding the lvstorage logical volume onto the new physical volume:

#lvextend -l +100%FREE /dev/volgroup/lvstorage /dev/sdc1

Finally, we need to extend the filesystem of the logical volume so that it takes advantage of the new additional space:

# resize2fs /dev/volgroup/lvstorage

Give

# df -h

to confirm the changes.
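
Two related notes, both standard LVM/filesystem behaviour: resize2fs only grows ext2/3/4 filesystems (an XFS volume would be grown with xfs_growfs instead), and the lvextend step and the filesystem resize can be combined with the -r (--resizefs) flag. A quick sketch of the combined form:

#lvextend -r -l +100%FREE /dev/volgroup/lvstorage /dev/sdc1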


Now you're a bit wiser to decide whether you need LVM or not and whether it's finally worth the effort. I'm curious about your opinion on this, until then...

May the source be with you!














Wednesday, April 11, 2018

Nginx

If you're involved in Linux and web stuff you may have heard of Nginx. Well, Nginx is a "state of the art" platform. It differs from your common web server because it can also be used as a reverse proxy, load balancer, mail proxy or even for video streaming.

In this article we will examine the setup and configuration of Nginx, starting by using it as a simple web server and then scaling up to a web proxy and load balancer.

So let's start the installation, but first, if you use a CentOS box like me, you have to make sure you have the "epel" repository installed. It's a very useful extra repository created for enterprise Linux, which contains plenty of extra software, including Nginx. 

To obtain and install that repo just give

#yum install epel-release

Now we're ready for Nginx. On my CentOS server to install I just give the command:

#yum install nginx

 Now if you just navigate to /etc/nginx you can see the nginx.conf which is the main configuration file.


Nginx as a web server

We can start with the case in which Nginx is used as a simple web server. The basic configuration inside nginx.conf is the following:


http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    server {
        #server stanza configuration section
    }
}


The http stanza contains some default logging configuration and the server block, which goes as follows:

  server {
        listen       80 default_server;
       
        root         /usr/share/nginx/html;

        error_page 404 /404.html;
            location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
            location = /50x.html {
        }
    }

Here the "listen" directive defines the listening port of the web server,  root the root html directory of the web server, location and at last there are some default error pages defined to be displayed in case of an HTTP error request.


Nginx as a reverse proxy

Now we want Nginx to handle all incoming HTTP requests and distribute them among the servers in the internal network. So in the main nginx.conf, inside the server stanza, we add the following:

server_name mywebserver.com;

location /uri/path/ {
    proxy_pass http://mywebserver.local;
}

The "server_name" directive is essential if you have multiple servers, with different server-names apparently. If this is defined ,Nginx processes  the host header according to the configuration stated below server_name.
"location" directive checks the request URI, and forwards all the requests to the address specified by "proxy_pass" directive". In that case where mywebserver.local you can also put IP address and port e.g: 192.168.1.200:8080.


Nginx as a load balancer

As mentioned above, Nginx can be a very effective load balancer, supporting several different load balancing algorithms (round robin by default). To set up a simple load balancer, in nginx.conf we go under the http stanza and give the following:


    upstream mywebsite {
        server mywebserver1.com;
        server mywebserver2.com;
        server mywebserver3.com;
        server mywebserver4.com;
    }



All the magic here is done by the upstream directive, which defines the group of upstream servers that the traffic is distributed across; the servers themselves are listed with the classic server directive. By default Nginx uses the round robin algorithm, but you can simply change that by adding, under the upstream directive,

least_conn; 

for the least connected load balancing or

ip_hash;  

for ip hash load balancing.
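
The upstream block by itself does not receive any traffic; a server block still has to proxy requests to it by name. A minimal sketch, reusing the mywebsite upstream defined above (the server_name is just a placeholder):

    server {
        listen 80;
        server_name mywebsite.com;

        location / {
            proxy_pass http://mywebsite;
        }
    }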


Nginx SSL 

It is essential to use HTTPS on your server: plain HTTP is insecure and increasingly being phased out. You can count on Nginx to handle the whole SSL/TLS procedure, whether it acts as a web server or as a proxy. To do this, under the server stanza in the main configuration you need to add the following lines:

listen   443;

ssl    on;
ssl_certificate    /etc/nginx/conf/mywebsite.com-bundle.crt;
ssl_certificate_key    /etc/nginx/conf/mywebsite.key;

Now the "listen" directive is on 443 (SSL), it follows the "SSL on", and then we simply declare the directory that we hold the SSL bundle certificate and the SSL key.


Nginx management and control.

After every configuration change you have to restart the nginx service in order for it to be applied. To do this simply give:

# systemctl restart nginx

But... beware: you have to be very sure that your configuration is correct, otherwise the server will fail to start, resulting in your website or websites being down. To avoid this you have the option to test your configuration before the restart by giving:

# nginx -t

You can also apply your configuration changes without restarting by giving:

# nginx -s reload

And don't forget to make sure that Nginx runs on system startup:

# systemctl enable nginx

So this is enough info for a good start; for plenty of additional information you can always visit https://www.nginx.com/.

enjoy


















Friday, January 5, 2018

Network Tools





Computing co-exists with networking, so to operate a Linux system you'll very often find yourself involved with network operations. Those operations may be between your system and the outside world (whether that is a LAN or the internet), but they may also happen inside your own kernel network stack.

One of my favorite packages ever is the net-tools package. It is a set of very useful tools for configuring and gathering information about your network resources.
So let's start by installing the package; I'll use my CentOS 7 server for the demonstration:

          #yum -y install net-tools

Now let’s find and inspect the package to see what we got:

          #rpm -qa | grep net-tools

Which gives the exact version of the package (net-tools-2.0-0.22.20131004git.el7.x86_64 )

To inspect that we give:

          #rpm -ql net-tools-2.0-0.22.20131004git.el7.x86_64

Here we get a long file list with man pages, language files, services etc., but we will focus on some of the binaries from the output of the previous command:
/bin/netstat
/sbin/arp
/sbin/ifconfig
/sbin/iptunnel
/sbin/route

My favorite here is netstat. This command operates like a radar for your system, listing every single incoming and outgoing network connection. So let's play with it by giving:

          #netstat -an

By examining the output, we spot two sections. The first section displays the “Active Internet connections (servers and established)” which is obviously the connections in and out of the machine.
Proto  Recv-Q  Send-Q  Local Address    Foreign Address    State
tcp    0       0       0.0.0.0:22       0.0.0.0:*          LISTEN

Proto is the protocol type (tcp or udp), Recv-Q and Send-Q are the counts of bytes queued to be received or sent for this particular socket, Local Address is the address of our machine and Foreign Address is the address of the remote connected machine. In this example the foreign address is a wildcard because the socket is in listening mode, which you can verify from the last column, "State", showing the TCP state at the moment you run the command. The local address can be 127.0.0.1, the machine's single local IP, or one of the machine's multiple IP addresses.
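
A variation I use constantly narrows the output down to listening sockets and shows which process owns each one. These are standard netstat switches (-t tcp, -u udp, -l listening, -p process, -n numeric):

          #netstat -tulpn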

The second section of the output has the pattern:
Proto  RefCnt  Flags     Type     State      I-Node  Path
unix   2       [ ACC ]   STREAM   LISTENING  17930   /var/run/lsm/ipc/sim

Here the Proto column is always unix, which represents a UNIX domain socket. This kind of socket is used only for inter-process communication on the local machine and not for external networking. The "Flags" column lists socket flags (ACC means the socket is accepting connections), the "Type" states whether the socket is a stream or a datagram socket, "State" is the current socket state, the next column is the inode number of the socket, and "Path" is the filesystem path of the socket.

Arp is a tool to inspect the ARP table of the machine. Just for the record, ARP stands for Address Resolution Protocol and it basically maps an IP address to a physical MAC address. So by giving:

          #arp 

We get the following structure
Address    HWtype    HWaddress            Flags Mask    Iface
gateway    ether     d1:68:0a:4a:f2:da    C             enp1s0

Here we can see the mapping of the gateway's MAC address (HWaddress), of Ethernet type (HWtype), reachable through our interface enp1s0 (Iface).

Ifconfig is an interface manipulation tool. With it you can change the IP settings (address, netmask, broadcast etc.), enable or disable an interface, enter promiscuous mode or add an alias. 
So let's give:

          #ifconfig virbr0-nic

virbr0-nic: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether 52:54:00:0f:48:4d  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

"virbr0-nic" is the virtual bridge interface of my KVM hypervisor. Here we can see the type of the interface, the MAC address and some statistics about packet transmission.
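
Ifconfig can also set those values, not just display them. A quick sketch of bringing up an interface with a static address (the interface name and the addressing are placeholders for your own network):

          #ifconfig enp1s0 192.168.1.50 netmask 255.255.255.0 up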

Iptunnel is a tool to create tunnels for IPv4 packet encapsulation. Its use is a bit more involved and I hope I can cover it in a future article.

Route is a tool to examine and manipulate your machine's routing table. Giving

          # route

We have the following output:
Destination    Gateway    Genmask          Flags    Metric  Ref  Use  Iface
default        gateway    0.0.0.0          UG       100     0    0    enp1s0
10.0.81.0      0.0.0.0    255.255.255.0    U        100     0    0    enp1s0

This is basically the kernel routing table which shows the network path that a packet follows to reach its destination. The first line is the default route which is the route the packet follows when no other path is specified. Now by analyzing the columns of the routing table we can get information about each route:
Destination is the host or network address the packet is finally destined to, Gateway is the node each packet uses to reach an outside network, Genmask is the netmask of the network, the Flags column indicates the state or type of the route, Metric is the distance to the target, Ref is the number of references to this route, Use is the count of lookups for the route and Iface is the network interface.
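
Route can also modify the table. A common sketch is replacing the default gateway (the addresses and interface are placeholders; delete the old default first if one exists):

          #route del default
          #route add default gw 192.168.1.1 enp1s0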

Finally, of course, I can't leave traceroute and dig out of the article, although they're not in the net-tools package.
So if we traceroute a host we get a numbered list of hostnames, which are simply the hops the packet passes through in order to reach the final destination.
Dig is a very powerful tool which gives detailed DNS information about an internet address.
Bonus command: 

           #dig +short myip.opendns.com @resolver1.opendns.com

which gives us our external IP address

Of course there are many other network commands and tools, but using the commands mentioned above is a very good toolset that will help you to identify your network surroundings and troubleshoot possible anomalies.

Saturday, December 2, 2017

SSH Key Based Authentication






There is a big debate on whether it is better to use passwords or SSH keys to log in to your Linux systems.
Well, in my opinion key based passwordless authentication is mandatory when you have to deal with network automation and mass configuration tasks, like Ansible scripting or automated secure copy (scp). It is also easier than typing passwords all the time and more productive, especially in large scale infrastructures. But when it comes to security, things are more complicated.

First of all, a few words about SSH. SSH, or secure shell, is a network protocol which uses public-key cryptography to establish secure connections between a server and a client. It is commonly used in Linux and Unix systems of course, but also in most of the major cloud services.
All we need to implement this is to create a (public - private) key pair. I keep my private key secret on my system and I place my public key on the server; the server checks that the two keys match and I get authenticated. 

 Now let's do some magic and make our machines log in and send files through ssh without the use of a password. So I'm going to log in to my charming Linux Mint desktop and create that pair by giving:




# ssh-keygen -t rsa


Now we get an interactive prompt asking us to enter some info:

Generating public/private rsa key pair.

Enter the file in which to save the key (/home/user/.ssh/id_rsa):

Here we just press enter to accept this directory. Linux systems usually keep the key pair in the hidden .ssh directory under the user's home directory.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Here we can give a passphrase to encrypt our private key for extra security.
After that we get the funny randomart image on our terminal which indicates that our key-pair is ready.

Now if we navigate on our keystore directory we can find our key-pair





# ls -ltr /home/user/.ssh/



id_rsa.pub is the public key and id_rsa the private key respectively.
As said before, we need to keep our private key secret; all we have to do is put the public key on the server we want to log in to. 

On the server side now, we navigate to the .ssh directory of the user profile we want to use for auto-login. On my CentOS server that is /home/remoteuser/.ssh/ (the directory itself should be mode 700).
There should be a file named authorized_keys; if not, create it with 644 permissions:



#touch authorized_keys

#chmod 644 authorized_keys

Finally, copy your public rsa key and paste it (plain text) inside this file. Now you should be able to log in without a password; try



# ssh remoteuser@myserver.local
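
For the record, the copy-and-paste step can be automated: openssh ships an ssh-copy-id helper that appends your public key to the remote authorized_keys for you (it asks for the remote password one last time). A quick sketch against the same hypothetical host:

# ssh-copy-id remoteuser@myserver.local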

 Enjoy
As an epilogue, I can say SSH key-only authentication is great with respect to security and can keep your servers unaffected by brute force or man-in-the-middle attacks.
But what happens if a private key is leaked or a client workstation gets compromised? It's pretty much the same as losing the keys to your house.
So the choice is yours, to decide according to your environment and your needs.
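
If you do decide to go key-only, the server also has to be told to stop accepting passwords. A minimal sketch of the relevant /etc/ssh/sshd_config lines (standard OpenSSH directives), followed by a service restart:

PasswordAuthentication no
ChallengeResponseAuthentication no

# systemctl restart sshd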