
Setup shared keys for passwordless ssh login

In this post I will show how I set up passwordless logins to my servers. This is especially useful when you want to log in from a script, e.g. an rsync script that backs up to another computer over ssh.

First we start by creating a private/public key pair on the computer that we will log in from:

ssh-keygen -t rsa -b 1024

I accept the defaults at the prompts here. NOTE: Do NOT set a passphrase. If you do, the key pair will ask for this passphrase every time you use it, which means that you will not be able to make passwordless logins with this key pair. Just press Enter twice and the key pair will be created without a passphrase.

After a while the key pair is created and you should have an id_rsa and an id_rsa.pub file in the .ssh directory under your home directory
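A quick way to check that both files are in place (the exact listing will of course vary):

ls -l ~/.ssh/id_rsa ~/.ssh/id_rsa.pub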

Now we need to get your public key (id_rsa.pub) over to the server we want to log in to

cat .ssh/id_rsa.pub | ssh <servername> "cat >> ~/.ssh/authorized_keys"
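If the ssh-copy-id helper is installed on your machine, it does the same thing in one step:

ssh-copy-id <servername>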

That is it! Now you should be able to log in to the server without using a password. Try it with:

ssh <servername>

No password should now be asked for

Tested on Debian 5.0 (arm/pc) and OS X 10.6.8

Advanced cool Linux commands

I will here save strange combinations of commands that have helped me in my daily work

Get all unique dates in a file where the dates appear in the second column (but not on all rows)

awk -F " " '{print $2}' <filename> | egrep "^[0-9]{4}" | sort | uniq

With awk I select the second column ($2) in the file. Columns are separated by a space (" ").
egrep selects all rows that start with 4 digits, like in "2011-07-07".
The sort command sorts all rows – needed for the uniq command.
The uniq command removes all duplicated rows.
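As an illustration, given a hypothetical file example.log with these rows:

INFO 2011-07-07 service started
WARN 2011-07-07 disk almost full
INFO 2011-07-08 service stopped
a header row without a date

the command prints each date exactly once:

2011-07-07
2011-07-08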

Get all unique rows that match the regexp "<xml-tag>.*</xml-tag>"

 egrep "<xml-tag>.*</xml-tag>" /path/to/file | sort | uniq 

With egrep I get all rows that match the regular expression.
The sort command sorts all rows returned from egrep, which is needed for the uniq command.
The uniq command removes all duplicated rows.
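Since sort can drop duplicates by itself, the same result can also be written a bit shorter with sort -u:

egrep "<xml-tag>.*</xml-tag>" /path/to/file | sort -u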

Get the number of rows containing a search string in a gzipped text file, grouped by date (in the second column)

zgrep search_string filename.gz | awk -F " " '{print $2}' | sort | uniq -c   

This will give you a list of dates together with the number of occurrences of search_string on each date, like this:

    909 2011-07-01
   1608 2011-07-02
   1604 2011-07-03
   2775 2011-07-04
   2765 2011-07-05
   1757 2011-07-06
   3716 2011-07-07
   2785 2011-07-08
   1711 2011-07-09
   1655 2011-07-10

With zgrep we grep in a gzipped file without unzipping it first.
With awk we select the second column (in this case a YYYY-MM-DD formatted date) on each row.
The sort is only needed if the dates do not come in order.
uniq -c gives us the number of occurrences of each unique date (grouped together to one row per unique date).
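If you would rather see the busiest dates first instead of chronological order, a numeric reverse sort on the counts can be added at the end:

zgrep search_string filename.gz | awk -F " " '{print $2}' | sort | uniq -c | sort -rn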

Sum up integer values in a specific column in a file

awk -F " " '{tot+=$1} END {print tot}' /path/to/the/numbers

Here the values to sum up are in the first column ($1) in the file. The -F " " option tells awk to consider a single space " " to be the column separator.
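A quick sanity check of the idea, feeding awk three numbers on stdin instead of a file:

printf "1\n2\n3\n" | awk -F " " '{tot+=$1} END {print tot}'

This prints 6.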

Get min/max integer from a file with integers (one per row)

awk -F " " 'value=="" || $5 < value {value=$5} END {print value}' /path/to/file

This will give you the min value of the numbers in the first column in the file. The -F " " option tells awk to consider a singel space " " to be the column separator. To look for the max value just change the <
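For reference, the max variant then looks like this (the same command with the comparison flipped):

awk -F " " 'value=="" || $1 > value {value=$1} END {print value}' /path/to/file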

Create pretty print copies of XML one-liners using xmllint

for f in * ; do xmllint.exe --format "$f" --output "prettyprint/${f%}.df" ; done

This will run all files in the current directory through xmllint with the --format option and place the pretty-printed copies as new files in a folder called prettyprint
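Note that the prettyprint folder has to exist before the loop runs. On a plain Linux box (no .exe suffix) the whole thing might look like this, skipping directories and keeping the original file names:

mkdir -p prettyprint
for f in * ; do [ -f "$f" ] && xmllint --format "$f" --output "prettyprint/$f" ; done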

Bash script simulating the ‘tree’ command

When trying to get to know a new system I really like the 'tree' command. It gives me a fast and nice overview of the file structure of an application. Sometimes I work on systems that do not have this great tool available, and for those occasions I made this bash script:

#!/bin/bash
olddir=$PWD;
declare -i dirdepth=0;
function listfiles {
        cd "$1" || return;
        for file in *
        do
                ## Skip the literal '*' that the glob leaves behind in empty directories
                [ -e "$file" ] || continue;
                ## Tab between each level
                for ((i=0; i < dirdepth; i++))
                do
                        printf "\t";
                done
                ## Print directories in bold with brackets ([directory]),
                ## ordinary files with just their name
                if [ -d "$file" ]
                then
                        printf "\e[1m[%s]\e[0m\n" "$file";
                else
                        printf "%s\n" "$file";
                fi

                ## Work our way through the tree recursively
                if [ -d "$file" ]
                then
                        dirdepth=$dirdepth+1;
                        listfiles "$file";
                        cd ..;
                fi
        done
        ## Done with this directory - back up one level
        let dirdepth=$dirdepth-1;
}
## Start in the directory given as the first argument (default: current directory)
listfiles "${1:-.}";
## Go back to where we started
cd "$olddir";
unset i dirdepth;

To use the script just do the following:

  1. Create a new file called 'tree.sh' (or whatever you like)
  2. Paste the code into the file and save
  3. Make the file executable (chmod +x tree.sh)
  4. Run the script: ./tree.sh <directory> (see the example below)
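A made-up example of running it against a small project directory (the file names are just placeholders; directories show up in brackets, indented one tab per level):

./tree.sh myproject
build.log
[docs]
        README
[src]
        main.c
        util.c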

This script is tested on OS X 10.6.8 and Red Hat Enterprise Linux AS release 3 (Taroon Update 9)