Saturday, June 27, 2009

Named Pipes... or how to get two separate applications to interact

Recently, I've been working on a bash-based application that gathers some information I need from a host (network interface configuration, ARP neighbors, routing policy, pings to some other hosts, etc). Then I thought it would be good to be able to connect to some hosts through SSH, run some commands on them, and save the output of those commands as part of the information of the first host. An information gatherer of sorts.

I started working on this part of the project and hit a brick wall. When connecting to a host running OpenSSH's server, I had no problem throwing a bunch of commands at the server, waiting for the output to come back from ssh, and saving it. Say, something like:

$ ssh ubuntu@ubuntu <<EOF
> ip link show
> ip addr show
> EOF
ubuntu@ubuntu's password:
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:21:70:94:08:b0 brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:16:44:d3:f4:bf brd ff:ff:ff:ff:ff:ff
4: pan0: mtu 1500 qdisc noop state DOWN
link/ether 96:4d:e1:83:8a:4d brd ff:ff:ff:ff:ff:ff
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:21:70:94:08:b0 brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:16:44:d3:f4:bf brd ff:ff:ff:ff:ff:ff
inet 192.168.123.127/24 brd 192.168.123.255 scope global eth1
inet6 fe80::216:44ff:fed3:f4bf/64 scope link
valid_lft forever preferred_lft forever
4: pan0: mtu 1500 qdisc noop state DOWN
link/ether 96:4d:e1:83:8a:4d brd ff:ff:ff:ff:ff:ff

Great. However, I didn't want to connect to a host running an OpenSSH service. I had to connect to a router (a hardware router, that is) that would break my connection whenever I sent it more than one command.

After several tries and experiments, I thought about creating an application that would use the ssh client to send commands to the router and read the ssh client's output, waiting for the router's prompt to show up before sending the next command. Now, that requires more than just getting the output of the ssh client, as that part is a piece of cake:

$ ssh user@host | ./ssh_handler

That would allow ssh_handler to get the output of ssh (in other words, of the router) and process it, but I also need to send commands to ssh somehow. That's where named pipes show up.

Named pipes let you send and receive data through streams other than the three standard ones every process gets (standard input, standard output, standard error).

Say you have two terminal sessions sitting on the same directory:

Session 1:
$ pwd
/home/ubuntu/pipe experiment

Session 2:
$ pwd
/home/ubuntu/pipe experiment

Let's create a named pipe in this directory in one of those sessions:
$ mkfifo my_pipe
$ ls -l
total 0
prw-r--r-- 1 ubuntu ubuntu 0 2009-06-28 18:00 my_pipe

We now have a pipe in the directory (see the leftmost p in the listing; that means it's a named pipe).

Now, let's try to send something from session 1 to session 2 through the pipe:

Session 1:
$ echo "Hello" > my_pipe

Notice how the process blocks and doesn't exit. Let's read the content of the pipe with cat in session 2:

Session 2:
$ cat my_pipe
Hello

And if you go to Session 1, you will see that the echo has finished executing.

Now, let's create two scripts that exchange information through two pipes. Script 1 will read lines from its standard input, send each one through pipe1, then read a line back from pipe2 and print it. Script 2 will send back exactly the same line, prepending "You said ".

Script 1:
#!/bin/bash
# read lines from our standard input
while read input; do
    echo "$input" > pipe1
    # read the reply from the other session
    read input < pipe2
    echo "$input"
done

Script 2:
#!/bin/bash
# answer every line that arrives on pipe1 through pipe2
while true; do
    cat pipe1 | sed 's/^/You said /' > pipe2
done

When you run script2, it will sit there forever, waiting for processes to dump stuff into pipe1.

Then we run script1 like this:

$ ( echo HELLO; echo BYE; ) | ./script1
You said HELLO
You said BYE
$

Right on target. Now, I want to explain a very tiny detail: why did I use an infinite loop in script2? Because each echo > pipe1 in script1 opens the pipe, writes, and closes it again, and that close delivers an EOF to whoever is reading. A while read loop would read a single line from pipe1, hit that EOF, and finish; the infinite loop simply reopens pipe1 for the next line.
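
You can watch this EOF behavior directly with a throwaway pipe (the name demo_pipe is made up for the demo): the background echo opens the pipe, writes, and closes it, and cat stops as soon as that close hands it an EOF.

```shell
mkfifo demo_pipe
( echo "first write" > demo_pipe ) &   # opens the pipe, writes, closes it
cat demo_pipe                          # prints "first write", then hits EOF and exits
wait                                   # reap the background writer
rm demo_pipe
```

Run cat a second time and it blocks again, waiting for a new writer to open the pipe; that's exactly what the infinite loop in script2 takes advantage of.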

And then, going back to my problem: how did I put the handler together? One simple way to put it is:

$ ./ssh_handler | ssh -i certificate user@host > a_pipe

ssh_handler uses its standard output to send commands to ssh. ssh authenticates with a key (-i) so that I don't have to type a password; it gets the commands from its standard input and writes whatever comes back from the ssh server to a_pipe (you guessed it, a named pipe). ssh_handler reads a_pipe to see whatever comes from the ssh server, and that's it: two interacting applications.
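
To make the wait-for-prompt idea concrete, here is a hypothetical sketch of the loop inside ssh_handler; the prompt string, the command list, and the function name are all assumptions of mine, not the original code. Each command goes to stdout (which the pipeline feeds to ssh), and then the router's reply is drained from a_pipe until the prompt reappears.

```shell
#!/bin/bash
# Hypothetical sketch of ssh_handler's core loop (prompt and names assumed).
handle_session() {
    local prompt=$1; shift
    local cmd line
    for cmd in "$@"; do
        echo "$cmd"                     # goes through the pipeline into ssh
        # drain the router's reply from a_pipe until the prompt shows up
        while read -r line; do
            case $line in *"$prompt"*) break ;; esac
            echo "$line" >&2            # log router output on stderr
        done < a_pipe
    done
}
```

In the real pipeline this would run as something like handle_session 'Router>' 'show version' ... on the left side of ./ssh_handler | ssh ... > a_pipe.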

Saturday, June 6, 2009

SSH Tunnels: Using a service from a NATed (twice) box

Hi!

Recently I have been managing a box through a third-party application that gave me access to a Windows box, where I could use PuTTY to get SSH access to a Linux box. It had to be done this way because both my box and the Linux box are NATed, so they can't reach each other. Let me say it was a real PITA. The keyboard layouts were getting on my nerves, and some important keys didn't work sometimes... or at all (like ; or @ or ', etc). After a while I was motivated enough to dig for a solution that would give me access to the SSH service of the Linux box directly (or almost), instead of depending on this mess.

First, let me introduce SSH tunnels before I dig into the actual solution to my problem.

SSH Tunnels

SSH tunnels are set up between an SSH client and a server so that a parallel, trusted (encrypted) connection runs with SSH as its transport.

When the tunnel is set up, there is a passive, listening side and an active, connecting side. On the passive end we open a port where the tunnel waits for clients to connect. When a client connects to that port, the active side opens another connection to (potentially) another host/port, and the tunnel joins the client on the passive port to that new connection on the active side.

Tunnels can be set up so that our client is either the listening side or the active side, but never both at once: each end either listens or connects, and the tunneled connection is always established from the listening side toward the active side.

So, how does this work? Well, let's do some simple examples.

L Tunnel (client side is the listening end)

On a local tunnel, we set up a port on our side and the SSH server will be the connecting side.

Let's say we want to reach an HTTP service running on the SSH server, but we want an encrypted transport for the traffic.

Say we will use our local port 8080; on the other end the HTTP service is listening on port 80, the user for the SSH service is sshuser, and the host is sshhost. So, we set up the connection like this:

ssh -nNT -L 8080:localhost:80 sshuser@sshhost

After the tunnel is set up, we can use a web browser to use the http server:

http://localhost:8080

Ok, let's explain the details so we can get the devil out of the equation.

-nNT is used so that ssh doesn't start a terminal session besides the tunnel (-N: run no remote command, -T: allocate no pseudo-terminal, -n: redirect stdin from /dev/null), as I don't want one.

-L 8080:localhost:80 is where the tunnel is set up. The first parameter (8080) is the port we open on the listening end (our host, for an L tunnel). Then the interesting part, localhost:80: with this we are telling the active side (the SSH server, for an L tunnel) that when a client connects to our listening port (8080), the other end should connect to host localhost (localhost as seen by the SSH server, that is) on port 80 (the HTTP service).

After running that command on our box, we can see this with netstat:

tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 3310/ssh

As you can see, we set up a listening port on our host on port 8080, and it's only available to processes running on our host (I guess it's possible to get around this with a little NATing, but that's out of the scope of this article).

Now, we just have to use this port on our host to use the HTTP service on the other end. That's why we say http://localhost:8080.

In this case we used the HTTP service of the same SSH server we used to set up the tunnel. But we could use the HTTP service of yet another host that's reachable from the SSH server. Say there is a host the SSH server can reach at IP 192.168.25.3 (a private network address that might be reachable only from the SSH server, not from our host). In that case:

ssh -nNT -L 8080:192.168.25.3:80 sshuser@sshhost

Then we use our browser:

http://localhost:8080

R Tunnel (ssh server is the listening end)

Say we want to set up port 2000 on the other end of the tunnel so that when clients connect to it, they will be using the HTTP service on our host. We do basically the same as before:

ssh -nNT -R 2000:localhost:80 sshuser@sshhost

As you see, the only real change is -R instead of -L. All it does is invert the direction in which the tunnel is set up (the listening side is now on the SSH server).

After we set it up, on the other end we can use netstat to check if we are listening:

tcp 0 0 127.0.0.1:2000 0.0.0.0:* LISTEN

Then we should be able to browse with a client from the other end of the connection by using port 2000:

http://localhost:2000

As with L tunnels, the order of the tunnel's parameters is always the same: port on the listening side : target host as seen from the active side : port on that target host.

And just like with the L tunnel, we could use an R tunnel to connect to a host different from the active host of the tunnel. Say I want to enable access to the remote desktop service of a Windows box on my private network, reachable (to me, that is) at IP 172.17.32.67. Let's say I'll use port 3000 on the other side:

ssh -nNT -R 3000:172.17.32.67:3389 sshuser@sshhost

Then on the other side:
rdesktop localhost:3000

And it's done!

Now, let's work on our problem.

Access to a service on a host that's NATed, from a box that's NATed too

Well... as both boxes are NATed, it's impossible to get them in touch with each other... directly, that is. But odds are there is a third box running an SSH service that both of the original boxes can reach. If there is, we can do this:

On the side of the box that has the SSH service we want to get access to:

ssh -nNT -R 2000:localhost:22 sshuser@sshhost

What we do there is open port 2000 on the middle box so that when a client connects to it, it is actually connecting to the SSH service of the host we ran the command from. In other words, we have forwarded the SSH service of this host to port 2000 of the middle box.

Then, on the box we want to run SSH from to get access to the other box:

ssh -nNT -L 4000:localhost:2000 sshuser@sshhost

What we do is open port 4000 on our host so that when a client connects to it, the middle box connects to its own port 2000 (which is the forwarded SSH service of the far box). In other words, we have forwarded the SSH service of the far box to our port 4000.

Then we can use an ssh client to reach the service we are interested in:

ssh -p 4000 remoteuser@localhost

And it's done! What do you think?

Bash Tricks II: repetitive tasks on files

It's been a while since I wrote for the last time. I found a job (finally) and it's eating up most of my time.

Anyway, I had already written a piece on repetitive tasks before. Yesterday I had to do something that required another set of repetitive tricks: find a file that could be inside any of a huge number of compressed files. Some were named .tar.gz, others .tgz. I didn't want to spend the next month checking each compressed file to see if my target was there, so I made a one-liner that did the whole thing for me.

First Attempt

( find /mnt/tmp/ -iname '*.tgz'; find /mnt/tmp/ -iname '*.tar.gz'; ) | while read filename; do lines=`tar tzf "$filename" | grep -i file-pattern | wc -l`; if [ $lines -gt 0 ]; then echo "$filename"; fi; done

First we have the ( )s. These little kids let you run several commands and tie their outputs together into a single stream.

Second we have while read variable; do x; y; z; done. This construct reads the standard input line by line, placing the content of each line in a variable (multiple variables can be used; in that case each word of the line goes into its own variable, with the last variable taking whatever remains). In our case, we used filename as our variable (be careful not to use $ in the while read itself).

Then the backticks. These kids let us run a command and capture its output for assignment. In our case, we list the files inside a tgz, grep for the pattern of the file we are looking for, and count the lines that come out of grep. That number of lines is what gets saved in the variable lines.
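
As a side note, modern shells also offer $(...), which does the same as backticks but nests without escaping gymnastics; a tiny illustration (the sample text is made up):

```shell
# $(...) captures a command's output just like backticks do.
lines=$(printf 'one\ntwo\nthree\n' | grep -c .)   # grep -c counts matching lines
echo "$lines"    # prints 3
```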

Finally, we test whether the number of lines is greater than 0. If it is, we print the name of the archive where we found the file pattern we were looking for.
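
Incidentally, the wc -l count can be skipped altogether: grep -q exits with status 0 as soon as it finds a match, so the if can test grep directly. A sketch on a throwaway archive (every name below is invented for the demo):

```shell
# Build a scratch archive that contains the file we pretend to hunt for.
mkdir -p demo_dir
touch demo_dir/wanted-file demo_dir/noise
tar czf demo.tgz demo_dir

# grep -q is silent and succeeds on the first match, so no counting is needed.
if tar tzf demo.tgz | grep -qi wanted; then
    echo "demo.tgz"    # prints the archive that matched
fi

rm -r demo_dir demo.tgz
```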

Second Attempt

Now let's try something a little bit different (though with the same file-search pattern). I have a number of ISOs saved on a box, and each one has a number of RPMs inside. I have to look for the same file I was looking for before.

Basically, it's the same thing we did before; the only change is another level of nesting so that we can mount/umount the ISO files. Let's see:

find /var/isos/ -iname '*.iso' | while read iso; do mount -o loop,ro "$iso" /mnt/tmp; find /mnt/tmp/ -iname '*.rpm' | while read rpm; do lines=`rpm -qlp "$rpm" | grep -i file-pattern | wc -l`; if [ $lines -gt 0 ]; then echo "$iso" "$rpm"; fi; done; umount /mnt/tmp; done

(Note the umount at the end takes the mount point, /mnt/tmp, not the ISO file itself.)

And that's it! Neat, isn't it?

Now, keep in mind that if you want to do rather simple things with the files, you can ask find itself to execute commands on the files it finds (with -exec). In my case it would have been tricky (at least) to express the actions I wanted in find's syntax, so I went for the piping solution.
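
For the simple version of the search, the find -exec route could look like this (the paths and the pattern are invented for the demo); find hands each archive to a tiny inline shell that does the listing and the grep:

```shell
# Scratch setup so find has an archive to inspect.
mkdir -p demo_dir
touch demo_dir/wanted-file
tar czf demo_dir/demo.tgz -C demo_dir wanted-file

# -o groups the two name patterns; -exec runs the check on each archive.
find demo_dir \( -iname '*.tgz' -o -iname '*.tar.gz' \) \
    -exec sh -c 'tar tzf "$1" | grep -qi wanted && echo "$1"' _ {} \;   # prints demo_dir/demo.tgz

rm -r demo_dir
```

The \( ... -o ... \) grouping also folds the two separate find runs from the first attempt into one.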