Saturday, June 27, 2009

Named Pipes... or how to get two separate applications to interact

Recently, I've been working on a bash-based application that gathers some information I need from a host (network interface configuration, ARP neighbors, routing policy, pings to some other hosts, etc.). Then I thought it would be good if I could also connect to other hosts through SSH, run some commands on them, and save the output of those commands as part of the first host's information. An information gatherer of sorts.
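The local part is straightforward. Just to give an idea, a minimal sketch of that gathering step could look like this (the command list, the ping target, and the report file name are placeholders, not my actual script):

#!/bin/bash
# Hypothetical sketch of the local gathering step; the commands,
# the ping target and the report file name are placeholders.
{
    ip link show             # network interfaces
    ip addr show             # addresses
    ip neigh show            # ARP neighbors
    ip rule show             # routing policy
    ping -c 3 192.168.123.1  # reachability of some other host
} > host_report.txt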

I started working on this part of the project and hit a brick wall. When connecting to a host running an OpenSSH server, I had no problem throwing a bunch of commands at it, waiting for the output to come back from ssh, and saving it. Say, something like:

$ ssh ubuntu@ubuntu <<EOF
> ip link show
> ip addr show
> EOF
ubuntu@ubuntu's password:
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:21:70:94:08:b0 brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:44:d3:f4:bf brd ff:ff:ff:ff:ff:ff
4: pan0: mtu 1500 qdisc noop state DOWN
    link/ether 96:4d:e1:83:8a:4d brd ff:ff:ff:ff:ff:ff
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:21:70:94:08:b0 brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:16:44:d3:f4:bf brd ff:ff:ff:ff:ff:ff
    inet 192.168.123.127/24 brd 192.168.123.255 scope global eth1
    inet6 fe80::216:44ff:fed3:f4bf/64 scope link
       valid_lft forever preferred_lft forever
4: pan0: mtu 1500 qdisc noop state DOWN
    link/ether 96:4d:e1:83:8a:4d brd ff:ff:ff:ff:ff:ff

Great. However, I wasn't going to connect to a host running an OpenSSH service. I had to connect to a router (a hardware router, that is) that would break my connection whenever I sent it more than one command.

After several tries and experiments, I thought about creating an application that would use the ssh client to send commands to the router and read the ssh client's output, waiting for the router's prompt to show up before sending the next command. Getting the output of the ssh client is the easy part; that's a piece of cake:

$ ssh user@host | ./ssh_handler

That would allow ssh_handler to get the output of ssh (in other words, of the router) and process it, but I also needed to send commands to ssh somehow. That's where named pipes come in.

Named pipes allow you to send and receive data through streams other than the three standard ones every process gets (standard input, standard output, standard error).

Say you have two terminal sessions sitting in the same directory:

Session 1:
$ pwd
/home/ubuntu/pipe experiment

Session 2:
$ pwd
/home/ubuntu/pipe experiment

Let's create a named pipe in this directory in one of those sessions:
$ mkfifo my_pipe
$ ls -l
total 0
prw-r--r-- 1 ubuntu ubuntu 0 2009-06-28 18:00 my_pipe

We now have a pipe in the directory (see the leftmost p in the listing; that means it's a named pipe).

Now, let's try to send something from session 1 to session 2 through the pipe:

Session 1:
$ echo "Hello" > my_pipe

Notice how the process blocks and doesn't exit. Let's read the pipe's content with cat in session 2:

Session 2:
$ cat my_pipe
Hello

And if you go to Session 1, you will see that the echo has finished executing.
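By the way, the blocking works in the other direction too: if you start the reader first, cat will sit there until some process opens the pipe for writing.

Session 2:
$ cat my_pipe

Session 1:
$ echo "Hello" > my_pipe

Both commands complete as soon as the two ends of the pipe are connected.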

Now, let's create two scripts that will exchange information through two pipes. Script 1 will read lines from its standard input, send each one through pipe1, and then wait for a reply on pipe2, which it prints. Script 2 will send back exactly the same line, prepending "You said ".
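Before running them, create both pipes in the working directory:

$ mkfifo pipe1 pipe2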

Script 1:
#!/bin/bash
# read lines from our standard input
while read input; do
    # send the line to the other script through pipe1
    echo "$input" > pipe1
    # wait for the reply from the other script on pipe2
    read input < pipe2
    echo "$input"
done

Script 2:
#!/bin/bash
# loop forever: read pipe1's content, prepend "You said ", send it back on pipe2
while true; do
    cat pipe1 | sed 's/^/You said /' > pipe2
done

When you run script2, it will sit there forever, waiting for processes to dump stuff into pipe1.

Then we run script1 like this:

$ ( echo HELLO; echo BYE; ) | ./script1
You said HELLO
You said BYE
$

Right on target. Now, I want to explain a very tiny detail. Why did I use an infinite loop in script2? Because with a while read loop, script2 would read a single line from pipe1, then get an EOF and finish the loop. This is down to the way echo "$input" > pipe1 works in script1: each echo opens the pipe, writes one line, and closes it again, and when the last writer closes a pipe, the reader sees EOF.
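If you want to avoid the infinite loop, one common trick (just a sketch of an alternative, not what I used; it relies on Linux letting you open a FIFO read-write, something POSIX leaves undefined) is to have script2 itself keep a writer end of pipe1 open, so read never sees EOF:

Script 2 (alternative):
#!/bin/bash
# Open pipe1 read-write on descriptor 3; since this shell always holds
# a writer end open, 'read' never gets EOF when script1 closes the pipe.
exec 3<> pipe1
while read -u 3 input; do
    echo "You said $input" > pipe2
done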

And then, going back to my problem, how did I end up building the handler? One simple way to put it is:

$ ./ssh_handler | ssh -i certificate user@host > a_pipe

ssh_handler uses its standard output to send commands to ssh. ssh uses an identity file (-i) so that I don't have to deal with password authentication; it gets the commands from its standard input and writes whatever comes back from the ssh server to a_pipe (you guessed it, a named pipe). ssh_handler, in turn, reads from a_pipe whatever comes from the ssh server, and that's it: two interacting applications.
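I won't reproduce the real handler here, but a minimal sketch of the idea might look like this. Everything specific in it is an assumption: the command list, the prompt pattern, and the fact that the router ends its lines (including the prompt) with a newline, which real routers often don't:

#!/bin/bash
# Hypothetical sketch of ssh_handler; the commands and the prompt
# pattern are made up, and a_pipe must already exist (mkfifo a_pipe).
commands=("show version" "show interfaces")

# keep a_pipe open read-write so we never see EOF between reads
exec 3<> a_pipe

for cmd in "${commands[@]}"; do
    echo "$cmd"                      # stdout is wired to ssh's stdin
    # collect the router's output until its prompt shows up again
    while read -u 3 line; do
        echo "$line" >&2             # log what the router says
        [[ $line == *'Router>'* ]] && break
    done
done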

Comments:

  1. This is an excellent article. Instead of using a loop you can just use tail -f a_pipe.

    Thank you.
