Thursday, February 26, 2009

Server sales: If you're falling, don't fall so hard

And I thought I wasn't going to post anything today.

I just saw the ServerWatch article on server sales for Q4 2008. I had been waiting for it for a while to see how bad the quarter had been given the economy... but mostly I wanted to see how the GNU/Linux vs Windows battle in the server space is going.

For Q2 2008 (the IT sector was not in decline yet): Windows had a YoY revenue gain of 1.38% (with market share down to 36.5% from 38.2% one year earlier), UNIX gained 8.22% (market share up to 32.7% from 32.06%) and GNU/Linux had a revenue gain of 4.55% (market share down to 13.4% from 13.6%). GNU/Linux + UNIX made up 46.1% of revenue market share.

For Q3 2008: Windows had a revenue loss of 2.86% (with market share up to 40.8% from 40.4%), UNIX lost 8.15% (ouch!... down to 29.7% from 31.1%) and GNU/Linux barely broke even with a gain of 0.49% (reaching 14% from 13.4%). GNU/Linux + UNIX held 43.7% of the market.

Q4 was quite a spectacle (if you ask me... I doubt Steve Ballmer will agree): Windows nosedived with a loss of 17.07% (double ouch!!... down to 35.3% of the market from 36.6%), GNU/Linux had a loss of 7.92% (with share up to 13.6% from 12.7%) and UNIX lost 6.52% (up to 36.2% from 33.3%). GNU/Linux + UNIX got 49.8% of the market.

The numbers and percentages can be a little misleading... so let me try to mislead you a little more: if you look at the gross gains/losses of revenue, for example, you will see that in Q2 2008 Windows had a YoY revenue increase of roughly US$70 million, GNU/Linux made a little more with 80 million (it's hard to know exactly because the numbers are rounded, so the difference could have been a little less or more), but UNIX had an increase of 350 million (wow!). Q3 is another story: Windows had a revenue decrease of 150 million, GNU/Linux broke even and UNIX had a decrease of 330 million. And Q4... well, this is not for the faint of heart: Windows had a YoY revenue decrease of..... 980 million, GNU/Linux had a decrease of 150 million and UNIX had a decrease of 340 million.

Bottom line: if in good times you are growing well, and in bad times you are shrinking less than your enemy... you aren't doing so badly after all, are you?

The second bottom line: These are statistics and we know that they can say whatever we want them to say.... so take those numbers with a grain of salt.

Disclaimer: The numbers, though I believe they are right, could be wrong. I had to put together figures from different sources (mostly IDC and Gartner), so there could be a mistake here or there (though I wouldn't expect anything dramatic). Want the file I have with the numbers? Let me know and I'll email it to you... perhaps we could improve it.

Wednesday, February 25, 2009

Windows = Antivirus = Pollution?

I just read an article from the University of Calgary in which the author claims (and I think he's right) that IT is a huge polluter: hardware becomes obsolete, we have to generate electricity to pump into our gadgets, and so on.

Not long ago I read another article that calculates (or so they say) how much pollution is produced by each Google search.

But, man.... I just couldn't resist the temptation of asking myself, "then how much power does Windows spend implementing DRM protection mechanisms?" Whether that takes a lot of energy has been disputed; it has been argued that it's not much of an effort, that DRM in Vista is roughly a couple of LOC in the whole system. I couldn't care less about that... but then the next, even more obvious question was, "then how much pollution is produced by the use of antivirus software?" And here you won't tell me it's just a little effort. Antivirus programs scan whole computers (millions of them) weekly at the very least, an operation that can take a while to complete, plus the effort of checking every jpg file that gets into a system. And running an antivirus is no low-CPU-usage activity. I know that when a computer running Windows is dragging behind a turtle for no apparent reason, I can just check the processes to see if the antivirus is doing its stuff... if the box hasn't already been invaded by whatever virus is hot at the time, eating all of the CPU while sending all those beautiful Christmas mails.

What bothers me the most is that Windows users are still paying for the fundamentally bad design decisions made in Windows early on (every .exe you downloaded from the internet could be executed right away, the default user is an administrator, programs that won't run unless the user is an administrator, firewall? what's that?, and the usual long etcetera). Vista is barely trying to fix those problems, and we all know the backlash that things like UAC brought Windows Vista (at least in its inception)... but we also know where Vista stands in users' preferences... so people are sticking with XP's design flaws instead... and it seems it will be a while longer before they fade away into oblivion.

So... coming back to the question: Windows = Antivirus = Pollution? Can anybody make a wild guess at how much pollution antivirus software produces?

PS: And I didn't even mention hardware that's not capable of running today's systems. How many times have you been forced to buy more hardware (or another computer) just to get the latest incarnation of Windows to work acceptably well, turning your (so far) perfectly working system into digital trash? That's another thing GNU/Linux will at least help you avoid. As a matter of fact, I'm writing this on the very latest release of Kubuntu, patched to use KDE 4.2 (with some of its 3D eye candy turned on, by the way), on a dated box with a D865GVHZ motherboard (4 years old? Maybe 5?). I wonder if I could run Vista with Aero on this box. I guess that makes for another equation: Windows = New Hardware = Pollution? By the way, I'm sure other OSs will help you avoid those upgrade cycles as well... but my experience is with GNU/Linux, so I won't speak for them.

Friday, February 20, 2009

Browser Wars: JS performance on my dated box



Well.... let me put it in simple terms: what I'm about to write is not gospel. It's just the result of running a couple of tests on a number of web browsers on my rather dated computer, so I don't intend to convince you to drop one browser and start using another. It's just another post in the already long (long, long, loooooooong) list of posts comparing the performance of said browsers. Hope you find it helpful &&|| informative (in any way).

So... let's go down to our subject.

I'm comparing IE8b1, IE7, Opera9 on Windows, Opera10 alpha on Windows, Opera9 on GNU/Linux, Opera10 alpha on GNU/Linux, FF3 on Windows, FF3.1 on Windows, FF3 on GNU/Linux (from packages), FF3.1 on GNU/Linux (downloaded from Mozilla's site) and Konqueror (from packages... updated to KDE 4.2). Unfortunately I didn't test Chrome because when I tried to download its installer, I got the Google Earth installer instead (go figure!).

The hardware is like this:
- Very dated PC with one Intel(R) Pentium(R) 4 CPU 2.80GHz (hyperthreading disabled) and 1 GB of RAM. Do you need to know anything else?

The software environments are like this:
- Windows XP SP2 (pretty much unpatched... as a matter of fact, I couldn't care less about it... and don't start nagging me saying that I have to keep it updated. As I just said, I don't care for it. I don't use it).
- Patched Kubuntu Intrepid, running an Xfce session for the tests.

So, let's get down to the matter.

I ran the SunSpider and V8 benchmark suites on all of those browsers.

Here are my comments:
- IE8 is a tremendous improvement over IE7. Still, IE8 is behind every single browser except IE7 and Opera9 on GNU/Linux on the SunSpider test. So.... given that every upcoming browser is ahead of IE8, I can't help but wonder if Microsoft is going to ship IE8 with its own digital casket when it is released.
- Opera improved in both tests from 9 to 10, on both GNU/Linux and Windows. But then, a few days ago, there was a report about FF being slower on GNU/Linux than on Windows, and we see exactly the same thing going on with Opera. Is there a reason this could happen to both browsers? I don't think the widget toolkits are to blame, as FF is built on GTK+ and Opera is built on Qt (version 3, by the way... would building Opera on Qt 4 improve its performance on GNU/Linux?). On both platforms, Opera10 was the best performer on the V8 test.
- FF3.1 performed better on both platforms on the SunSpider test (the improvement being more dramatic on Windows); on V8, however, the results were mixed: on Windows there was an improvement, but on GNU/Linux there was a performance decrease (not too big, but still a decrease).
- Konqueror certainly has to make great improvements to catch up with the other browsers.

Let the flamefest begin! It's not my intention to start a flame war, but I think everyone will want to add their own pepper to the mix. You're welcome to do so in the comments area... just keep the rhetoric "readable by kids".

Thursday, February 19, 2009

Has IE lost the hearts of IT people?

I like looking at statistics, especially where FLOSS is gaining ground. I try to take them with the usual grain of salt (or a handful of it... depending on the source).

- Web browser market share
- OS market share
- Web server survey
- Most reliable hosters

They all have their measurement problems, but they still give an idea of the trends, at least.

It's not breaking news that FF has gained a lot of momentum on the browser front. Some people even say we are in the "Browser Wars" all over again. Hitslink provides some very interesting statistics on that. During January, IE reached 67.55% of browsing (down from 68.15%), FF reached 21.53% (up from 21.34%), Safari reached 8.29% (up from 7.63%), Chrome reached 1.12% (up from 1.04%) and the remaining browsers had less than 1% each. That's fine and dandy, but it comes from a huge market of thousands of sites that are not particularly inclined toward IT subjects (my guess). I have a hunch that on sites that are IT-inclined, FF usage is much higher than what Hitslink reports.

I have been tracking one site that's devoted to web subjects and isn't inclined toward one browser or another: www.w3schools.com. Its statistics are... well, very different from Hitslink's.

Here we find that FF has passed 45% and, as a matter of fact, for the first time FF has more market share than all the versions of IE they display in the statistics... combined. We see ever-growing usage of FF and a steady decline of IE.

There are a few things I have noticed that would explain such a decline:
- It's easier to find people who have heard of (or used) FF in IT circles.
- People in IT would be more inclined to install other applications besides the ones that come bundled with whatever OS they get (I don't install other applications besides the ones bundled with the OS I get.... I install another OS).
- It's no secret that IE is a resource hog compared with other browsers. Hell, even IE8 (which is in RC status) lags way behind in performance compared with any other mainstream browser. Side note: would FF on GNU/Linux be close to IE8? I guess there's material for an article there. :-)

And that leads me to the question: Has IE lost the hearts of IT people?

I'm more than willing to see other statistics from sites that are technology-inclined and have platform-agnostic (or multiplatform) content. Do you run a site like that and would like to share its statistics with the world? Then the comments area is waiting for you to guide us... or guide me, at the very least.

Wednesday, February 18, 2009

Quickfix: My ISP is blocking connections to one of my ports

Hi!

Recently, a friend of mine wanted to show me something he was building, running on a web server on his own box. He was connected directly to the internet, so he gave me his address like this:

http://x.x.x.x/resource/

x.x.x.x being his public IP address. It failed miserably; I couldn't see what he had up there. He then told me that people could see it remotely when he was at home, but he was at his workplace now and didn't know what was going on. I told him his ISP was probably blocking requests to TCP port 80 and that's why it didn't work. To make sure, I told him to run a brief tcpdump and check whether the traffic was even arriving at his box on that port:

tcpdump -i eth0 -n tcp and port 80 and host y.y.y.y

y.y.y.y being my public address, so that only the traffic I was about to send to his box would be displayed. The -n flag is there so that tcpdump doesn't try to do reverse name resolution on our IP addresses. Then I do this from my box:

telnet x.x.x.x 80

After a few seconds, his tcpdump output is still completely mute. Well.... his ISP is the source of the problem after all. Normally an ISP won't block new requests to every destination port; only some ports are banned (for security reasons, I guess). So I figure we can simply use another port. I tell him to redo the tcpdump, this time filtering on port 8080. But (without even having tried it) he complains that he doesn't want to change the apache configuration to listen on another port. I tell him to relax and let go; he won't have to do that.

tcpdump -i eth0 -n tcp and port 8080 and host y.y.y.y

Then I redo the telnet to the new port:

telnet x.x.x.x 8080

And this time he had some output on tcpdump. On my side I get a connection refused message (because he has no service running on that port). What it all means is that his ISP is not blocking that port. Tip: tcpdump shows you the traffic that arrives at a box regardless of whether netfilter rules on INPUT or FORWARD will later block it, and regardless of whether there's a service listening on that port.
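As a quick aside, you can also probe ports from the client side without touching the server at all. This is only a sketch: it relies on bash's /dev/tcp pseudo-device and on the coreutils timeout command, and x.x.x.x is the same placeholder for his public IP as above.

# probe a couple of ports on his box (x.x.x.x) from my side
for port in 80 8080; do
  if timeout 3 bash -c "exec 3<>/dev/tcp/x.x.x.x/$port" 2>/dev/null; then
    echo "port $port: a TCP connection was established (something is listening)"
  else
    echo "port $port: no connection (refused locally, or filtered somewhere upstream)"
  fi
done

It can't tell "refused" from "filtered" the way the tcpdump approach can (a refusal fails almost instantly, a filtered port just times out after 3 seconds), but it's a handy first check when you only control the client side.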

Now comes the trick: how do we get apache to answer on that port without changing its configuration?

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j REDIRECT --to-port 80

The REDIRECT target tells iptables to rewrite the destination of matching packets so that, whatever port they were originally aimed at, they get delivered to the box processing them on the given port (80 in this case). And there you have it. After he ran that command, I could see what he wanted to show me without changing anything in his apache configuration.

http://x.x.x.x:8080/resource/

And the content was there.
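By the way, if he later wants to inspect that redirection or get rid of it, the standard iptables listing and deleting options do the job (nothing specific to this trick, just the usual NAT table handling; the rule number in the last line is only an example):

# list the NAT PREROUTING rules with their positions
iptables -t nat -L PREROUTING -n --line-numbers

# delete the redirection by repeating the rule with -D...
iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 8080 -j REDIRECT --to-port 80

# ...or by the rule number shown in the listing above (1 is just an example)
iptables -t nat -D PREROUTING 1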

Oh, you are running Windows and want to do the same thing, you say? I guess you'd have to get yourself an ISA Server to do NAT (though I could be wrong, of course) or download some virus-ridden piece of freeware that will do it... while turning your computer into a zombie as a bonus feature. In other words: why don't you get a nice LiveCD (I didn't say which distro or which operating system) and start tinkering with a real OS? If you were able to figure out that your ISP was blocking requests to a port on your own box, you already have the basic ingredients.

Take care!

Tuesday, February 17, 2009

A couple of things on PHP

Hi!

As I promised yesterday in Bash Tricks I, here is the spin-off article on things related to PHP.

I want to talk about two things, actually.

1 - Performance: Are constants faster than variables?
2 - Security: My usage of an FS_ROOT constant (which could become a variable depending on the results of point 1) to hide all non-entry scripts from apache.

Variables vs Constants
Yesterday I ran each script 1000 times to see whether the one using a variable or the one using a constant was faster. The one with the constant was a little faster. Let's redo the test, but with 10000 iterations instead:

variable.php (without the php starting tags):
$VARIABLE = 5;
echo $VARIABLE . "\n";

constant.php:
define('CONSTANT', 5);
echo CONSTANT . "\n";

Let's run them then:
echo Variable; time ( i=0; while [ $i -lt 10000 ]; do php variable.php > /dev/null; i=$(( $i + 1)); done ); echo Constant; time ( i=0; while [ $i -lt 10000 ]; do php constant.php > /dev/null; i=$(( $i + 1)); done )
Variable

real 12m49.269s
user 4m31.665s
sys 2m23.161s
Constant

real 12m36.780s
user 4m28.013s
sys 2m27.241s

Well.. not much difference, really.

Now.... does this difference carry over to a PHP script running on apache? To pull it off with bash, I'll use one of my favorite (and most basic) web development tricks: acting as a web client from a terminal. As a matter of fact, bash won't be the client... but I will certainly use bash to run the request a number of times. How does it work? As web developers most probably know, we can use telnet to make requests to a web server. Let's do a basic request: connect to yahoo.com, ask for www.yahoo.com's default page and see what it says:

telnet yahoo.com 80
Trying 68.180.206.184...
Connected to yahoo.com.
Escape character is '^]'.
GET http://www.yahoo.com HTTP/1.0

HTTP/1.1 301 Moved Permanently
Date: Tue, 17 Feb 2009 19:15:04 GMT
Location: http://www.yahoo.akadns.net/
Connection: close
Content-Type: text/html; charset=utf-8

The document has moved here.



Connection closed by foreign host.

After connecting to the web server successfully, I made the request (GET http://www.yahoo.com HTTP/1.0 followed by an empty line), and the server replied with the headers, an empty line, and then the content of the web page.

Cool.... but I won't be typing that 10000 times to test my scripts on apache, right? So instead of telnet, let's use another tool: netcat. We can feed the web request to netcat's input stream, effectively sending it to the web server. Like this:
{ echo GET http://www.yahoo.com/ HTTP/1.0; echo; } | netcat yahoo.com 80
HTTP/1.1 301 Moved Permanently
Date: Tue, 17 Feb 2009 19:20:15 GMT
Location: http://www.yahoo.akadns.net/
Connection: close
Content-Type: text/html; charset=utf-8

The document has moved here.

Cool, now I can make as many requests as I want, one after the other, and see how long it takes to run each of the scripts. So, let's see:

echo Variable; time ( i=0; while [ $i -lt 10000 ]; do { echo GET http://localhost/variable.php HTTP/1.0; echo; } | netcat 127.0.0.1 80 > /dev/null; i=$(( $i + 1)); done ); echo Constant; time ( i=0; while [ $i -lt 10000 ]; do { echo GET http://localhost/constant.php HTTP/1.0; echo; } | netcat 127.0.0.1 80 > /dev/null; i=$(( $i + 1)); done )
Variable

real 2m17.096s
user 0m58.724s
sys 1m8.092s
Constant

real 2m7.158s
user 0m55.163s
sys 1m2.608s

Well.... I notice two things here:
1 - Using constants gave a reduction of roughly 7%.
2 - This puts the "spawning processes is expensive" mantra in perspective, doesn't it? The wall time dropped by more than 80% compared with running the script through the PHP binary (quick back-of-the-envelope check below).
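Just to put numbers on that second point, here's a quick sanity check with bc, using the real times of the variable script from both runs (12m49.269s = 769.269s through the PHP binary, 2m17.096s = 137.096s through apache):

echo "scale=1; 769.269 * 1000 / 10000" | bc                  # ~76.9 ms per run through the PHP binary
echo "scale=1; 137.096 * 1000 / 10000" | bc                  # ~13.7 ms per request through apache
echo "scale=1; (769.269 - 137.096) * 100 / 769.269" | bc     # ~82% less wall time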

Parenthesis: I tried with 1000 iterations instead of 10000 and something weird happened. The tests ran in under 7 seconds each (always with the constant as the winner), but that's almost 20 times faster (instead of the expected 10 times). Any explanation for it? End of parenthesis.
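If you want to dig into that oddity, ApacheBench (the ab tool that usually ships with apache) gives per-request statistics and percentiles for this kind of run. Just a suggestion for a follow-up, not what I used for the numbers above:

# 10000 sequential requests against each script, with a timing breakdown at the end
ab -n 10000 -c 1 http://localhost/variable.php
ab -n 10000 -c 1 http://localhost/constant.php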

Now, if you wanted to use the value of the variable inside a function without passing it as a parameter, you would have to use a "global" declaration before using the value, but there's no need for that with a constant. Let's see how that changes the results:
variable.php
$VARIABLE = 5;

function printValue() {
    global $VARIABLE;
    echo $VARIABLE . "\n";
}

printValue();


constant.php
define('CONSTANT', 5);

function printValue() {
    echo CONSTANT . "\n";
}

printValue();


Let's run the scripts 10000 times through apache again:
echo Variable; time ( i=0; while [ $i -lt 10000 ]; do { echo GET http://localhost/variable.php HTTP/1.0; echo; } | netcat 127.0.0.1 80 > /dev/null; i=$(( $i + 1)); done ); echo Constant; time ( i=0; while [ $i -lt 10000 ]; do { echo GET http://localhost/constant.php HTTP/1.0; echo; } | netcat 127.0.0.1 80 > /dev/null; i=$(( $i + 1)); done )
Variable

real 2m20.279s
user 0m59.952s
sys 1m7.548s
Constant

real 2m11.901s
user 0m57.440s
sys 1m3.176s

Again the constant wins, with roughly a 6% reduction in time. So I guess that's it on this subject: constants come out ahead over and over again.

My usage of FS_ROOT

After so much time dealing with PHP and requires/includes, I came to use a constant that always tells me where the root of the project is on the file system. This value, along with a number of others (HTTP_ROOT, DB settings and so on), lives in a single script (strangely enough, it's called conf.php). Now, no matter which script starts the execution, I know where to find conf.php relative to that starting point, and from then on I just don't care where the other scripts are... I always include them using FS_ROOT as their root directory:
require_once FS_ROOT . "/model/one_model.php";
require_once FS_ROOT . "/utilities.php";

You get the idea. I guess that's not rocket science.... but then something weird happened. I was integrating phpweby's ip2country into my new project. This module includes a script that updates the ip2country information in the DB, a script that is supposed to be deleted afterwards for security reasons... and then it hit me: that script, which is never meant to be called from the web, could simply live outside the space mapped by apache, and then I wouldn't have to delete it, would I? And that brought me to another thought: what if I didn't have to publish ANY of the scripts I use in my project, besides the entry scripts? That's where FS_ROOT becomes vital. By using FS_ROOT to locate all the other scripts, they can sit outside the files apache maps, and so the project is "safer", don't you think?

So now the project I'm working on has about 20 scripts "inside" apache, and all the other scripts (a whole bunch of them) sit safely outside apache's reach. Now, I've never used chroot jails, so I don't know whether a jail would allow this kind of setup. What do you think of the trick?
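And if you want to convince yourself that the "hidden" scripts really are out of apache's reach, the netcat trick from earlier in this post can check it in one loop. The paths below are only examples taken from the includes above; adjust them to whatever your layout would have exposed under the docroot:

# every script living outside the docroot should answer with 404, never 200
for script in model/one_model.php utilities.php conf.php; do
  { echo "GET http://localhost/$script HTTP/1.0"; echo; } | netcat 127.0.0.1 80 | head -n 1
done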

Well. That's it for today. I hope you can take advantage of this information.

Monday, February 16, 2009

Bash Tricks I: (very) Repetitive tasks

Hi!

I'm starting a series (I don't even know how many items it will have) where I share with my loyal readers (in mathematical terms, that's an empty set) some handy tricks I've found while working with bash. Some of the tricks probably aren't the most efficient way to get something done... but I can attest that, at the very least, they work.

So, here we go.

Repetitive tasks
Sometimes you might want to repeat a task a number of times. For example, right now I want to find out which is faster in PHP: using variables or defining constants.

I have two scripts where I define a constant or a variable (depending on the script) and write its value to stdout. Let's say I want to run the variable.php script 1000 times. What I do is:

i=0; while [ $i -lt 1000 ]; do php variable.php > /dev/null; i=$(( $i + 1 )); done

But what does all that stuff mean? Let's decompose it:
i=0 We are creating a variable called i with the initial value of 0. Tip: When declaring the variable, don't use the preceding $ and don't use spaces between the variable name and the = sign.
while [ $i -lt 1000 ]; do This is fairly familiar talk to a programmer. We are telling bash to repeat the following commands (until it finds the closing done). while tests whether the conditional between the []s is true before making another cycle. $i -lt 1000 compares the value in i with 1000; -lt means "less than". There are more operators available (greater than, equal, less than or equal, greater than or equal and so on; check the man page of test to see the kinds of things you can place in the conditional).
php variable.php > /dev/null We are executing the script I created and sending its output to /dev/null so that I don't get to see it (I couldn't care less about it, as I already know what will show up).
i=$(( $i + 1)) Here we increment the value of the variable i. $(( )) is a bash construct for arithmetic evaluation. As in the initial assignment of i to 0, remember not to leave spaces around the = and to skip the $ on the left-hand side.
done We are telling bash to close the while.
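As an aside, bash has shorter ways of writing the same loop; these two should behave just like the while above (the C-style for is a bash builtin, seq is a separate little program from coreutils):

# C-style for loop, counter handled by bash itself
for (( i=0; i<1000; i++ )); do php variable.php > /dev/null; done

# or letting seq generate the sequence of numbers
for i in $(seq 1 1000); do php variable.php > /dev/null; done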

Now, let's time the execution of both scripts (variable vs constant):

echo Variable; time ( i=0; while [ $i -lt 1000 ]; do php variable.php > /dev/null; i=$(( $i + 1)); done ); echo Constant; time ( i=0; while [ $i -lt 1000 ]; do php constant.php > /dev/null; i=$(( $i + 1 )); done )
Variable

real 0m40.515s
user 0m22.217s
sys 0m13.109s
Constant

real 0m39.409s
user 0m22.557s
sys 0m13.277s


As you can see, it's almost the same (40.515s vs 39.409s). I will do some more PHP tests that will lead to a spin-off of this article... but that will arrive tomorrow, and it's not related to bash, so let's go on with another trick.

Another kind of repetitive task you could find yourself doing (especially when programming) is replacing one string pattern with another... and the substitution could span several files.

Say you need to change the string "mysql_" to "mydb_" (if you are thinking I did this to change some mysql calls into agnostic calls in a PHP project, let me say you might be right). Now, any IDE worth its salt would do it in a blink, but that doesn't mean we can't do it with bash. I know sed can change patterns on the fly, so how can we do that across several files? First, let's see how many times the pattern shows up in the files in this directory:
find ./ -type f -exec grep -Hni mysql_ {} ';' | wc -l
352


Now, let's run the substitution command:

find ./ -type f | while read filename; do sed 's/mysql_/mydb_/' $filename > tmp.php; mv tmp.php $filename; done

What did we tell bash to do there? Let's decompose it again:
find ./ -type f We are asking find to list regular files for us (so that we don't get directories in the listing of files to work on).
Then we have a pipe that connects the stdout of find with the rest of the command.
while read filename; do Instead of evaluating a test, we are asking while to keep iterating until read can't read anything more from its standard input. read reads one line at a time from standard input (in other words, each filename coming from find) into the variable filename.
sed 's/mysql_/mydb_/' $filename > tmp.php Here's the tricky part. We can't ask sed to edit the file like a normal editor would and save the changes. What we do instead is use it as a filter that reads from the file (using the variable as the file name) and writes its output to a temporary file (with a fixed name).
mv tmp.php $filename Here we overwrite the original file with the modified file.
And that's the end of the trick. Let's see if we have left any string out:

find ./ -exec grep -Hni mysql_ {} ';' | wc -l
1


Oops! Seems like we made a mistake.
As a matter of fact, we didn't make a mistake. It's just that, as written, sed only changes the first occurrence of the pattern on each line, and there was one line where the pattern appeared twice. We could go to that file and change it by hand, or simply run the one-liner once more to make the change for us (see the shorter alternative below).
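For the record, sed itself can take care of that last detail: the g flag at the end of the expression replaces every occurrence on a line, and GNU sed's -i option edits the file in place, so the whole thing can be shortened (assuming GNU sed... -i behaves a bit differently on other flavors):

# replace every occurrence on every line, editing each file in place
find ./ -type f | while read filename; do sed -i 's/mysql_/mydb_/g' "$filename"; done

# or skipping the while loop entirely and letting find call sed
find ./ -type f -exec sed -i 's/mysql_/mydb_/g' {} ';'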

Well, that's it for the first article in the series. I hope you find it useful. I won't tell you when the second article will be out, as I currently don't have the slightest clue what I'll be writing about in it.... but I know there will be more... so stay tuned!

Saturday, February 14, 2009

Conficker affected me... though I don't use Windows

Some years ago, I used to work for a pediatric hospital. It was a pretty cool environment to work in. The people in the IT department made up a family... a dysfunctional one... but a family nevertheless (I know you still do, guys. Keep it up!). A happy environment where our work really made a difference... in terms of saving lives... even if not directly.

This hospital is mostly funded with public money (either from the country's central government or the state's government), and I believe it's probably one of the most efficient and pleasant hospitals (for workers and patients) in the country.

The hospital was built on proprietary software all around: OS (servers and desktops), databases, application development, groupware, etc., etc., etc. Back when the decision had to be made about what software would be used for the hospital's opening, FLOSS wasn't that "popular" in my country, XP hadn't yet been brought to its knees by viruses over and over again, and I certainly wasn't there to make a dent in that decision (I don't think even I was using FLOSS at the time... how things change). Using FLOSS would probably have been more of a geeky, freedom-loving decision than a "business" one... even if in the long run (a veeery long run, probably) it would have paid off (and I'm sure it would have, had it been chosen).

When I landed my job at the hospital (when it was almost 5 years old), first arrived in IT and started installing my favorite distro at the time, I felt like I was in one of those old Apple advertisements where the Apple guy is surrounded by snoops trying to see what's going on with his computer. They were just drooling to see what was going on. Though some of them had heard of FLOSS or had brief experiences with it, no one had taken it on as their main platform for everyday use the way I had. I had to clear up a couple of misunderstandings about the development model, business models and so on. I guess we FLOSS supporters have to deal with that every now and then. A couple of years later, I had managed to get GNU/Linux onto a couple of servers, had trained people to use Knoppix for recovery purposes, had made a couple of amazing hacks to fix some special situations (recovering a broken HP RAID5 comes to mind) and had made sure people understood there are options available and that they didn't have to stick with whatever was coming out of Redmond. I quit almost two years ago to move to another country. And life goes on. The servers are still up and running, in case you were curious.

I hadn't thought about it for a while, but given the recent Conficker outbreak, I got to thinking about the whole thing again. Now.... there was something that bothered me A LOT.... and it still does (though I'm not working with them anymore, I still feel like part of the family). They are still developing applications in-house using proprietary frameworks tied to Windows. And here is why it bothers me: they have invested, and continue to invest, time (hence money... public money, I should add) in getting themselves tied to one proprietary platform. Every line of code they add to their already enormous code stack is another line of code that ties them even tighter to Windows. And that's sad. What's done is done... there's not much they can do about the code they have already written... but they could be switching to languages that would allow them to move to another platform later on if they so wished (doing it gradually). When I was about to leave, the head of development quit his job as well... and that would probably have been the best moment to push for multiplatform languages, but unfortunately I was quitting too, so there was no change in development frameworks.

It hurts me to see an organization I care so much about tied to that security hole disguised as an operating system that is Windows. And it hurts even more that they still aren't taking the necessary steps to get off that platform, even one small step at a time. Conficker just reopened that small wound I carry with me.