Threats to Linux: Expertise and acceptance

Filed under
Linux

Do you know what most large Solaris installations have in common? Mismanagement. What seems to happen is that the people in charge got there on the basis of large-system experience in the eighties and then forcefully apply that expertise whether or not it's appropriate to the technology. That's what happened to a lot of large business projects started on Solaris in the mid-to-late nineties, why there was a resurgence in mainframe sales as those projects were written off in 2001 and 2002, and why there's now a threat that the same thing is about to happen with Linux.

Linux installations, so far, have mainly been compromised by the expertise evolved to cope with the day-to-day emergencies of managing Microsoft's products. I think that's about to change as the big guys grab "the coming thing" and try to twist it into what they already know.

Look at Linux implementations in bigger business or government and in most cases what you see is people treating it as a one-for-one substitute for Windows - producing rackmounts stuffed with PCs, all individually licensed from Red Hat, all running one application each, and all routinely shut down for patch installation and "preventative reboot."

It's not that the people doing this are dishonest or incompetent - quite the contrary: they're honestly doing what they've been taught to do. It's just that they haven't internalized the fundamental truth that Unix isn't Windows, and so they think their expertise applies. In reality, Linux isn't as good a Windows product as Windows is, so the net effect is generally to increase cost to the employer while decreasing benefits.

The mainframers all want to virtualize or partition - despite the fact that these technologies address problems that don't exist on Unix. The Windows generation wants to use lockdowns, proxies, anti-virus software, and the rackmount approach to SMP for the same reason: these are the things they know how to do and therefore the things they will do - and so what if the problems these solutions address don't exist in Linux?

It's insanely frustrating to hold a conversation with someone who's deeply committed to this kind of technological miscegenation. Typically you're dealing with someone who looks and sounds like a decent human being you'd be happy to have as a friend or neighbour - until you hit the job spot, and what spews out are absolute certainties made up of absolute nonsense.

Recently, for example, I found myself explaining to a group of Windows people that DHCP started as Sun's BOOTP support for diskless devices, entered the Windows world as a means of temporarily assigning an IP address to a Windows 3.11 PC so it could access the internet, and became unnecessary - and therefore inappropriate - for fixed network installations when Microsoft finally adopted TCP/IP.
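The alternative for fixed installations is simple enough: each machine just gets a permanent address. A minimal sketch, assuming a Debian-style /etc/network/interfaces file (the interface name and addresses here are illustrative, not from any real installation):

```
# /etc/network/interfaces - static assignment for a fixed-role machine
# (interface name and addresses are illustrative)
auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```

Once an address is written down like this, it survives reboots, doesn't depend on a lease server being up, and can be recorded in a hosts file or DNS once rather than tracked through lease tables.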

These were bright people, honest and competent in their own way, but I would have won more converts arguing for the replacement of email by trained mice scurrying around carrying digitally inscribed slices of well-aged lunar cheese. As a group they agreed that it would be a good idea to use non-routable addresses internally, but nothing was going to change their true and certain knowledge that address allocation must be handled through DHCP.
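The "non-routable addresses" the group agreed on are the RFC 1918 private IPv4 ranges: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. A quick sketch, using Python's standard ipaddress module, of checking whether a given address falls inside them:

```python
import ipaddress

# The three RFC 1918 private (non-routable) IPv4 ranges.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls inside one of the RFC 1918 private ranges."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

print(is_rfc1918("192.168.1.10"))  # True
print(is_rfc1918("8.8.8.8"))       # False
```

Checking against the three networks explicitly is deliberate: ipaddress also offers an is_private attribute, but that flag covers loopback and link-local space as well, not just the RFC 1918 blocks.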

What's going on with them, and with their mainframe predecessors, is management by knowledge accretion - the setting in stone of managerial reflexes gained through thirty years of experience and applied, unchanged, to technology they've never seen before.

As a process, accretion works well for making sandstone, but it's not so smart for IT management - and the consequences are usually bad for the technologies involved, because the people responsible for the resulting failures blame the tool far more often than they blame themselves.

By Paul Murphy
ZDNet