But open source tends to be something of an agglomeration of programmers -- some brilliant, some boneheaded -- around a core developer or two. I think it just might be possible to influence the small group of programmers at the core of each open source project to create a culture that develops secure code. In fact, in some ways it might even be easier to do with open source projects because they, for the most part, don't face the arbitrary deadlines of the commercial world.
Open source technologies are "more secure" than software that is developed in a proprietary way, Mike Piech, general manager of Red Hat's JBoss middleware business unit, said in a meeting with journalists.
On the one hand, open source code is freely available, which means attackers can study it for vulnerabilities. But, on the other, a vast community of people works to find and fix those same flaws, maintaining the security of open source software.
Healthdirect Australia, a public company funded by the Commonwealth and state/territory governments, has used open source software to build an identity and access management (IAM) solution.
The IAM solution allows users to have one identity across all of its websites and applications. For example, users can sign in using their Facebook, LinkedIn or Gmail account.
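The single-identity pattern Healthdirect describes typically rests on federated login protocols such as OAuth 2.0 and OpenID Connect: the site redirects the user to an identity provider (Facebook, LinkedIn, Google) and receives a token proving who they are. As an illustrative sketch, not Healthdirect's actual implementation, here is how a site might construct the authorization URL that starts such a flow. The endpoint shown is Google's public OIDC authorization endpoint; the client ID, redirect URI, and state value are hypothetical placeholders.

```python
from urllib.parse import urlencode

def build_authorization_url(provider_base, client_id, redirect_uri, state):
    """Construct an OAuth 2.0 / OpenID Connect authorization request URL.

    The user's browser is sent to this URL; after they log in with the
    provider, the provider redirects back to redirect_uri with a one-time
    authorization code the site exchanges for identity tokens.
    """
    params = {
        "response_type": "code",          # authorization-code flow
        "client_id": client_id,           # issued by the provider at registration
        "redirect_uri": redirect_uri,     # must match the registered callback
        "scope": "openid email profile",  # OIDC identity claims requested
        "state": state,                   # opaque value echoed back, blocks CSRF
    }
    return f"{provider_base}?{urlencode(params)}"

url = build_authorization_url(
    "https://accounts.google.com/o/oauth2/v2/auth",  # Google's OIDC endpoint
    "example-client-id",                             # hypothetical client ID
    "https://example.org/callback",                  # hypothetical callback
    "random-state-token",                            # hypothetical CSRF token
)
print(url)
```

Because the provider vouches for the user's identity, the site never handles a password for that account, which is much of the security appeal of federated login.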
While the open source approach to software development has proven its value over and over again, the idea of opening up the code for security features to anyone with eyeballs still creates anxiety in some circles. Such worries are ill-founded, though.
One concern about opening up security code to anyone is that "anyone" will include the NSA, which has a habit of discovering vulnerabilities and sitting on them so it can exploit them at a later time. Such discoveries shouldn't be a cause for concern, argued Phil Zimmermann, creator of PGP, the encryption scheme Yahoo and Google will be using for their webmail.
The logic is understandable - how can software whose source code can easily be viewed, accessed and changed have even a modicum of security?
Open source software is safer than many believe.
But with organizations around the globe deploying open source solutions in some of the most mission-critical and security-sensitive environments, that logic is clearly missing something. According to a November 28, 2013 Financial News article, some of the world's largest banks and exchanges, including Deutsche Bank and the New York Stock Exchange, have been active in open source projects and are running their infrastructure on Linux, Apache and similar systems.
GNU community members and collaborators have discovered threatening details about a five-country government surveillance program codenamed HACIENDA. The good news? Those same hackers have already worked out a free software countermeasure to thwart the program.
According to the German technology publisher Heise, the intelligence agencies of the United States, Canada, the United Kingdom, Australia, and New Zealand have used HACIENDA to map every server in twenty-seven countries, employing a technique known as port scanning. The agencies have shared this map and use it to plan intrusions into the servers. Disturbingly, the HACIENDA system actually hijacks civilian computers to do some of its dirty work, allowing it to leech computing resources and cover its tracks.
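Port scanning, the technique HACIENDA reportedly employs at global scale, is conceptually simple: attempt a TCP connection to each port of interest and record which ones accept. The following minimal sketch illustrates the idea in Python. To stay self-contained and harmless, it scans only the local machine, against a listening socket the script opens itself.

```python
import socket

def scan_port(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds (port is open)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0

def scan_range(host, ports):
    """Probe each port in turn and return the ones that accepted a connection."""
    return [p for p in ports if scan_port(host, p)]

# Demo: open a listener on an ephemeral local port, then confirm the
# scanner detects it as open.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
listener.listen(1)
open_port = listener.getsockname()[1]

found = scan_range("127.0.0.1", [open_port])
print(found)                      # the ephemeral port shows up as open
listener.close()
```

Real scanners such as nmap use faster, stealthier variants (SYN scans, randomized timing), but the information gained is the same: a map of which services a server exposes, which is exactly what makes mass scanning useful for planning intrusions.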
On the topic of source code liability, Greer suggests that eventually software developers, including medical device development companies, will be held responsible for the trouble their software causes (or fails to prevent). I think it's fair to say that it is impossible to guarantee a totally secure system - after all, you cannot prove a negative. Given enough time, most systems can be breached. So where does this potential liability end? What if my company has sloppy coding standards or skips code reviews, or I use a third-party software library that has a vulnerability? Should hacking be considered foreseeable misuse?