
Christopher Arnold: The Momentum of Openness - My Journey From Netscape User to Mozillian Contributor

Filed under: Moz/FF, Web

Working at Mozilla has been a very educational experience over the past eight years. I have had the chance to work side by side with many engineers at a large non-profit whose business and ethics are guided by a broad vision to protect the health of the web ecosystem. How did I go from sitting in front of a computer screen in 1995 to working behind the scenes of the web now? Below is my story of how my path wended from being a Netscape user to working at Mozilla, the heir to the Netscape legacy. It's amazing to think that a product I used 25 years ago ended up altering the course of my life so dramatically thereafter. But the world and the web were much different back then. And it was a course shared by thousands of people with similar stories, coming together for a cause they believed in.

The Winding Way West

Like many people my age, I followed the emergence of the World Wide Web in the 1990s with great fascination. My father was an engineer at International Business Machines when the Personal Computer movement was just getting started. His advice to me during college was to focus on the things you don't know or understand rather than the wagon-wheel ruts of the well-trodden path. He suggested I study many things, not just the things I felt most comfortable pursuing. He said, "You go to college so that you have interesting things to think about when you're waiting at the bus stop." He never made an effort to steer me in the direction of engineering. In 1989 he bought me a Macintosh personal computer and said, "Pay attention to this hypertext trend. Networked documents are becoming an important new innovation." This was long before the World Wide Web became part of the societal zeitgeist. His advice was prophetic for me.

[...]

The Mozilla Project grew inside AOL for a long while, alongside the AOL and Netscape browsers. But at some point the executive team believed that this needed to be streamlined. Mitchell Baker, an AOL attorney, Brendan Eich, the inventor of JavaScript, and an influential venture capitalist named Mitch Kapor came up with a suggestion that the Mozilla Project should be spun out of AOL. Doing this would allow all of the enterprises that had an interest in working on open source versions of the project to foster the effort, while the Netscape/AOL product team could continue to rely on any code innovations for their own software within the corporation.

A Mozilla in the wild would need resources if it were to survive. First, it would need all the patents in the Netscape portfolio to avoid hostile legal challenges from outside. Second, there would need to be a cash injection to keep the lights on as Mozilla tried to come up with the basis for its business operations. Third, it would need protection from takeover bids that might come from AOL competitors. To achieve this, they decided Mozilla should be a non-profit foundation with the patent and trademark grants from AOL. Engineers who wanted to continue to foster the AOL/Netscape vision of an open web browser, specifically for the developer ecosystem, could transfer to working for Mozilla.

Mozilla left Netscape's crowdsourced web index (called DMOZ, or the Open Directory) with AOL. DMOZ went on to be the seed for the PageRank index of Google when Google decided to split out from powering the Yahoo search engine and seek its own independent course. It's interesting to note that AOL played a major role in helping Google become an independent success as well, which is well documented in the book The Search by John Battelle.

Once the Mozilla Foundation was established (along with a $2 million grant from AOL), it sought donations from other corporations that were to become dependent on the project. The team split out Netscape Communicator's email component as Thunderbird, a stand-alone open source email application, and the Phoenix browser was released to the public as "Firefox" because of a trademark issue with another US company over use of the term "Phoenix" in association with software.

Google had by this time broken off from its dependence on Yahoo as a source of web traffic for its nascent advertising business. It offered to pay the Mozilla Foundation for search traffic that the browser could route to Google preferentially over Yahoo or the other search engines of the day. Taking "revenue share" from advertising was not something that the non-profit Mozilla Foundation was particularly well set up to do. So they needed to structure a corporation that could ingest these revenues and re-invest them into a conventional software business that could operate under the contractual structures of partnerships with other public companies. The Mozilla Corporation could function much like any typical California company with business partnerships, without requiring its partners to structure their payments as grants to a non-profit.

[...]

Working in the open was part of the original strategy AOL had when they open sourced Netscape. If they could get other companies to build together with them, the collaborative work of contributors outside the AOL payroll would directly benefit the browser team inside AOL. Bugzilla was structured as a hierarchy of nodes, where a node owner could prioritize external contributions to the code base and commit them to be included in the derivative build, which would be scheduled for release as a new update package every few months.

Module Owners, as they were called, would evaluate candidate fixes or new features against their own list of items to triage, in terms of product feature requests or complaints from their own team. The main team that shipped each version was called Release Engineering. They cared less about the individual features being worked on than about the overall function of the broader software package. So they would bundle up a version of the then-current software into what they called a Nightly build, as builds were assembled each day from the new bug fixes that had been upleveled and committed to the software tree. Release Engineering would watch for conflicts between software patches and annotate them in Bugzilla so that the various module owners could see the conflicts their code commits were causing in other portions of the code base.


More in Tux Machines

Programming: Git and Qt

  • Understand the new GitLab Kubernetes Agent

    GitLab's current Kubernetes integrations were introduced more than three years ago. Their primary goal was to allow a simple setup of clusters and provide a smooth deployment experience to our users. These integrations served us well in the past years, but at the same time their weaknesses were limiting for some important and crucial use cases.

  • GitLab Introduces the GitLab Kubernetes Agent

    The GitLab Kubernetes Agent (GKA), released in GitLab 13.4, provides a permanent communication channel between GitLab and the cluster. According to the GitLab blog, it is designed to provide a secure solution that allows cluster operators to restrict GitLab's rights in the cluster and does not require opening up the cluster to the Internet.

  • Git Protocol v2 Available at Launchpad

    After a few weeks of development and testing, we are proud to finally announce that Git protocol v2 is available at Launchpad! But what are the improvements in the protocol itself, and how can you benefit from them? The Git v2 protocol was released a while ago, in May 2018, with the intent of simplifying the git-over-HTTP transfer protocol, allowing extensibility of Git capabilities, and reducing network usage in some operations. For the end user, the clearest benefit is the bandwidth reduction: in the previous version of the protocol, when you run "git pull origin master", for example, even if there are no new commits to fetch from the remote origin, the Git server would first "advertise" to the client all available refs (branches and tags). In big repositories with hundreds or thousands of refs, this simple handshake could consume a lot of bandwidth and time to communicate data that would potentially be discarded by the client afterwards. In the v2 protocol, this waste is no longer present: the client can now filter which refs it wants to know about before the server starts advertising them.

  • Qt Desktop Days 7-11 September

    We are happy to let you know that the very first edition of Qt Desktop Days 2020 was a great success! Having pulled together the event at very short notice, we were delighted at the enthusiastic response from contributors and attendees alike.

  • Full Stack Tracing Part 1

    Full stack tracing is a tool that should be part of every software engineer's toolkit. It's the best way to investigate and solve certain classes of hard problems in optimization and debugging. Because of the power and capability it gives the developer, we'll be writing a series of blogs about it: when to use it, how to get it set up, how to create traces, and how to interpret results. Our goal is to get you capable enough to use full stack tracing to solve your tough problems too. Firstly, what is it? Full stack tracing is tracing of the full software stack, from the operating system to the application. By collecting profiling information (timing, process, caller, API, and other info) from the kernel, drivers, software frameworks, application, and JavaScript environments, you're able to see exactly how the individual components of a system are interacting. That opens up areas of investigation that are impossible to reach with standard application profilers, kernel debug messages, or even strategically inserted printf() statements. One way to think of full stack tracing is as a developer's MRI machine that lets you look into a running system, without disturbing it, to determine what is happening inside. (And unlike other low-level traces that we've written about before, full stack tracing provides a simpler way to view activity up and down the entire software stack.)
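    The excerpt doesn't name a specific tracer, so purely as an illustration, here is a minimal sketch of how application-level trace events might be emitted on Linux with LTTng's user-space tracer (lttng-ust); the file name, build command, and workload are placeholder assumptions, not anything from the article:

        /* trace_demo.c: hedged sketch of user-space trace events with lttng-ust.
         * Build (assuming lttng-ust is installed):
         *   gcc trace_demo.c -o trace_demo -llttng-ust
         * Record with an lttng session that also enables kernel events so the
         * application markers appear alongside scheduler/IO activity. */
        #include <stdio.h>
        #include <lttng/tracef.h>   /* tracef(): printf-style trace event helper */

        static long fib(long n)
        {
            return n < 2 ? n : fib(n - 1) + fib(n - 2);
        }

        int main(void)
        {
            for (int i = 20; i <= 30; i++) {
                tracef("fib start n=%d", i);               /* timestamped event */
                long result = fib(i);
                tracef("fib end n=%d result=%ld", i, result);
                printf("fib(%d) = %ld\n", i, result);
            }
            return 0;
        }

    Because the user-space and kernel events end up in one shared trace timeline, markers like these can be lined up against scheduler, driver, and I/O activity, which is the kind of up-and-down-the-stack view the article describes.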

Dell XPS 13 Developer Edition Gets 11th-Gen Intel Refresh, Ubuntu 20.04 LTS

The revised model doesn't buck any conventions. It's a refreshed version of the XPS 13 model released earlier this year, albeit offering the latest 11th-generation Intel processors, Intel Iris Xe graphics, Thunderbolt 4 ports, and up to 32GB of 4267MHz LPDDR4x RAM. These are also the first Dell portables to carry Intel "Evo" certification. What's Intel Evo? Think of it as an assurance. Evo-certified notebooks have 11th-gen Intel chips, can wake from sleep in under one second, offer at least 9 hours of battery life (with a Full HD screen), and support fast charging (up to 4 hours from a single 30-minute charge); if they can't meet any of those criteria, they don't get certified.

Vulkan 1.2.155 Released and AMDVLK 2020.Q3.6 Vulkan Driver Brings Several Fixes

  • Vulkan 1.2.155 Released With EXT_shader_image_atomic_int64

    Vulkan 1.2.155 is out this morning as a small weekly update over last week's spec revision, which brought the Vulkan Portability Extension 1.0 for easing software-based Vulkan implementations running atop other graphics APIs. Vulkan 1.2.155 is quite a tiny release after that big one: there aren't even any documentation corrections or clarifications this time, just a sole new extension.

  • AMDVLK 2020.Q3.6 Vulkan Driver Brings Several Fixes

    AMD driver developers today released AMDVLK 2020.Q3.6 as the latest open-source snapshot of their official Vulkan graphics driver. The primary new feature of this AMDVLK update is VK_EXT_robustness2, which mandates stricter handling of out-of-bounds reads and writes: it requires tighter bounds checking, out-of-bounds writes must be discarded, and out-of-bounds reads must return zero. This extension debuted back in April as part of Vulkan 1.2.139.
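    As a rough idea of what this looks like from an application's point of view (not anything AMDVLK-specific), the following hedged C sketch queries the robustness2 features and chains them into logical-device creation; the physical device and queue family index are assumed to have been selected earlier:

        /* Hedged sketch: enabling VK_EXT_robustness2 at device creation.
         * Assumes the extension was reported by vkEnumerateDeviceExtensionProperties(). */
        #include <vulkan/vulkan.h>

        VkDevice create_robust_device(VkPhysicalDevice physical_device,
                                      uint32_t queue_family_index)
        {
            /* Ask the driver which robustness2 features it actually supports. */
            VkPhysicalDeviceRobustness2FeaturesEXT robustness2 = {
                .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ROBUSTNESS_2_FEATURES_EXT,
            };
            VkPhysicalDeviceFeatures2 features2 = {
                .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_FEATURES_2,
                .pNext = &robustness2,
            };
            vkGetPhysicalDeviceFeatures2(physical_device, &features2);

            /* robustBufferAccess2/robustImageAccess2 give the stricter guarantees
             * described above: out-of-bounds writes discarded, out-of-bounds reads
             * returning zero. */
            const char *extensions[] = { "VK_EXT_robustness2" };
            float priority = 1.0f;
            VkDeviceQueueCreateInfo queue_info = {
                .sType = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
                .queueFamilyIndex = queue_family_index,
                .queueCount = 1,
                .pQueuePriorities = &priority,
            };
            VkDeviceCreateInfo device_info = {
                .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
                .pNext = &features2,   /* feature chain carries the robustness2 request */
                .queueCreateInfoCount = 1,
                .pQueueCreateInfos = &queue_info,
                .enabledExtensionCount = 1,
                .ppEnabledExtensionNames = extensions,
            };

            VkDevice device = VK_NULL_HANDLE;
            vkCreateDevice(physical_device, &device_info, NULL, &device);
            return device;
        }

    With the feature structures chained into VkDeviceCreateInfo, a conforming driver then has to honor the out-of-bounds behavior for whatever robustness2 features it reported as supported.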

9 Best Free and Open Source RAW Processing Tools

When a digital camera captures an image, image sensors in the camera record the light from millions of sensing areas. The camera's digital circuitry converts the generated analog voltage signal into a digital representation. Many cameras allow these images to be stored in a raw image file. They are similar to digital negatives, as they have the same role as negatives in film photography. RAW files are not directly usable, but have all the necessary information to create an image. RAW files usually offer higher color depth and higher dynamic range, and preserve most of the information of the image compared with the final image format. The downside of RAW files is that they take up far more storage space. Dynamic range in photography describes the ratio between the maximum and minimum measurable light intensities (white and black, respectively). As implied by the name, RAW files have not been processed. By taking pictures in raw format the photographer is not committing to the conversion software that is built into the firmware of the camera. Instead, the individual can store the raw files and use computer software to generate better JPEG files, and also benefit from future improvements in image software. There is a good range of open source Linux software that processes RAW files. Here are our recommendations. Hopefully, there will be something of interest here for anyone who has a passion for digital photography.
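To make the dynamic-range ratio mentioned above concrete, here is a tiny, purely illustrative C calculation (the sensor values are hypothetical, not from any particular camera) converting a max/min intensity ratio into photographic stops:

    /* dynamic_range.c: illustrative only; the readings below are made up.
     * The number of "stops" is log2(max/min), since each stop doubles the light.
     * Build: gcc dynamic_range.c -o dynamic_range -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double min_signal = 4.0;      /* darkest usable reading (near the noise floor) */
        double max_signal = 16384.0;  /* brightest reading before clipping (14-bit RAW) */

        double ratio = max_signal / min_signal;   /* 4096:1 */
        double stops = log2(ratio);               /* 12 stops */

        printf("contrast ratio %.0f:1 is about %.1f stops of dynamic range\n",
               ratio, stops);
        return 0;
    }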