Drupal

Mark Morton: Why we chose an open source website

Filed under
OSS
Drupal

Platforms like WordPress and Drupal, which are maintained by a community of users, can be a cost-effective and flexible option for charities, writes the digital media manager at Epilepsy Action.

Read more

Also: Sydney developer brings open source e-commerce to WordPress

Australian government Drupal-based CMS goes live

Filed under
OSS
Drupal

GovCMS, the Australian government's new cloud-based web content management system, has gone live on Australia.gov.au, the federal government's chief technology officer, John Sheridan, said at a media briefing in Sydney on Tuesday. The site, which receives more than 2 million visitors each month, is the first to migrate to the platform.

The Department of Finance has developed govCMS, an Australian government-specific distribution of the Drupal open-source content management platform, in conjunction with Acquia — a company founded by Drupal's creator, Dries Buytaert, to provide commercial-grade support for the platform.

Read more

Drupal Core - Highly Critical - Public Service announcement - PSA-2014-003

Filed under
Drupal
Security

This Public Service Announcement is a follow up to SA-CORE-2014-005 - Drupal core - SQL injection. This is not an announcement of a new vulnerability in Drupal.

Automated attacks began compromising Drupal 7 websites that were not patched or updated to Drupal 7.32 within hours of the announcement of SA-CORE-2014-005 - Drupal core - SQL injection. You should proceed under the assumption that every Drupal 7 website was compromised unless updated or patched before Oct 15th, 11pm UTC, that is 7 hours after the announcement.
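
SA-CORE-2014-005 was a SQL injection in Drupal 7's database abstraction layer: when array-valued query parameters were expanded into named placeholders, the array keys coming from the request were not sanitised, so an attacker-controlled key could end up spliced into the SQL text. The fix shipped in Drupal 7.32. The sketch below is not Drupal's PHP code; it is a minimal Python/sqlite3 illustration of the same bug class, contrasting naive placeholder expansion with properly bound values.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, status TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'active')")

    def unsafe_in_clause(names):
        # DANGEROUS: the request-supplied dictionary keys are spliced into the
        # SQL text while building named placeholders, so a key containing SQL
        # becomes part of the statement itself.
        placeholders = ", ".join(":name_%s" % key for key in names)
        return "SELECT * FROM users WHERE name IN (%s)" % placeholders

    def safe_in_clause(names):
        # Safe: only fixed "?" placeholders reach the SQL text; every
        # request-supplied value travels separately as a bound parameter.
        values = list(names.values())
        sql = "SELECT * FROM users WHERE name IN (%s)" % ", ".join("?" for _ in values)
        return sql, values

    # Attacker-controlled keys, e.g. from a crafted HTTP POST body.
    malicious = {"0) OR 1=1 --": "alice"}
    print(unsafe_in_clause(malicious))          # the injected fragment appears in the SQL
    print(conn.execute(*safe_in_clause(malicious)).fetchall())  # [('alice', 'active')]

Updating to 7.32 (or applying the one-line patch) closes the hole, but as the announcement above stresses, sites that were not patched within hours of the disclosure should still be treated as compromised.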

Read more

Jeffrey McGuire From Acquia Explains Drupal 8, the GPL, and Much More

Filed under
Interviews
Drupal

Jeffrey McGuire

Tux Machines has run on Drupal for nearly a decade (the site itself is older than that), and we recently had the pleasure of speaking with Jeffrey A. "jam" McGuire, Open Source Evangelist at Acquia, the key company behind Drupal (co-founded by Drupal's creator, Dries Buytaert). The questions and answers below are relevant to many whose websites depend on Drupal.

1) What is the expected delivery date for Drupal 8 (to developers) and what will be a good point for Drupal 6 and 7 sites to advance to it?

 

Drupal 8.0.0 beta 1 came out on October 1, 2014, during DrupalCon Amsterdam. It's still a little early for designers to port their themes, for good documentation to be written, or for translators to finalise the Drupal interface in their language; some things are still too fluid. For coders and site builders, however, it's a great time to familiarise yourself with the new system and start porting your contributed modules. Read this post by Drupal Project Lead Dries Buytaert; it describes more thoroughly who and what the beta releases are and aren't good for: "Betas are good testing targets for developers and site builders who are comfortable reporting (and where possible, fixing) their own bugs, and who are prepared to rebuild their test sites from scratch when necessary. Beta releases are not recommended for non-technical users, nor for production websites."

 

With a full Release Candidate or 8.0.0 release on the cards for some time in 2015, now is the perfect time to start planning and preparing your sites for the upgrade to Drupal 8. Prolific Drupal contributor Dave Reid gave an excellent session at DrupalCon Amsterdam, “Future-proof your Drupal 7 Site”, in which he outlines a number of well-established best practices in Drupal 7 that will help you have a smooth migration when it is time - as well as a number of deprecated modules and practices to avoid.

 

2) What is the importance of maintaining API and module compatibility in future versions of Drupal and how does Acquia balance that with innovation that may necessitate new/alternative hooks and functions?

 

The Drupal community, which is not maintained or directed by Acquia or any company, has always chosen innovation over backward compatibility. Modules and APIs of one version have never had to be compatible with other versions. The new point-release system that will be used from Drupal 8.0.0 onwards - along with new thinking among core contributors and the broader community - may change this in future. There has been discussion, for example, of having APIs valid over two releases, guaranteeing that a Drupal 8 module would still work in Drupal 9 and that a Drupal 9 module would work in Drupal 10. Another possibility is that this all may be obviated in the future as moves toward broad intercompatibility in PHP lead to the creation of PHP libraries with Drupal implementations rather than purely Drupal modules.

 

3) Which Free/libre software project do you consider to be the biggest competitor of Drupal?

 

The "big three" FOSS CMSs – Drupal, WordPress, and Joomla! – seem to have settled into roughly defined niches. There is no hard and fast rule to this, but WordPress runs many smaller blogs and simpler sites; Joomla! projects fall into the small to medium range; and Drupal projects are generally medium to large to huge and complex. Many tech people with vested interests in one camp or another may see the other projects as "frenemies" and compete with them when bidding for clients, but the overall climate between the various PHP and open source projects is friendly and open. Drupal is one of the largest free/libre projects out there and doesn't compete with other major projects like Apache, Linux, Gnome, KDE, or MySQL. Drupal runs most commonly on the LAMP stack and couldn't exist or work at all without these supporting free and open source technologies.

 

NB – I use the term “open source” as synonymous shorthand for “FOSS, Free and Open Source Software, and/or Free/libre software”.

 

4) Which program -- proprietary or Free/libre software -- is deemed the biggest growth opportunity for Drupal?

 

Frankly, all things PHP. Drupal’s biggest growth opportunity at present is its role as an innovator and “meta-project” in the current “PHP Renaissance”. While fragmented at times in the past, the broader PHP community is now rallying around common goals and standards that allow for extensive compatibility and interoperability between projects. For the upcoming Drupal 8 release, the project has adopted object-oriented coding, several components from the Symfony2 framework, a more up-to-date minimum version of PHP (5.4 as of October 2014), and an extensive selection of external libraries.

 

On the one hand, Drupal being at the heart of the action in PHP-Land allows it and its community of innovators to make a more direct impact and spread its influence. On the other hand, it is now also able to attract even more developers from a variety of backgrounds to use and further develop Drupal. A Symfony developer (who has had a client website running on Drupal 8 since summer 2014) told me that looking under the hood in Drupal 8, “felt very familiar, like looking at a dialect of Symfony code.”

 


5) To what degree did Drupal succeed owing to the fact that Drupal and all contributed files are licensed under the GNU GPL (version 2 or 3)?

 

"Building on the shoulders of giants" is a common thread in free and open source software. The GPL licenses clearly promote a culture of mutual sharing. This certainly applies to Drupal, where I can count on huge advantages: benefitting from more than twelve years of development, 100k+ active users, running something like 2% of the Web for thousands of businesses, and millions of hours of coding and best practices by tens of thousands of active developers. Our code being GPL-licensed and collected in a central repository on Drupal.org has allowed us to build upon the strengths of each other's work in a Darwinian environment ("bad code dies or gets fixed" - Jeff Eaton) where the best code rises to the top and becomes even better thanks to the attention of thousands of site owners and developers. The same repository has contributed to a reputation economy in which bad actors and dubious or dangerous code have little chance of survival.

 

The GPL 2 is business friendly in that the license specifically allows for commercial activity and has been court tested. As a result, there is very little legal ambiguity in adopting GPL-licensed code. It also makes clear cases for when code needs to be shared as open source and when it doesn't (allowing for sites to use Drupal but still have "proprietary" code). The so-called “Web Services Loophole” caused some controversy and discussion, but also opened the way to SaaS products being built on free/libre GPL code. Drupal Project Lead Dries Buytaert explained this back in 2006 (read the full post here):

 

“The General Public License 2 (GPL 2), mandates that all modifications also be distributed under the GPL. But when you are providing a service through the web using GPL'ed software like Drupal, you are not actually distributing the software. You are providing access to the software. Thus, a way to make money with Drupal is to sell access to a web service built on top of Drupal. This is commonly referred to as the web services loophole.”

 

Business models remain challenging in a GPL world; nothing is stopping me from selling you GPL code, but nothing is stopping you from passing it on to anyone else either. App stores, for example, are next to impossible to realise under these conditions. Most Drupal businesses are focused on value-added services like site building, auditing and consulting of various kinds, hosting, and so on, with a few creating SaaS or PaaS offerings of one kind or another.

 


 

6) What role do companies that build, maintain and support Drupal sites play in Acquia's growth and in Drupal's growth?

 

Acquia was the first company to offer SLA-based commercial support for Drupal (a Service Level Agreement essentially says, “In return for your subscription, Acquia promises to respond to your problems within a certain time and in a certain manner”). The specifics of response time and action vary according to the level of subscription, but these allowed a new category of customer to adopt Drupal: The Enterprise.

 

Enterprise adoption of Drupal – think Whitehouse.gov, Warner Music, NBC Universal, Johnson & Johnson – resulted in increased awareness and therefore even further increased adoption (and improvement) of the platform over time. Everyone who delivers a successful Drupal project for happy clients improves Drupal for everyone else involved. The more innovative projects there are, the more innovation flows back into our codebase. The more happy customers there are, the more likely their peers are to adopt Drupal, too. Finally, the open source advantage also comes into play: it behooves Drupal service providers to give the best possible service and deliver the highest-quality sites and results. If they don't, there is no vendor lock-in; being open source at scale means you can find another qualified Drupal business to work with if it becomes necessary. Acquia and the whole, large Drupal vendor ecosystem simultaneously compete, cooperatively grow the project (in code and in happy customer advocates), and act as each other's safety net and guarantors.

 


 

7) How does Acquia manage and coordinate the disclosure of security vulnerabilities, such as the one disclosed on October 15th?

Acquia as an organisation is an active, contributing member of the Drupal community and adheres strictly to the Drupal project's security practices and guidelines, including its procedure for reporting security issues. Many of Acquia's technical employees are themselves active Drupal contributors; as of October 2014, ten expert Acquians also belong to the Drupal Security Team. Acquia also works closely with other service providers, whether competitors or partners, in the best interests of all of us who use and work with Drupal. The blog post "Shields Up!" by Moshe Weitzman explains how Acquia, in cooperation with the Drupal Security Team and some other Drupal hosting companies, dealt with the recent "Drupalgeddon" security vulnerability.

First open source enterprise resource planning app for Drupal unveiled

Filed under
OSS
Drupal

ERPAL for Service Providers is the world's first open source ERP built on Drupal, a popular content management system.

Read more

Is Your Small Business Website Like a Bad First Date?

Filed under
OSS
Drupal
Web

Open source platforms like Drupal and WordPress provide a backend framework that small businesses can use to build and customize their websites while managing key functions like registration, system administration, layout and RSS. Users can also create their own modules to enable new functions or change the website's look and feel.

Smaller companies can use open source content management systems (CMS) to reduce or eliminate the need for coding while delivering rich media online, including text, graphics, video and audio. They can use open source assets to create responsive design sites that optimize content for viewing across multiple device types, including smartphones, tablets and laptops, while eliminating the need to scroll from side to side.

With open source tools available to help small businesses establish an online presence with robust front and backend functions quickly and affordably, there's never been a better time to focus on content excellence. And the best way to do that is to concentrate on the customer. Engage with your target customers and find out what they value the most. Use that information to develop your content, and speak directly to your customers' needs.

Read more

Acquia to deliver government's cloud-hosted, open source CMS

Filed under
Drupal

Boston-headquartered Drupal services company Acquia will deliver the federal government’s govCMS project.

The project to create a standard content management system for federal government agencies was announced in May.

Read more

3 Drupal education distros reviewed

Filed under
Drupal

Drupal is a powerful and flexible open source content management system that powers a large number of sites on the Internet. Drupal's flexibility means that sites built with Drupal can vary widely in form and function. In most cases, this flexibility is a benefit, but it can sometimes also be overwhelming. Growing a Drupal-powered website from Drupal Core to a finished, customized site, by selecting from a wide variety of modules and themes, can be a complicated and time-consuming process.

Read more

Cloud, open source power TransLink's Web presence

Filed under
OSS
Drupal

It was an aging bespoke application that drove TransLink to seek a new content management system, but it was the strength of the community surrounding the open source project that helped the Queensland public transport agency choose Drupal.

Prior to the switch to Drupal, which began last year, the former TransLink site was partly based on static files and partly on a "home-grown CMS that managed a lot of our custom content such as service disruption and events, so that we could do a little bit of distributed authoring within the organisation," said Natalie Gorring, manager, online products and services, at TransLink.

Read more

Introduction to 4 Open Source CMS

Filed under
OSS
Drupal

A content management system (CMS) is a computer application that allows publishing, editing, modifying, organizing, and deleting content, as well as maintenance, from a central interface. CMSs are often used to run websites containing blogs, news, and shopping. Many corporate and marketing websites use CMSs. CMSs typically aim to avoid the need for hand coding, but may support it for specific elements or entire pages.

Read more


More in Tux Machines

Red Hat's "DevOps" Hype Again and Analysis of Last Night's Financial Results

OSS Leftovers

  • Deutsche Telekom and Aricent Create Open Source Edge Software Framework
    Deutsche Telekom and Aricent today announced the creation of an Open Source, Low Latency Edge Compute Platform available to operators, to enable them to develop and launch 5G mobile applications and services faster. The cost-effective Edge platform is built for software-defined data centers (SDDC) and is decentralized, to accelerate the deployment of ultra-low latency applications. The joint solution will include a software framework with key capabilities for developers, delivered as a platform-as-a-service (PaaS) and will incorporate cloud-native Multi-access edge computing (MEC) technologies.
  • A Deeper Look at Sigma Prime's Lighthouse: An Open-Source Ethereum 2.0 Client
  • Notable moments in Firefox for Android UA string history
  • Dweb: Creating Decentralized Organizations with Aragon
    With Aragon, developers can create new apps, such as voting mechanisms, that use smart contracts to leverage decentralized governance and allow peers to control resources like funds, membership, and code repos. Aragon is built on Ethereum, which is a blockchain for smart contracts. Smart contracts are software that is executed in a trust-less and transparent way, without having to rely on a third-party server or any single point of failure. Aragon is at the intersection of social, app platform, and blockchain.
  • LLVM 7.0.0 released
  • Parabola GNU/Linux-libre: Boot problems with Linux-libre 4.18 on older CPUs
    Due to a known bug in upstream Linux 4.18, users with older multi-core x86 CPUs (Core 2 Duo and earlier?) may not correctly boot up with linux-libre 4.18 when using the default clocksource.
  • Visual Schematic Diffs in KiCAD Help Find Changes
    In the high(er)-end world of EDA tools like OrCAD and Altium there is a tight integration between the version control system and the design tools, with the VCS sold as a product to improve the design workflow. But KiCAD doesn't try to force a version control system on the user, so it doesn't really make sense to bake VCS-related tools in directly. You can manage changes in KiCAD projects with git, but as [jean-noël] notes, reading Git's textual description of changed X/Y coordinates and paths to library files is much more useful for a computer than for a human. It basically sucks to use. What you really need is a diff tool that can show the user what changed between two versions instead of describing it. And that's what plotgitsch provides.

LWN's Latest (Today Outside Paywall) Articles About the Kernel, Linux

  • Toward better handling of hardware vulnerabilities
    From the kernel development community's point of view, hardware vulnerabilities are not much different from the software variety: either way, there is a bug that must be fixed in software. But hardware vendors tend to take a different view of things. This divergence has been reflected in the response to vulnerabilities like Meltdown and Spectre, which was seen by many as being severely mismanaged. A recent discussion on the Kernel Summit discussion list has shed some more light on how things went wrong, and what the development community would like to see happen when the next hardware vulnerability comes around.

    The definitive story of the response to Meltdown and Spectre has not yet been written, but a fair amount of information has shown up in bits and pieces. Intel was first notified of the problem in July 2017, but didn't get around to telling anybody in the Linux community about it until the end of October. When that disclosure happened, Intel did not allow the community to work together to fix it; instead each distributor (or other vendor) was mostly left on its own and not allowed to talk to the others. Only at the end of December, right before the disclosure (and the year-end holidays), were members of the community allowed to talk to each other.

    The results of this approach were many, and few were good. The developers charged with responding to these problems were isolated and under heavy stress for two months; they still have not been adequately thanked for the effort they put in. Many important stakeholders, including distributions like Debian and the "tier-two" cloud providers, were not informed at all prior to the general disclosure and found themselves scrambling. Different distributors shipped different fixes, many of which had to be massively revised before entry into the mainline kernel. When the dust settled, there was a lot of anger left simmering in its wake.
  • Writing network flow dissectors in BPF
    Network packet headers contain a great deal of information, but the kernel often only needs a subset of that information to be able to perform filtering or associate any given packet with a flow. The piece of code that follows the different layers of packet encapsulation to find the important data is called a flow dissector. In current Linux kernels, the flow dissector is written in C. A patch set has been proposed recently to implement it in BPF with the clear goal of improving security, flexibility, and maybe even performance. (A minimal sketch of what a flow dissector extracts appears after this list.)
  • Coscheduling: simultaneous scheduling in control groups
    The kernel's CPU scheduler must, as its primary task, determine which process should be executing in each of a system's processors at any given time. Making an optimal decision involves juggling a number of factors, including the priority (and scheduling classes) of the runnable processes, NUMA locality, cache locality, latency minimization, control-group policies, power management, overall fairness, and more. One might think that throwing another variable into the mix — and a complex one at that — would not be something anybody would want to attempt. The recent coscheduling patch set from Jan Schönherr does exactly that, though, by introducing the concept of processes that should be run simultaneously.

    The core idea behind coscheduling is the marking of one or more control groups as containing processes that should be run together. If one process in a coscheduled group is running on a specific set of CPUs (more on that below), only processes from that group will be allowed to run on those CPUs. This rule holds even to the point of forcing some of the CPUs to go idle if the given control group lacks runnable processes, regardless of whether processes outside the group are runnable.

    Why might one want to do such a thing? Schönherr lists four motivations for this work, the first of which is virtualization. That may indeed be the primary motivation, given that Schönherr is posting from an Amazon address, and Amazon is rumored to be running a virtualized workload or two. A virtual machine usually contains multiple processes that interact with each other; these machines will run more efficiently (and with lower latencies) if those processes can run simultaneously. Coscheduling would ensure that all of a virtual machine's processes are run together, maximizing locality and minimizing the latencies of the interactions between them.
  • Machine learning and stable kernels
    There are ways to get fixes into the stable kernel trees, but they require humans to identify which patches should go there. Sasha Levin and Julia Lawall have taken a different approach: use machine learning to distinguish patches that fix bugs from others. That way, all bug-fix patches could potentially make their way into the stable kernels. Levin and Lawall gave a talk describing their work at the 2018 Open Source Summit North America in Vancouver, Canada. (A toy sketch of this kind of commit classification appears after this list.)

    Levin began with a quick introduction to the stable tree and how patches get into it. When a developer fixes a bug in a patch they can add a "stable tag" to the commit or send a mail to the stable mailing list; Greg Kroah-Hartman will then pick up the fix, evaluate it, and add it to the stable tree. But that means that the stable tree is only getting the fixes that are pointed out to the stable maintainers. No one has time to check all of the commits to the kernel for bug fixes but, in an ideal world, all of the bug fixes would go into the stable kernels. Missing out on some fixes means that the stable trees will have more security vulnerabilities because the fixes often close those holes—even if the fixer doesn't realize it.
  • Trying to get STACKLEAK into the kernel
    The STACKLEAK kernel security feature has been in the works for quite some time now, but has not, as yet, made its way into the mainline. That is not for lack of trying, as Alexander Popov has posted 15 separate versions of the patch set since May 2017. He described STACKLEAK and its tortuous path toward the mainline in a talk [YouTube video] at the 2018 Linux Security Summit.

    STACKLEAK is "an awesome security feature" that was originally developed by The PaX Team as part of the PaX/grsecurity patches. The last public version of the patch set was released in April 2017 for the 4.9 kernel. Popov set himself on the goal of getting STACKLEAK into the kernel shortly after that; he thanked both his employer (Positive Technologies) and his family for giving him working and free time to push STACKLEAK. The first step was to extract STACKLEAK from the more than 200K lines of code in the grsecurity/PaX patch set. He then "carefully learned" about the patch and what it does "bit by bit". He followed the usual path: post the patch, get feedback, update the patch based on the feedback, and then post it again. He has posted 15 versions and "it is still in progress", he said.
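
For the "Writing network flow dissectors in BPF" item above: a flow dissector's job is easier to see in code than in prose. It walks the encapsulation layers of a raw frame and keeps only the handful of fields needed to identify a flow. The real dissector is kernel C (or, under the proposed patch set, BPF) and handles far more protocols and encapsulations; the following is only a minimal Python sketch of the idea, with an invented example flow in the comments.

    import struct

    def dissect_ipv4_flow(frame: bytes):
        """Return (src_ip, dst_ip, protocol, src_port, dst_port) for a plain
        Ethernet + IPv4 + TCP/UDP frame, or None for anything else."""
        if len(frame) < 34:                       # Ethernet (14) + minimal IPv4 (20)
            return None
        ethertype = struct.unpack_from("!H", frame, 12)[0]
        if ethertype != 0x0800:                   # not IPv4 (no VLAN/IPv6 handling here)
            return None
        ihl = (frame[14] & 0x0F) * 4              # IPv4 header length in bytes
        proto = frame[23]                         # protocol field of the IPv4 header
        if proto not in (6, 17):                  # keep only TCP and UDP
            return None
        l4_offset = 14 + ihl
        if len(frame) < l4_offset + 4:            # need both 16-bit port fields
            return None
        src_port, dst_port = struct.unpack_from("!HH", frame, l4_offset)
        src_ip = ".".join(str(octet) for octet in frame[26:30])
        dst_ip = ".".join(str(octet) for octet in frame[30:34])
        return src_ip, dst_ip, proto, src_port, dst_port

    # Example (hypothetical frame): a TCP packet might dissect to something like
    # ('192.0.2.1', '198.51.100.7', 6, 443, 51334)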
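
For the "Machine learning and stable kernels" item above, here is a toy sketch of the classification idea: train a text classifier on commit messages and estimate whether a new commit looks like a bug fix. The messages, labels, and model choice are invented for demonstration; the actual Levin/Lawall work draws on much richer features than the commit message alone.

    # Toy commit-message classifier; not the model used by Levin and Lawall.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    messages = [
        "fix null pointer dereference in error path",
        "fix use-after-free when device is removed",
        "add support for new sensor model",
        "update documentation for build options",
        "fix off-by-one in ring buffer index calculation",
        "refactor helper functions, no functional change",
    ]
    is_bug_fix = [1, 1, 0, 0, 1, 0]   # invented labels for the demo

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(messages, is_bug_fix)

    candidate = "fix memory leak in teardown path"
    # Estimated probability that the candidate commit is a bug fix.
    print(model.predict_proba([candidate])[0][1])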

PostgreSQL 11: something for everyone

PostgreSQL 11 had its third beta release on August 9; a fourth beta (or possibly a release candidate) is scheduled for mid-September. While the final release of the relational database-management system (currently slated for late September) will have something new for many users, its development cycle was notable for being a period when the community hit its stride in two strategic areas: partitioning and parallelism.

Partitioning and parallelism are touchstones for major relational database systems. Proprietary database vendors manage to extract a premium from a minority of users by upselling features in these areas. While PostgreSQL has had some of these "high-tier" items for many years (e.g., CREATE INDEX CONCURRENTLY, advanced replication functionality), the upcoming release expands the number considerably. I may be biased as a PostgreSQL major contributor and committer, but it seems to me that the belief that community-run database system projects are not competitive with their proprietary cousins when it comes to scaling enterprise workloads has become just about untenable.

Read more
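
To make the partitioning side of the release concrete, here is a short sketch, driven from Python via psycopg2, of the declarative partitioning that PostgreSQL 11 builds on, including the DEFAULT partition support added in this release. The table, data, and connection string are placeholders invented for illustration.

    import psycopg2

    # Placeholder connection string; adjust for your environment.
    conn = psycopg2.connect("dbname=test user=postgres")
    cur = conn.cursor()

    # Declarative range partitioning (available since PostgreSQL 10).
    cur.execute("""
        CREATE TABLE measurements (
            city_id  int  NOT NULL,
            logdate  date NOT NULL,
            peaktemp int
        ) PARTITION BY RANGE (logdate)
    """)
    cur.execute("""
        CREATE TABLE measurements_2018 PARTITION OF measurements
            FOR VALUES FROM ('2018-01-01') TO ('2019-01-01')
    """)
    # New in PostgreSQL 11: a DEFAULT partition that catches rows matching
    # no other partition.
    cur.execute("CREATE TABLE measurements_other PARTITION OF measurements DEFAULT")

    cur.execute("INSERT INTO measurements VALUES (1, '2018-06-01', 31), (2, '2017-12-31', 5)")
    # tableoid::regclass reveals which partition each row was routed to.
    cur.execute("SELECT tableoid::regclass, count(*) FROM measurements GROUP BY 1")
    print(cur.fetchall())

    conn.commit()
    cur.close()
    conn.close()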