Development

GNU Privacy Guard (GnuPG), GNU Radio, and BPF Compiler Collection

Filed under
Development
GNU
  • Future directions for PGP

    Back in October, LWN reported on a talk about the state of the GNU Privacy Guard (GnuPG) project, an asymmetric public-key encryption and signing tool that had been almost abandoned by its lead developer due to lack of resources before receiving a significant infusion of funding and community attention. GnuPG 2 has brought about a number of changes and improvements but, at the same time, several efforts are underway to significantly change the way GnuPG and OpenPGP are used. This article will look at the current state of GnuPG and the OpenPGP web of trust, as compared to new implementations of the OpenPGP standard and other trust systems.

    GnuPG produces encrypted files, signed messages, and other types of artifacts that comply with a common standard called OpenPGP, described in RFC 4880. OpenPGP is derived from the Pretty Good Privacy (PGP) commercial software project (since acquired by Symantec) and today is almost synonymous with the GnuPG implementation, but the possibility exists for independent implementations of the standard that interoperate with each other. Unfortunately, RFC 4880 was released in 2007 and a new standard has not been published since then. In the meantime, several extensions have been added to GnuPG without broader standardization, and an IETF working group formed to update RFC 4880 was shut down in 2017 due to lack of interest.

    GnuPG 2 is a significantly heavier-weight software package than previous GnuPG versions. A major example of this change in architecture is GnuPG 2's complete reliance on the use of the separate gpg-agent daemon for private-key operations. While isolating private-key access within its own process enables improvements to security and functionality, it also adds complexity.

    In the wake of the Heartbleed vulnerability in OpenSSL, a great deal of scrutiny has been directed toward the maintainability of complex and long-lived open-source projects. GnuPG does not rely on OpenSSL for its cryptographic implementation; instead, it uses its own independent library, Libgcrypt. This raises the question of whether GnuPG's cryptographic implementation is susceptible to the same kinds of problems that OpenSSL has had; indeed, the concern may be even larger in GnuPG's case.

  • Foundations of Amateur Radio - Episode 137

    I've been playing with a wonderful piece of software called GNU Radio, more on that in a moment.

  • An introduction to the BPF Compiler Collection

    In the previous article of this series, I discussed how to use eBPF to safely run code supplied by user space inside of the kernel. Yet one of eBPF's biggest challenges for newcomers is that writing programs requires compiling and linking to the eBPF library from the kernel source. Kernel developers might always have a copy of the kernel source within reach, but that's not so for engineers working on production or customer machines. Addressing this limitation is one of the reasons that the BPF Compiler Collection was created. The project consists of a toolchain for writing, compiling, and loading eBPF programs, along with example programs and battle-hardened tools for debugging and diagnosing performance issues.

    Since its release in April 2015, many developers have worked on BCC, and the 113 contributors have produced an impressive collection of over 100 examples and ready-to-use tracing tools. For example, scripts that use User Statically-Defined Tracing (USDT) probes (a mechanism from DTrace to place tracepoints in user-space code) are provided for tracing garbage collection events, method calls and system calls, and thread creation and destruction in high-level languages. Many popular applications, particularly databases, also have USDT probes that can be enabled with configuration switches like --enable-dtrace. These probes are inserted into user applications, as the name implies, statically at compile-time. I'll be dedicating an entire LWN article to covering USDT probes in the near future.
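
    As a taste of that toolchain, here is a minimal BCC-style tracing sketch in Python. It is only an illustration, assuming the bcc package and kernel headers are installed; the probed function and message are arbitrary choices, not anything from the article.

    ```python
    # Minimal BCC sketch: compile a small eBPF program at runtime and
    # print its trace output. Requires root and the bcc Python package;
    # the traced symbol and message are illustrative assumptions.
    from bcc import BPF

    prog = r"""
    int kprobe__sys_clone(void *ctx) {
        bpf_trace_printk("clone() called\n");
        return 0;
    }
    """

    b = BPF(text=prog)   # BCC compiles the C snippet via LLVM at runtime,
                         # so no kernel source tree is needed on the machine
    b.trace_print()      # stream the kernel trace pipe until interrupted
    ```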

Linux Kernel Development

Filed under
Development
Linux
  • New Sound Drivers Coming In Linux 4.16 Kernel

    With longtime SUSE developer Takashi Iwai going on holiday for the next few weeks, he has already sent in the sound driver feature updates targeting the upcoming Linux 4.16 kernel cycle.

    The sound subsystem in Linux 4.16 sees continued changes to the ASoC code, clean-ups to the existing drivers, and a number of new drivers.

  • Varlink: a protocol for IPC

    One of the motivations behind projects like kdbus and bus1, both of which have fallen short of mainline inclusion, is to have an interprocess communication (IPC) mechanism available early in the boot process. The D-Bus IPC mechanism has a daemon that cannot be started until filesystems are mounted and the like, but what if the early boot process wants to perform IPC? A new project, varlink, was recently announced; it aims to provide IPC from early boot onward, though it does not really address the longtime D-Bus performance complaints that also served as motivation for kdbus and bus1.

    The announcement came from Harald Hoyer, but he credited Kay Sievers and Lars Karlitski with much of the work. At its core, varlink is simply a JSON-based protocol that can be used to exchange messages over any connection-oriented transport. No kernel "special sauce" (such as kdbus or bus1) is needed to support it, as TCP or Unix-domain sockets provide the necessary functionality. The messages can be used as a kind of remote procedure call (RPC) using an API defined in an interface file.
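
    To make the wire format concrete, here is a rough sketch of issuing a varlink call from Python over a Unix-domain socket, assuming the protocol's NUL-terminated JSON framing. The socket path is a hypothetical placeholder; GetInfo is varlink's standard introspection method.

    ```python
    # A sketch of a varlink client call: each message is a JSON object
    # terminated by a NUL byte, sent over any connection-oriented
    # transport. The socket path below is a hypothetical placeholder.
    import json
    import socket

    def varlink_call(path, method, parameters=None):
        msg = {"method": method}
        if parameters is not None:
            msg["parameters"] = parameters
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(path)
            s.sendall(json.dumps(msg).encode() + b"\0")  # NUL-terminated JSON
            reply = b""
            while not reply.endswith(b"\0"):
                chunk = s.recv(4096)
                if not chunk:          # connection closed early
                    break
                reply += chunk
        return json.loads(reply.rstrip(b"\0").decode())

    # Every varlink service implements org.varlink.service.GetInfo,
    # which describes the service and the interfaces it offers.
    print(varlink_call("/run/org.example.service", "org.varlink.service.GetInfo"))
    ```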

  • Statistics for the 4.15 kernel

    The 4.15 kernel is likely to require a relatively long development cycle as a result of the post-rc5 merge of the kernel page-table isolation patches. That said, it should be in something close to its final form, modulo some inevitable bug fixes. The development statistics for this kernel release look fairly normal, but they do reveal an unexpectedly busy cycle overall.

    This development cycle was supposed to be relatively calm after the anticipated rush to get work into the 4.14 long-term-support release. But, while 4.14 ended up with 13,452 non-merge changesets at release, 4.15-rc6 already has 14,226, making it one of the busiest releases in the kernel project's history. Only 4.9 (16,214 changesets) and 4.12 (14,570) brought in more work, and 4.15 may exceed 4.12 by the time it is finished. So far, 1,707 developers have contributed to this kernel; they added 725,000 lines of code while removing 407,000, for a net growth of 318,000 lines of code.

  • A new kernel polling interface

    Polling a set of file descriptors to see which ones can perform I/O without blocking is a useful thing to do — so useful that the kernel provides three different system calls (select(), poll(), and epoll_wait() — plus some variants) to perform it. But sometimes three is not enough; there is now a proposal circulating for a fourth kernel polling interface. As is usually the case, the motivation for this change is performance.

    On January 4, Christoph Hellwig posted a new polling API based on the asynchronous I/O (AIO) mechanism. This may come as a surprise to some, since AIO is not the most loved of kernel interfaces and it tends not to get a lot of attention. AIO allows for the submission of I/O operations without waiting for their completion; that waiting can be done at some other time if need be. The kernel has had AIO support since the 2.5 days, but it has always been somewhat incomplete. Direct file I/O (the original use case) works well, as does network I/O. Many other types of I/O are not supported for asynchronous use, though; attempts to use the AIO interface with them will yield synchronous behavior. In a sense, polling is a natural addition to AIO; the whole point of polling is usually to avoid waiting for operations to complete.
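
    For contrast with the proposed AIO-based API, here is what the existing poll() interface looks like from user space; a minimal sketch, with Python's select module wrapping the underlying system call (the address and timeout are arbitrary choices).

    ```python
    # Minimal example of the classic poll() interface: register file
    # descriptors of interest, then block until one becomes ready.
    import select
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 8080))
    server.listen()

    poller = select.poll()
    poller.register(server.fileno(), select.POLLIN)  # watch for readability

    for fd, events in poller.poll(1000):             # block up to 1000 ms
        if fd == server.fileno() and events & select.POLLIN:
            conn, addr = server.accept()             # ready, so won't block
            print("connection from", addr)
            conn.close()
    ```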

IBM code grandmaster: what Java does next

Filed under
Development

Reports of Java’s death have been greatly exaggerated — said, well, pretty much every Java engineer that there is.

The Java language and platform may have been (in some people’s view) somewhat unceremoniously shunted into a side alley by the self-proclaimed aggressive corporate acquisition strategists (their words, not ours) at Oracle… but Java still enjoys widespread adoption and, in some strains, growing use and development.


Programming/Development: Git 2.16, Node.js, Testing/Bug Hunting

Filed under
Development
  • Git v2.16.0

    The latest feature release Git v2.16.0 is now available at the usual places. It comprises 509 non-merge commits since v2.15.0, contributed by 91 people, 26 of whom are new faces.

  • Git 2.16 Released

    Git maintainer Junio Hamano has released version 2.16.0 of this distributed revision control system.

  • Announcing The Node.js Application Showcase

    The stats around Node.js are pretty staggering. There were 25 million downloads of Node.js in 2017, with over one million of them happening on a single day. And these stats are just the users. On the community side, the numbers are equally exceptional.

    What explains this immense popularity? What we hear over and over is that, because Node.js is JavaScript, anyone who knows JS can apply that knowledge to build powerful apps — every kind of app. Node.js empowers everyone from hobbyists to the largest enterprise teams to bring their dreams to life faster than ever before.

  • Google AutoML Cloud: Now Build Machine Learning Models Without Coding Experience

    Google has been offering pre-trained neural networks for a long time. To lower the barrier to entry and make AI available to developers and businesses everywhere, Google has now introduced Cloud AutoML.

    With the help of Cloud AutoML, businesses will be able to build machine learning models with the help of a drag-and-drop interface. In other words, if your company doesn’t have expert machine-learning programmers, Google is here to fulfill your needs.

  • Re-imagining beta testing in the ever-changing world of automation

    Fundamentally, beta testing is a test of a product performed by real users in the real environment. There are a number of names for this type of testing—user acceptance testing (UAT), customer acceptance testing (CAT), customer validation and field testing (common in Europe)—but the basic components are more or less the same. All involve user testing of the front-end user interface (UI) and the user experience (UX) to find and resolve potential issues. Testing happens across iterations in the software development lifecycle (SDLC), from when an idea transforms into a design, across the development phases, to after unit and integration testing.

Programming/Development: HHVM 3.24, 'DevOps', RcppMsgPack

Filed under
Development
  • HHVM 3.24

    HHVM 3.24 is released! This release contains new features, bug fixes, performance improvements, and supporting work for future improvements. Packages have been published in the usual places.

  • HHVM 3.24 Released, The Final Release Supporting PHP5

    The Facebook crew responsible for the HHVM project as a speedy Hack/PHP language implementation is out with its 3.24 release.

    HHVM 3.24 is important as it's the project's last release focusing on PHP5 compatibility. Moving forward, PHP5 compatibility will no longer be a focus and components of it will likely be dropped. With PHP7 now much faster than PHP5 and in an all-around better state, Facebook's developers are focusing on their own Hack language rather than serving as just an alternative PHP implementation.

  • How to get into DevOps

    I've observed a sharp uptick of developers and systems administrators interested in "getting into DevOps" within the past year or so. This pattern makes sense: In an age in which a single developer can spin up a globally distributed infrastructure for an application with a few dollars and a few API calls, the gap between development and systems administration is narrower than ever. Although I've seen plenty of blog posts and articles about cool DevOps tools and ideas to think about, I've seen less content offering pointers and suggestions for people looking to get into this work.

  • RcppMsgPack 0.2.1

    An update of RcppMsgPack got onto CRAN today. It contains a number of enhancements Travers had been working on, as well as one thing CRAN asked us to do in making a suggested package optional.

    MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages, like JSON, but it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves. RcppMsgPack brings both the C++ headers of MessagePack and clever code (in both R and C++) that Travers wrote to access MsgPack-encoded objects directly from R.
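
    As a quick illustration of those size claims, here is a small sketch using the Python msgpack package (the record contents are an arbitrary example):

    ```python
    # Compare MessagePack and JSON encodings of the same small record.
    # Requires the msgpack package; the data is an arbitrary example.
    import json
    import msgpack

    record = {"id": 7, "name": "sensor", "ok": True}

    packed = msgpack.packb(record)        # binary MessagePack encoding
    text = json.dumps(record).encode()    # JSON encoding for comparison

    print(len(packed), len(text))         # MessagePack comes out smaller
    assert msgpack.unpackb(packed, raw=False) == record  # lossless round trip
    ```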

Debugging and Compiling

Filed under
Development
GNU
  • How debuggers really work

    A debugger is one of those pieces of software that most, if not all, developers use at least once during their software engineering career, but how many of you know how they actually work? During my talk at linux.conf.au 2018 in Sydney, I will be talking about writing a debugger from scratch... in Rust!

    In this article, the terms debugger and tracer are used interchangeably. "Tracee" refers to the process being traced by the tracer.

  • GCC 8.0 Moves On To Only Regression/Documentation Fixes

    The GCC 8 compiler is on to its last stage of development, accepting only regression and documentation fixes.

Programming: Continuous Integration, JavaScript Frameworks, Visualizing Molecules with Python

Filed under
Development
  • Librsvg gets Continuous Integration

    One nice thing about gitlab.gnome.org is that we can now have Continuous Integration (CI) enabled for projects there. After every commit, the CI machinery can build the project, run the tests, and tell you if something goes wrong.

    Carlos Soriano posted a "tips of the week" mail to desktop-devel-list, and a link to how Nautilus implements CI in Gitlab. It turns out that it's reasonably easy to set up: you just create a .gitlab-ci.yml file in the toplevel of your project, and that has the configuration for what to run on every commit.
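
    As a rough sketch of what that file can look like (the image and build commands below are placeholders, not Librsvg's actual configuration):

    ```yaml
    # .gitlab-ci.yml: a minimal sketch; the image and commands are
    # placeholders for whatever the project actually builds with.
    image: fedora:latest

    build-and-test:
      script:
        - ./autogen.sh
        - make
        - make check
    ```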

  • The Brutal Lifecycle of JavaScript Frameworks

    Using the Stack Overflow Trends tool and some of our internal traffic data, we decided to take a look at some of the more prominent UI frameworks: Angular, React, Vue.js, Backbone, Knockout, and Ember.

  • Visualizing Molecules with Python

    The PyMOL Wiki also hosts a script library, and it's a good place to look before you start down the road of creating your own script, as someone else may have run into the same issue and may have found a solution you can use. If nothing else, you may be able to find a script that could serve as a starting point for your own particular problem.

    When you're done working with PyMOL, there are many different ways to end the session. If there is work you are likely to pick up again and continue with, click File→Save Session to save all of the work you just did, including all of the transitions applied to the view. If the changes you made were actually structural, rather than just superficial changes to the way the molecule looked, you can save those structural changes by selecting File→Save Molecule. This allows you to write out the new molecule to a chemical file format, such as a PDB file.

    If you need output for publications or presentations, a few different options are available. Clicking File→Save Image As allows you to select from saving a regular image file in PNG format or writing out data in a POVRay or VRML 3D file format. If you are doing a fancier presentation, you even can export a movie of your molecule by clicking File→Save Movie As. This lets you generate an MPEG movie file that can be used either on a web-based journal or within a slide deck for a presentation.
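
    The same operations are available from PyMOL's built-in Python API, which is handy for scripting them; a minimal sketch, assuming an input file named 1ubq.pdb (a placeholder):

    ```python
    # Driving PyMOL's save operations from its Python API; run inside
    # PyMOL. "1ubq.pdb" is a placeholder input structure.
    from pymol import cmd

    cmd.load("1ubq.pdb")                   # read a structure; object "1ubq"
    cmd.show("cartoon")                    # a superficial change to the view
    cmd.save("session.pse")                # File->Save Session equivalent
    cmd.save("molecule.pdb", "1ubq")       # File->Save Molecule equivalent
    cmd.png("figure.png", dpi=300, ray=1)  # File->Save Image As, ray-traced
    ```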

GCC 8.0 vs. LLVM Clang 6.0 On AMD EPYC

Filed under
Development
Graphics/Benchmarks

At the beginning of January I posted some early LLVM Clang 6.0 benchmarks on AMD EPYC, while this article compares the tentative Clang 6.0 performance to that of the in-development GCC 8.0. Both compilers are now in their feature freeze, and this testing looked at the performance of generated binaries both for generic x86_64 and when tuned for AMD's Zen "znver1" microarchitecture.


Programming/Development: JavaScript, Go, Qt, and GitHub

Filed under
Development
  • Exploring Node.js with Mark Hinkle, Executive Director of the Node.js Foundation

    Even though JavaScript has been around for more than 20 years, it’s becoming a first-class citizen for developing enterprise applications. There is a huge developer community behind this technology.

    What makes things even more interesting is that, with Node.js, JavaScript can run on the server, so developers can write applications that run end-to-end in JavaScript. Node.js is well suited for service applications because server applications are increasingly becoming single-function, event-driven microservices.

  • As Go 2.0 Nears, AWS Launches Developer Preview of Go SDK 2.0
  • PackageKit-Qt Updated With Qt5 Port, Offline Updates & Performance Improvement

    The PackageKit-Qt project that provides Qt bindings for PackageKit has simultaneously released versions v0.10 and v1.0.

  • PackageKitQt 1.0.0 and 0.10.0 released!

    PackageKitQt is a Qt library for interfacing with PackageKit.

    It’s been a while since I did a proper PackageKitQt release, mostly because I’m focusing on other projects, but PackageKit’s API itself isn’t evolving as fast as it was, so updating things is quite easy.

  • GitHub Knows

    I was reflecting the other day how useful it would be if GitHub, in addition to the lists it has now like Trending and Explore, could also provide me a better view into which projects a) need help; and, more importantly, b) can accept that help when it arrives. Lots of people responded, and I don't think I'm alone in wanting better ways to find things in GitHub.

    Lots of GitHub users might not care about this, since you work on what you work on already, and finding even more work to do is the last thing on your mind. For me, my interest stems from the fact that I constantly need to find good projects, bugs, and communities for undergrads wanting to learn how to do open source, since this is what I teach. Doing it well is an unsolved problem, since what works for one set of students automatically disqualifies the next set: you can't repeat your success, since closed bugs (hopefully!) don't re-open.

    And because I write about this stuff, I hear from lots of students that I don't teach, students from all over the world who, like my own, are struggling to find a way in, a foothold, a path to get started. It's a hard problem, made harder by the size of the group we're discussing. GitHub's published numbers from 2017 indicate that there are over 500K students using its services, and those are just the ones who have self-identified as such--I'm sure it's much higher.

OSS and Programming Leftovers

Filed under
Development
OSS
  • Telecommunications Infrastructure Project looks to apply open source technologies

    The Telecommunications Infrastructure Project is looking to apply open source technologies to next generation fixed and mobile networks.

    The Telecom Infra Project (TIP), conceived by Facebook to light a fire under the traditional telecommunications infrastructure market, continues to expand into new areas.

    Launched at the 2016 Mobile World Congress in Barcelona, the highly disruptive project takes an open ecosystem approach to foster network innovation and improve the cost efficiencies of both equipment suppliers and network operators. “We know from our experience with the Open Compute Project that the best way to accelerate the pace of innovation is for companies to collaborate and work in the open. We helped to found TIP with the same goal - bringing different parties together to strengthen and improve efficiencies in the telecom industry,” according to Aaron Bernstein, Director of Connectivity Ecosystem Programmes at Facebook.

  • Introducing Ad Inspector: Our open-source ad inspection tool
  • AI and machine learning bias has dangerous implications

    Algorithms are everywhere in our world, and so is bias. From social media news feeds to streaming service recommendations to online shopping, computer algorithms—specifically, machine learning algorithms—have permeated our day-to-day world. As for bias, we need only examine the 2016 American election to understand how deeply—both implicitly and explicitly—it permeates our society as well.

    What’s often overlooked, however, is the intersection between these two: bias in computer algorithms themselves.

    Contrary to what many of us might think, technology is not objective. AI algorithms and their decision-making processes are directly shaped by those who build them—what code they write, what data they use to “train” the machine learning models, and how they stress-test the models after they’re finished. This means that the programmers’ values, biases, and human flaws are reflected in the software. If I fed an image-recognition algorithm the faces of only white researchers in my lab, for instance, it wouldn’t recognize non-white faces as human. Such a conclusion isn’t the result of a “stupid” or “unsophisticated” AI, but of a bias in training data: a lack of diverse faces. This has dangerous consequences.

  • Pineapple Fund Supports Conservancy

    Software Freedom Conservancy thanks the Pineapple Fund and its anonymous backer for its recent donation of over 18 Bitcoin (approximately $250,000). The Pineapple Fund is run by an early Bitcoin adopter to give about $86 million worth of Bitcoin to various charities. Shortly after the fund’s announcement earlier this month, volunteers and Conservancy staff members applied for its support. That application was granted this week.

  • Top Programming Languages That Largest Companies Are Hiring Developers For In 2018

    Learning a programming language involves some important decisions on the part of a professional. Gone are the days when mastering a single popular programming language granted job security. Highlighting the limitations of relying on a single programming language, the coding school Coding Dojo has shared the results of an interesting study.

  • Rust in 2018

    I think 2017 was a great year for Rust. Near the beginning of the year, after custom derive and a bunch of things stabilized, I had a strong feeling that Rust was “complete”. Not really “finished”, there’s still tons of stuff to improve, but this was the first time stable Rust was the language I wanted it to be, and was something I could recommend for most kinds of work without reservations.

    I think this is a good signal to wind down the frightening pace of new features Rust has been getting. And that happened! We had the impl period, which took some time to focus on getting things done before proposing new things. And Rust is feeling more polished than ever.
