
Your rPath to Conary

Filed under: Linux, Reviews
-s

Development Release: rPath Linux 0.51 (Alpha) was announced by DistroWatch yesterday, and I was a bit curious. After my first glance, I was a bit taken aback. rPath doesn't seem to be targeting desktop users. Although it ships with KDE and Gnome, they aren't the most up-to-date versions, nor are they dressed up or enhanced in any distinguishable manner. In my humble opinion, rPath is probably a developer's platform, ...a Conary developer's platform.

Information about rPath, as well as its ancestor Specifix, is fairly sketchy. The rPath website is a single page listing a job opening and a link to the Conary wiki. DistroWatch, however, states: "rPath is a distribution based around the new Conary package management, created by ex-Red Hat engineers, to both showcase the abilities Conary provides and to provide a starting point for customisation." The Conary wiki is pretty thin itself, although I was able to glean a little information from it.

It was no big surprise to see (a modified) Anaconda as the installer, and (as usual) I found it fairly straightforward and easy to complete. It asks some basic configuration questions such as network setup, firewall choice, and bootloader configuration. I must say I loved the package selection portion. One is given one choice: everything. Could it be any easier? The install takes a little while, and once complete, it reboots without setting up other hardware or user accounts. Upon reboot it starts X as root to complete some other basic configuration in a graphical environment using rPath's Setup Agent. These settings include the date and timezone, monitor and resolution, and of course user account(s). Upon clicking Finish, it restarts X and presents gdm for login.

KDE and Gnome are about your only choices for a desktop environment/window manager. rPath includes KDE-3.4.1 and Gnome-2.10.2. The X server is xorg-6.8.2, gcc is 3.3.3, and the kernel is 2.6.12.5. The kernel source isn't installed from the ISO, but one can install it with Conary.


Conary is rPath's package management system. As Conary appears to be the focus of rPath, I spent quite a bit of time trying to figure it out. I began my quest quite lost and confused and ended it a little less lost and confused. According to the site, "Conary is a distributed software management system for Linux distributions. It replaces traditional package management solutions (such as RPM and dpkg) with one designed to enable loose collaboration across the Internet." Simply put, it's the package manager. It appears able to obtain packages from different repositories, using binaries if available or sources if necessary, and it stores all version information in a database in order to track changes, from the source branch all the way down to the local versions installed on a given system, so that dependencies are met without conflicts.
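As a rough illustration of that tracking (a sketch on my part: the --full-versions flag and the label shown are my assumptions, so the exact syntax may be off), querying a package can reveal the full branch it was built from rather than just a bare version number:

$ conary q conary --full-versions
conary=/conary.rpath.com@rpl:devel/0.62.2-1-1

The version isn't just "0.62.2"; it carries the repository and branch it came from.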

According to the wiki, the first thing one should do after installing rPath 0.51 is update Conary to version 0.62.2. Termed the Conversion, the instructions say to issue the following commands:

$ su -
# conary update conary
# conary q conary
$ su
# sed -i 's/lockTroves/pinTroves/g' /etc/conaryrc

They continue with instructions in case an AssertionError is encountered. I didn't experience such an error and proceeded with reading the wiki, --help, and the man pages.

Conary at the command line appears very apt-like. In fact, the Conary GUI is identical in appearance to Synaptic. The GUI front-end didn't seem to function very well here, but the command-line version seems to work as intended. Also included is the utility "yuck", a wrapper script that calls conary --upgradeall.
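A full system upgrade, then, is just (a minimal sketch; yuck's behavior here is taken straight from the description above):

$ su -
# yuck

which should be equivalent to running conary --upgradeall directly.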


Fortunately, running Conary is much easier than trying to understand what it is or how it works. Some simple commands include:

conary q <packagename> reveals whether the given package is installed
conary rq <packagename> lists the newest version available in the repositories
conary update <packagename> installs or updates the requested package
conary erase <packagename> uninstalls it

There are many, many interesting options to play with beyond those basics, but most seem to be geared toward package builders. Some of these include emerge, which builds the "recipe"; commit, which stores the changes; and showcs, which shows the differences in a changeset. It really looks sophisticated and, yes, I admit, a little complicated at the more in-depth level.

So, to install the kernel-source, one simply types: conary update kernel-source
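Putting the basics together, a typical session might look like this (a sketch using the commands described above; output omitted since I'm not certain of its exact format):

$ conary rq kernel-source
$ su -
# conary update kernel-source
# conary q kernel-source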

The developers might be onto a superior package management system, but is it catching on? We know rPath obviously uses it, and I understand Foresight Linux also utilizes it. As for rPath itself, it was a stable, functional development environment. It doesn't seem to be trying to be the latest or greatest, nor the prettiest. If you are interested in developing for Conary, or wish to use a system built on that package management system, then rPath might be the distro for you. The full package list as tested is HERE.


Conary

I'm pretty hazy on this too, so I might be completely off, but here is how I understand this:

While to a casual user Conary looks pretty much like apt-get or synaptic, it does do something more advanced under the hood. It is intended to make it easy to put together a system using a number of separate and *independent* repositories, each making its own changes and mini-releases. Conary tracks not only what you installed on your system, but also where it came from. This extends to any dependencies it uses, and it becomes quite a powerful concept.
For example, Foresight, which also uses Conary, is actually created largely from packages pulled directly from rPath repos; I would say as much as 75% of packages are not modified at all. If you install Foresight and later run updates on it, you'll see a number of packages are updated from rPath repos. Any packages the Foresight guys developed themselves come from their own repositories, naturally. But any packages that do exist in rPath but were modified in Foresight are overlaid over the 'standard' versions, with Conary keeping track of what comes from where, and what depends on what (in that context). This is pretty cool for the Foresight guys, who can make their own distro while at the same time taking a lot from the base, rPath.

Think of it this way: if you used Fedora, you probably tried at some stage to add various third-party repos to your yum config: Livna, Freshrpms etc... and quite possibly you discovered in the process that some of them can conflict with others... it can become a mess. Well, this is exactly the situation Conary addresses.
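To make that concrete (and this is entirely hypothetical on my part, since I'm fuzzy on the exact syntax): as I understand it, a Conary version string records the whole path a package took, so a Foresight package shadowed from rPath might query back as something like:

$ conary q gaim --full-versions
gaim=/conary.rpath.com@rpl:devel//foresight.rpath.org@fl:devel/1.5.0-1-1

where the double slash marks the shadow: the package originated on rPath's label and was then modified on Foresight's.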

... but again, I could be completely wrong.

re: Conary

That's pretty much the way I understand it as well, in that Conary can keep track of any and all changes to the branches of a given source, from the main branch all the way to minor revisions on public mirrors, as well as on your local machine (which is especially good for developers). An end user can choose to install any version listed or just go with the latest. Like other package managers, it all depends on the repositories set up, though. Good explanation! Thanks for your contribution. That's wonderful.

-s

----
You talk the talk, but do you waddle the waddle?

Conary

You are correct that rPath is a development release for extensive testing of the Conary system. rPath is from Specifix, the creator of Conary. Other distros like Foresight have taken it and used it for their own needs. I find the Conary system interesting and quite functional, but I have not made a decision about its need and potential in the community.

My 2cents,
Capnkirby

re: Conary

I think it's a wonderful concept as well, but I think it'd be rather complicated to set up and most developers are already set in their ways. And when you factor in how few distros use that method... I don't think it's something that will catch on right away.

----
You talk the talk, but do you waddle the waddle?

