
April 2008

Is Linux now a slave to corporate masters?

Filed under
Linux

linuxjournal.com: Does it matter who pays the salaries of Linux kernel developers? If so, how much, and in what ways? Guess which question has been getting the most attention?

Why Linux continues to languish

Filed under
Linux

blogbeebe.blogspot: There's an interesting comparison on CNN Money between the Apple MacBook Air, the Everex Cloudbook, and the Sony VAIO Tz 298N. Cost-wise, the Sony topped the list at nearly $4,000, while the Everex nailed the low end at $400.

Review: Hackett and Bankwell Issue #1

Filed under
Linux

newlinuxuser.com: I was lucky to receive my copy of Hackett and Bankwell Issue 1 this week. I saw that there’s a huge penguin on the cover. Yay! Hooray for penguins! It’s an interesting way to learn Linux, especially Ubuntu.

latest ubuntu posts

Filed under
Ubuntu
  • Ubuntu Hardy Heron Release
  • Ubuntu 8.04, “Hardy Heron”: My personal review
  • I'm loving 8.04
  • more Krazy Krashes from Krappy, untested system
  • No, Ubuntu is Open Source.
  • Installing Ubuntu 8.04 Hardy Heron on the HP Mini-Note
  • Trying out Ubuntu
  • Hardy Heron on a Toshiba Portege 2010 - how to change the video settings
  • Improved Video with Hardy Heron

Does open source programming make you a criminal?

Filed under
OSS
Humor

blogs.zdnet.com: The New York Times is working on a story saying open source programming makes you a criminal. "It just makes sense," he told me during the interview. "If blogging kills, then programming must lead to criminality."

9 features I wish Ubuntu had: or why I still prefer PCLinuxOS

Filed under
PCLOS

alternativenayk.wordpress: I’ve been using Ubuntu 8.04 for about four days now and I must admit that I still prefer PCLinuxOS 2007 as my favourite entry-level Linux distribution. I’ve compiled a list of 9 features I wish Ubuntu had, which may help me change my mind.

How to Make People Love Linux

Filed under
Linux

linuxjournal.com: There are two kinds of Linux people in the world: those who will help people fix their Windows spyware problems, and those who will not. I land squarely in the former camp, and I think it's important for us all to consider doing the same.

Psystar Open Computer unboxing and hands-on

Filed under
Mac

engadget.com: Engadget NYC might have gotten to play with Apple's latest and greatest iMac yesterday, but we keep it dirty in the Chi -- yep, we've got the first Psystar Open Computer shipped out for review.

Why Microsoft will dump their anti-Linux rhetoric

Filed under
Microsoft
  • Why Microsoft will dump their anti-Linux rhetoric
  • Stop hating Microsoft?
  • Microsoft mulls proxy fight for Yahoo
  • Mozilla warns of Flash and Silverlight 'agenda'
  • Microsoft Gives Backdoor to Law Enforcement -- Well, Not Really

Open source Java added to Linux distros

Filed under
Software

vnunet.com: Sun Microsystems, Canonical and Red Hat have announced the inclusion of OpenJDK-based implementations in Fedora 9 and Ubuntu 8.04 Long Term Support Server and Desktop editions.

More in Tux Machines

Pico-ITX board based on i.MX8M ships with Linux BSP

F&S has launched a $407-and-up “armStone MX8M” Pico-ITX SBC that runs Linux on an i.MX8M with up to 8GB LPDDR4 and 64GB eMMC, plus GbE, WiFi/BT, 5x USB, MIPI-CSI, DVI, and a mini-PCIe slot. F&S Elektronik Systeme originally announced the NXP i.MX8M-based armStone MX8M Pico-ITX board in early 2018 with an intention to begin sampling in Q2 of that year. The i.MX8M-based SBC has finally arrived, selling for 360 Euros ($407) in a kit that includes cables, a Yocto/Buildroot BSP, and full access to documentation. The key new addition since the 2018 announcement is a mini-PCIe slot and SIM card slot. Instead of supplying 4x USB 2.0 host ports, you get 2x USB 3.0 and 2x USB 2.0, and the micro-USB OTG port has been updated from 2.0 to 3.0. Read more

Programming: Rust, Perl, Compilers, IBM/Red Hat and More

  • GStreamer Rust bindings 0.16.0 release

    A new version of the GStreamer Rust bindings, 0.16.0, was released. As usual, this release follows the latest gtk-rs release. This is the first version that includes optional support for the new GStreamer 1.18 APIs. As GStreamer 1.18 has not been released yet, these new APIs might still change. The minimum supported version of the bindings is still GStreamer 1.8, and the targeted GStreamer API version can be selected by applications via feature flags. Apart from this, the new version mostly features API cleanup and the addition of a few missing APIs. The focus of this release was to make usage of GStreamer from Rust as convenient and complete as possible.
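    As a rough illustration of the feature-flag mechanism mentioned above, a dependency declaration might look like the following. This is a sketch based on the gstreamer-rs convention of `v1_XX` version features; treat the exact crate and feature names as assumptions, and check the crate documentation for your version.

```toml
# Hypothetical Cargo.toml fragment: pin the bindings to the 0.16 series
# and opt in to the (still unstable) GStreamer 1.18 APIs via a feature flag.
[dependencies]
gstreamer = { version = "0.16", features = ["v1_18"] }
```

    Without such a feature flag, the bindings target the minimum supported API version (GStreamer 1.8), so code using newer APIs will not compile.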

  • Set up Vim as your Rust IDE

    Text editors and integrated development environment (IDE) tools make writing Rust code easier and quicker. There are many editors to choose from, but I believe the Vim editor is a great fit for a Rust IDE. In this article, I'll explain how to set up Vim for Rust application development.
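    The article's full setup isn't reproduced here, but a minimal sketch of the idea, assuming the vim-plug plugin manager and the official rust.vim plugin, might look like:

```vim
" Minimal ~/.vimrc sketch (assumes vim-plug is already installed).
call plug#begin()
Plug 'rust-lang/rust.vim'    " Rust syntax, file detection, rustfmt integration
call plug#end()

syntax enable
filetype plugin indent on
let g:rustfmt_autosave = 1   " run rustfmt automatically on save
```

    A fuller IDE experience would typically layer a language-server client (for rust-analyzer) on top of this; the plugin choice there is a matter of taste.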

  • It was bound to happen.

    While I don't actually work in Perl these days, and not by choice, I still keep an eye on the community. The language is chugging along nicely. Perl 6 is out, so at least that joke has died down; features are being added, some beneficial, some not. All is well in Perl-land. Then the news dropped: Perl 7. I was very interested, more so when I realised that it was a rebranding of the latest Perl. First, let me say one thing right off the bat: it's a good call. I'm all for it. In fact, I'm so all for it that I called for it in a post from 2011. At the time I suggested using codenames like Apple and others do, or rebranding Perl 5.14 (at the time) as Perl 14, like Java did. Here's why I thought, and still do, that this "rebranding" is a Good Thing: it bypasses the whole Perl 5/Perl 6 story. With Perl 6 not being Perl anymore and Perl 5.32 being rebranded as Perl 7, the community will finally be able to move past this whole deal.

  • When a deleted master device file only takes 20 mins out of your maintenance window, but a whole year off your lifespan

    Out of ideas, Jim decided to crash (rather than halt) the system by typing the BREAK sequence at the console. The server would not get the chance to close the file cleanly... "We said a small prayer, crossed our fingers, booted the server, and waited for the file system check (fsck) to repair the damage we had done," he recalled. "I've never typed the letter 'y' more carefully than when asked if we wanted to re-link orphaned inodes." With an elevated heart rate, Jim logged in and checked the file system's lost+found directory.

  • LLVMpipe Now Exposes OpenGL 4.2 For GL On CPUs

    It was just a few days ago that the LLVMpipe OpenGL software rasterizer within Mesa finally achieved OpenGL 4.0 support while today it has crossed both OpenGL 4.1 and 4.2 milestones. Thanks to much of GL 4.1 and GL 4.2 support for this Gallium3D software driver already being in place, it didn't take too much work to get it over the latest hurdles.

  • GCC Compiler Support Posted For Intel AMX

    Building upon Intel working on GNU toolchain support for AMX, the newly-detailed Advanced Matrix Extensions being introduced next year with "Sapphire Rapids" Xeon CPUs, the GCC compiler support has been sent out in patch form. On top of the GNU bits that began at the end of June following Intel publishing documentation on AMX, AMX started landing in LLVM too a few days ago. The latest is AMX enablement for the GNU Compiler Collection sent out overnight.

  • 9 open source test-automation frameworks

    A test-automation framework is a set of best practices, common tools, and libraries that help quality-assurance testers assess the functionality, security, usability, and accessibility of multiple web and mobile applications. In a "quick-click" digital world, we're accustomed to fulfilling our needs in a jiffy. This is one reason why the software market is flooded with hundreds of test-automation frameworks. Although teams could build elaborate automated testing frameworks, there's usually little reason to spend the money, resources, and person-hours to do so when they can achieve equal or even better results with existing open source tools, libraries, and testing frameworks.

  • Profile-guided optimization in Clang: Dealing with modified sources

    Profile-guided optimization (PGO) is a now-common compiler technique for improving the compilation process. In PGO (sometimes pronounced “pogo”), an administrator uses the first version of the binary to collect a profile, through instrumentation or sampling, then uses that information to guide the compilation process. Profile-guided optimization can help developers make better decisions, for instance, concerning inlining or block ordering. In some cases, it can also lead to using obsolete profile information to guide compilation. For reasons that I will explain, this feature can benefit large projects. It also puts the burden on the compiler implementation to detect and handle inconsistencies. This article focuses on how the Clang compiler implements PGO, and specifically, how it instruments binaries. We will look at what happens when Clang instruments source code during the compilation step to collect profile information during execution. Then, I’ll introduce a real-world bug that demonstrates the pitfalls of the current approach to PGO. [...] Clang and GCC both support using obsolete profile information to guide the compilation process. If a function body changes, obsolete information is ignored. This feature can be beneficial for large projects, where gathering profile information is costly. This puts an extra burden on the compiler implementation to detect and handle inconsistencies, which also increases the likelihood of a compiler bug.
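    The instrumentation-based workflow the article describes can be sketched with Clang's standard PGO flags. The file names here are illustrative; the flags themselves (`-fprofile-instr-generate`, `-fprofile-instr-use`, and the `llvm-profdata merge` step) are the documented Clang/LLVM interface.

```
# 1. Build an instrumented binary that writes a raw profile at exit.
clang -O2 -fprofile-instr-generate app.c -o app_instr

# 2. Run a representative workload; LLVM_PROFILE_FILE controls where
#    the raw profile is written.
LLVM_PROFILE_FILE=app.profraw ./app_instr

# 3. Merge and index the raw profile into the format the compiler reads.
llvm-profdata merge -output=app.profdata app.profraw

# 4. Recompile, letting the profile guide inlining and block ordering.
clang -O2 -fprofile-instr-use=app.profdata app.c -o app_opt
```

    The inconsistency problem the article discusses arises between steps 3 and 4: if the sources changed since the profile was collected, the compiler must detect the mismatch and fall back gracefully.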

  • Earn a Red Hat containers certification online

    Lockdowns and travel restrictions due to COVID-19 have meant limited access to testing centers for most certification programs in much of the world. We recently announced that remote exams would be an option in the near future for taking some Red Hat certification exams. In the meantime, many organizations are using the current situation as an opportunity for their teams to learn and build new skills in support of containers and Kubernetes. The need to provide the hands-on validation of these skills provided by Red Hat Certification has never been greater. In order to address these limitations and needs, and to help organizations and IT professionals pursue the opportunities offered by these technologies, Red Hat is offering a new certification, Red Hat Certified Specialist in Containers for Kubernetes, to people who pass the Preliminary Exam in Containers, Kubernetes, and OpenShift (PE180). This certification will be given to those who have already taken the exam since it was launched in late 2019 as well as those who pass it going forward. This affordable certification offers IT professionals a remote option to strengthen their Kubernetes skills and embrace a DevOps mindset.

  • Official Gentoo Docker images

    Did you already know that we have official Gentoo Docker images available on Docker Hub?! The most popular one is based on the amd64 stage. Images are created automatically; you can peek at the source code for this on our git server. Thanks to the Gentoo Docker project!

  • Grupo Condis Embraces the Hybrid Cloud with Red Hat OpenShift

    Red Hat, Inc, the world’s leading provider of open source solutions, today announced that Grupo Condis has adopted Red Hat OpenShift, the industry’s most comprehensive enterprise Kubernetes platform, as part of its digital transformation strategy. Building on the back of the world’s leading enterprise Linux platform in Red Hat Enterprise Linux, Red Hat OpenShift helps Condis respond to market needs faster, build greater customer loyalty and create more innovative services without sacrificing the stability of critical operations.

Meet the GNOMEies: Kristi Progri

What is your role within the GNOME community? I am the Program Coordinator at the GNOME Foundation, where I help organize various events, lead many initiatives within the community, including the Engagement Team, and work closely with all the volunteers and contributors. I also coordinate internships and help with general Foundation activities.

Do you have any other affiliations you want to share? Before joining GNOME, I was very active in the Mozilla community. I have been part of the Tech Speakers program and a Mozilla Representative for more than seven years now. I have organized many events and workshops and have also spoken about Free Software communities at many events around the globe.

Why did you get involved in GNOME? I was introduced to Free Software when I was in high school: a friend had a computer running Debian and he started explaining how it worked. This was the first time I heard about it, and I immediately assumed I would never be part of these communities. It looked so complicated and not my cup of tea, but it turns out I was very wrong. Once I went to a hackerspace meeting I completely changed my mind, and from that moment the hackerspace became my second home. Read more

Mozilla, the Web, and Standards

  • Firefox UX: UX Book Club Recap: Writing is Designing, in Conversation with the Authors

    Beyond the language that appears in our products, Michael encouraged the group to educate themselves, follow Black writers and designers, and be open and willing to change. Any effective UX practitioner needs to approach their work with a sense of humility and openness to being wrong. Supporting racial justice and the Black Lives Matter movement must also include raising long-needed conversations in the workplace, asking tough questions, and sitting with discomfort. Michael recommended reading How To Be An Antiracist by Ibram X. Kendi and So You Want to Talk About Race by Ijeoma Oluo. [...] In the grand scheme of tech things, UX writing is still a relatively new discipline. Books like Writing is Designing are helping to define and shape the practice. When asked (at another meet-up, not our own) if he’s advocating for a ‘content-first approach,’ Michael’s response was that we need an ‘everything first approach’ — meaning, all parties involved in the design and development of a product should come to the planning table together, early on in the process. By making the case for writing as a strategic design practice, this book helps solidify a spot at that table for UX writers.

  • Tantek Çelik: Changes To IndieWeb Organizing, Brief Words At IndieWebCamp West

    A week ago Saturday morning co-organizer Chris Aldrich opened IndieWebCamp West and introduced the keynote speakers. After their inspiring talks he asked me to say a few words about changes we’re making in the IndieWeb community around organizing. This is an edited version of those words, rewritten for clarity and context. — Tantek

  • H.266/VVC Standard Finalized With ~50% Lower Size Compared To H.265

    The Versatile Video Coding (VVC) standard is now firmed up as H.266 as the successor to H.265/HEVC. H.266/VVC has been in the works for several years by a multitude of organizations. The schedule had been aiming for finalizing the standard by July 2020.

  • DSA Is Past Its Prime

    DSA is not only broken from an engineering point of view, though, it’s also cryptographically weak as deployed. The strength of an N-bit DSA key is approximately the same as that of an N-bit RSA key, and modern cryptography painstakingly moved away from 1024-bit RSA keys years ago, considering them too weak. Academics computed a discrete logarithm modulo a 795-bit prime last year. NIST 800-57 recommends lengths of 2048 for keys with security lifetimes extending beyond 2010. The LogJam attack authors estimated the cost of breaking a 1024-bit DLP to be within reach of nation-states in 2015. And yet, DSA with keys larger than 1024 bits is not really a thing!

  • Email Isn’t Broken, Email Clients Are!

    You wouldn’t say “the Web” is broken (or HTTP, for those reading who happen to be technologists). Actually, some of you (of the HTTPS-all-the-things variety) might, but that’s beside the point. The real problem with email is managing the massive volume received in a way that’s relatively sane. You can’t fix this problem at the protocol level; it’s an application-level problem. The only real solution to dealing with massive amounts of email is automation (maybe even massive amounts of it). The uninitiated might be shocked to realize how much preprocessing their email messages undergo before they make it to the inbox; researching spam filtering is a great way to get a glimpse into what’s happening, but it’s not enough because it’s not personalized in a way that’s truly effective for the end-user.