Red Hat and Fedora: syslog-ng, Ansible, Libinput and Fedora Community

Filed under
Red Hat
  • syslog-ng in two words at One Identity UNITE: reduce and simplify

    UNITE is the partner and user conference of One Identity, the company behind syslog-ng. This time the conference took place in Phoenix, Arizona where I talked to a number of American business customers and partners about syslog-ng. They were really enthusiastic about syslog-ng and emphasized two major reasons why they use syslog-ng or plan to introduce it to their infrastructure: syslog-ng allows them to reduce the log data volume and greatly simplify their infrastructure by introducing a separate log management layer.

    [...]

    When you collect log messages to a central location using syslog-ng, you can archive all of the messages there. If you add a new log analysis application to your infrastructure, you can just point syslog-ng at it and forward the necessary subset of log data there.

    Life becomes easier for both security and operations in your environment, as there is only a single piece of software to check for security problems and to distribute on your systems, instead of many.

  • Ansible vs Terraform vs Juju: Fight or cooperation?

    Ansible vs Terraform vs Juju vs Chef vs SaltStack vs Puppet vs CloudFormation – there are so many tools available out there. What are these tools? Do I need all of them? Are they fighting with each other or cooperating?

    The answer is not really straightforward. It usually depends on your needs and the particular use case. While some of these tools (Ansible, Chef, SaltStack, Puppet) are pure configuration management solutions, the others (Juju, Terraform, CloudFormation) focus more on services orchestration. For the purpose of this blog, we’re going to focus on an Ansible vs Terraform vs Juju comparison – the three major players which have dominated the market.

    [...]

    Contrary to both Ansible and Terraform, Juju is an application modelling tool, developed and maintained by Canonical. You can use it to model and automate deployments of even very complex environments consisting of various interconnected applications. Examples of such environments include OpenStack, Kubernetes or Ceph clusters. Apart from the initial deployment, you can also use Juju to orchestrate the deployed services. Thanks to Juju you can back up, upgrade or scale out your applications as easily as executing a single command.

    Like Terraform, Juju uses a declarative approach, but it takes it beyond the providers up to the applications layer. You can declare not only the number of machines to be deployed or the number of application units, but also configuration options for the deployed applications, the relations between them, and so on. Juju takes care of the rest of the job. This allows you to focus on shaping your application instead of struggling with the exact routines and recipes for deploying it. Forget the “How?” and focus on the “What?”.

  • libinput's bus factor is 1

    Let's arbitrarily pick the 1.9.0 release (roughly 2 years ago) and look at the numbers: of the ~1200 commits since 1.9.0, just under 990 were done by me. In those 2 years we had 76 contributors in total, but only 24 of them have more than one commit and only 6 contributors have more than 5 commits. The numbers don't really change much even if we go all the way back to 1.0.0 in 2015. These numbers do not include the non-development work: release maintenance for new releases and point releases, reviewing CI failures [1], writing documentation (including the stuff on this blog), testing and bug triage. Right now, this is effectively all done by one person.

    This is... less than ideal. At this point libinput is more-or-less the only input stack we have [2] and all major distributions rely on it. It drives mice, touchpads, tablets, keyboards, touchscreens, trackballs, etc. so basically everything except joysticks.

  • Contribute to Fedora Magazine

    Do you love Linux and open source? Do you have ideas to share, enjoy writing, or want to help run a blog with over 60k visits every week? Then you’re at the right place! Fedora Magazine is looking for contributors. This article walks you through various options of contributing and guides you through the process of becoming a contributor.

  • Fabiano Fidêncio: Libosinfo (Part Sleepy)

    Libosinfo is the operating system information database. As a project, it consists of three different parts, with the goal to provide a single place containing all the required information about an operating system in order to provision and manage it in a virtualized environment.

  • Τι κάνεις (How are you?) FOSSCOMM 2019

    When the students visited our Fedora booth, they were excited to take some Fedora gifts, especially the tattoo sticker. I asked how many of them used Fedora; most of them were using Ubuntu, Linux Mint, Kali Linux and Elementary OS. It was an opportunity to share the Fedora 30 edition and hand out the beginner’s guide that the Fedora community wrote as a little book. Most of them enjoyed taking photos with the Linux frame I made in Edinburgh...

    [...]

    I was planning to teach the use of the GTK library with C, Python, and Vala. However, because of the time and the preference of the attendees, we only worked with C. The workshop was supported by Alex Angelo, who also translated some of my expressions into Greek. I was flexible about attendees using different operating systems such as Linux Mint, Ubuntu and Kubuntu, among other distros. There were only two users on Fedora. Almost half of the audience did not bring a laptop, so I put them into groups to work together. I enjoyed seeing young students eager to learn; they took their own notes and asked questions. You can watch the video of the workshop, which was recorded by the organizers.

  • Extending the Minimization objective

    Earlier this summer, the Fedora Council approved the first phase of the Minimization objective. Minimization looks at package dependencies and tries to minimize the footprint for a variety of use cases. The first phase resulted in the development of a feedback pipeline, a better understanding of the problem space, and some initial ideas for policy improvements.

today's howtos and programming leftovers

Filed under
Development
HowTos

Google: Replacing Google Chrome, AMP and Titan Security Keys

Filed under
Google
Security
Web
  • The top 5 alternatives to Google Chrome

    Google Chrome is the most popular web browser on the market. It provides a user-friendly, easy-to-use interface, with a simple appearance featuring a combined address and search bar with a small space for extensions.

    Chrome also offers excellent interconnectivity on different devices and easy syncing, which means that once a user installs the browser on different devices, all their settings, bookmarks and search history come along with it. Virtually everything a user does in Google Chrome is backed up to Google Cloud.

    Chrome also offers easy connectivity to other Google products, such as Docs, Drive, and YouTube via an “Apps” menu on the bookmarks bar, located just below the address/search bar. Google Translate, one of the best translation applications currently available on the internet, is also included.

  • Google unplugs AMP, hooks it into OpenJS Foundation after critics turn up the volume [Ed: Microsoft Tim on Google passing a bunch of EEE to a foundation headed by a Microsoft ‘mole’, 'open'JS ]

    AMP – which originally stood for Accelerated Mobile Pages though not any more – was launched in 2015, ostensibly to speed up page loading on smartphones. The technology includes AMP HTML, which is a set of performance-optimized web components, and the AMP Cache, which serves validated AMP pages. Most AMP pages are served by Google’s AMP Cache.

  • Google USB-C Titan Security Keys Begin Shipping Tomorrow

    Google announced that its new USB-C Titan Security Key will begin shipping tomorrow, offering two-factor authentication support not only for Android devices but for all the major operating systems as well.

    The USB-C Titan Security Key is being manufactured by well known 2FA key provider Yubico. This new security key is using the same chip and firmware currently used by Google's existing USB-A/NFC and Bluetooth/NFC/USB Titan Security Key models.

Manjaro | Review from an openSUSE User

Filed under
Reviews

There are many flavors of Linux. We call them distributions, but in a way I think “flavor” is a good word for it, as some are a sweet and delightful experience while with others a lingering, foul taste remains. Manjaro has not left a foul taste in any way. In full disclosure, I am not a fan of Arch-based Linux distributions. I appreciate the idea of this one-step-removed Gentoo, and for those that really like to get into the nitty-gritty bits, Arch is good for that. My problem with Arch is the lack of quality assurance. The official repository page on the Arch Wiki describes how core packages need to be signed off by developers before they are allowed to move from staging into the official repositories. With the rate at which packages come in, it is almost impossible for manual testing to guarantee that software will continue to work well with other software as dependencies change. Admittedly, I don’t use it daily outside of VMs for testing, nor do I have a lot of software installed, so this is not a problem I am likely to experience.

Manjaro, from my less than professional opinion, is a slightly slower-rolling Arch that seems to do more testing, and the process, from what I understand, is similar: developers have to approve the packages before they are moved into the official repositories. I also understand that there isn’t any automated QA to perform any testing, so this is all reliant on user or community testing, which, seemingly, Manjaro is doing a good job of.

My dance with Manjaro is part of a BigDaddyLinuxLive community challenge: give it a fair shake and share your experience.

This is my review of Manjaro with the Plasma Desktop. Bottom line up front: this is quite possibly the safest and most stable route if you like the Arch model. In the time I ran it, I didn’t have any issues with it. The default Plasma Desktop is quite nice, and the default themes are also top notch. The graphical package manager works fantastically well, and you have Snap support right out of the gate. It’s truly a great experience. Was it good enough to push me from my precious openSUSE? No, but it has made itself a contender and given me something to think about.

Read more

Open source interior design with Sweet Home 3D

Filed under
OSS

Historically, I practiced the little-known fourth principle: don't have furniture. However, since I became a remote worker, I've found that a home office needs conveniences like a desk and a chair, a bookshelf for reference books and tech manuals, and so on. Therefore, I have been formulating a plan to populate my living and working space with actual furniture, made of actual wood rather than milk crates (or glue and sawdust, for that matter), with an emphasis on plan. The last thing I want is to bring home a great find from a garage sale to discover that it doesn't fit through the door or that it's oversized compared to another item of furniture.

Read more

Audiocasts/Shows: LINUX Unplugged, mintCast and Python Shows

Filed under
Development
GNU
Linux

Drupal shows leadership on diversity and inclusion

Filed under
Interviews
Drupal

Drupal is far from alone among open source communities with a diversity gap, and I think it deserves a lot of credit for tackling these issues head-on. Diversity and inclusion is a much broader topic than most of us realize. Before I read DDI's August newsletter, the history of indigenous people in my community was something that I hadn't really thought about before. Thanks to DDI's project, I'm not only aware of the people who lived in Maryland long before me, but I've come to appreciate and respect what they brought to this land.

I encourage you to learn about the native people in your homeland and record their history in DDI's Land Acknowledgements blog. If you're a member of another open source project, consider replicating this project there. The more we know about people who differ from us, the more we respect and appreciate our collective roles as members of the human race.

Read more

Databricks brings its Delta Lake project to the Linux Foundation

Filed under
Linux

Databricks, the big data analytics service founded by the original developers of Apache Spark, today announced that it is bringing its Delta Lake open-source project for building data lakes to the Linux Foundation and under an open governance model. The company announced the launch of Delta Lake earlier this year and even though it’s still a relatively new project, it has already been adopted by many organizations and has found backing from companies like Intel, Alibaba and Booz Allen Hamilton.

“In 2013, we had a small project where we added SQL to Spark at Databricks […] and donated it to the Apache Foundation,” Databricks CEO and co-founder Ali Ghodsi told me. “Over the years, slowly people have changed how they actually leverage Spark and only in the last year or so it really started to dawn upon us that there’s a new pattern that’s emerging and Spark is being used in a completely different way than maybe we had planned initially.”

This pattern, he said, is that companies are taking all of their data and putting it into data lakes and then doing a couple of things with this data, machine learning and data science being the obvious ones. But they are also doing things that are more traditionally associated with data warehouses, like business intelligence and reporting. The term Ghodsi uses for this kind of usage is ‘Lake House.’ More and more, Databricks is seeing that Spark is being used for this purpose and not just to replace Hadoop and do ETL (extract, transform, load). “This kind of Lake House patterns we’ve seen emerge more and more and we wanted to double down on it.”

Read more

Configuring Automatic Login and Lock Screen on Ubuntu 19.10

Filed under
Ubuntu
HowTos

Whether it’s Linux or Windows, Ubuntu or Fedora, I am not an ‘automatic’ type of guy. That is to say, I don’t want my login automated, nor do I want my updates automatically installed. This preference directly results from over thirty years in Information Technology, prudence, habit, and experience. Plus, it’s just plain smart security sense.

However, I further realize that as Linux users get younger and younger, I am increasingly in the minority in this sense. While I strongly disagree with automatic logins and updates, I can understand the desire for them.

So, with that understanding, let’s go about the business of instituting automated logins in Ubuntu. We will also take the time to address the Ubuntu Lock Screen setting. Configuring automatic Ubuntu software updates is much more in-depth. We will discuss this in a separate dedicated article at a later date.

Read more

Programming: Python, LLVM and Erlang

Filed under
Development
  • Sending Emails in Python — Tutorial with Code Examples

    What do you need to send an email with Python? Some basic programming and web knowledge along with elementary Python skills. I assume you’ve already built a web app with this language and now you need to extend its functionality with notifications or other email sending.

    [...]

    Sending multiple emails to different recipients and making them personal is the special thing about emails in Python.

    To add several more recipients, you can just type their addresses in, separated by commas, and add Cc and Bcc. But if you work with bulk email sending, Python will save you with loops.

    One of the options is to create a database in a CSV format (we assume it is saved to the same folder as your Python script).

    We often see our names in transactional or even promotional examples. Here is how we can make it with Python.
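    As a rough illustration of the loop-over-a-CSV pattern described above (not the tutorial's own code), here is a minimal sketch using Python's standard smtplib, csv and email modules; the SMTP host, credentials and the contacts.csv columns are placeholders.

      import csv
      import smtplib
      from email.message import EmailMessage

      with smtplib.SMTP("smtp.example.com", 587) as server:
          server.starttls()
          server.login("user@example.com", "password")

          with open("contacts.csv", newline="") as f:
              # Expects a header row with "name" and "email" columns.
              for row in csv.DictReader(f):
                  msg = EmailMessage()
                  msg["Subject"] = "Hello"
                  msg["From"] = "user@example.com"
                  msg["To"] = row["email"]
                  msg.set_content(f"Hi {row['name']}, this is a personalized note.")
                  server.send_message(msg)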

  • Binning Data with Pandas qcut and cut

    When dealing with continuous numeric data, it is often helpful to bin the data into multiple buckets for further analysis. There are several different terms for binning including bucketing, discrete binning, discretization or quantization. Pandas supports these approaches using the cut and qcut functions. This article will briefly describe why you may want to bin your data and how to use the pandas functions to convert continuous data to a set of discrete buckets. Like many pandas functions, cut and qcut may seem simple but there is a lot of capability packed into those functions. Even for more experienced users, I think you will learn a couple of tricks that will be useful for your own analysis.

    [...]

    The concept of breaking continuous values into discrete bins is relatively straightforward to understand and is a useful concept in real world analysis. Fortunately, pandas provides the cut and qcut functions to make this as simple or complex as you need it to be. I hope this article proves useful in understanding these pandas functions. Please feel free to comment below if you have any questions.
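    For readers who want to try the two functions discussed above, a short sketch follows; the column name, bin edges and labels are made up for illustration.

      import numpy as np
      import pandas as pd

      # Twenty fake "sales" values to bin.
      df = pd.DataFrame({"sales": np.random.default_rng(0).integers(1_000, 100_000, 20)})

      # cut(): buckets defined by explicit value edges.
      df["size_cut"] = pd.cut(df["sales"],
                              bins=[0, 25_000, 50_000, 100_000],
                              labels=["small", "medium", "large"])

      # qcut(): quantile-based buckets holding roughly equal numbers of rows.
      df["size_qcut"] = pd.qcut(df["sales"], q=4, labels=["q1", "q2", "q3", "q4"])

      print(df["size_cut"].value_counts())
      print(df["size_qcut"].value_counts())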

  • Analysing music habits with Spotify API and Python

    I’ve been using Spotify since 2013 as my main source of music, and back then the app automatically created a playlist for songs that I liked from artists’ radios. By inertia I’m still using that playlist to save songs that I like. As the playlist became a bit big and a bit old (6 years, huh), I decided to try to analyze it.
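    Not the author's script, but a minimal sketch of how such an analysis can start, assuming the third-party spotipy client and SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET / SPOTIPY_REDIRECT_URI set in the environment; the playlist ID is a placeholder.

      import spotipy
      from spotipy.oauth2 import SpotifyOAuth

      sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-read-private"))

      # Page through the playlist and collect its tracks.
      tracks = []
      page = sp.playlist_items("PLAYLIST_ID", additional_types=("track",))
      while page:
          tracks.extend(item["track"] for item in page["items"] if item["track"])
          page = sp.next(page) if page["next"] else None

      # Audio features (tempo, energy, valence, ...) for up to 100 tracks at a time.
      features = sp.audio_features([t["id"] for t in tracks[:100]])
      print(len(tracks), "tracks; first tempo:", features[0]["tempo"])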

  • Python IDEs and Code Editors

    A code editor is a tool that is used to write and edit code. Code editors are usually lightweight and can be great for learning. However, once your program gets larger, you need to test and debug your code; that's where IDEs come in.

    An IDE (Integrated Development Environment) understands your code much better than a text editor. It usually provides features such as build automation, code linting, testing and debugging. This can significantly speed up your work. The downside is that IDEs can be complicated to use.

  • Announcing Anaconda Distribution 2019.10

    As there were some significant changes in the previous Anaconda Distribution 2019.07 installers, this release focuses on polishing up rough edges in that release and bringing all the packages up to date with the latest available in repo.anaconda.com. This means many key packages are updated including Numpy, Scipy, Scikit-Learn, Matplotlib, Pandas, Jupyter Notebook, and many more. As many of the package updates have addressed Common Vulnerabilities and Exposures (CVEs), it is important to update to the latest.

    Another key change since the last release is that Apple released macOS version 10.15 – Catalina. Unfortunately, this was a breaking release for previous versions of Anaconda that used the pkg installer. The Anaconda Distribution 2019.10 installers address the issues and should install without trouble on macOS Catalina. If you would rather repair your current Anaconda installation, please check out this blog post for tips.

  • Apple's Numbers and the All-in-One CSV export

    The hierarchical form requires a number of generator functions for Sheet-from-CSV, Table-from-CSV, and Row-from-CSV. Each of these works with a single underlying iterator over the source file and a fairly complex hand-off of state. If we only use the sheet iterator, the tables and rows are skipped. If we use the table within a sheet, the first table name comes from the header that started a sheet; the table names come from distinct headers until the sheet name changes.

    The table-within-sheet iteration is very tricky. The first table is a simple yield of information gathered by the sheet iterator. Any subsequent tables, however, may be based on one of two conditions: either no rows have been consumed, in which case the table iterator consumes (and ignores) rows; or, all the rows of the table have been consumed and the current row is another "sheet: table" header.
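    The author's version uses three cooperating generators (sheet, table, row) sharing one reader; purely as a flattened sketch of the parsing idea, and assuming each table in the export is introduced by a single-cell "sheet: table" header row, something like this could work:

      import csv

      def tables(path):
          """Yield (sheet, table, rows) triples from the all-in-one CSV export."""
          with open(path, newline="") as f:
              sheet = table = None
              rows = []
              for row in csv.reader(f):
                  non_empty = [cell for cell in row if cell]
                  if len(non_empty) == 1 and ": " in non_empty[0]:
                      # A new "sheet: table" header; emit the previous table first.
                      if table is not None:
                          yield sheet, table, rows
                      sheet, table = non_empty[0].split(": ", 1)
                      rows = []
                  elif non_empty:
                      rows.append(row)
              if table is not None:
                  yield sheet, table, rows

      for sheet, table, rows in tables("export.csv"):
          print(sheet, table, len(rows))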

  • Formatting NFL data for doing data science with Python

    No matter what medium of content you consume these days (podcasts, articles, tweets, etc.), you'll probably come across some reference to data. Whether it's to back up a talking point or put a meta-view on how data is everywhere, data and its analysis are in high demand.

    As a programmer, I've found data science to be more comparable to wizardry than an exact science. I've coveted the ability to get ahold of raw data and glean something useful and concrete from it. What a useful talent!

  • Sony Pushes More AMD Jaguar Optimizations To Upstream LLVM 10 Compiler

    Sony engineers working on the PlayStation compiler toolchain continue upstreaming various improvements to the LLVM source tree for helping the AMD APUs powering their latest game console.

    Several times now we've pointed out Sony engineers contributing AMD "btver2" improvements to upstream LLVM with the company using LLVM/Clang as their default code compiler and the PlayStation 4 relying on a Jaguar APU.

  • [llvm-dev] GitHub Migration Schedule and Plans
    Hi,
    
    We're less than 2 weeks away from the developer meeting, so I wanted to
    give an update on the GitHub migration and what's (hopefully) going to
    happen during the developer meeting.
    
    Everyone who has added their information to the github-usernames.txt
    file in SVN before today should have received an invite to become a collaborator
    on the llvm-project repository.  If you did not receive an invite and think
    you should have, please contact me off-list.  I will continue to monitor the
    file for new updates and periodically send out new batches of invites.
    
    There is still some ongoing work to get the buildbots ready and the mailing lists
    ready, but we are optimistic that the work will be done in time.
    
    The team at GitHub has finished implementing the "Require Linear History"
    branch protection that we requested.  The feature is in beta and currently
    enabled in the llvm-project repository.  This means that we will have the
    option to commit directly via git, in addition to using the git-llvm script.
    A patch that updates git-llvm to push to git instead of svn can be found here:
    https://reviews.llvm.org/D67772.  You should be able to test it out on your
    own fork of the llvm-project repository.
    
    The current plan is to begin the final migration steps on the evening (PDT)
    of October 21.  Here is what will happen:
    
    1. Make SVN read-only.
    2. Turn-off the SVN->git update process.
    3. Commit the new git-llvm script directly to github.
    4. Grant all contributors write access to the repository.
    5. Email lists announcing that the migration is complete.
    
    Once the migration is complete, if you run into any issues, please file
    a bug, and mark it as a blocker for the github metabug PR39393.
    
    If you have any questions or think I am missing something, please
    let me know.
    
    Thanks,
    Tom
    
    
  • LLVM Plans To Switch From Its SVN To Git Workflow Next Week

    On 21 October they plan to make LLVM's SVN repository read-only and finish their git-llvm script to bring all the changes into Git, and then allow developers to begin contributing to the LLVM GitHub project as the new official source repository.

  • Excellent Free Books to Learn Erlang

    Erlang is a general-purpose, concurrent, declarative, functional programming language and runtime environment developed by Ericsson, a Swedish multinational provider of communications technology and services. Erlang is dynamically typed and has a pattern matching syntax. The language solves difficult problems inherent in parallel, concurrent environments. It uses sets of parallel supervised processes, not a single sequential process as found in most programming languages.

    Erlang was created in 1986 at the Ellemtel Telecommunication Systems Laboratories for telecommunication systems. The objective was to build a simple and efficient programming language resilient enough for large-scale concurrent industrial applications.

    Besides telecommunication systems and applications and other large industrial real-time systems, Erlang is particularly suitable for servers for internet applications, e-commerce, and networked database applications. The versatility of the language is, in part, due to its extensive collection of libraries.

Kubernetes at SUSE and Red Hat

Filed under
Red Hat
SUSE
  • Eirinix: Writing Extensions for Eirini

    At the recent Cloud Foundry Summit EU in the Netherlands, Vlad Iovanov and Ettore Di Giacinto of SUSE presented a talk about Eirini — a project that allows the deployment and management of applications on Kubernetes using the Cloud Foundry Platform. They introduced eirinix — a framework that allows developers to extend Eirini. Eirinix is built from the Quarks codebase, which leverages Kubernetes Mutating Webhooks. With the flexibility of Kubernetes and Eirini’s architecture, developers can now build features around Eirini, like Persi support, access to the application via SSH, ASGs via Network Policies and more. In this talk, they explained how this can be done, and how everyone can start contributing to a rich ecosystem of extensions that will improve Eirini and the developer experience of Cloud Foundry.

  • Building an open ML platform with Red Hat OpenShift and Open Data Hub Project

    Unaddressed, these challenges impact the speed, efficiency and productivity of the highly valuable data science teams. This leads to frustration and a lack of job satisfaction, and ultimately the promise of AI/ML to the business goes unredeemed.

    IT departments are being challenged to address the above. IT has to deliver a cloud-like experience to data scientists. That means a platform that offers freedom of choice, is easy to access, is fast and agile, scales on demand and is resilient. The use of open source technologies will prevent lock-in and maintain long-term strategic leverage over cost.

    In many ways, a similar dynamic has played out in the world of application development in the past few years, one that has led to microservices, the hybrid cloud, automation and agile processes. And IT has addressed this with containers, Kubernetes and the open hybrid cloud.

    So how does IT address this challenge in the world of AI? By learning from its own experiences in the world of application development and applying them to the world of AI/ML. IT addresses the challenge by building an AI platform that is container based, that helps build AI/ML services with agile processes that accelerate innovation, and that is built with the hybrid cloud in mind.

  • Launching OpenShift/Kubernetes Support for Solarflare Cloud Onload

    This is a guest post co-written by Solarflare, a Xilinx company. Miklos Reiter is Software Development Manager at Solarflare and leads the development of Solarflare’s Cloud Onload Operator. Zvonko Kaiser is Team Lead at Red Hat and leads the development of the Node Feature Discovery operator.

Python Across Platforms

Filed under
OS
Development
  • Chemists bitten by Python scripts: How different OSes produced different results during test number-crunching

    Chemistry boffins at the University of Hawaii have found, rather disturbingly, that different computer operating systems running a particular set of Python scripts used for their research can produce different results when running the same code.

    In a research paper published last week in the academic journal Organic Letters, chemists Jayanti Bhandari Neupane, Ram Neupane, Yuheng Luo, Wesley Yoshida, Rui Sun, and Philip Williams describe their efforts to verify an experiment involving cyanobacteria, better known as blue-green algae.

    Williams, associate chair and professor in the department of chemistry at the University of Hawaii at Manoa, said in a phone interview with The Register on Monday this week that his group was looking at secondary metabolites, like penicillin, that can be used to treat cancer or Alzheimer's.

  • Chemists discover cross-platform Python scripts not so cross-platform

    In a paper published October 8, researchers at the University of Hawaii found that a programming error in a set of Python scripts commonly used for computational analysis of chemistry data returned varying results based on which operating system they were run on—throwing doubt on the results of more than 150 published chemistry studies. While trying to analyze results from an experiment involving cyanobacteria, the researchers—Jayanti Bhandari Neupane, Ram Neupane, Yuheng Luo, Wesley Yoshida, Rui Sun, and Philip Williams—discovered significant variations in results run against the same nuclear magnetic resonance spectroscopy (NMR) data.

    The scripts, called the "Willoughby-Hoye" scripts after their authors—Patrick Willoughby and Thomas Hoye of the University of Minnesota—were found to return correct results on macOS Mavericks and Windows 10. But on macOS Mojave and Ubuntu, the results were off by nearly a full percent.
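    Both write-ups trace the discrepancy to code whose output depends on the order in which files are processed. As widely reported, glob.glob() returns file names in an arbitrary, platform-dependent order, and sorting the result makes the behaviour deterministic; the sketch below only illustrates the class of bug, it is not the original Willoughby-Hoye code.

      import glob

      # The order of the matches can differ between macOS, Windows, and Linux:
      files = glob.glob("nmr_outputs/*.out")

      # Sorting makes the processing order (and any order-dependent result)
      # the same on every platform:
      for path in sorted(files):
          print(path)  # stand-in for the per-file calculation step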

today's leftovers

Filed under
Misc
  • Fedora Removes 32bit, System76 Coreboot, Flatpak, Valve, Atari VCS, Docker | This Week in Linux 84

    On this episode of This Week in Linux, we talk about Fedora removing 32-bit support (well, sort of). System76 announced two laptops using Coreboot firmware. There is some interesting news regarding Docker and its future. Then we’ll check out some Linux gaming news, with some really exciting news from Valve!

  • PostgreSQL 12 boosts open source database performance

    Performance gains are among the key highlights of the latest update of the open source PostgreSQL 12 database.

    PostgreSQL 12 became generally available Oct. 3, providing users of the widely deployed database with multiple enhanced capabilities including SQL JSON query support and improved authentication and administration options. The PostgreSQL 12 update will potentially affect a wide range of use cases in which the database is deployed, according to Noel Yuhanna, an analyst at Forrester Research.

    "Organizations are using PostgreSQL to support all kinds of workloads and use cases, which is pushing the needs for better performance, improved security, easier access to unstructured data and simplified deployments," Yuhanna said. "To address this, PostreSQL12 improves performance by improving its indexing that requires less space and has better optimization to deliver faster access."

  • Olimex Launches NB-IoT DevKit Based on Quectel BC66 Module for 19 Euros

    There are three LPWAN standards currently dominating the space: LoRaWAN, NB-IoT, and Sigfox.

  • Intel Denverton based Fanless Network Appliance Comes with 6x Ethernet Ports, 2x SFP Cages
  • Heading levels

    [...] the headings would be “Apples” (level 1), “Taste” (level 2), “Sweet” (level 3), “Color” (level 2). Determining the level of any given heading requires traversing through its previous siblings and their descendants, its parent and the previous siblings and descendants of that, et cetera. That is too much complexity, and optimizing it with caches is evidently not deemed worth it for such a simple feature.

    However, throwing out the entire feature and requiring everyone to use h1 through h6 forever, adjusting them accordingly based on the document they end up in, is not very appealing to me. So I’ve been trying to come up with an alternative algorithm that would allow folks to use h1 with sectioning elements exclusively while giving assistive technology the right information (default styling of h1 is already adjusted based on nesting depth).

    The simpler algorithm only looks at ancestors for a given heading and effectively only does so for h1 (unless you use hgroup). This leaves the above example in the weird state it is in in today’s browsers, except that the h1 (“Color”) would become level 2. It does so to minimally impact existing documents which would usually use h1 only as a top-level element or per the somewhat-erroneous recommendation of the HTML Standard use it everywhere, but in that case it would dramatically improve the outcome.
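    A rough Python sketch of the ancestor-counting idea described above, using a toy tree rather than a real DOM (the Node class and helper names are hypothetical, not part of any spec): an h1 keeps level 1 at the top level and gains one level per sectioning ancestor, so an h1 like “Color” inside one section comes out as level 2.

      SECTIONING = {"article", "aside", "nav", "section"}

      class Node:
          def __init__(self, tag, children=()):
              self.tag = tag
              self.children = list(children)
              self.parent = None
              for child in self.children:
                  child.parent = self

      def heading_level(node):
          """h2-h6 keep their literal level; h1 gets 1 + number of sectioning ancestors."""
          if node.tag in {"h2", "h3", "h4", "h5", "h6"}:
              return int(node.tag[1])
          level = 1
          ancestor = node.parent
          while ancestor is not None:
              if ancestor.tag in SECTIONING:
                  level += 1
              ancestor = ancestor.parent
          return min(level, 6)

      color = Node("h1")
      body = Node("body", [Node("h1"), Node("h2"), Node("h3"), Node("section", [color])])
      print(heading_level(color))  # 2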

  • openSUSE OBS Can Now Build Windows WSL Images

    As Windows Subsystem for Linux (WSL) is becoming a critical piece of Microsoft’s cloud and data-center audience, openSUSE is working on technologies that help developers use distributions of their choice for WSL. Users can run the same WSL distribution that they run in the cloud or on their servers.

    The core piece of openSUSE’s WSL offering is the WSL appx files, which are basically zip files that contain a tarball of a Linux system (like a container) and a Windows exe file, the so-called launcher.
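    Since an appx is described above as essentially a zip archive holding a root-filesystem tarball plus a launcher exe, a quick way to peek inside one is Python's standard zipfile module; the file name below is a placeholder.

      import zipfile

      with zipfile.ZipFile("openSUSE-Leap-15.appx") as appx:
          for name in appx.namelist():
              # Look for the Linux rootfs tarball and the Windows launcher.
              if name.endswith((".tar.gz", ".tar.xz", ".exe")):
                  print(name)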

2D using Godot

Filed under
Development
OSS
Gaming

This brings me to the GUI parts. I’m still not convinced that I understand how to properly lay out stuff using Godot, but at least it looks ok now – at the cost of some fixed element sizes and such. I need to spend some more time to really understand how the anchoring and stretching works. I guess I have a hard time wrapping my head around it as the approach is different from what I’m used to from Qt.

Looking at the rest of the code, I’ve tried to make all the other scenes (in Godot, everything is a scene) into independent elements. For instance, the card scene has a face and an is_flipped state. It can also signal when it is being flipped and clicked. Notice that the click results in a signal that goes to the table scene, which decides if the card needs to be flipped or not.

The same goes for the GUI parts. They simply signal what was clicked and the table scene reacts. There are some variables too, e.g. the number of pairs setting in the main menu, and the points in the views where that is visible.

Read more

Linux Graphics Stack: Intel, AMD and More

Filed under
Graphics/Benchmarks
Linux
  • Intel Linux Graphics Driver Adds Bits For Jasper Lake PCH

    Details are still light on Jasper Lake, but volleyed onto the public mailing list today was the initial support for the Jasper Lake PCH within the open-source Linux graphics driver side.

    The patch adds in the Jasper Lake PCH while acknowledging it's similar to Icelake and Tigerlake behavior. The Jasper Lake PCI device ID is 0x4D80. The patch doesn't reveal any other notable details but at least enough to note that the Jasper Lake support is on the way. Given the timing, the earliest we could see Intel Jasper Lake support out in the mainline kernel would be for Linux 5.5, which will be out as stable as the first kernel series of 2020 and in time for the likes of Ubuntu 20.04 LTS and Fedora 32.

  • Linux Graphics Drivers Could Have User-Space API Changes More Strictly Evaluated

    In response to both the AMD Radeon and Intel graphics drivers adding new user-space APIs for user-space code that just gets "[thrown] over the wall instead of being open source developed projects" and the increase of Android drivers introducing their own UAPI headaches, Airlie is looking at enforcing more review/oversight when DRM drivers want to make user-space API changes.

    The goal ultimately is to hopefully yield more cross-driver UAPI discussions and in turn avoiding duplicated efforts, ensuring good development implementations prior to upstreaming, and better quality with more developers reviewing said changes.

  • xf86-video-ati 19.1 Released With Crash & Hang Fixes

    For those making use of xf86-video-ati on X.Org-enabled Linux desktops, the version 19.1 release brings just a handful of new fixes. This release was announced today by Michel Dänzer who last month departed AMD to now work on Red Hat's graphics team. Michel is sticking around the Mesa/X.Org world for Red Hat's duties but is hoping someone else will be picking up maintenance of the xf86-video-ati/xf86-video-amdgpu DDX drivers going forward. Granted, not a lot of activity happens to these X.Org DDX drivers these days considering more Linux desktops slowly moving over to Wayland, many X11 desktops using the generic xf86-video-modesetting, and these AMD drivers being fairly basic now with all of the big changes in the AMDGPU DRM kernel driver.
