About Tux Machines

Thursday, 18 Jul 19 - Tux Machines is a community-driven public service/news site which has been around for over a decade and primarily focuses on GNU/Linux.

Server Leftovers

Filed under
Software
  • Deploying Kubernetes at the edge – Part I: building blocks

    What exactly is edge computing? Edge computing is a variant of cloud computing, with your infrastructure services – compute, storage, and networking – placed physically closer to the field devices that generate data. Edge computing allows you to place applications and services closer to the source of the data, which gives you the dual benefit of lower latency and lower Internet traffic. Lower latency boosts the performance of field devices by enabling them to not only respond quicker, but to also respond to more events. And lowering Internet traffic helps reduce costs and increase overall throughput – your core datacenter can support more field devices. Whether an application or service lives in the edge cloud or the core datacenter will depend on the use case.

    How can you create an edge cloud? Edge clouds should have at least two layers – both layers will maximise operational effectiveness and developer productivity – and each layer is constructed differently.

  • Certifications for DevOps engineers

    DevOps teams appreciate using DevOps processes, especially in multi- and hybrid cloud infrastructures, for many reasons. For one thing, DevOps breaks down barriers and enables agile software development and continuous delivery of IT operations. It is also popular in enterprises because it helps accelerate business outcomes through digital transformation.

  • SUSE YES Certification for SLE 15 SP1 Now Available
  • Deploy a SUSE Enterprise Storage test environment in about 30 minutes
  • MTTR is dead, long live CIRT

    The game is changing for the IT ops community, which means the rules of the past make less and less sense. Organizations need accurate, understandable, and actionable metrics in the right context to measure operations performance and drive critical business transformation.

    The more customers use modern tools and the more variation in the types of incidents they manage, the less sense it makes to smash all those different incidents into one bucket to compute an average resolution time that will represent ops performance, which is what IT has been doing for a long time.

  • RHEL 8 enables containers with the tools of software craftsmanship

    With the release of Red Hat Enterprise Linux 8, there is a new set of container tools which allows users to find, run, build, and share containers. This set of tools allows you to start simple with podman and adopt more sophisticated tools (buildah and skopeo) as you discover advanced use cases. They are released in two streams, fast and stable, to meet developer and operations use cases. Finally, these tools are compliant with the same Open Containers Initiative (OCI) standards as Docker, allowing you to build once and run anywhere.

  • IBM's big deal for Red Hat gives it a chance to reshape open source
  • Cloudera Commits to 100% Open Source

    The old Cloudera developed and distributed its Hadoop stack using a mix of open source and proprietary methods and licenses. But the new Cloudera will be fully open source.

  • Cloudera relents, adopts pure open-source strategy

    Although billed as a “merger of relative equals,” last fall’s combination of Cloudera Inc. and Hortonworks Inc. was by all accounts a Cloudera acquisition of its smaller big-data rival. But it now appears that Hortonworks’ open-source business model has won the day. Cloudera Wednesday quietly announced changes to its licensing policy that will make its entire product portfolio available under open-source terms, effectively adopting Hortonworks’ business model.

    The move has important implications for the industry’s ongoing debate about how business models can be built upon a foundation of free software. Although Cloudera is a major contributor to open-source projects, its decade-old business has always been based on selling licensed software.

Audiocasts/Shows: Ubuntu Podcast, Bad Voltage and BSD Now

Filed under
Interviews
  • Ubuntu Podcast from the UK LoCo: S12E14 – Sega Rally Championship

    This week we’ve been installing macOS and Windows on a Macbook Pro and a Dell XPS 15. We discuss Running Challenges, bring you some command line love and go over all your feedback.

    It’s Season 12 Episode 14 of the Ubuntu Podcast! Mark Johnson, Martin Wimpress and Laura Cowen are connected and speaking to your brain.

  • Bad Voltage 2×55: Moaner Lisa

    Stuart Langridge, Jono Bacon, and Jeremy Garcia present Bad Voltage, in which the Mona Lisa is bobbins, it is important to have your privacy policy meet the overall goals you’re pushing, and:

  • Comparing Hammers | BSD Now 306

    Am5x86 based retro UNIX build log, setting up services in a FreeNAS Jail, first taste of DragonflyBSD, streaming Netflix on NetBSD, NetBSD on the last G4 Mac mini, Hammer vs Hammer2, and more.

Games: Zachtronics, Valve, SuperTuxKart/Wayland, and Blobs From Canonical

Filed under
Gaming
  • All Zachtronics games are now available on itch.io

    Some good news for fans of high-quality puzzle games, as Zachtronics' entire library is now available to purchase on itch.io.

  • Valve has launched "Steam Labs", a place where Valve will show off new experiments

    Valve emailed in today to let us know about the new Steam Labs, a dedicated section on Steam for Valve to show off some experiments they're doing and for you to test and break them.

  • Valve Rolls Out Steam Labs

    Steam Labs was announced today with three initial experiments: Micro Trailers, The Interactive Recommender, and The Automated Show. Micro Trailers are six-second game trailers, The Interactive Recommender uses machine learning to show game titles you might like, and The Automated Show is a showpiece for secondary displays for highlighting different games.

  • Network transparency with Wayland

    I've managed to get hardware video encoding and decoding using VAAPI working with waypipe, although of course the hardware codecs are less flexible and introduce additional restrictions on the image formats and dimensions. For example, buffers currently need to have an XRGB8888 pixel format (or a standard permutation thereof), as the Intel/AMD VAAPI implementations otherwise do not appear to support hardware conversions from the RGB color space to the YUV color space used by video formats, and in the other direction. It's also best if the buffers have 64-byte aligned strides, and 16-pixel aligned widths and heights. The result of this can run significantly faster than encoding with libx264, although to maintain the same level of visual quality the bitrate must be increased.

    For games, using video compression with waypipe is probably worth the tradeoffs now. In some instances, it can even be faster. A 1024 by 768 SuperTuxKart window during a race, running with linear-format DMABUFs, losslessly replicated without compression via ssh on localhost, requires about 130MB/s of bandwidth and runs at about 40 FPS. (Using LZ4 or Zstd for compression would reduce bandwidth, but on localhost or a very fast network would take more time than would be saved by the bandwidth reduction.)
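A back-of-the-envelope sketch of the figures above (hedged: the 16-pixel and 64-byte alignment rules are as described in the post, and XRGB8888 implies 4 bytes per pixel; nothing here calls waypipe or VAAPI):

```python
# Sketch: estimate the uncompressed bandwidth for the SuperTuxKart example
# above, applying the alignment constraints the post describes.

def align_up(value: int, alignment: int) -> int:
    """Round value up to the nearest multiple of alignment."""
    return (value + alignment - 1) // alignment * alignment

# XRGB8888 is 4 bytes per pixel; widths/heights should be 16-pixel aligned
# and strides 64-byte aligned, per the post.
width = align_up(1024, 16)            # -> 1024 (already aligned)
height = align_up(768, 16)            # -> 768
stride = align_up(width * 4, 64)      # bytes per row -> 4096

fps = 40
bytes_per_second = stride * height * fps
print(f"{bytes_per_second / 1e6:.0f} MB/s")  # prints "126 MB/s"
```

The small gap between this 126 MB/s estimate and the ~130 MB/s quoted may simply be measurement or protocol overhead beyond the raw pixel data.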

  • Ubuntu LTS releases (and so derivatives too) to get updated NVIDIA drivers without PPAs

    Good news everyone! Canonical will now be offering NVIDIA users up to date graphics drivers without the need to resort to a PPA or anything else.

    Since this will be for the Ubuntu LTS releases, other Linux distributions based on Ubuntu, like Linux Mint, elementary OS, Zorin OS and probably many others, will also get these updated NVIDIA drivers—hooray!

    This is really great, as PPAs are not exactly user friendly and sometimes they don't get the testing they truly need when serving so many people. Having the Ubuntu team push out NVIDIA driver updates via an SRU (Stable Release Update), which is the same procedure they use to get you newer Firefox versions, is a good way to do it.

Kernel Development Updates: Linux 5.3, AMD and Wayland's Weston

Filed under
Graphics/Benchmarks
Linux
  • Linux 5.3 Is Another Busy Kernel Merge Window Even For The Summer Months

    While we are just a few days into the two-week merge window for Linux 5.3, it's certainly another busy cycle, even considering that the summer months tend to be a bit slower for developers.

  • Kernel Address Space Isolation Aims To Prevent Leaking Data From Hyper Threading Attacks

    Kernel Address Space Isolation is an experimental feature being developed by Oracle that aims to prevent leaking sensitive data from Intel Hyper Threading due to speculative execution attacks like L1TF.

    While disabling Intel Hyper Threading has become recommended for fending off newer speculative execution attacks, obviously many don't want to lose out on those extra threads. In particular, data centers and public cloud providers certainly don't want to give up on Hyper Threading, as it would hurt their margins. Oracle began working on address space isolation for the Kernel-based Virtual Machine (KVM), but that work has since evolved into Kernel Address Space Isolation, a generic address-space isolation framework of which KVM is simply one consumer.

  • AMD "GFX908" Additions Land In LLVM 9.0 For New Workstation GPU

    Weeks ahead of SIGGRAPH and days ahead of the LLVM 9.0 code branching, a number of big "GFX908" commits have been landing in the AMDGPU LLVM shader compiler back-end over the past day.

    GFX908 is an unreleased product we haven't seen much driver activity on to date. Yes, GFX9 is Vega, but AMD has previously communicated that Vega will live on for select workstation/compute products and that was also reiterated back during the Navi media briefings last month.

  • AMD's GPU Performance API 3.4 Adds Navi Support, Other Features

    The GPU Performance API is their cross-platform library for accessing the hardware's performance counters and being able to analyze performance/execution characteristics. GPA pairs nicely with their other open-source tooling like CodeXL and the Compute Profiler for finding bottlenecks and other areas for optimization.

  • RADV Picks Up Geometry Shader Support For Navi/GFX10

    We are seeing daily improvements to the newly added Radeon RX 5700 "Navi" support in the open-source Linux graphics driver stack. Today brings geometry shader support for the Mesa RADV Vulkan driver.

    AMD's official Vulkan driver, AMDVLK, has yet to publish its (open-source) Navi support but that is hopefully just days away. Meanwhile RADV is off to the races in aiming for good Navi/GFX10 support with the Mesa 19.2 release due out at the end of next month.

  • Wayland's Weston Gets Option To Enable HDCP Support Per-Output

    An Intel open-source developer contributed support to Wayland's reference Weston compositor for enabling HDCP support on a per-output basis using a new allow_hdcp option.

    From the weston.ini configuration file, High-bandwidth Digital Content Protection can be enabled per-output via the "allow_hdcp" option within each output section. HDCP otherwise is always enabled by default for the display outputs.
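A minimal weston.ini sketch based on the option named above (the output name HDMI-A-1 and the boolean value are assumptions; check your compositor's documentation for the exact syntax on your hardware):

```ini
# weston.ini
[output]
name=HDMI-A-1
# HDCP is otherwise enabled by default; opt this output out:
allow_hdcp=false
```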

Programming With Python

Filed under
Development
  • For loop in Django template

    A for loop is used to iterate over any iterable object, accessing one item at a time and making it available inside the loop body.
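As an illustration (standard Django template syntax; the items variable is assumed to be supplied by the view's context):

```
<ul>
{% for item in items %}
    <li>{{ item }}</li>
{% empty %}
    <li>No items available.</li>
{% endfor %}
</ul>
```

The optional {% empty %} clause renders when the iterable is empty.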

  • Creating custom template tags in Django

    Sometimes the existing template tags are not enough for rebellious developers, who need to create their own custom template tags.

  • Python Anywhere: Using our file API

    Our API supports lots of common PythonAnywhere operations, like creating and managing consoles, scheduled and always-on tasks, and websites. We recently added support for reading/writing files; this blog post gives a brief overview of how you can use it to do that.

  • Make an RGB cube with Python and Scribus

    When I decided I wanted to play with color this summer, I thought about the fact that colors are usually depicted on a color wheel. This is usually with pigment colors rather than light, and you lose any sense of the variation in color brightness or luminosity.

    As an alternative to the color wheel, I came up with the idea of displaying the RGB spectrum on the surfaces of a cube using a series of graphs. RGB values would be depicted on a three-dimensional graph with X-, Y-, and Z-axes. For example, a surface would keep B (or blue) at 0 and the remaining axes would show what happens as I plot values as colors for R (red) and G (green) from 0 to 255.

    It turns out this is not very difficult to do using Scribus and its Python Scripter capability. I can create RGB colors, make rectangles showing the colors, and arrange them in a 2D format. I decided to make value jumps of 5 for the colors and make rectangles measuring 5 points on a side. Thus, for each 2D graph, I would make about 2,500 colors, and the cube would measure 250 points to a side, or 3.5 inches.
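The arithmetic above can be sketched in plain Python (a hedged sketch: the actual Scribus drawing calls are omitted, and the 5-unit step over a 250-point side is taken from the passage):

```python
# Enumerate the colors for one face of the cube, holding blue at 0 and
# stepping red and green by 5, matching the 250-point side described above.
STEP = 5
SIDE = 250  # points; 250 / 72 points-per-inch is roughly 3.5 inches

colors = [(r, g, 0)
          for r in range(0, SIDE, STEP)
          for g in range(0, SIDE, STEP)]

print(len(colors))          # 2500 rectangles on this face
print(round(SIDE / 72, 2))  # 3.47 (inches)
```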

  • Wing Python IDE 7.0.4

    Wing 7 introduces an improved code warnings and code quality inspection system that includes built-in error detection and tight integration with Pylint, pep8, and mypy. This release also adds a new data frame and array viewer, a MATLAB keyboard personality, easy inline debug data display with Shift-Space, improved stack data display, support for PEP 3134 chained exceptions, callouts for search and other code navigation features, four new color palettes, improved bookmarking, a high-level configuration menu, magnified presentation mode, a new update manager, stepping over import internals, simplified remote agent installation, and much more.

  • Data School: My top 25 pandas tricks (video)

    In my new pandas video, you're going to learn 25 tricks that will help you to work faster, write better code, and impress your friends. These are the most useful tricks I've learned from 5 years of teaching Python's pandas library.

    Each trick is about a minute long, so you're going to learn a ton of new pandas skills in less than 30 minutes!

  • ODSC webinar: End-to-End Data Science Without Leaving the GPU

    In this webinar sponsored by the Open Data Science Conference (ODSC), I outline a brief history of GPU analytics and the problems that using GPU analytics solves relative to using other parallel computation methods such as Hadoop. I also demonstrate how OmniSci fits into the broader GPU-accelerated data science workflow, with examples provided using Python.

  • Convert hexadecimal number to decimal number with Python program
  • Introduction to unit testing with Python
  • Python 3.7.3 : Three examples with BeautifulSoup.
  • SongSearch autocomplete rate now 2+ per second
  • 2019 PSF Fundraiser - Thank you & debrief
  • PSF GSoC students blogs: Week #6
  • PSF GSoC students blogs: Fourth Blog - GSOC 2019
  • PSF GSoC students blogs: Coding and Communication
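Two of the items above (hexadecimal conversion and unit testing) can be illustrated with one small self-contained sketch; this is not code from the linked posts:

```python
import unittest

def hex_to_decimal(text: str) -> int:
    """Convert a hexadecimal string such as '1A2B' (or '0x1A2B') to an int."""
    digits = "0123456789abcdef"
    value = 0
    for ch in text.lower().removeprefix("0x"):
        value = value * 16 + digits.index(ch)  # ValueError on a non-hex digit
    return value

class HexToDecimalTest(unittest.TestCase):
    def test_matches_builtin(self):
        for s in ("0", "ff", "1A2B", "0xDEAD"):
            self.assertEqual(hex_to_decimal(s), int(s, 16))

    def test_rejects_bad_digit(self):
        with self.assertRaises(ValueError):
            hex_to_decimal("xyz")
```

Run the tests with `python -m unittest` against the module containing this code.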

Alpine 3.10.1 released

Filed under
GNU
Linux

The Alpine Linux project is pleased to announce the immediate availability of version 3.10.1 of its Alpine Linux operating system.

Read more

Sparky Linux 4.11 LXDE

Filed under
Reviews

Today we are looking at Sparky 4.11 LXDE. It comes with the LXDE desktop environment, which Lubuntu previously used. LXDE is no longer in active development, with its last release two years ago, but it is great to still have a supported Linux distro that uses it.

The main feature of this release is that it changed its repository from Debian Stable to Old-Stable, so still Debian 9. That tells me they won't keep it going for long, but it will still be supported for two years, like Debian 9.

It uses about 300 MB of RAM when idling and runs Linux kernel 4.9, which is dated. Playing with the distro, the apps can be a bit slow to open the first time, but it is perfectly workable, and it is a good fit for old machines, or for any machine where you want to devote as many system resources as possible to your work and as few as possible to the system.

Read more

Direct/video: Sparky Linux 4.11 LXDE Run Through

What is Silverblue?

Filed under
Red Hat

Fedora Silverblue is becoming more and more popular inside and outside the Fedora world. So based on feedback from the community, here are answers to some interesting questions about the project. If you have any other Silverblue-related questions, please leave them in the comments section and we will try to answer them in a future article.

Silverblue is a codename for the new generation of the desktop operating system, previously known as Atomic Workstation. The operating system is delivered in images that are created by utilizing the rpm-ostree project. The main benefits of the system are speed, security, atomic updates and immutability.

Read more

Kdenlive 19.04.3 is out

Filed under
KDE

While the team is out for a much deserved summer break the last minor release post-refactoring is out with another huge amount of fixes. The highlights include fixing compositing and speed effect regressions, thumbnail display issues of clips in the timeline and many Windows fixes. With this release we finished polishing the rough edges and now we can focus on adding new features while fixing other small details left. As usual you can get the latest AppImage from our download page.

Speaking of that, the next major release is less than a month away and it already has some cool new features implemented, like changing the speed of a clip with ctrl + resize, and pressing shift while hovering over a clip's thumbnail in the Project Bin to preview it. We’ve also bumped the Qt version to 5.12.4 and updated to the latest MLT. You can grab it from here to test it. Also planned is finishing the 3 point editing workflow and improvements to the speed effect. Stay tuned for more info soon.

Read more

Firefox 68 available now in Fedora

Filed under
Red Hat
Moz/FF

Earlier this week, Mozilla released version 68 of the Firefox web browser. Firefox is the default web browser in Fedora, and this update is now available in the official Fedora repositories.

This Firefox release provides a range of bug fixes and enhancements, including:

  • Better handling when using dark GTK themes (like Adwaita Dark). Previously, running a dark theme could cause issues where user interface elements on a rendered webpage (like forms) were rendered in the dark theme on a white background. Firefox 68 resolves these issues. Refer to these two Mozilla bugzilla tickets for more information.
  • The about:addons special page has two new features to keep you safer when installing extensions and themes in Firefox. First is the ability to report security and stability issues with addons directly in the about:addons page. Additionally, about:addons now has a list of secure and stable extensions and themes that have been vetted by the Recommended Extensions program.

Read more

KDE Applications 19.04 Reaches End of Life, KDE Apps 19.08 Arrives on August 15

Filed under
KDE

Launched on April 18th, 2019, the KDE Applications 19.04 open-source software suite series received a total of three maintenance updates, the last one being released today as KDE Applications 19.04.3, which fixes some remaining issues but also marks the end of life of KDE Applications 19.04.

KDE Applications 19.04.3 brings numerous changes across many of the included applications, but the most important are that the Konqueror and Kontact apps no longer crash on exit when QtWebEngine 5.13 is used, and that the Python importer in the Umbrello UML app now supports parameters with default arguments.

Read more

Also: Applications 19.04.3

Linux 5.3, LWN's Kernel Coverage and the Linux Foundation

Filed under
Linux
  • Linux 5.3 Enables "-Wimplicit-fallthrough" Compiler Flag

    The recent work on enabling "-Wimplicit-fallthrough" behavior for the Linux kernel has culminated in Linux 5.3, where the compiler flag can at last be enabled universally.

    The -Wimplicit-fallthrough flag on GCC7 and newer warns of cases where switch case fall-through behavior could lead to potential bugs / unexpected behavior.

  • EXT4 For Linux 5.3 Gets Fixes & Faster Case-Insensitive Lookups

    The EXT4 file-system updates have already landed for the Linux 5.3 kernel merge window that opened this week.

    For Linux 5.3, EXT4 maintainer Ted Ts'o sent in primarily a hearty serving of fixes, ranging from Coverity warnings being addressed to typos and other items for this mature and widely used Linux file-system.

  • Providing wider access to bpf()

    The bpf() system call allows user space to load a BPF program into the kernel for execution, manipulate BPF maps, and carry out a number of other BPF-related functions. BPF programs are verified and sandboxed, but they are still running in a privileged context and, depending on the type of program loaded, are capable of creating various types of mayhem. As a result, most BPF operations, including the loading of almost all types of BPF program, are restricted to processes with the CAP_SYS_ADMIN capability — those running as root, as a general rule. BPF programs are useful in many contexts, though, so there has long been interest in making access to bpf() more widely available. One step in that direction has been posted by Song Liu; it works by adding a novel security-policy mechanism to the kernel.
    This approach is easy enough to describe. A new special device, /dev/bpf, is added, with the core idea that any process that has the permission to open this file will be allowed "to access most of sys_bpf() features" — though what comprises "most" is never really spelled out. A non-root process that wants to perform a BPF operation, such as creating a map or loading a program, will start by opening this file. It then must perform an ioctl() call (BPF_DEV_IOCTL_GET_PERM) to actually enable its ability to call bpf(). That ability can be turned off again with the BPF_DEV_IOCTL_PUT_PERM ioctl() command.

    Internally to the kernel, this mechanism works by adding a new field (bpf_flags) to the task_struct structure. When BPF access is enabled, a bit is set in that field. If this patch goes forward, that detail is likely to change since, as Daniel Borkmann pointed out, adding an unsigned long to that structure for a single bit of information is unlikely to be popular; some other location for that bit will be found.

  • The io.weight I/O-bandwidth controller

    Part of the kernel's job is to arbitrate access to the available hardware resources and ensure that every process gets its fair share, with "its fair share" being defined by policies specified by the administrator. One resource that must be managed this way is I/O bandwidth to storage devices; if due care is not taken, an I/O-hungry process can easily saturate a device, starving out others. The kernel has had a few I/O-bandwidth controllers over the years, but the results have never been entirely satisfactory. But there is a new controller on the block that might just get the job done.
    There are a number of challenges facing an I/O-bandwidth controller. Some processes may need a guarantee that they will get at least a minimum amount of the available bandwidth to a given device. More commonly in recent times, though, the focus has shifted to latency: a process should be able to count on completing an I/O request within a bounded period of time. The controller should be able to provide those guarantees while still driving the underlying device at something close to its maximum rate. And, of course, hardware varies widely, so the controller must be able to adapt its operation to each specific device.

    The earliest I/O-bandwidth controller allows the administrator to set maximum bandwidth limits for each control group. That controller, though, will throttle I/O even if the device is otherwise idle, causing the loss of I/O bandwidth. The more recent io.latency controller is focused on I/O latency, but as Tejun Heo, the author of the new controller, notes in the patch series, this controller really only protects the lowest-latency group, penalizing all others if need be to meet that group's requirements. He set out to create a mechanism that would allow more control over how I/O bandwidth is allocated to groups.

  • TurboSched: the return of small-task packing

    CPU scheduling is a difficult task in the best of times; it is not trivial to pick the next process to run while maintaining fairness, minimizing energy use, and using the available CPUs to their fullest potential. The advent of increasingly complex system architectures is not making things easier; scheduling on asymmetric systems (such as the big.LITTLE architecture) is a case in point. The "turbo" mode provided by some recent processors is another. The TurboSched patch set from Parth Shah is an attempt to improve the scheduler's ability to get the best performance from such processors.
    Those of us who have been in this field for far too long will, when seeing "turbo mode", think back to the "turbo button" that appeared on personal computers in the 1980s. Pushing it would clock the processor beyond its original breathtaking 4.77MHz rate to something even faster — a rate that certain applications were unprepared for, which is why the "go slower" mode was provided at all. Modern turbo mode is a different thing, though, and it's not just a matter of a missing front-panel button. In short, it allows a processor to be overclocked above its rated maximum frequency for a period of time when the load on the rest of system overall allows it.

    Turbo mode can thus increase the CPU cycles available to a given process, but there is a reason why the CPU's rated maximum frequency is lower than what turbo mode provides. The high-speed mode can only be sustained as long as the CPU temperature does not get too high and, crucially (for the scheduler), the overall power load on the system must not be too high. That, in turn, implies that some CPUs must be powered down; if all CPUs are running, there will not be enough power available for any of those CPUs to go into the turbo mode. This mode, thus, is only usable for certain types of workloads and will not be usable (or beneficial) for many others.

  • EdgeX Foundry Announces Production Ready Release Providing Open Platform for IoT Edge Computing to a Growing Global Ecosystem

    EdgeX Foundry, a project under the LF Edge umbrella organization within the Linux Foundation that aims to establish an open, interoperable framework for edge IoT computing independent of hardware, silicon, application cloud, or operating system, today announced the availability of its “Edinburgh” release. Created collaboratively by a global ecosystem, EdgeX Foundry’s new release is a key enabler of digital transformation for IoT use cases and is a platform for real-world applications both for developers and end users across many vertical markets. EdgeX community members have created a range of complementary products and services, including commercial support, training and customer pilot programs and plug-in enhancements for device connectivity, applications, data and system management and security.

    Launched in April 2017, and now part of the LF Edge umbrella, EdgeX Foundry is an open source, loosely-coupled microservices framework that provides the choice to plug and play from a growing ecosystem of available third party offerings or to augment proprietary innovations. With a focus on the IoT Edge, EdgeX simplifies the process to design, develop and deploy solutions across industrial, enterprise, and consumer applications.

Proprietary Software and Security Failures

Filed under
Security
  • Apple has pushed a silent Mac update to remove hidden Zoom web server

    Apple has released a silent update for Mac users removing a vulnerable component in Zoom, the popular video conferencing app, which allowed websites to automatically add a user to a video call without their permission.

    The Cupertino, Calif.-based tech giant told TechCrunch that the update — now released — removes the hidden web server, which Zoom quietly installed on users’ Macs when they installed the app.

  • Microsoft denies it will move production out of China

    Nikkei had also previously reported in June that Apple is similarly considering moving between 15% and 30% of all iPhone production out of China and has asked its major suppliers to weigh up the costs.

  • Microsoft's reseller chief explains why it's angering some of its partners by taking away a key perk: 'We can't afford to run every single partner's organization for free anymore'

    Gavriella Schuster, corporate vice president and One Commercial Partner channel chief at Microsoft, says that while it cost the company practically nothing to provide partners with traditional software, it would be a significant expense for the company to provide cloud services like Office 365 for free.

  • KRP: At least 1,000 devices compromised in data breach in Lahti

    KRP on Tuesday revealed that its pre-trial investigation shows that the unauthorised access detected in the city’s data systems earlier this summer was an organised attack rather than an error by an individual user.

    The attacker or attackers managed to cause damage by actively spreading malware, compromising at least a thousand devices.

  • GnuPG 2.2.17 released to mitigate attacks on keyservers

    gpg: Ignore all key-signatures received from keyservers. This change is required to mitigate a DoS due to keys flooded with faked key-signatures. The old behaviour can be achieved by adding keyserver-options no-self-sigs-only,no-import-clean to your gpg.conf. [#4607]
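Based on the release note quoted above, restoring the old behaviour is a one-line gpg.conf change (the path shown is the conventional location):

```
# ~/.gnupg/gpg.conf
# Re-enable importing third-party key-signatures from keyservers,
# reverting the 2.2.17 flooding mitigation described above.
keyserver-options no-self-sigs-only,no-import-clean
```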

  • Security updates for Thursday

    Security updates have been issued by Debian (dosbox and openjpeg2), Oracle (dbus and kernel), Scientific Linux (dbus), Slackware (mozilla), and SUSE (fence-agents, libqb, postgresql10, and sqlite3).

  • What Is Zero Trust Architecture?

    Zero Trust architecture might be popular now, but that doesn’t necessarily mean it’s for you. If you find your needs are met by your current security, you may not want to switch. That said, keep in mind that waiting until you have a security breach isn’t an ideal way to evaluate your security.

  • OpenPGP certificate flooding

    A problem with the way that OpenPGP public-key certificates are handled by key servers and applications is wreaking some havoc, but not just for those who own the certificates (and keys)—anyone who has those keys on their keyring and does regular updates will be affected. It is effectively a denial of service attack, but one that propagates differently than most others. The mechanism of this "certificate flooding" is one that is normally used to add attestations to the key owner's identity (also known as "signing the key"), but because of the way most key servers work, it can be used to fill a certificate with "spam"—with far-reaching effects.

    The problems have been known for many years, but they were graphically illustrated by attacks on the keys of two well-known members of the OpenPGP community, Daniel Kahn Gillmor ("dkg") and Robert J. Hansen ("rjh"), in late June. Gillmor first reported the attack on his blog. It turned out that someone had added multiple bogus certifications (or attestations) to his public key in the SKS key server pool; an additional 55,000 certifications were added, bloating his key to 17MB in size. Hansen's key got spammed even worse, with nearly 150,000 certifications—the maximum number that the OpenPGP protocol will support.

    The idea behind these certifications is to support the "web of trust". If user Alice believes that a particular key for user Bob is valid (because, for example, they sat down over beers and verified that), Alice can so attest by adding a certification to Bob's key. Now if other users who trust Alice come across Bob's key, they can be reasonably sure that the key is Bob's because Alice (cryptographically) said so. That is the essence of the web of trust, though in practice, it is often not really used to do that kind of verification outside of highly technical communities. In addition, anyone can add a certification, whether they know the identity of the key holder or not.
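    The certification check described above can be sketched as a toy model (purely illustrative; real OpenPGP trust computation involves signature verification, ownertrust levels, and path lengths):

```python
# Toy web-of-trust model: a key looks valid to us if at least one
# introducer we trust has certified ("signed") it.
certifications = {
    "bob": {"alice", "mallory"},   # users who have signed Bob's key
}
trusted_introducers = {"alice"}    # we accept Alice's certifications

def key_seems_valid(key_owner):
    signers = certifications.get(key_owner, set())
    return bool(signers & trusted_introducers)

print(key_seems_valid("bob"))    # True: Alice certified Bob's key
print(key_seems_valid("carol"))  # False: nobody we trust signed it
```

    Note that nothing in this model stops Mallory from adding certifications too, which is exactly the loophole that certificate flooding exploits.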

  • FinSpy Malware ‘Returns’ To Steal Data On Both Android And iOS

    As per the researchers, the spyware was again active in 2018, and the latest activity was spotted in Myanmar in June 2019. These implants are capable of collecting personal information such as SMS messages, emails, calendar entries, device location, multimedia files, and even messages from some popular social media apps.

    If you are an iOS user, the implant has only been observed to work on jailbroken devices. If an iOS device is already jailbroken, this spyware can be remotely installed via different mediums such as messaging or email. However, the implants have not been observed on the latest version of iOS.

  • New FinSpy iOS and Android implants revealed ITW

    FinSpy is spyware made by the German company Gamma Group. Through its UK-based subsidiary Gamma International, Gamma Group sells FinSpy to government and law enforcement organizations all over the world. FinSpy is used to collect a variety of private user information on various platforms. Its implants for desktop devices were first described in 2011 by Wikileaks, and mobile implants were discovered in 2012. Since then, Kaspersky has continuously monitored the development of this malware and the emergence of new versions in the wild. According to our telemetry, several dozen unique mobile devices have been infected over the past year, with recent activity recorded in Myanmar in June 2019. Late in 2018, experts at Kaspersky looked at the functionally latest versions of FinSpy implants for iOS and Android, built in mid-2018. Mobile implants for iOS and Android have almost the same functionality. They are capable of collecting personal information such as contacts, SMS/MMS messages, emails, calendars, GPS location, photos, files in memory, phone call recordings and data from the most popular messengers.

Fedora: Google Code-in, Python and NeuroFedora

Filed under
Red Hat
  • Fedora Community Blog: GCI 2018 mentor’s summit @ Google headquarters

    Google Code-in is a contest to introduce students (ages 13-17) to open source software development. Since 2010, 8,108 students from 107 countries have completed over 40,100 open source tasks. Because Google Code-in is often the first experience many students have with open source, the contest is designed to make it easy for students to jump right in. This was Fedora's first time participating in the program, and I was one of the mentors. We had 125 students participating in Fedora, and the top three students completed 26, 25, and 22 tasks each.

    Every year Google invites the Grand-Prize winners, their parents, and a mentor to its headquarters in San Francisco, California for a four-day trip. I was offered the opportunity to represent Fedora at the summit and meet these two brilliant folks in person. This report covers the activities and other things that happened there.

  • Fedora mulls its "python" version

    There is no doubt that the transition from Python 2 to Python 3 has been a difficult one, but Linux distributions have been particularly hard hit. For many people, that transition is largely over; Python 2 will be retired at the end of this year, at least by the core development team. But distributions will have to support Python 2 for quite a while after that. As part of any transition, the version that gets run from the python binary (or symbolic link) is something that needs to be worked out. Fedora is currently discussing what to do about that for Fedora 31.

    Fedora program manager Ben Cotton posted a proposal to make python invoke Python 3 in Fedora 31 to the Fedora devel mailing list. The proposal, titled "Python means Python 3", is also on the Fedora wiki. The idea is that wherever "python" is used it will refer to version 3, including when it is installed by DNF (i.e. dnf install python) or when Python packages are installed, so installing "python-requests" will install the Python 3 version of the Requests library. In addition, a wide array of associated tools (e.g. pip, pylint, idle, and flask) will also use the Python 3 versions.

    The "Requests" link above does point to a potential problem area, however. It shows that Requests 3, the Python 3 version, is not fully finished, with an expected release sometime "before PyCon 2020" (mid-April 2020), which is well after the expected October 2019 release of Fedora 31. The distribution already has a python3-requests package, though, so that will be picked up as python-requests in Fedora 31 if this proposal is adopted. There may be other packages out there where Python 3 support is not complete but, at this point, most of the major libraries have converted.
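    To see what your own system does today, a small illustrative check (not part of the Fedora proposal) is easy to write:

```python
import shutil
import subprocess

# Find whatever "python" currently resolves to on PATH; under the Fedora 31
# proposal it would be Python 3, on older systems Python 2 or nothing at all.
path = shutil.which("python")
if path is None:
    print("no 'python' executable on PATH")
else:
    proc = subprocess.run(
        [path, "-c", "import sys; print(sys.version_info.major)"],
        capture_output=True, text=True,
    )
    print(f"'python' is {path}, major version {proc.stdout.strip()}")
```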

  • NeuroFedora poster at CNS*2019

    With CNS*2019 around the corner, we worked on getting the NeuroFedora poster ready for the poster presentation session. Our poster is P96, on the first poster session on the 14th of July.

    [...]

    Unfortunately, this time, no one from the team is able to attend the conference, but if you are there and want to learn more about NeuroFedora, please get in touch with us using any of our communication channels.

    To everyone that will be in Barcelona for the conference, we hope you have a fruitful one, and of course, we hope you are able to make some time to rest at the beach too.


More in Tux Machines

Operating-System-Directed Power-Management (OSPM) Summit

  • The third Operating-System-Directed Power-Management summit

    The third edition of the Operating-System-Directed Power-Management (OSPM) summit was held May 20-22 at the ReTiS Lab of the Scuola Superiore Sant'Anna in Pisa, Italy. The summit is organized to collaborate on ways to reduce the energy consumption of Linux systems, while still meeting performance and other goals. It is attended by scheduler, power-management, and other kernel developers, as well as academics, industry representatives, and others interested in the topics.

  • The future of SCHED_DEADLINE and SCHED_RT for capacity-constrained and asymmetric-capacity systems

    The kernel's deadline scheduling class (SCHED_DEADLINE) enables realtime scheduling where every task is guaranteed to meet its deadlines. Unfortunately SCHED_DEADLINE's current view on CPU capacity is far too simple. It doesn't take dynamic voltage and frequency scaling (DVFS), simultaneous multithreading (SMT), asymmetric CPU capacity, or any kind of performance capping (e.g. due to thermal constraints) into consideration. In particular, if we consider running deadline tasks in a system with performance capping, the question is "what level of guarantee should SCHED_DEADLINE provide?".

    An interesting discussion about the pros and cons of different approaches (weak, hard, or mixed guarantees) developed during this presentation. There were many different views but the discussion didn't really conclude and will have to be continued at the Linux Plumbers Conference later this year.

    The topic of guaranteed performance will become more important for mobile systems in the future as performance capping is likely to become more common. Defining hard guarantees is almost impossible on real systems since silicon behavior very much depends on environmental conditions. The main pushback on the existing scheme is that the guaranteed bandwidth budget might be too conservative. Hence SCHED_DEADLINE might not allow enough bandwidth to be reserved for use cases with higher bandwidth requirements that can tolerate bandwidth reservations not being honored.

  • Scheduler behavioral testing

    Validating scheduler behavior is a tricky affair, as multiple subsystems both compete and cooperate with each other to produce the task placement we observe. Valentin Schneider from Arm described the approach taken by his team (the folks behind energy-aware scheduling — EAS) to tackle this problem.

  • CFS wakeup path and Arm big.LITTLE/DynamIQ

    "One task per CPU" workloads, as emulated by multi-core Geekbench, can suffer on traditional two-cluster big.LITTLE systems due to the fact that tasks finish earlier on the big CPUs. Arm has introduced a more flexible DynamIQ architecture that can combine big and LITTLE CPUs into a single cluster; in this case, early products apply what's known as phantom scheduler domains (PDs). The concept of PDs is needed for DynamIQ so that the task scheduler can use the existing big.LITTLE extensions in the Completely Fair Scheduler (CFS) scheduler class.

    Multi-core Geekbench consists of several tests during which N CFS tasks perform an equal amount of work. The synchronization mechanism pthread_barrier_wait() (i.e. a futex) is used to wait for all tasks to finish their work in test T before starting the tasks again for test T+1. The problem for Geekbench on big.LITTLE is related to the grouping of big and LITTLE CPUs in separate scheduler (or CPU) groups of the so-called die-level scheduler domain. The two groups exist because the big CPUs share a last-level cache (LLC) and so do the LITTLE CPUs. This isn't true any more for DynamIQ, hence the use of the "phantom" notion here.

    The tasks of test T finish earlier on big CPUs and go to sleep at the barrier B. Load balancing then makes sure that the tasks on the LITTLE CPUs migrate to the big CPUs where they continue to run the rest of their work in T before they also go to sleep at B. At this moment, all the tasks in the wake queue have a big CPU as their previous CPU (p->prev_cpu). After the last task has entered pthread_barrier_wait() on a big CPU, all tasks on the wake queue are woken up.
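    The synchronization pattern described above maps directly onto Python's threading.Barrier (a sketch of the pattern only, not Geekbench's actual code; the task and test counts are made up):

```python
import threading

NUM_TASKS = 4   # N tasks performing an equal amount of work
NUM_TESTS = 3   # tests T, T+1, ...
completed = []

# Every task must reach the barrier before any proceeds to the next test,
# mirroring the pthread_barrier_wait() usage described above: the last
# arrival wakes all the waiters at once.
barrier = threading.Barrier(NUM_TASKS)

def worker(task_id):
    for test in range(NUM_TESTS):
        # ... perform this task's share of test `test`'s work here ...
        barrier.wait()
        completed.append((task_id, test))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(NUM_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(completed))  # 12: every task completed every test
```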

  • I-MECH: realtime virtualization for industrial automation

    The typical systems used in industrial automation (e.g. for axis control) consist of a "black box" executing a commercial realtime operating system (RTOS) plus a set of control design tools meant to be run on a different desktop machine. This approach, besides imposing expensive royalties on the system integrator, often does not offer the desired degree of flexibility for testing/implementing novel solutions (e.g., running both control code and design tools on the same platform).

  • Virtual-machine scheduling and scheduling in virtual machines

    As is probably well known, a scheduler is the component of an operating system that decides which CPU the various tasks should run on and for how long they are allowed to do so. This happens when an OS runs on the bare hardware of a physical host, and it is also the case when the OS runs inside a virtual machine. The only difference is that, in the latter case, the OS scheduler marshals tasks among virtual CPUs. And what are virtual CPUs? Well, in most platforms they are also a kind of special task and they want to run on some CPUs ... therefore we need a scheduler for that!

    This is usually called the "double-scheduling" property of systems employing virtualization because, well, there literally are two schedulers: one — let us call it the host scheduler, or the hypervisor scheduler — that schedules the virtual CPUs on the host physical CPUs; and another one — let us call it the guest scheduler — that schedules the guest OS's tasks on the guest's virtual CPUs.

    Now what are these two schedulers? That depends on the virtualization platform. They are always different, in the sense that it will never happen that, at runtime, a scheduler has to deal with scheduling virtual CPUs and also scheduling tasks that want to run on those same virtual CPUs (well, it can happen, but then you are not doing virtualization). They can be the same, in terms of code, or they can be completely different in that respect as well.

  • Rock and a hard place: How hard it is to be a CPU idle-time governor

    In the opening session of OSPM 2019, Rafael Wysocki from Intel gave a talk about potential problems faced by the designers of CPU idle-time-management governors, which was inspired by his own experience from the timer-events oriented (TEO) governor work done last year. In the first place, he said, it should be noted that "CPU idleness" is defined at the level of logical CPUs, which may be CPU cores or simultaneous multithreading (SMT) threads, depending on the hardware configuration of the processor. In Linux, a logical CPU is idle when there are no runnable tasks in its queue, so it falls back to executing the idle task associated with it (there is one idle task for each logical CPU in the system, but they all share the same code, which is the idle loop). Therefore "CPU idleness" is an OS (not hardware) concept and if the idle loop is entered by a CPU, there is an opportunity to save some energy with a relatively small impact on performance (or even without any impact on performance at all) — if the hardware supports that. The idle loop runs on each idle CPU and it only takes this particular CPU into consideration. As a rule, two code modules are invoked in every iteration of it. The first one, referred to as the CPU idle-time-management governor, is responsible for deciding whether or not to stop the scheduler tick and what to tell the hardware to do; the second one, called the CPU idle-time-management driver, passes the governor's decisions down to the hardware, usually in an architecture- or platform-specific way. Then, presumably, the processor enters a special state in which the CPU in question stops fetching instructions (that is, it does literally nothing at all); that may allow the processor's power draw to be reduced and some energy to be saved as a result. If that happens, the processor needs to be woken up from that state by a hardware event after spending some time, referred to as the idle duration, in it. 
    At that point, the governor is called again so it can save the idle-duration value for future use.

Red Hat/IBM and Fedora Leftovers

  • An introduction to cloud-native CI/CD with Red Hat OpenShift Pipelines

    Red Hat OpenShift 4.1 offers a developer preview of OpenShift Pipelines, which enable the creation of cloud-native, Kubernetes-style continuous integration and continuous delivery (CI/CD) pipelines based on the Tekton project. In a recent article on the Red Hat OpenShift blog, I provided an introduction to Tekton and pipeline concepts and described the benefits and features of OpenShift Pipelines. OpenShift Pipelines builds upon the Tekton project to enable teams to build Kubernetes-style delivery pipelines that they fully control, owning the complete lifecycle of their microservices without having to rely on central teams to maintain and manage a CI server, its plugins, and their configurations.
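    As a flavor of what a Tekton building block looks like, here is a minimal Task definition (illustrative only; the task name, image, and command are made up, and the apiVersion available depends on your OpenShift release):

```yaml
apiVersion: tekton.dev/v1alpha1     # developer-preview era API version
kind: Task
metadata:
  name: run-tests                   # hypothetical task name
spec:
  steps:
    - name: test
      image: golang:1.12            # any image carrying your toolchain
      command: ["go", "test", "./..."]
```

    Tasks like this are then chained into a Pipeline resource, which is what OpenShift Pipelines runs on the cluster.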

  • IBM's New Open Source Kabanero Promises to Simplify Kubernetes for DevOps

    At OSCON, IBM unveiled a new open source platform that promises to make Kubernetes easier to manage for DevOps teams.

  • MySQL for developers in Red Hat OpenShift

    As a software developer, it’s often necessary to access a relational database—or any type of database, for that matter. If you’ve been held back by that situation where you need to have someone in operations provision a database for you, then this article will set you free. I’ll show you how to spin up (and wipe out) a MySQL database in seconds using Red Hat OpenShift. Truth be told, there are several databases that can be hosted in OpenShift, including Microsoft SQL Server, Couchbase, MongoDB, and more. For this article, we’ll use MySQL. The concepts, however, will be the same for other databases. So, let’s get some knowledge and leverage it.

  • What you need to know to be a sysadmin

    The system administrator of yesteryear jockeyed users and wrangled servers all day, in between mornings and evenings spent running hundreds of meters of hundreds of cables. This is still true today, with the added complexity of cloud computing, containers, and virtual machines. Looking in from the outside, it can be difficult to pinpoint what exactly a sysadmin does, because they play at least a small role in so many places. Nobody goes into a career already knowing everything they need for a job, but everyone needs a strong foundation. If you're looking to start down the path of system administration, here's what you should be concentrating on in your personal or formal training.

  • Building blocks of syslog-ng

    Recently I gave a syslog-ng introductory workshop at Pass the SALT conference in Lille, France. I got a lot of positive feedback, so I decided to turn all that feedback into a blog post. Naturally, I shortened and simplified it, but still managed to get enough material for multiple blog posts.

  • PHP version 7.2.21RC1 and 7.3.8RC1

    Release Candidate versions are available in testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, perfect solution for such tests (for x86_64 only), and also as base packages. RPM of PHP version 7.3.8RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30 or remi-php73-test repository for Fedora 28-29 and Enterprise Linux. RPM of PHP version 7.2.21RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 28-29 or remi-php72-test repository for Enterprise Linux.

  • QElectroTech version 0.70

    RPM of QElectroTech version 0.70, an application to design electric diagrams, are available in remi for Fedora and Enterprise Linux 7. A bit more than one year after the version 0.60 release, the project has just released a new major version of its electric diagrams editor.

Endeavour OS 2019.07.15

Today we are looking at the first stable release of Endeavour OS, a project started to continue the spirit of the recently discontinued Antergos. The development team consists of former Antergos developers and community members. As you can see in this first stable release, it is far from just a continuation of Antergos as we know it. The stable release ships an offline Calamares installer and comes with a customized XFCE desktop environment. The team plans to offer an online installer again in the future, which will let users choose between 10 desktop environments, similar to Antergos. It is based on Arch, Linux kernel 5.2, and XFCE 4.14 pre2, and it uses about 500MB of RAM. Read more Direct/video: Endeavour OS 2019.07.15 Run Through

Linux File Manager: Top 20 Reviewed for Linux Users

A file manager is among the most used software on any digital platform. With the help of this software, you can access, manage, and organize the files on your device. On a Linux system, it is just as important to have an effective and simple file manager. In this curated article, we discuss a set of the best Linux file manager tools that will help you operate your system effectively. Read more