Games: Small Mode returns to Steam, Chasm and Summer Daze at Hero-U

Filed under
Gaming
  • Small Mode returns to Steam, Broadcast Settings appear on Linux and more on Steam Cloud Gaming

    Valve continue upgrading the experience for the new Steam Library with another Beta update available now.

    I know plenty of people missed Small Mode; the good news is that it has returned! If you go to View -> Small Mode in the top menu, it will now correctly switch to it. It has also been updated, so you can view your Collections in it. Don't know what Small Mode is?

    [...]

    For the Linux client, Valve updated vaapi decoding for libva2 compatibility, applied some fixes to free disk space checking due to issues with some NFS mounts, and fixed Steam Input's F12 binding as well. See the full changelog here.

  • Sweet action-adventure Chasm is now available on itch.io

    Bit Kid have recently put up their successfully crowdfunded action-adventure game Chasm on itch.io. The release was announced a few days ago, and it's good to see more developers support the very indie-friendly store.

    In Chasm you play as a new recruit taking on your first mission for the Guildean Kingdom. You investigate various rumours about a vital mine being shut down, but what you discover is worse than you had imagined. The whole town is empty, its people kidnapped by supernatural creatures emerging from the depths. That's the basic setup anyway, although each play-through will be different thanks to the randomized map.

  • Summer Daze at Hero-U is successfully funded and on the way to Linux

    Summer Daze at Hero-U, the prequel to Hero-U: Rogue to Redemption from Lori and Corey Cole, has been funded on Kickstarter, so that's another game on the way to Linux.

    Their campaign ended a few days ago with $106,155 in funding (just over their 99k goal), showing that there are plenty of gamers out there interested in a Visual Novel that mixes in light RPG and adventure game elements. It did look a bit touch-and-go a few days before the end; thankfully, they got a good boost at the end of the campaign to push it over.

Bite the Bullet

  • Learn about eating enemies in the new Bite the Bullet trailer

    The upcoming run and gun game Bite the Bullet from Mega Cat Studios is looking really good in the latest feature trailer. Currently in development and due to release in Q1 2020, Bite the Bullet is a fast-paced action platformer RPG with diet-based skill trees.

    You work for DarwinCorp, with a mission to collect genetic material from every possible species. A lot of species don't seem happy about it, so DarwinCorp sends you in to collect their genetic data with brute force. What DarwinCorp don't know is that you collect it by eating your targets as a half-Human, half-Ghoul. When you eat, you gain access to new abilities, and you can transform and unlock more. It's a gross idea, but somewhat amusing too.

More in Tux Machines

Audiocasts/Shows/Screencasts: System76, Linux Headlines and More

  • The System76 Superfan III Event: Gardiner and Jay Chat About Their AWESOME Experience There

    The System76 Superfan III event occurred on November 16th, 2019, and it was a ton of fun! Gardiner Bryant and I talk about our experience there, some of the things they revealed, and other geeky topics around System76 and their computers.

  • 2019-11-18 | Linux Headlines

    The Oracle vs. Google copyright case goes to the Supreme Court, NextCry attacks Nextcloud servers, Chromebooks prepare to use LVFS, and Debian takes the systemd debate to the next level.

  • Things are Looking Pod-tastic | Fall Time Blathering

    I started to produce some video content on YouTube and this site to enhance some of my content, and later I thought I would cut my teeth on a podcast of my own to talk about the nerdy things I enjoy. My recurring topics consist of my additional thoughts about a subject or two from the last BDLL show and an openSUSE corner, but truth be told, openSUSE weaves itself throughout my “noodlings”. In September of 2019, the formation of the Destination Linux Network was announced, in which these well-established content creators have pooled their resources to draw together their somewhat discrete communities and provide a forum for interaction in greater depth than what Telegram, Discord or YouTube can provide on their own.

  • Test and Code: 94: The real 11 reasons I don't hire you - Charity Majors

    If you get the job, and you enjoy the work, awesome, congratulations. If you don't get the job, it'd be really great to know why. Sometimes it isn't because you aren't a skilled engineer. What other reasons are there? Well, that's what we're talking about today. Charity Majors is the cofounder and CTO of Honeycomb.io, and we're going to talk about reasons for not hiring someone. This is a very informative episode, both for people who will be job hunting in the future and for hiring managers and people on the interview team.

  • Bluestar Linux 5.3.11 Run Through

    In this video, we are looking at Bluestar Linux 5.3.11. 

Fedora: Fedora Toolbox, Building Successful Products, Nano Promoted and Apparel

  • Fedora Toolbox. Unprivileged development environment at maximum

    Fedora Toolbox is a tool for developing and debugging software that is basically a frontend to the Podman container system. It's a simple way to test applications without pulling in billions of dependencies and cluttering up your operating system. First, Podman (Pod Manager tool) is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. With Podman, you can manage pods, containers, and container images. You can consult the official website (Podman.io) to learn more about Podman and container tooling. Fedora Toolbox gives you a quick frontend to Podman, and it also creates an interactive container based on your current system. Toolbox (actually, Fedora Toolbox is now just Toolbox) is particularly useful for development and testing environments.

  • Building Successful Products

    Building a new product is hard. Building a successful new product is even harder. And building a profitable new product is the greatest challenge! To make things even more interesting, the fundamental customer requirements for a product change as the product and market mature. The very things that are required for success in an early stage product will hinder or even prevent success later on. Markets, technologies and products go through a series of predictable stages. Understanding this evolution – and understanding what to do at each stage! – is vital for navigating the shoals of building a successful and profitable product.

  • Fedora Developers Looking To Change The Default Text Editor From Vi To Nano

    Fedora will be adding the Nano text editor to their default Fedora Workstation installs as a complement to Vi, but their stakeholders intend to submit a system-wide proposal that would change the default installed editor from Vi to Nano. The Fedora Workstation flavor can add the Nano text editor to its spins by default without replacing Vi as the default terminal-based text editor. At today's Fedora Workstation meeting, they refrained from trying to change the default text editor just for Fedora Workstation and instead will issue a system-wide proposal to change it to Nano for all of Fedora's spins.

  • Fedora shirts and sweatshirts from HELLOTUX

    Linux clothes specialist HELLOTUX from Europe recently signed an agreement with Red Hat to make embroidered Fedora t-shirts, polo shirts and sweatshirts. They have been making Debian, Ubuntu, openSUSE, and other Linux shirts for more than a decade, and now the collection has been extended to Fedora.

Games: Valve, Half-Life, and Counter-Strike: Global Offensive

  • Valve Announcing Half-Life: Alyx VR Game On Thursday

    Valve has confirmed recent rumors that one of their new virtual reality games in development is Half-Life: Alyx. Valve tweeted out a short time ago that Half-Life: Alyx will be announced on Thursday. However, the VR game isn't expected to ship until sometime in 2020.

  • Valve has now confirmed Half-Life: Alyx, their new VR flagship title

    Well, that was a little sooner than expected. Valve have now officially confirmed Half-Life is back with their VR title Half-Life: Alyx.

  • Counter-Strike: Global Offensive releases the huge Operation Shattered Web update

    Not content with just announcing Half-Life: Alyx, their new VR flagship title, Valve also updated Counter-Strike: Global Offensive with a big new operation called Shattered Web. I have to admit, I'm really loving the humour from whoever has been running the CS:GO Twitter account lately. Earlier today they put up a poll on Twitter, asking whether people preferred a new Operation or a weapon nerf. They then quickly replied with "Loud and clear, Twitter. We'll get started." and then minutes later "OK, we're done". Brilliant. Not great for me, mind you; being in the UK the timings are never great, and it's now gone midnight, but here I am…

Supercomputing Articles

  • Exascale meets hyperscale: How high-performance computing is transitioning to cloud-like environments

    Twice a year the high-performance computing (HPC) community anxiously awaits the announcement of the latest edition of the Top500 list, cataloging the most powerful computers on the planet. The excitement of a supercomputer breaking the coveted exascale barrier and moving into the top position typically overshadows the question of which country will hold the record. As it turned out, the top 10 systems on the November 2019 Top500 list are unchanged from the previous revision, with Summit and Sierra still holding the #1 and #2 positions, respectively. Despite the natural uncertainty around the composition of the Top500 list, there is little doubt about the software technologies that are helping to reshape the HPC landscape. Starting at the International Supercomputing Conference earlier this year, containerization has emerged as one of the technologies leading this charge, lending further credence to how traditional enterprise technologies are influencing the next generation of supercomputing applications. Containers are born out of Linux, the operating system underpinning Top500 systems. Because of that, the adoption of container technologies has gained momentum, and many supercomputing sites already have some portion of their workflows containerized. As more supercomputers are being used to run artificial intelligence (AI) and machine learning (ML) applications to solve complex problems in science (including disciplines like astrophysics, materials science, systems biology, weather modeling and cancer research), the focus of the research is transitioning from purely computational methods to AI-accelerated approaches. This often requires repackaging applications and restaging the data for easier consumption, which is where containerized deployments are becoming more and more important.

  • Exploring AMD’s Ambitious ROCm Initiative

    Three years ago, AMD released the innovative ROCm hardware-accelerated, parallel-computing environment [1] [2]. Since then, the company has continued to refine its bold vision for an open source, multiplatform, high-performance computing (HPC) environment. Over the past three years, ROCm developers have contributed many new features and components to the ROCm open software platform. ROCm is a universal platform for GPU-accelerated computing. A modular design lets any hardware vendor build drivers that support the ROCm stack [3]. ROCm also integrates multiple programming languages and makes it easy to add support for other languages. ROCm even provides tools for porting vendor-specific CUDA code into a vendor-neutral ROCm format, which makes the massive body of source code written for CUDA available to AMD hardware and other hardware environments.
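
    As a quick, hedged illustration of the HIP runtime that sits underneath ROCm, here is a tiny device-enumeration sketch of our own (not code from the article); the same vendor-neutral calls report whatever GPUs the stack exposes, whichever backend is in use:

        // rocm_devices.cpp - illustrative HIP device query (our sketch, not from the article).
        // Build with: hipcc rocm_devices.cpp -o rocm_devices
        #include <hip/hip_runtime.h>
        #include <cstdio>

        int main() {
            int count = 0;
            if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
                printf("No HIP-capable devices found.\n");
                return 1;
            }
            for (int i = 0; i < count; ++i) {
                hipDeviceProp_t prop;
                hipGetDeviceProperties(&prop, i);   // same call regardless of vendor backend
                printf("GPU %d: %s, %zu MB, %d compute units\n",
                       i, prop.name, prop.totalGlobalMem / (1024 * 1024),
                       prop.multiProcessorCount);
            }
            return 0;
        }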

  • High-Performance Python – GPUs

    When GPUs became available, C code via CUDA, a parallel computing platform and programming model developed by Nvidia for GPUs, was the logical language of choice. Since then, Python has become the tool of choice for machine learning, deep learning, and, to some degree, scientific code in general. Not long after the release of CUDA, the Python world quickly created tools for use with GPUs. As with any new technology, a plethora of tools emerged to integrate Python with GPUs. For some time, the tools and libraries were adequate, but soon they started to show their age. The biggest problem was incompatibility. If you used a tool to write code for the GPU, no other tools could read or use the data on the GPU. After computations were made on the GPU with one tool, the data had to be copied back to the CPU. Then a second tool had to copy the data from the CPU to the GPU before commencing its computations. The data movement between the CPU and the GPU really affected overall performance. However, these tools and libraries allowed people to write functions that worked with Python. In this article, I discuss the Python GPU tools that are being actively developed and, more importantly, likely to interoperate. With some tools you don't need to know CUDA to write GPU code, while others require knowing CUDA for custom Python kernels.

  • Porting CUDA to HIP

    You’ve invested money and time in writing GPU-optimized software with CUDA, and you’re wondering if your efforts will have a life beyond the narrow, proprietary hardware environment supported by the CUDA language. Welcome to the world of HIP, the HPC-ready universal language at the core of AMD’s all-open ROCm platform [1]. You can use HIP to write code once and compile it for either the Nvidia or AMD hardware environment. HIP is the native format for AMD’s ROCm platform, and you can compile it seamlessly using the open source HIP/Clang compiler. Just add CUDA header files, and you can also build the program with CUDA and the NVCC compiler stack (Figure 1).
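
    To make the write-once, compile-for-either-vendor idea a little more concrete, here is a minimal vector-add sketch of our own (names and sizes are arbitrary, not taken from the article); hipcc builds it for AMD hardware, and the same source can also go through the CUDA/NVCC path the article describes:

        // vector_add_hip.cpp - minimal HIP sketch (illustrative, not from the article).
        // Build for AMD: hipcc vector_add_hip.cpp -o vector_add
        #include <hip/hip_runtime.h>
        #include <cstdio>
        #include <vector>

        __global__ void vector_add(const float* a, const float* b, float* c, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
            if (i < n) c[i] = a[i] + b[i];
        }

        int main() {
            const int n = 1 << 20;
            std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

            float *da, *db, *dc;
            hipMalloc((void**)&da, n * sizeof(float));       // device buffers
            hipMalloc((void**)&db, n * sizeof(float));
            hipMalloc((void**)&dc, n * sizeof(float));
            hipMemcpy(da, a.data(), n * sizeof(float), hipMemcpyHostToDevice);
            hipMemcpy(db, b.data(), n * sizeof(float), hipMemcpyHostToDevice);

            const int threads = 256;
            const int blocks = (n + threads - 1) / threads;
            hipLaunchKernelGGL(vector_add, dim3(blocks), dim3(threads), 0, 0, da, db, dc, n);

            hipMemcpy(c.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
            printf("c[0] = %f (expect 3.0)\n", c[0]);

            hipFree(da); hipFree(db); hipFree(dc);
            return 0;
        }

    Apart from the hip* prefixes, this is essentially the structure a CUDA version would have, which is why the fairly mechanical ports the article mentions are feasible.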

  • OpenMP – Coding Habits and GPUs

    When first using a new programming tool or programming language, it’s always good to develop some good general habits. Everyone who codes with OpenMP directives develops their own habits – some good and some perhaps not so good. As this three-part OpenMP series finishes, I highlight best practices from the previous articles that can lead to good habits. Enamored with new things, especially those that drive performance and scalability, I can’t resist throwing a couple more new directives and clauses into the mix. After covering these new directives and clauses, I will briefly discuss OpenMP and GPUs. This pairing is fairly recent, and compilers are still catching up to the newer OpenMP standards, but it is important for you to understand that you can run OpenMP code on targeted offload devices (e.g., GPUs).
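
    For anyone curious what the target offload directives look like in practice, here is a small illustrative sketch of our own (not from the article): a SAXPY loop marked up for a GPU. Whether it actually runs on the device depends on building with an offload-capable compiler:

        // saxpy_omp_target.cpp - illustrative OpenMP offload sketch (not from the article).
        // Needs a compiler built with GPU offload support, e.g.
        //   clang++ -fopenmp -fopenmp-targets=<your-gpu-triple> saxpy_omp_target.cpp
        #include <cstdio>
        #include <vector>

        int main() {
            const int n = 1 << 20;
            const float a = 2.0f;
            std::vector<float> x(n, 1.0f), y(n, 3.0f);
            float* xp = x.data();
            float* yp = y.data();

            // map(to:) copies the read-only input to the device, map(tofrom:) copies y
            // both ways; teams/distribute/parallel for spreads iterations across the GPU.
            #pragma omp target teams distribute parallel for map(to: xp[0:n]) map(tofrom: yp[0:n])
            for (int i = 0; i < n; ++i) {
                yp[i] = a * xp[i] + yp[i];
            }

            printf("y[0] = %f (expect 5.0)\n", y[0]);
            return 0;
        }

    If no offload device is available, the construct simply falls back to running on the host, which makes it a low-risk habit to adopt.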

  • News and views on the GPU revolution in HPC and Big Data:

    Exploring AMD's Ambitious ROCm Initiative
    Porting CUDA to HIP
    Python with GPUs
    OpenMP – Coding Habits and GPUs