Planet KDE - http://planetKDE.org/

KDE Onboarding Sprint Report

Wednesday 24th of July 2019 07:08:36 PM

These are my notes from the onboarding sprint. I had to miss the evenings, because I’m not very fit at the moment, so this is just my impression from the days I was able to be around, and from what I’ve been trying to do myself.

Day 1

The KDE Onboarding Sprint happened in Nuremberg on 22 and 23 July. The goal of the sprint was to make it easier to get started working on existing projects in the KDE community: more specifically, this sprint was held to work on the technical side of the developer story. Of course, onboarding in the wider sense also means having excellent documentation and a place for newcomers to ask questions, both of which need to be easy to find.

Ideally, an interested newcomer would be able to start work without having to bother building (many) dependencies and without needing the terminal at first, would be able to start improving libraries like KDE Frameworks as a next step, and would be able to create a working and installable release of their work, to use or to share.

Other platforms have this story down pat, with the proviso that these platforms promote greenfield development, not extending existing projects, as well as working within the existing platform framework, without additional dependencies:

  • Apple: download XCode, and you can get started.
  • Windows: download Visual Studio, and you are all set.
  • Qt: download the Qt installer with Qt Creator, and documentation, examples and project templates are all there, no matter which platform you develop for: macOS, Windows, Linux, Android or iOS.

GNOME Builder is also a one-download, get-started offering. But Builder adds additional features to the three above: it can download and build extra dependencies (with atrocious user-feedback, it has to be said), and it offers a list of existing GNOME projects to start hacking on. (Note: I do not know what happens when getting Builder on a system that lacks git, cmake or meson.)

KDE has nothing like this at the moment. Impressive as the kdesrc-build scripts are technically (thanks go to Michael Pyne for giving an in-depth presentation), they are not newcomer-friendly: the syntax is complicated, configuration files are required, and everything depends on working from the terminal (a sketch of a typical configuration follows the list below). KDE also has much more diversity in its projects than GNOME:

  • Unlike GNOME, KDE software is cross-platform — though note that not every person present at the sprint was convinced of that, even dismissing KDE applications ported to Windows as “not used in the real world”.
  • Part of KDE is the Frameworks set of additional Qt libraries that are used in many KDE projects.
  • Some KDE projects, like Krita, build from a single git repository; some build from dozens of repositories, where adding one feature means working in half a dozen repositories at the same time; and some, like Plasma, replace the entire desktop on Linux.
  • Some KDE projects are also deployed to mobile systems (iOS, Android, Plasma Mobile).
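
To illustrate that learning curve, a minimal kdesrc-build configuration already looks something like this (a sketch only; paths and module names are illustrative):

# ~/.kdesrc-buildrc (illustrative)
global
    branch-group kf5-qt5
    source-dir ~/kde/src
    build-dir ~/kde/build
    kdedir ~/kde/usr
end global

module-set frameworks
    repository kde-projects
    use-modules extra-cmake-modules kcoreaddons kconfig
end module-set

Even this small file assumes the reader already knows what branch groups and module sets are, and it still has to be driven from the terminal.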

Ideally, no matter the project the newcomer selects, the getting-started story should be the same!

When the sprint team started evaluating technologies that are currently used in the KDE community to build KDE software, the discussion became quite confused. Some of the technologies discussed were oriented towards power users, some towards making binary releases. It is necessary to first determine which components need to be delivered to make a seamless newcomer experience possible:

  • Prerequisite tools: cmake, git, compiler
  • A way to fetch the repository or repositories the newcomer wants to work on
  • A way to fetch all the dependencies the project needs, where some of those dependencies might need to transition from dependency to project being worked on
  • A way to build, run and debug the project
  • A way to generate a release from the project
  • A way to submit the changes made to the original project
  • An IDE that integrates all of this

The sprint has spent most of its time on the dependencies problem, which is particularly difficult on Linux. An inventory of ways KDE projects “solve” the problem of providing the dependencies for a given project currently includes:

  • Using distribution-provided dependencies: this is unmaintainable because there are too many distributions with too much variation in the names of their packages to make it possible to keep full and up-to-date lists per project — and newcomers cannot find the deps from the name given in the cmake find modules.
  • Building the dependencies as CMake external projects, per project: this is only feasible for projects with a manageable number of dependencies and enough manpower to maintain it (see the sketch after this list).
  • Building the dependencies as CMake external projects on the binary factory, and developing the project in a docker image identical to the one used on the binary factory plus these builds: same problem.
  • Building the (KDE, though now also some non-KDE) dependencies using kdesrc-build, getting the rest of the dependencies as distribution packages: this combines the first problem with fragility and a big learning curve.
  • Using the KDE flatpak SDK to provide the KDE dependencies, and building non-KDE dependencies manually, or fetching them from other flatpak SDKs. (Query: is this the right terminology?) This suffers from inter- and intra-community politicking problems.
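
For comparison, the CMake external-projects approach mentioned above boils down to one block like this per dependency (a minimal sketch; the package, version and URL are illustrative):

include(ExternalProject)
ExternalProject_Add(ext_zlib
    URL https://zlib.net/zlib-1.2.11.tar.gz
    CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/deps
               -DBUILD_SHARED_LIBS=ON
)

Multiply this by dozens of dependencies, each with its own patches and platform quirks, and the maintenance burden becomes obvious.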

ETA: I completely forgot to mention Craft here. Craft is a Python-based system, close to emerge, that has been around for ages. We used it initially for our Krita port to Windows; back then it was exclusively Windows-oriented. These days it also works on Linux and macOS. It can build all KDE and non-KDE dependencies that KDE applications need. But then why did I forget to mention it in my write-up? Was it because there was nobody at the sprint from the Craft team? Or because nobody had tried it on Linux, and there was a huge Linux bias in any case? I don’t know… It was discussed during the meeting, though.

As an aside, much time was spent discussing docker, but it was discussed as part of the dependency problem. It is properly a solution for running a build without affecting the rest of the developer’s system. (Apparently, there are people who install their builds into their system folders.) Confusingly, part of this discussion was also about setting environment variables to make it possible to run builds that are installed outside the system locations, or not installed at all. Note: the XDG environment variables that featured in this discussion are irrelevant for Windows and macOS.

Currently, newcomers for PIM, Frameworks or Plasma development are pointed towards kdesrc-build (https://community.kde.org/Get_Involved/development, four clicks from www.kde.org), which, not unexpectedly, leads to a heavy support burden in the KDE development communication channels. For other applications, like Krita, there are build guides (https://docs.krita.org/en/contributors_manual.html, two clicks from krita.org), and that is not a feasible solution either.

As a future solution, Ovidiu Bogdan presented Conan, a cross-platform binary package manager for C++ libraries (see https://conan.io/). It could solve the dependency problem, and only the dependency problem, but at the expense of making the run problem much harder, because each library ends up in its own location.
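
To give an impression, a Conan-based setup for a KDE application would start from a conanfile.txt along these lines (a sketch; the package names, versions and channels are illustrative, not the ones packaged at the sprint):

[requires]
qt/5.13.0@kde/testing
kcoreaddons/5.60.0@kde/testing

[generators]
cmake

Running conan install . would then fetch (or build) binary packages for these requirements before CMake is invoked; the run problem comes from every package living in its own cache directory.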

Day 2

The attendees decided to try to tackle the dependency problem. A certain amount of agreement was reached on acknowledging that this is a big problem, so it was discussed in depth. Note that the focus was, again, on Linux, relegating the cross-platform story to second place. Dmitry noted that when he tries to recruit students for Krita, only one in ten is familiar with Linux, pointing out that we’re really limiting ourselves with this attitude.

A KDE application, kruler, was selected as a prototype for building with dependencies provided either by flatpak or conan.

Dmitry and Ovidiu dug into Conan. From what I observed, laying down the groundwork is a lot of work, and by the end of the evening, Dmitry and Ovidiu had packaged about half of the Qt and KDE dependencies for kruler. Though the Qt developers are considering moving to Conan for Qt’s 3rd-party dependencies, Qt in particular turned out to be a problem: it needs to be modularized in Conan, instead of being one big, fat monolith. See https://bintray.com/kde.

Aleix Pol had already made a start on integrating flatpak and docker support into KDevelop, as well as providing a flatpak runtime for KDE applications (https://community.kde.org/Flatpak).

This made it relatively easy to package kruler, okular and krita using flatpak. There are now maintained nightly stable and unstable flatpak builds for Krita.
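
For reference, building and running such a flatpak locally looks roughly like this (a sketch; the runtime version and manifest file name are assumptions, not the exact ones used at the sprint):

flatpak install flathub org.kde.Platform//5.12 org.kde.Sdk//5.12
flatpak-builder --user --install build-dir org.kde.kruler.json
flatpak run org.kde.kruler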

The problems with flatpak, apart from the politicking, include two different opinions on what an appstream file should contain, checks that go beyond what the freedesktop.org standards demand, weird errors in general (you cannot have a default git branch tag that contains a slash…), an opaque build system, and an appetite for memory that goes beyond healthy: my laptop overheated and hung when trying to build a Krita flatpak locally.

Note also that the flatpak (and docker) integration in KDevelop is not done yet, and didn’t work properly when we tested it. I am also worried that KDevelop is too complex and intimidating to use as the IDE that binds everything together for new developers. I’d almost suggest we repurpose/fork Builder for KDE…

Conclusion

We’re not done with the onboarding sprint goal, not by a country mile. It’s as hard to get started with hacking on a KDE project, or with starting a new KDE project, as it has ever been. Flatpak might be closer to ready than Conan for solving the dependency problem, and though flatpak solves more problems than just the dependency problem, it is Linux-only. Using Conan to solve the dependency problem will be very high-maintenance.

I do have a feeling we’ve been looking at this problem at a much too low level, but I don’t feel confident about what we should be doing instead. My questions are:

* Were we right to focus first on the dependency problem and nothing but the dependency problem?
* Apart from flatpak and conan, what solutions exist to deliver prepared build environments to new developers?
* Is KDevelop the right IDE to give new developers?
* How can we make sure our documentation is up to date and findable?
* What communication channels do we want to make visible?
* How much effort can we afford to put into this?

DBus connection on macOS

Wednesday 24th of July 2019 04:18:14 PM
What is DBus

DBus is a software bus: an inter-process communication (IPC) and remote procedure call (RPC) mechanism that allows communication between multiple computer programs (that is, processes) concurrently running on the same machine. DBus was developed as part of the freedesktop.org project, initiated by Havoc Pennington from Red Hat to standardize services provided by Linux desktop environments such as GNOME and KDE.

In this post, we only talk about how the DBus daemon runs and how KDE applications and frameworks connect to it. For more details on DBus itself, please refer to the DBus wiki.

QDBus

There are two types of bus: the session bus and the system bus. User applications should use the session bus for IPC or RPC.

For the DBus connection, Qt already provides a good library: QDBus. The Qt framework, and QDBus in particular, is widely used in KDE applications and frameworks on Linux.

The most commonly used function is QDBusConnection::sessionBus(), which establishes a connection to the default session DBus. All DBus connections are established through this function.

Its implementation is:
QDBusConnection QDBusConnection::sessionBus()
{
    if (_q_manager.isDestroyed())
        return QDBusConnection(nullptr);
    return QDBusConnection(_q_manager()->busConnection(SessionBus));
}

where _q_manager is an instance of QDBusConnectionManager.

QDBusConnectionManager is a private class, so we cannot see from the public API what exactly happens in the implementation.

The code can be found in qtbase.
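
As a minimal usage example (hypothetical, but using only standard QDBus API), calling a method on a well-known service over the session bus looks like this:

#include <QDBusInterface>
#include <QDBusReply>
#include <QStringList>

QStringList listRegisteredServices()
{
    // QDBusInterface uses QDBusConnection::sessionBus() by default.
    QDBusInterface iface(QStringLiteral("org.freedesktop.DBus"),
                         QStringLiteral("/org/freedesktop/DBus"),
                         QStringLiteral("org.freedesktop.DBus"));
    // ListNames returns the names of all services on the bus.
    QDBusReply<QStringList> reply = iface.call(QStringLiteral("ListNames"));
    return reply.isValid() ? reply.value() : QStringList();
}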

DBus connection on macOS

On macOS, there is no pre-installed DBus. When we compile it from source, or install it from Homebrew or elsewhere, a configuration file session.conf and a launchd configuration file org.freedesktop.dbus-session.plist are delivered and expected to be installed into the system.

session.conf

In session.conf, one important entry is <listen>launchd:env=DBUS_LAUNCHD_SESSION_BUS_SOCKET</listen>, which means the socket path should be provided by launchd through the environment variable DBUS_LAUNCHD_SESSION_BUS_SOCKET.

org.freedesktop.dbus-session.plist

On macOS, launchd is a unified operating-system service-management framework that starts, stops and manages daemons, applications, processes, and scripts, much like systemd on Linux.

The file org.freedesktop.dbus-session.plist describes how launchd can find the daemon executable, the arguments to launch it with, and the socket to communicate over after launching the daemon.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.freedesktop.dbus-session</string>

    <key>ProgramArguments</key>
    <array>
        <string>/{DBus install path}/bin/dbus-daemon</string>
        <string>--nofork</string>
        <string>--session</string>
    </array>

    <key>Sockets</key>
    <dict>
        <key>unix_domain_listener</key>
        <dict>
            <key>SecureSocketWithKey</key>
            <string>DBUS_LAUNCHD_SESSION_BUS_SOCKET</string>
        </dict>
    </dict>
</dict>
</plist>

Once the daemon is successfully launched by launchd, the socket will be provided in the DBUS_LAUNCHD_SESSION_BUS_SOCKET environment variable of launchd.

We can get it with the following command:
launchctl getenv DBUS_LAUNCHD_SESSION_BUS_SOCKET

Current solution in KDE Connect

KDE Connect urgently needs DBus to make communication between kdeconnectd and kdeconnect-indicator or other components possible.

First try

Currently, we deliver dbus-daemon in the package and run:

./Contents/MacOS/dbus-daemon --config-file=./Contents/Resources/dbus-1/session.conf --print-address --nofork --address=unix:tmpdir=/tmp

--address=unix:tmpdir=/tmp provides a base directory in which to store a randomly named Unix socket, so we can have several instances at the same time, with different addresses.

--print-address makes dbus-daemon write its generated, real address to standard output.

Then we redirect the output of dbus-daemon to
KdeConnectConfig::instance()->privateDBusAddressPath(). Normally, this is $HOME/Library/Preferences/kdeconnect/private_dbus_address. For example, the address in it might be unix:path=/tmp/dbus-K0TrkEKiEB,guid=27b519a52f4f9abdcb8848165d3733a6.

Therefore, our program can read this file to get the real DBus address, and use another QDBus function to connect to it:
QDBusConnection::connectToBus(KdeConnectConfig::instance()->privateDBusAddress(), QStringLiteral(KDECONNECT_PRIVATE_DBUS_NAME));

We redirect all QDBusConnection::sessionBus() calls to QDBusConnection::connectToBus() so that everything connects to our own DBus.
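
The redirection is essentially a small wrapper like this (a sketch, not the literal KDE Connect code; it assumes the KdeConnectConfig header is included):

#include <QDBusConnection>

QDBusConnection kdeConnectSessionBus()
{
#ifdef Q_OS_MAC
    // connectToBus() returns the existing connection if one was already
    // created under this name, so calling this repeatedly is cheap.
    return QDBusConnection::connectToBus(
        KdeConnectConfig::instance()->privateDBusAddress(),
        QStringLiteral(KDECONNECT_PRIVATE_DBUS_NAME));
#else
    return QDBusConnection::sessionBus();
#endif
}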

Fake a session DBus

With this solution, kdeconnectd and kdeconnect-indicator work well together. But in KDE Frameworks there are lots of components that use QDBusConnection::sessionBus rather than QDBusConnection::connectToBus, and we cannot change all of them.

Then I came up with an idea: try to fake a session bus on macOS.

To hack and validate, I launched a dbus-daemon using /tmp/dbus-K0TrkEKiEB as the address, and then typed this in my terminal:
launchctl setenv DBUS_LAUNCHD_SESSION_BUS_SOCKET /tmp/dbus-K0TrkEKiEB

Then I launched dbus-monitor --session. It did connect to the bus that I launched.

And then, any QDBusConnection::sessionBus() call could establish a stable connection to the faked session bus, so components in KDE Frameworks can use the same session bus as well.

To implement this in KDE Connect, after starting dbus-daemon, I read the file content, extract the socket address, and call launchctl to set the DBUS_LAUNCHD_SESSION_BUS_SOCKET environment variable.

// Set launchctl env
QString privateDBusAddress = KdeConnectConfig::instance()->privateDBusAddress();
QRegularExpressionMatch path;
if (privateDBusAddress.contains(QRegularExpression(
        QStringLiteral("path=(?<path>/tmp/dbus-[A-Za-z0-9]+)")
    ), &path)) {
    qCDebug(KDECONNECT_CORE) << "DBus address: " << path.captured(QStringLiteral("path"));
    QProcess setLaunchdDBusEnv;
    setLaunchdDBusEnv.setProgram(QStringLiteral("launchctl"));
    setLaunchdDBusEnv.setArguments({
        QStringLiteral("setenv"),
        QStringLiteral(KDECONNECT_SESSION_DBUS_LAUNCHD_ENV),
        path.captured(QStringLiteral("path"))
    });
    setLaunchdDBusEnv.start();
    setLaunchdDBusEnv.waitForFinished();
} else {
    qCDebug(KDECONNECT_CORE) << "Cannot get dbus address";
}

Then everything works!

Possible improvements
  1. Since we can directly use the session bus, the redirection from QDBusConnection::sessionBus to QDBusConnection::connectToBus is no longer necessary. Every component can conveniently connect to it.
  2. Each time we launch kdeconnectd, a new dbus-daemon is launched and the environment variable in launchctl is overwritten. To improve this, we might detect whether there is already an available dbus-daemon by testing the connectivity of the returned QDBusConnection::sessionBus (see the sketch below). This might be done by a bootstrap script.
  3. It would be really nice if we could have a unified way for all KDE applications on macOS.
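
For the second point, the check could be as simple as this sketch (an assumption on my part: a previous instance has already set the launchd environment variable):

#include <QDBusConnection>

bool sessionBusAlreadyAvailable()
{
    // If a previous kdeconnectd already exported a working socket via
    // launchctl, this connection succeeds and we can skip spawning a
    // new dbus-daemon.
    return QDBusConnection::sessionBus().isConnected();
}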
Conclusion

I’m looking forward to a general DBus solution for all KDE applications :)

Improved rendering of mathematical expressions in Cantor

Tuesday 23rd of July 2019 09:59:08 PM
Hello everyone!

In the previous post I mentioned that the rendering of mathematical expressions in Cantor has bad performance, which heavily and negatively influences the user experience. With my recent code changes I addressed this problem, and it should be solved now. In this blog post I want to provide some details on what was done.

First, I want to show some numbers proving the performance improvements. For example, loading the notebook "Rigid-body transformations in a plane (2D)" - one of the notebooks I’m using for testing - took 15.9 seconds (this number and all other numbers mentioned in the following are average values of 5 consecutive measurements). With the new implementation it takes only 4.06 seconds. And this acceleration comes without losing render quality.

Here is an example of how the new renderer looks compared to the Jupyter renderer (as you can see, Cantor doesn't show images from the web in Markdown entries yet, but I will fix that soon):

(screenshot: Cantor rendering next to the Jupyter rendering)
I did further measurements by executing all the tests I wrote for the Jupyter import, which cover several Jupyter notebooks. Here are the results:
  • Without math rendering - 7.75 seconds.
  • New implementation - 14.014 seconds.
  • Old implementation - 41.296 seconds.
To quickly summarize, we get an average performance improvement of 535% for the math rendering itself ((41.296 - 7.75) / (14.014 - 7.75) ≈ 5.35). This result depends on the number of available cores, and I’ll explain below why.

To get these results I solved two main problems of math rendering in Cantor.
First, I changed the code for the LaTeX renderer. In the old implementation the rendering process consisted of the following steps:
  1. create TeX document using a page template and the code provided by the user.
  2. run latex executable on the produced TEX file to generate the DVI file.
  3. run dvips executable to convert the DVI file to an EPS file.
  4. convert the produced EPS file to a QImage using Libspectre library.
After these four steps the produced QImage is shown in Cantor’s worksheet (QGraphicsScene/View). As you see, the overall chain of steps to get the image out of a mathematical expression is quite long - there are several steps where time is spent. In total, for a usual mathematical expression these operations take ~500 ms, where the first three steps take 300 ms and the last step takes 200 ms. The complexity and the size of the mathematical expressions have a negligible impact on these numbers. Also, the time spent in Cantor for expressions of other types is relatively small. So, for example, if you have a notebook with 20 different mathematical expressions and some entries of other types, Cantor will load the project in ca. 20 * 500 ms = 10 s.

I reduced this chain to three elements by merging steps two and three. This was achieved by using pdflatex as the LaTeX engine, which produces a PDF file directly out of the TEX file. Furthermore, I replaced the libspectre library with the Poppler PDF rendering library. This brought the overall time down to 330 ms, with the pdflatex process taking 300 ms and the rendering in Poppler (converting PDF to QImage) taking only 30 ms. With this, for our example notebook with the 20 mathematical expressions mentioned above, the rendering takes only 6.6 seconds. In this new chain, the LaTeX process is the bottleneck, and I’m looking for potential acceleration here, but until now I didn’t find any "magic" parameters which would help to reduce the time spent in latex rendering.
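
The last step of the new chain is essentially the following (a sketch using Poppler's Qt5 bindings; the file name and resolution are illustrative, not Cantor's actual code):

#include <poppler-qt5.h>
#include <QImage>
#include <QScopedPointer>

QImage renderFormulaPdf(const QString &pdfPath)
{
    QScopedPointer<Poppler::Document> doc(Poppler::Document::load(pdfPath));
    if (!doc || doc->isLocked())
        return QImage();

    QScopedPointer<Poppler::Page> page(doc->page(0));
    // Render at 300 dpi; higher values give crisper formulas.
    return page ? page->renderToImage(300.0, 300.0) : QImage();
}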

Despite this progress, loading somewhat bigger documents will still hurt in Cantor. For example, for a project with 100 formulas, opening the file will take ca. 100 * 330 ms = 33 seconds.

The problem here is that the rendering process is a blocking and non-parallelized operation - only one mathematical expression is processed at a time, and the UI is blocked for the whole processing time. Obviously, this behaviour is unsatisfactory and under-utilizes modern multi-core CPUs. So I decided to run the rendering in many parallel tasks, asynchronously, without blocking the main GUI thread. Fortunately, Qt helps a lot here with its classes QThreadPool, managing the pool of threads, and QRunnable, providing an interface to define "tasks" that will be executed in parallel.
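
The pattern looks roughly like this (a sketch; RenderTask is an illustrative name, not Cantor's actual class):

#include <QRunnable>
#include <QString>
#include <QStringList>
#include <QThreadPool>

class RenderTask : public QRunnable
{
public:
    explicit RenderTask(const QString &latex) : m_latex(latex) {}

    void run() override
    {
        // Runs on a pool thread: invoke pdflatex on m_latex, render the
        // resulting PDF with Poppler, then deliver the QImage to the GUI
        // thread (for example via a queued signal).
    }

private:
    QString m_latex;
};

void scheduleRendering(const QStringList &expressions)
{
    for (const QString &expr : expressions)
        QThreadPool::globalInstance()->start(new RenderTask(expr)); // the pool takes ownership
}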

Now, when opening a project, Cantor creates a render task for every mathematical expression in every Markdown entry, sends the task to the thread pool, and continues with the processing of entries in the document. Once such a task has finished, the result is shown in Cantor's worksheet. With this, a further good performance improvement can be achieved. Clearly, the more cores you have, the faster the processing will be. Of course, if only a small number of physical threads is possible on your computer, you won't notice a huge difference. But still, you should see an improvement compared to the old single-threaded implementation in Cantor.

For a notebook comparable in size to the "Rigid-body transformations in a plane (2D)" project, which has 109 mathematical expressions, the loading of the notebook now takes a reasonable and acceptable time on my hardware (I have 8 physical cores in the CPU, which is why the render acceleration is so big). And, thanks to the asynchronous processing, the user can interact with the notebook even while the rendering of the mathematical expressions is still in progress.

Since my previous post, not only has the math renderer changed; there is also a huge change in Markdown support - Cantor finally handles embedded math expressions, like $...$ and $$...$$, in accordance with the Markdown syntax. In the next blog post I'll describe how it works now.

Day 58

Tuesday 23rd of July 2019 03:05:40 PM

Since the last update, I worked on the Khipu interface and created some models to manage the information on the screen. Currently, Khipu looks like this:

(screenshot of the current Khipu interface)

The interface is not finished and I’ll leave the design to the end, because there’s a lot to do in the back end now.
The remove button works, but rename hasn’t been implemented yet.
I made a SpaceModel that manages the spaces, and a PlotModel that manages the information in the black rectangle on the menu.
I started to link my code with Analitza, the KDE mathematical library, using its QML components such as Graph2D and Graph3D to show these interactive spaces; the Expression class holds the user input (a mathematical function such as “y = x**2”). Now I’m stuck trying to learn about the AnalitzaPlot library: I need to understand how it works to show functions in these spaces. I’m using KAlgebra Mobile, another KDE mathematical application, as a reference for my code.

More in Tux Machines

Programming: DevNation, Python, RcppAnnoy and More

  • Plumbing Kubernetes CI/CD with Tekton

    Our first DevNation Live regional event was held in Bengaluru, India in July. This free technology event focused on open source innovations, with sessions presented by elite Red Hat technologists. In this session, Kamesh Sampath introduces Tekton, which is the Kubernetes-native way of defining and running CI/CD. Sampath explores the characteristics of Tekton—cloud-native, decoupled, and declarative—and shows how to combine various building blocks of Tekton to build and deploy a cloud-native application.

  • Coverage 5.0 beta 1

    I want to finish coverage.py 5.0. It has some big changes, so I need people to try it and tell me if it’s ready. Please install coverage.py 5.0 beta 1 and try it in your environment. I especially want to hear from you if you tried the earlier alphas of 5.0. There have been some changes in the SQLite database that were needed to make measurement efficient enough for large test suites, but that hinder ad-hoc querying.

  • How to get current date and time in Python?

    In this article, you will learn to get today's date and current date and time in Python. We will also format the date and time in different formats using strftime() method. There are a number of ways you can take to get the current date. We will use the date class of the datetime module to accomplish this task.

  • RcppAnnoy 0.0.14

A new minor release of RcppAnnoy is now on CRAN, following the previous 0.0.13 release in September. RcppAnnoy is the Rcpp-based R integration of the nifty Annoy library by Erik Bernhardsson. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours—originally developed to drive the famous Spotify music discovery algorithm. This release once again allows compilation on older compilers. The 0.0.13 release in September brought very efficient 512-bit AVX instructions to accelerate computations. However, this could not be compiled on older machines so we caught up once more with upstream to update to conditional code which will fall back to either 128-bit AVX or no AVX, ensuring buildability “everywhere”.

  • The Royal Mint eyes fresh IT talent to power digital drive

The Royal Mint has been manufacturing coins for 1,100 years, originally from the Tower of London and, since 1967, from its current site in South Wales. Today, it is the world’s largest export mint, producing 3.3 billion coins and blanks a year, and now is looking to expand its digital reach to serve retail customers online.

  • Google plans to give slow websites a new badge of shame in Chrome

    A new badge could appear in the future that’s designed to highlight sites that are “authored in a way that makes them slow generally.” Google will look at historical load latencies to figure out which sites are guilty of slow load times and flag them, and the Chrome team is also exploring identifying sites that will load slowly based on device hardware or network connectivity.

  • Moving towards a faster web

    In the future, Chrome may identify sites that typically load fast or slow for users with clear badging. This may take a number of forms and we plan to experiment with different options, to determine which provides the most value to our users.

    Badging is intended to identify when sites are authored in a way that makes them slow generally, looking at historical load latencies. Further along, we may expand this to include identifying when a page is likely to be slow for a user based on their device and network conditions.

  • The Maturing of QUIC

    QUIC continues to evolve through a collaborative and iterative process at the IETF — of adding features, implementing them, evaluating them, reworking or discarding them because they don’t stand up to continued scrutiny, and refining them. And in doing so, QUIC has matured in more ways than we imagined, yielding a protocol that is remarkably different and substantially better than it was in the beginning. So, keeping your arms and legs inside the ride at all times, let us take you on this journey of how QUIC has gone from an early experiment to a standard poised to modernize the [Internet].

  • HEADS UP: ntpd changing [in OpenBSD]

    Probably after 6.7 we'll delete the warning. Maybe for 6.8 we'll remove -s and -S from getopt, and starting with those options will fail.

"Wireshark For The Terminal" Termshark 2.0 Adds Stream Reassembly, Piped Input And Dark Mode

Termshark, a Wireshark-like terminal interface for TShark written in Go, was updated to version 2.0.0. This release includes support for dark mode, piped input, and stream reassembly, as well as performance optimizations that make the tool faster and more responsive.

Red Hat Leftovers

  • GitHub report surprises, serverless hotness, and more industry trends

    Now, let's discuss how developers can use Quarkus to bring Java into serverless, a place where previously, it was unable to go. Quarkus introduces a comprehensive and seamless approach to generating an operating system specific (aka native) executable from your Java code, as you do with languages like Go and C/C++. Environments such as event-driven and serverless, where you need to start a service to react to an event, require a low time-to-first-response, and traditional Java stacks simply cannot provide this. Knative enables developers to run cloud-native applications as serverless containers in seconds and the containers will go down to zero on demand. In addition to compiling Java to Knative, Quarkus aims to improve developer productivity. Quarkus works out of the box with popular Java standards, frameworks and libraries like Eclipse MicroProfile, Apache Kafka, RESTEasy, Hibernate, Spring, and many more. Developers familiar with these will feel at home with Quarkus, which should streamline code for the majority of common use cases while providing the flexibility to cover others that come up.

  • When Quarkus Meets Knative Serverless Workloads

Daniel Oh is a principal technical product marketing manager at Red Hat and a CNCF ambassador as well. He's well recognized in cloud-native application development and senior DevOps practices in many open source projects and at international conferences.

  • Making things Go: Command Line Heroes draws infrastructure

Most of our episodes feature languages that have clear arcs. "The Infrastructure Effect" was different. By all accounts, COBOL is a language going the way of Latin. There are only a few specialists who are proficient COBOL coders. But it’s still vital to many long-lasting institutions that affect millions: the banking industry, the IRS, and manufacturing. And the world of tech infrastructure is moving on—to Go. Where does that leave COBOL in the next few years? And how do you tease all of that in an image? We had to decide which visual themes we could use to depict each language—and then, how to combine them into a single, coherent frame. COBOL and Go have a similar function, so we wanted to make sure each language had clear, distinct imagery. We decided to rely on some of their real-world applications: the bank and subways for COBOL, and the cloud-based applications for Go.

  • Using the Red Hat OpenShift tuned Operator for Elasticsearch

I recently assisted a client to deploy Elastic Cloud on Kubernetes (ECK) on Red Hat OpenShift 4.x. They had run into an issue where Elasticsearch would throw an error similar to:

Max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

According to the official documentation, Elasticsearch uses a mmapfs directory by default to store its indices. The default operating system limits on mmap counts are likely to be too low, which may result in out of memory exceptions. Usually, administrators would just increase the limits by running:

sysctl -w vm.max_map_count=262144

However, OpenShift uses Red Hat CoreOS for its worker nodes and, because it is an automatically updating, minimal operating system for running containerized workloads, you shouldn’t manually log on to worker nodes and make changes. This approach is unscalable and results in a worker node becoming tainted. Instead, OpenShift provides an elegant and scalable method to achieve the same via its Node Tuning Operator.

  • bcc-tools brings dynamic kernel tracing to Red Hat Enterprise Linux 8.1

    In Red Hat Enterprise Linux 8.1, Red Hat ships a set of fully supported on x86_64 dynamic kernel tracing tools, called bcc-tools, that make use of a kernel technology called extended Berkeley Packet Filter (eBPF). With these tools, you can quickly gain insight into certain aspects of system performance that would have previously required more time and effort from the system and operator. The eBPF technology allows dynamic kernel tracing without requiring kernel modules (like systemtap) or rebooting of the kernel (as with debug kernels). eBPF accomplishes this while maintaining minimal overhead for each trace point, making these tools an ideal way to instrument running kernels in production.

  • What open communities teach us about empowering customers

    When it comes to digital transformation, businesses seem to be on the right track improving their customers' experiences through the use of technologies. Today, so much digital transformation literature describes the benefits of "delivering new value to customers" or "delivering value to customers in new ways."