
Leftovers: KDE

Filed under
KDE
  • Baloo and NodeJs
  • KDE Connect on Github

    A month ago I created two mirrors of the KDE Connect repositories on Github, and I'm really happy with the result. Projects on Github are more discoverable than in our internal KDE repos (our Git web interface is not even indexed by search engines!), and the mirrors make it easier for new developers to get involved and send contributions, like these pull requests.

  • GSoC near its end
  • GCompris goes to KDE Randa Meeting 2015

    The Randa Meeting is an annual KDE sprint that takes place in Randa, Switzerland from the 6th to the 13th of September.

  • A (or the) secret about the Randa Meetings

    This year we held the sixth edition of the Randa Meetings, and over the course of the year some really important and far-reaching events (for KDE and for the users of our software and products) have taken place in the middle of the Swiss Alps.

More in Tux Machines

This week in KDE: And now time for some UI polishing

This week we’ve mixed in a lot of user interface polishing with our usual assortment of bugfixes!

15-Minute Bugs Resolved. Current number of bugs: 57, down from 59. 0 added, 1 found to already be fixed, and 1 resolved: when using screen scaling with the on-by-default Systemd startup in Plasma, the wrong scale factor is no longer sometimes used immediately upon login, which would cause Plasma to be blurry (on Wayland) or everything to be displayed at the wrong size (on X11) (David Edmundson, Plasma 5.25.2).

Read more

Also: Weekly Updates on GCompris : 1

ROMA Linux laptop to feature quad-core RISC-V SoC, support Web3, NFT, cryptocurrencies, etc.

ROMA is an upcoming Linux laptop equipped with an unnamed quad-core RISC-V processor with a GPU and an NPU, up to 16GB RAM, and 256GB storage, primarily aimed at software developers and featuring Web3 technology integration. The ROMA laptop will be born out of a collaboration between DeepComputing, working on engineering, and Xcalibyte, taking care of system tuning, plus PW (assembly), ECP (security), XC (crypto), Rexeen (voice), and the LatticeX Foundation (PoS blockchain, NFT).

Read more

PSPP 1.6.2 has been released

I'm very pleased to announce the release of a new version of GNU PSPP. PSPP is a program for statistical analysis of sampled data. It is a free replacement for the proprietary program SPSS. Read more

Programming Leftovers

  • Logistic Regression in R

    In data science and statistics, a regression model is logistic when the dependent variable takes categorical values such as True/False, Yes/No, or 0/1. Usually the logistic regression model is binomial, though it can be extended. It models the probability of success or failure of an event as a dependent variable through a mathematical equation that relates the dependent variable (response variable) to the independent variables (predictors). We can say that logistic regression is a generalized form of linear regression; the main difference is that linear regression predicts values in the range (-∞, ∞), while logistic regression predicts values in the range (0, 1). In this post, we will learn about logistic regression and how to implement it in the R programming language.
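The article implements this in R; the idea is language-agnostic, so here is a minimal, hypothetical sketch in Python with NumPy, fitting binomial logistic regression by gradient descent on the log-loss (the toy data and helper names are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    # Maps any real value into the (0, 1) range, as the text describes.
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=2000):
    # Batch gradient descent on the logistic log-loss.
    X = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend an intercept column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict(w, X):
    X = np.hstack([np.ones((X.shape[0], 1)), X])
    return (sigmoid(X @ w) >= 0.5).astype(int)

# Toy data: label is 1 when the single feature is positive.
X = np.array([[-2.0], [-1.0], [-0.5], [0.5], [1.0], [2.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w = fit_logistic(X, y)
print(predict(w, X))
```

On this linearly separable toy set the fitted model recovers the labels; a real analysis would of course use a proper dataset and a library implementation.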

  • The Universe of Discourse : Things I wish everyone knew about Git (Part I)

    This is a writeup of a talk I gave in December for my previous employer.

  • The new learntla is now online!

    One lesson I’ve learned the hard way is that keeping lots of assets in sync is an absolute nightmare. So this version has a lot more software managing that for me. In particular, I built a pipeline for handling spec assets. I have several XML spec templates that represent “sequences of iterations” on a spec. A python script unpacks the template into a set of .tla files. After I put in appropriate metadata, a second script cleans up each spec into a “presentable” form, loads the metadata into the appropriate files, and places the asset in the appropriate path.

  • Generate Array with elements in given range and median as K

    Given two integers N and K and a range [L, R], the task is to build an array of N unique elements, each in the range [L, R], whose median is K.
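One simple construction (a sketch, assuming N is odd and using a hypothetical `build_array` helper, not the article's own code): place K in the middle and pick (N-1)//2 consecutive values on each side of it.

```python
def build_array(n, k, lo, hi):
    # Sketch for odd n: put k in the middle, then take (n-1)//2 unique
    # values just below k and the same number just above it.
    half = (n - 1) // 2
    below = [k - i for i in range(1, half + 1)]
    above = [k + i for i in range(1, half + 1)]
    arr = sorted(below + [k] + above)
    if arr[0] < lo or arr[-1] > hi:
        # Not enough room on one side; a fuller solution would
        # shift the extra picks to the other side of k.
        return None
    return arr

print(build_array(5, 4, 1, 10))  # [2, 3, 4, 5, 6] — median is 4
```

The sorted output has K at index N//2, which is exactly the median for odd-length arrays.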

  • C++ Program to check if two Arrays are Equal or not

    Given two arrays arr1[] and arr2[] of length N and M respectively, the task is to check if the two arrays are equal or not.
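The article works in C++; under the usual reading of this task (equal means the same elements with the same multiplicities, order ignored), an equivalent sketch in Python is a frequency-count comparison (the `arrays_equal` name is mine, not the article's):

```python
from collections import Counter

def arrays_equal(a, b):
    # Equal here means: same length and same elements with the
    # same counts, regardless of order.
    return len(a) == len(b) and Counter(a) == Counter(b)

print(arrays_equal([3, 5, 2, 5, 2], [2, 3, 5, 5, 2]))  # True
print(arrays_equal([1, 2, 3], [1, 2, 2]))              # False
```

Counting frequencies runs in O(N + M) time, versus O(N log N) for the common sort-and-compare approach.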

  • Seaborn Regplot

    Seaborn is a Matplotlib-based visual analytics library: built on the Matplotlib package, it provides a high-level framework for defining visually appealing analytical graphs. To visualize statistics and regression analysis, we use the regplot() function, which draws a scatter plot of two variables and overlays a fitted regression line.

    Regression analysis is a technique for evaluating the associations between one or more independent factors (predictors) and a dependent attribute (response). Whenever the predicted output is a continuous value, the model is referred to as a prediction model, and the analysis examines how the response varies in relation to changes in specific predictors: the fitted relationship gives a new value of the dependent attribute whenever the data points are updated. Numerous approaches can be employed, the most basic of which is the linear model, which fits the line that best passes through the data points. Evaluating the influence of the predictors, anticipating an outcome, and estimating are the three important applications of a regression model.
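The straight line regplot() overlays is an ordinary least-squares fit, which can be sketched with NumPy alone; the toy data below is made up, and the seaborn call is shown only in a comment as an assumption about typical usage:

```python
import numpy as np

# Toy data with a roughly linear trend.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Ordinary least-squares fit: the line that regplot() would overlay.
slope, intercept = np.polyfit(x, y, deg=1)
print(round(slope, 2), round(intercept, 2))

# With seaborn installed, the corresponding plot would be roughly:
#   import seaborn as sns
#   sns.regplot(x=x, y=y)   # scatter points + fitted line + confidence band
```

For this data the fit is close to slope 1.96 and intercept 0.14, matching the visible upward trend.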

  • Seaborn HeatMap Colors

    Heatmaps are colored maps that display data in a two-dimensional format. Color variation is achieved by using hue, saturation, or brightness to portray the information on the color maps, and this variation gives readers visual cues about the magnitude of quantitative values. Heatmaps substitute colors for numbers because the human mind grasps visuals more readily than textual data; since humans are primarily visual, it makes sense to present data this way. As a result, data visualization tools like heatmaps are becoming increasingly popular.

    Heatmaps are used to display patterns, variance, and anomalies, as well as to depict the saturation or intensity of variables. They depict relationships between variables, which are plotted on both axes, and by observing the color shift in a cell we can look for patterns. A heatmap takes only numerical input and shows it on a grid, with different data values displayed by varying color intensity.

    Many different color schemes can be used to depict a heatmap, each with its own set of perceptual advantages and disadvantages. Colors in the heatmap indicate patterns in the data, so the color palette decision is more than cosmetic: an appropriate palette can facilitate finding patterns, while a poor choice can hinder it. Colormaps are a simple and effective way to visualize heatmaps, and different colormaps suit different sorts of heatmaps. In this article, we’ll explore how to control the colors of Seaborn heatmaps using colormaps.

  • How to Create Array of zeros using Numpy in Python

    In this article, we will cover how to create a Numpy array with zeros using Python.
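The core of that article fits in a few lines with NumPy's `zeros` function:

```python
import numpy as np

# 1-D array of five zeros (default dtype is float64).
a = np.zeros(5)

# 2-D array of zeros with an explicit integer dtype.
b = np.zeros((2, 3), dtype=int)

print(a)
print(b.shape, b.dtype)
```

Passing a shape tuple gives a multi-dimensional array, and `dtype` controls the element type.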

  • How To Do Train Test Split Using Sklearn In Python

    In this article, let’s learn how to do a train test split using Sklearn in Python.

    Train Test Split Using Sklearn

    The train_test_split() method is used to split our data into train and test sets. First, we divide our data into features (X) and labels (y), and the dataframe gets divided into X_train, X_test, y_train, and y_test. The X_train and y_train sets are used for training and fitting the model, while the X_test and y_test sets are used to test whether the model is predicting the right outputs/labels. We can explicitly set the sizes of the train and test sets, and it is suggested to keep the train set larger than the test set.
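A minimal sketch of the split described above (the toy feature and label lists are made up for illustration):

```python
from sklearn.model_selection import train_test_split

X = [[i] for i in range(10)]      # 10 samples, one feature each
y = [i % 2 for i in range(10)]    # toy binary labels

# Hold out 20% of the rows for testing; random_state makes the
# shuffled split reproducible across runs.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

print(len(X_train), len(X_test))  # 8 2
```

With `test_size=0.2`, 8 of the 10 samples land in the train set and 2 in the test set, keeping the train set larger as the article suggests.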