Server
Nginx/Rambler Dispute Over Code
Submitted by Roy Schestowitz on Saturday 14th of December 2019 11:17:50 AM Filed under


-
What’s yours is ours Rambler Group claims exclusive rights to world’s most popular web-server software, six months after it's sold to U.S. company for 670 million dollars
On Thursday, December 12, Russian law enforcement raided the Moscow office of the IT company “Nginx,” which owns the eponymous web-server used by almost 500 million websites around the world. According to several reports, Nginx co-founders Igor Sysoev and Maxim Konovalov spent several hours in police interrogation. The search is part of a criminal case based on charges by a company tied to the Russian billionaire and Rambler Group co-owner Alexander Mamut, whose businesses believe they own the rights to the Nginx web-server because Sysoev started developing the code while working for Rambler in 2004. Meduza’s correspondent Maria Kolomychenko looks at how Sysoev and his partners spent 15 years creating the world’s most popular web-server before selling it to an American firm for $670 million, and how Rambler decided, half a year later, that it owns the technology.
-
‘A typical racket, simple as that’ Nginx co-founder Maxim Konovalov explains Rambler's litigation against his company, which develops the world’s most popular web-server
Russia’s IT industry is in the midst of a major conflict between businesses belonging to “Rambler Group” co-owner Alexander Mamut and the company “Nginx,” created by Igor Sysoev and his partner Maxim Konovalov. Nginx’s key product is the eponymous web-server used by more than a third of the world’s websites. Sysoev first released the software in 2004, while still an employee at Rambler, which is now claiming exclusive rights to Nginx, based on its interpretation of Russian law. The police have already joined the dispute, launching a criminal investigation and searching Nginx’s Moscow office. In an interview with Meduza, Nginx co-founder Maxim Konovalov described the police raid and explained why he thinks it took Rambler 15 years to claim ownership over the coveted web-server technology, which recently sold to the American corporation “F5 Networks” for $670 million.
Servers: Kubernetes, SUSE and Red Hat
Submitted by Roy Schestowitz on Thursday 12th of December 2019 03:49:27 PM Filed under
-
Creating Kubernetes distributions
Making a comparison between Linux and Kubernetes is often one of apples to oranges. There are, however, some similarities and there is an effort within the Kubernetes community to make Kubernetes more like a Linux distribution. The idea was outlined in a session about Kubernetes release engineering at KubeCon + CloudNativeCon North America 2019. "You might have heard that Kubernetes is the Linux of the cloud and that's like super easy to say, but what does it mean? Cloud is pretty fuzzy on its own," Tim Pepper, the Kubernetes release special interest group (SIG Release) co-chair said. He proceeded to provide some clarity on how the two projects are similar.
Pepper explained that Kubernetes is a large open-source project with lots of development work around a relatively monolithic core. The core of Kubernetes doesn't work entirely on its own and relies on other components around it to enable a workload to run, in a model that isn't all that dissimilar to a Linux distribution. Likewise, Pepper noted that Linux also has a monolithic core, which is the kernel itself. Alongside the Linux kernel is a whole host of other components that are chosen to work together to form a Linux distribution. Much like a Linux distribution, a Kubernetes distribution is a package of core components, configuration, networking, and storage on which application workloads can be deployed.
Linux has community distributions, such as Debian, where there is a group of people that help to build the distribution, as well as a community of users that can install and run the distribution on their own. Pepper argued that there really isn't a community Kubernetes distribution like Debian, one that uses open-source tools to build a full Kubernetes platform that can then be used by anyone to run their workloads. With Linux, community-led distributions have become the foundation for user adoption and participation, whereas with Kubernetes today, distributions are almost all commercially driven.
-
The total cost of software-defined storage
In the current economic climate, the cost of everything is often closely examined to be sure we’re not paying too much. However, many focus on just the cost of acquisition – the capital expenditure – as opposed to looking at the bigger picture – the total cost of ownership, or TCO.
In the world of IT, it’s easy to forget that the cost of owning servers, networking and storage equipment is more than the purchase price of the hardware. The total cost also includes installation, software licenses, service, support, training and upgrades amongst other things.
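To make the acquisition-versus-TCO point concrete, here is a minimal sketch with entirely hypothetical figures: the purchase price is only one part of what a storage system costs over its life once installation, licenses, support, training and upgrades are added in.

```python
# Toy TCO sketch: every figure below is invented, for illustration only.
YEARS = 5

capex = {
    "storage_hardware": 120_000,   # purchase price of the array
    "installation": 8_000,         # racking, cabling, initial setup
}

annual_opex = {
    "software_licenses": 15_000,
    "service_and_support": 10_000,
    "training": 3_000,
    "upgrades": 5_000,
}

total_capex = sum(capex.values())
total_opex = YEARS * sum(annual_opex.values())
tco = total_capex + total_opex

print(f"Acquisition cost only: ${total_capex:,}")
print(f"{YEARS}-year TCO:           ${tco:,}")
print(f"Opex share of TCO:     {total_opex / tco:.0%}")
```

With these invented numbers the running costs end up outweighing the purchase price, which is exactly the gap a TCO analysis is meant to expose.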
-
Red Hat Gets NIST Recertification for ‘Enterprise Linux’ Operating System; Paul Smith Quoted
A Red Hat operating system offering has earned recertification that validates the platform's capacity to process sensitive information in line with National Institute of Standards and Technology requirements.
Red Hat said Tuesday it renewed Federal Information Processing Standard 140-2 cryptography certification for the Enterprise Linux 7.6 software built to support agencies and organizations in government-regulated industries.
Servers: SysAdmins, Public 'Clouds' and Cautionary Tales
Submitted by Roy Schestowitz on Wednesday 11th of December 2019 10:04:01 AM Filed under
-
Do I need a college degree to be a sysadmin?
If we could answer that question with a simple "yes" or "no," this would not be much of a story. Reality is a little more nuanced, though. An accurate answer begins with one of "Yes, but…" or "No, but…"—and the answer depends on who you ask, among other important variables, including industry, company size, and so forth.
On the "yes" front, IT job descriptions don’t typically buck the "degree required" assumption, sysadmin roles included. This fact is perhaps especially true in the corporate business world across a wide range of sectors, and it isn’t limited to large companies, either. Consider a recent opening posted on the jobs site Indeed.com for an IT system administrator position at Crest Foods, a 650-person food manufacturing company in Ashton, Ill. The description includes plenty of familiar requirements for a sysadmin. The first bullet point under "Desired Education & Experience" reads: "Bachelor’s degree in computer science, networking, IT, or relevant field."
"Generally, systems administrators will have [degrees] from four-year universities," says Jim Johnson, district president at the recruiting firm Robert Half Technology. While some employers don’t specify a particular degree field, Johnson notes the bachelor’s in computer information systems (CIS) as a good fit for the sysadmin field and overlapping IT roles.
That said, Johnson also points out that there are other options out there for people that don’t pursue a traditional degree path. That’s especially true given the growth of online education and training, as well as in-person opportunities such as technical schools.
"There are [sysadmins] with computer systems professional or computer operator certificates from technical or online schools," Johnson says.
Moreover, a potential employer’s "desired" educational background can be just that: An ideal scenario, but not a dealbreaker. This fact can be true even if a degree is listed as "required," perhaps especially in markets with a tight supply of qualified candidates. If you’ve got the technical chops, a degree might become much more optional than a job description might lead you to believe.
-
Resource scarcity in Public Clouds
In addition to this, there are some “special” moments, such as Thanksgiving and the days around it, which by now have become widespread events even beyond the countries where they were traditionally celebrated. In the data centers located in areas where those festivities are celebrated (or at least where the capitalistic part of the celebration is observed), the load probably reaches its annual peak, driven by the e-commerce websites.
To make the situation even worse, many Cloud customers are rewriting and improving their applications, making them more cloud-native. Now, you may wonder how cloud-native applications can make things worse. The reason is very simple: cloud-native applications scale. This means that during the off-peak season the applications drastically reduce their footprint, creating a false feeling of resource abundance.
This situation creates some problems, in my opinion.
First of all, since it’s very hard for the Public Cloud provider to estimate the load - and in the future, it will be even harder - we will have to live with frequent resource exhaustion in public clouds, which makes a single-cloud, single-region application fragile. That is true even before considering the economic aspect of the problem: there will be situations where it will not be economically convenient for the Cloud Provider to provision enough resources to manage the peaks, since the additional provisioning cost would not be repaid during the short periods those resources are actually used.
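A minimal, purely illustrative simulation of that argument, with invented capacity and tenant figures: a region that looks largely idle off-peak can still be oversubscribed once the cloud-native tenants scale up for the peak.

```python
# Illustrative only: invented capacity and tenant figures for one cloud region.
REGION_CAPACITY = 8_000               # total vCPUs the provider has installed

tenants = [
    # (name, off-peak vCPUs, peak vCPUs)
    ("legacy-erp",       900,  900),  # statically sized, constant footprint
    ("legacy-db",        600,  600),
    ("shop-frontend",    300, 2400),  # cloud-native: autoscales ~8x at peak
    ("checkout-service", 500, 3500),  # cloud-native: autoscales ~7x at peak
    ("recommendations",  200, 2000),  # cloud-native: autoscales ~10x at peak
]

off_peak = sum(t[1] for t in tenants)
peak = sum(t[2] for t in tenants)

print(f"Off-peak: {off_peak} vCPUs ({off_peak / REGION_CAPACITY:.0%} used) -> looks abundant")
print(f"Peak:     {peak} vCPUs ({peak / REGION_CAPACITY:.0%} used) -> exhaustion")
```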
-
Notice: Linode Classic Manager Users
Our legacy Linode Manager will be decommissioned on January 31, 2020. After that time, you will be automatically redirected to the Cloud Manager when logging in to manage your infrastructure on Linode.
Kubernetes 1.17
Submitted by Roy Schestowitz on Tuesday 10th of December 2019 03:54:17 AM Filed under


-
Kubernetes 1.17: Stability
We’re pleased to announce the delivery of Kubernetes 1.17, our fourth and final release of 2019! Kubernetes v1.17 consists of 22 enhancements: 14 enhancements have graduated to stable, 4 enhancements are moving to beta, and 4 enhancements are entering alpha.
-
Kubernetes 1.17 Feature: Kubernetes Volume Snapshot Moves to Beta
The Kubernetes Volume Snapshot feature is now beta in Kubernetes v1.17. It was introduced as alpha in Kubernetes v1.12, with a second alpha with breaking changes in Kubernetes v1.13. This post summarizes the changes in the beta release.
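As a hedged illustration of the beta API, the sketch below creates a VolumeSnapshot with the Python Kubernetes client; it assumes the snapshot.storage.k8s.io/v1beta1 CRDs and a snapshot-capable CSI driver are installed in the cluster, and the snapshot class and PVC names are placeholders.

```python
# Sketch: create a v1beta1 VolumeSnapshot for an existing PVC.
# Assumes the snapshot.storage.k8s.io/v1beta1 CRDs and a CSI snapshot
# controller are installed; the names below are placeholders.
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() inside a pod
api = client.CustomObjectsApi()

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1beta1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "data-snap-1"},
    "spec": {
        "volumeSnapshotClassName": "csi-hostpath-snapclass",   # placeholder class
        "source": {"persistentVolumeClaimName": "data-pvc"},   # existing PVC
    },
}

api.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="volumesnapshots",
    body=snapshot,
)
```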
-
Kubernetes 1.17 Feature: Kubernetes In-Tree to CSI Volume Migration Moves to Beta
The Kubernetes in-tree storage plugin to Container Storage Interface (CSI) migration infrastructure is now beta in Kubernetes v1.17. CSI migration was introduced as alpha in Kubernetes v1.14.
Kubernetes features are generally introduced as alpha and moved to beta (and eventually to stable/GA) over subsequent Kubernetes releases. This process allows Kubernetes developers to get feedback, discover and fix issues, iterate on the designs, and deliver high quality, production grade features.
Red Hat, IBM and Server Leftovers
Submitted by Roy Schestowitz on Monday 9th of December 2019 05:04:27 AM Filed under

-
Red Hat’s David Egts Talks Open-Source Approaches to Digital Transformation
David Egts, chief technologist of Red Hat's (NYSE: RHT) North American public sector business, has said that open-source procedures can help organizations meet digital transformation goals while promoting mobility and addressing a skills gap.
In a Fedscoop interview posted Monday, Egts noted that Red Hat’s Open Innovation Labs works with government customers to help them reduce workload processing time through new software development methods.
-
Empowering the open source community
Red Hat invests heavily in open source communities, offering our employees' time and skills in many upstreams to advance the pace of innovation and support our customers' interests. And when Red Hat purchases a company, it ensures that any proprietary software becomes available as open source. For instance, just this month, Red Hat shared Quay, the formerly proprietary container registry and security scanner software, as an open source upstream available to all.
[...]
Awareness of open source in the Middle East is growing in many sectors, particularly in the telecommunications sphere. As operators seek to evolve from physical to digital players, open source ecosystems and solutions are being implemented to optimise and simplify operations, reduce costs, and facilitate digital transformation agendas. From Egypt, Saudi Arabia, and the UAE, to everywhere in between, open source solutions are being unlocked as cost-effective, flexible, reliable, secure, and alternative foundational systems to drive innovation and digital transformation. For telecommunications organisations, open source will enable improved delivery of digital services, the ability to introduce new digital services faster, and the capabilities to leverage insights from data to create new revenue streams.
-
Coders are the new superheroes of natural disasters
The film, produced by IBM and directed by Austin Peck, centers on the increasing devastation wrought by natural disasters and on a cadre of coders who have dedicated their attention and tech talent to helping responders react more quickly. These social-activist developers serve as a frontline defense against some of society's greatest dangers.
-
Explore Kubernetes with OpenShift in a workshop near you
The Kubernetes with OpenShift World Tour is a series of in-person workshops around the globe that help you build the skills you need to quickly modernize your applications. This World Tour provides a hands-on experience and teaches the basics of working with the hybrid-cloud, enterprise container platform Red Hat® OpenShift® on IBM Cloud™. You learn coding skills in the world of containerized, cloud-native development with expert developer advocates, who have deep technical experience building cloud microservices and applications with Red Hat OpenShift.
-
IBM VP of ‘opentech’ on the open road ahead
Moore and his team of open source developers work with open source communities such as the Apache Software Foundation, Linux Foundation, Eclipse, OSGi, OpenStack, Cloud Foundry, Docker, JS, Node.js and more.
-
5 Not to miss Linux hosting providers
In addition, Linux-based servers have proved to be stable and capable of handling numerous requests at a time, because no one wants a site that crashes when visitors are trying to reach it. That can be very annoying and bad for business. Linux also has a very dedicated community, and on the various forums you can find useful information for dealing with almost any problem you may encounter.
10 skills every Linux system administrator should have
Submitted by Roy Schestowitz on Thursday 5th of December 2019 02:53:58 PM Filed under
I know what you're saying. You're saying, "Oh, great, someone else telling me that I need soft skills." Yes, that's what I'm telling you. Honing your interviewing skills can not only determine if you get a particular job, it can also be a major factor in the salary you get. It's true. Let's say, for example, that the salary range for a mid-level SA job is $56k to $85k per year. You might be fully qualified for the top of the range, but the company offers you $70k instead and mentions some nonsense about growth potential or they tell you that they'll bring you along when the time is right.
You need to practice answering questions. Answer the question that's asked. Don't give so much information that you see eyes glazing over, but giving answers that are too short will make you appear arrogant or flippant. Give enough examples of your work to let the interviewer(s) know that you know what you're talking about. They can ask for more details if they want to.
You have to learn to watch other people's behaviors. Are they listening to you? Are they focused on you and the interview? Do they look as though you haven't said enough when you pause to allow them to speak or ask another question? Watch and learn. Practice with other system administrators in your group. Do mock interviews with the group. I know it might sound silly, but it's important to be able to speak to other people about what you do. This practice can also be good for you in speaking with managers. Don't get too deep into the weeds with non-technical people. Keep your answers concise and friendly, and offer examples to illustrate your points.
ARM Linux on AWS
Submitted by Roy Schestowitz on Wednesday 4th of December 2019 06:48:16 PM Filed under



-
Amazon Talks Up Big Performance Gains For Their 7nm Graviton2 CPUs
If Amazon's numbers are accurate, Graviton2 should deliver a big performance boost for Amazon's ARM Linux cloud potential. Graviton2 processors are 7nm designs making use of Arm Neoverse cores. Amazon says they can deliver up to seven times the performance of current A1 instances, twice the FP performance, and support more memory channels as well as doubling the per-core cache.
-
AWS announces new ARM-based instances with Graviton2 processors
AWS has been working with operating system vendors and independent software vendors to help them release software that runs on ARM. ARM-based EC2 instances support Amazon Linux 2, Ubuntu, Red Hat, SUSE, Fedora, Debian and FreeBSD. It also works with multiple container services (Docker, Amazon ECS, and Amazon Elastic Kubernetes Service).
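For illustration, here is a minimal boto3 sketch of launching an Arm-based instance; the AMI ID, key pair name and instance type are placeholders, and the AMI must be an arm64 build of one of the distributions listed above.

```python
# Sketch: launch an Arm-based EC2 instance with boto3.
# The AMI ID, key pair and instance type are placeholders; the AMI must be
# an arm64 build (e.g. Amazon Linux 2, Ubuntu, Debian) matching the
# Graviton instance family you pick.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: an arm64 Amazon Linux 2 AMI
    InstanceType="a1.xlarge",          # first-gen Graviton; Graviton2 types (m6g, c6g, r6g) as they become available
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # placeholder key pair name
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched Arm instance {instance_id}")
```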
-
Coming Soon – Graviton2-Powered General Purpose, Compute-Optimized, & Memory-Optimized EC2 Instances
We launched the first generation (A1) of Arm-based, Graviton-powered EC2 instances at re:Invent 2018. Since that launch, thousands of our customers have used them to run many different types of scale-out workloads including containerized microservices, web servers, and data/log processing.
-
AWS EC2 6th Gen Arm Instances are 7x Faster thanks to Graviton 2 Arm Neoverse N1 Custom Processor
Last year Amazon introduced their first 64-bit Arm-based EC2 “A1” instances, which were found to deliver up to 45% cost savings over x86 instances for the right workloads.
-
AWS launches Braket, its quantum computing service
With Braket, developers can get started on building quantum algorithms and basic applications and then test them in simulations on AWS, as well as the quantum hardware from its partners. That’s a smart move on AWS’s part, as it’s hedging its bets without incurring the cost of trying to build a quantum computer itself. And for its partners, AWS provides them with the kind of reach that would be hard to achieve otherwise. Developers and researchers, on the other hand, get access to all of these tools through a single interface, making it easier for them to figure out what works best for them.
News About Servers (SUSE, Ubuntu, Red Hat and More)
Submitted by Roy Schestowitz on Tuesday 3rd of December 2019 03:45:44 PM Filed under

-
What is Cloud Native?
Cloud native is more than just a buzzword, though. It's an approach used by some of the largest organizations on the planet, including Walmart, Visa, JP Morgan Chase, China Mobile, Verizon and Target, among others. Cloud native is an approach that enables developers and organizations to be more agile, providing workload portability and scalability.
-
What is Kata Containers and why should I care?
Kata Containers can significantly improve the security and isolation of your container workloads. It combines the benefits of using a hypervisor, such as enhanced security, with the container orchestration capabilities provided by Kubernetes.
Together with Eric Erns from Intel, we recently presented a webinar on the benefits of using Kata Containers in a Charmed Kubernetes environment. In this blog, we aim to highlight the key outcomes from that webinar.
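As a rough sketch of how a workload is steered onto Kata, the snippet below emits a RuntimeClass and a pod that references it; it assumes the cluster's container runtime already has a Kata handler registered under the name "kata", and the pod details are placeholders.

```python
# Sketch: manifests for running a pod under Kata Containers, emitted as YAML.
# Assumes the cluster already has the Kata runtime installed and registered
# with the container runtime under the handler name "kata".
import yaml  # PyYAML

runtime_class = {
    "apiVersion": "node.k8s.io/v1beta1",
    "kind": "RuntimeClass",
    "metadata": {"name": "kata"},
    "handler": "kata",                       # must match the runtime handler on the nodes
}

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "nginx-kata"},
    "spec": {
        "runtimeClassName": "kata",          # run this pod inside a lightweight VM
        "containers": [{"name": "nginx", "image": "nginx:1.17"}],
    },
}

# Pipe the output to `kubectl apply -f -` to create both objects.
print(yaml.safe_dump_all([runtime_class, pod], sort_keys=False))
```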
-
An idiot's guide to Kubernetes, low-code developers, and other industry trends
As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.
-
A blueprint for OpenStack and bare metal
The bare metal cloud is an abstraction layer for the pools of dedicated servers with different capabilities (processing, networking or storage) that can be provisioned and consumed with cloud-like ease and speed. It embraces the orchestration and automation of the cloud and applies them to bare metal workload use cases.
The benefit to end users is that they get access to the direct hardware processing power of individual servers and are able to provision workloads without the overhead of the virtualization layer—providing the ability to provision environments in an Infrastructure-as-code methodology with separation of tenants and projects.
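A hedged openstacksdk sketch of that cloud-like consumption model, assuming an OpenStack cloud with Ironic wired into Nova, a bare metal flavor, and a clouds.yaml entry named "mycloud"; every name and UUID below is a placeholder.

```python
# Sketch: provision a bare metal workload through OpenStack, treating dedicated
# servers like cloud instances. Assumes Ironic is wired into Nova and that a
# flavor mapped to a bare metal hardware class exists; all IDs are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")

# Inventory: the pool of dedicated machines managed by Ironic.
for node in conn.baremetal.nodes():
    print(node.name, node.provision_state)

# Consume a machine "cloud-style": boot an image onto a bare metal flavor.
server = conn.compute.create_server(
    name="db-baremetal-01",
    image_id="IMAGE_UUID",              # placeholder image UUID
    flavor_id="BAREMETAL_FLAVOR_UUID",  # flavor mapped to a hardware class
    networks=[{"uuid": "NETWORK_UUID"}],
)
server = conn.compute.wait_for_server(server)
print("Provisioned:", server.name, server.status)
```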
-
Software Development, Microservices & Container Management – Part III – Why Kubernetes? A Deep Dive into Kubernetes world
Together with my colleague Bettina Bassermann and SUSE partners, we will be running a series of blogs and webinars from SUSE (Software Development, Microservices & Container Management, a SUSE webinar series on modern Application Development), and try to address the aforementioned questions and doubts about K8s and cloud-native development, and how it does not compromise quality and control.
-
Epic Performance with New Tuning Guide – SUSE Linux Enterprise Server on AMD EPYC* 7002 Series Processors
EPYC is AMD’s flagship line of mainstream server microprocessors and supports 1-way and 2-way multiprocessing. The first generation was originally announced back in May 2017 and replaced the previous Opteron server family with the introduction of the Zen microarchitecture for the mainstream market.
-
Content Lifecycle Management in SUSE Manager
Content lifecycle management is about controlling how patches flow through your infrastructure in a staged manner. In an ideal setup, the latest patches are always applied to development servers first. If everything is good there, those patches are then applied to QA servers and finally to production servers. This lets sysadmins catch issues early and prevents patching production systems in a way that could bring down live environments.
SUSE Manager gives you this control via content lifecycles. You create custom channels in SUSE Manager, for example dev, qa and prod, and then register your systems to those channels according to their criticality. Whenever a channel receives new patches, they become available for installation on the systems registered to that channel. So if you control the channels, you control patch availability to systems.
In content lifecycle management, SUSE Manager lets you push patches to channels manually. On the first deploy, all the latest patches become available to the dev channel and hence to dev systems. At this stage, if you run update commands (zypper up, yum update), they will show the latest patches only on dev servers; QA and prod servers won't show any new patches.
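The staged flow is easy to model; the toy sketch below is purely conceptual (it does not use the SUSE Manager API) and only illustrates how registering systems to dev, qa and prod channels controls which patches they can see.

```python
# Conceptual sketch (not the SUSE Manager API): a toy model of staged patch
# channels to illustrate the dev -> qa -> prod promotion flow described above.

channels = {"dev": set(), "qa": set(), "prod": set()}
systems = {"devbox01": "dev", "qaserver01": "qa", "webprod01": "prod"}

def sync_patches(new_patches):
    """New patches always land in the dev channel first."""
    channels["dev"].update(new_patches)

def promote(source, target):
    """Manually promote everything in one channel to the next stage."""
    channels[target].update(channels[source])

def available_patches(system):
    """A system only sees patches in the channel it is registered to."""
    return channels[systems[system]]

sync_patches({"kernel-update-1234", "openssl-update-5678"})   # invented patch IDs
print(available_patches("devbox01"))   # dev sees the new patches immediately
print(available_patches("webprod01"))  # prod sees nothing yet

promote("dev", "qa")                   # after testing on dev
promote("qa", "prod")                  # after QA sign-off
print(available_patches("webprod01"))  # now prod can install the patches
```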
-
The Early History of Usenet, Part VII: Usenet Growth and B-News
For quite a while, it looked like my prediction — one to two articles per day — was overly optimistic. By summer, there were only four new sites: Reed College, University of Oklahoma (at least, I think that that's what uucp node uok is), vax135, another Bell Labs machine — and, crucially, U.C. Berkeley, which had a uucp connection to Bell Labs Research and was on the ARPANET.
In principle, even a slow rate of exponential growth can eventually take over the world. But that assumes that there are no "deaths" that will drive the growth rate negative. That isn't a reasonable assumption, though. If nothing else, Jim Ellis, Tom Truscott, Steve Daniel, and I all planned to graduate. (We all succeeded in that goal.) If Usenet hadn't shown its worth to our successors by then, they'd have let it wither. For that matter, university faculty or Bell Labs management could have pulled the plug, too. Usenet could easily have died aborning. But the right person at Berkeley did the right thing.
Mary Horton was then a PhD student there. (After she graduated, she joined Bell Labs; she and I were two of the primary people who brought TCP/IP to the Labs, where it was sometimes known as the "datagram heresy". The phone network was, of course, circuit-switched…) Known to her but unknown to us, there were two non-technical ARPANET mailing lists that would be of great interest to many potential Usenet users, HUMAN-NETS and SF-LOVERS. She set up a gateway that relayed these mailing lists into Usenet groups; these were at some point moved to the fa ("From ARPANET") hierarchy. (For a more detailed telling of this part of the story, see Ronda Hauben's writings.) With an actual traffic source, it was easy to sell folks on the benefits of Usenet. People would have preferred a real ARPANET connection but that was rarely feasible and never something that a student could set up: ARPANET connections were restricted to places that had research contracts with DARPA. The gateway at Berkeley was, eventually, bidirectional for both Usenet and email; this enabled Usenet-style communication between the networks.
Kubernetes: Helm and Gardener Projects
Submitted by Roy Schestowitz on Tuesday 3rd of December 2019 03:50:20 AM Filed under

-
Helm Package Manager for Kubernetes Moves Forward
The official release of version 3.0 of the Helm package manager for Kubernetes is designed to make it easier for IT organizations to discover and securely deploy software on Kubernetes clusters.
Taylor Thomas, a core contributor to Helm who is also a software developer for Nike, says for the last year the committee that oversees the development of Helm under the auspices of the Cloud Native Computing Foundation (CNCF) has been structuring the package manager to rely more on the application programming interfaces (APIs) that Kubernetes exposes to store records of installation. Helm Charts, which are collections of YAML files describing a related set of Kubernetes resources, now can be rendered on the client, eliminating the need for the Tiller resource management tool resident in the previous release of Helm that ran on the Kubernetes cluster.
In addition to providing a more secure way to render Helm Charts, Thomas says this approach provides a more streamlined mechanism for packaging software using Helm. Helm 3.0 also updates Helm Charts and associated libraries.
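Because Helm 3 renders charts on the client, a release can be templated locally with no in-cluster Tiller at all; the sketch below simply shells out to the Helm 3 binary, and the chart and release names are placeholders.

```python
# Sketch: Helm 3 renders charts entirely on the client, so a release can be
# templated locally with no Tiller running in the cluster. Assumes the helm 3
# binary is on PATH; the chart and release names are placeholders.
import subprocess

release = "demo"
chart = "stable/nginx-ingress"   # placeholder chart reference

# `helm template` expands the chart into plain Kubernetes manifests locally.
rendered = subprocess.run(
    ["helm", "template", release, chart],
    check=True,
    capture_output=True,
    text=True,
).stdout

print(rendered[:500])  # inspect the output (or pipe it to `kubectl apply -f -`)
```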
Additionally, a revamped Helm Go software development kit (SDK) is designed to make Helm more accessible, with the aim of sharing and reusing code the Helm community has open-sourced with the broader Go community, says Thomas.
-
Gardener Project Update
Last year, we introduced Gardener in the Kubernetes Community Meeting and in a post on the Kubernetes Blog. At SAP, we have been running Gardener for more than two years, and are successfully managing thousands of conformant clusters in various versions on all major hyperscalers as well as in numerous infrastructures and private clouds that typically join an enterprise via acquisitions.
We are often asked why a handful of dynamically scalable clusters would not suffice. We also started our journey into Kubernetes with a similar mindset. But we realized that, when applying the architecture and principles of Kubernetes to productive scenarios, our internal and external customers very quickly required the rational separation of concerns and ownership, which in most circumstances led to the use of multiple clusters. Therefore, a scalable and managed Kubernetes as a service solution is often also the basis for adoption. Particularly, when a larger organization runs multiple products on different providers and in different regions, the number of clusters will quickly rise to the hundreds or even thousands.
Today, we want to give an update on what we have implemented in the past year regarding extensibility and customizability, and what we plan to work on for our next milestone.
Kubernetes, IBM and Red Hat
Submitted by Roy Schestowitz on Sunday 1st of December 2019 04:34:52 PM Filed under

-
Analysts Say Kubernetes Is a Services-Building Opportunity for the Channel
-
KubeCon Showed Kubernetes Is Big, but Is It a Unicorn?
-
Oracle, Red Hat See the Value of Kubernetes for Channel Partners
-
IBM ships out two new open-source tools for Kubernetes developers
Taking center stage at the Kubecon + CloudNativeCon co-located events in San Diego today, IBM Corp. announced two new open-source tools for the Kubernetes ecosystem, as well as updates to two of its existing projects.
The new tools include Kui, which is meant to ease the oftentimes “chunky experience” developers have to deal with when working with hybrid or multicloud application deployments. There’s also Iter8, which is a tool for collecting data and telemetry generated by the open-source software service mesh Istio.
-
Red Hat: Platform For E2E Processes And Digital Excellence
For many years, Linux was the only open source component in the SAP community. Now Red Hat is launching a complete open source platform. Peter M. Faerbinger spoke to Jochen Glaser about this unique solution.
-
Red Hat: The Goal Is Digital Excellence
SAP is building the intelligent enterprise. Red Hat stands fully behind this strategy. The upcoming modernization will involve drastic changes. They relate to the migration to Hana and the migration of existing SAP applications, including custom code, to S/4.
-
Digital transformation: 3 people pain points
Considering the insatiable customer appetite for better and more efficient service, digital transformation is now a bit of an arms race: Companies that resist it risk being left behind by their competitors and customers.
During my long tenure as an IT leader, I’ve found that the biggest challenges in any company change always boil down to people. In today’s world, where the modus operandi is to get things done as quickly as possible, it can be easy to lose sight of the things that will help a project go well.
