
Rolling release is good for better stability & security

Rolling release is good for one reason: you get the full security and bug-fix updates as intended by upstream.

No amount of backporting fixes is enough to keep a system secure and bug-free. It's as simple as that. If I backport fixes from the kernel git tree to a stable 2.6.2x kernel release, I'm most likely going to miss a lot of them. Cherry-picking fixes only for the popular bugs isn't a solution, and it leaves static-release distributions weak.
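As a rough illustration of the gap (a sketch only, not from the original post: the tag and branch names are hypothetical examples, and it assumes you are inside a Linux kernel git checkout), one can count the fix-tagged commits that exist upstream but never reached a given stable tag:

#!/usr/bin/env python3
# Sketch: count upstream commits carrying a "Fixes:" tag that are absent
# from a given stable tag. Assumes a Linux kernel git checkout; the tag
# and branch names below are hypothetical examples.
import subprocess

STABLE_TAG = "v2.6.27"    # example stable release being maintained
UPSTREAM_REF = "master"   # upstream development branch

def fix_commit_subjects(rev_range):
    """Return the subjects of commits in rev_range whose message contains
    a 'Fixes:' tag, i.e. commits that repair earlier bugs."""
    out = subprocess.run(
        ["git", "log", "--format=%s", "--grep=Fixes:", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if line]

# Fix-tagged commits present upstream but not contained in the stable tag.
missing = fix_commit_subjects("%s..%s" % (STABLE_TAG, UPSTREAM_REF))
print("%d fix-tagged upstream commits are absent from %s"
      % (len(missing), STABLE_TAG))

On any long-lived stable series the resulting count will be far larger than what any distribution backports by hand, which is the point.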

The only requirement for a rolling release to work is to keep the base system as simple as possible. Theoretically, no downstream patching should be done to packages such as glibc, gcc or the kernel unless the patch is one that will eventually be merged into a future upstream release.

re: poll

For servers: static release/repo.

The "theory" of rolling releases is great, but the real world application, not so much.

Servers MUST be stable and secure. With a rolling release, you rely too much on the upstream vendor not to fubar something your system must have (not that it can't be done - mainframes have been doing rolling upgrades for decades - it's just EXPENSIVE to do it right).

RHEL/CentOS has the right business model: forget the fluff (and/or the bleeding-edge stuff), only put well-tested software into their repos, backport security fixes as needed, and support the whole thing for 5 years (or longer for security patches).

Of course, it doesn't really matter what method the upstream vendor uses; you still need to run a parallel test environment alongside your production environment and test everything (and I mean EVERYTHING) in the first before rolling it out on the second.

It's just easier (for me, anyway) to plan your server environments (and their future) if you have static (but not on the ridiculously short 6-month timeframe) releases.

Which would you say is better for a Linux server?

I have heard this topic discussed in various forums, from various points of view.

Which would you say is the better choice for a Linux-based server?

Please give reasoning for your answers, and don't post "sux" or "rules" nonsense.

Big Bear

More in Tux Machines

Mesa and Intel Graphics

RadeonSI OpenGL vs. RADV Vulkan Performance For Mad Max

Feral Interactive today released into public beta its first Linux-ported game to feature a Vulkan renderer. Mad Max on Linux now supports both Vulkan and OpenGL, making for some fun driver/GPU benchmarking. Up first are some Radeon RX 480 and R9 Fury Vulkan vs. OpenGL benchmarks for Mad Max using Mesa 17.1-dev Git. Read more

Ubuntu 17.04: A mouse-sized step forward

It's almost the fourth month of the year. You know what that means: a new Ubuntu release is upon us. This time around, the release number is 17.04 and the name is Zesty Zapus. For those who don't know, Zapus is a genus of North American jumping mice and the only extant mammal with a total of 18 teeth, which makes the zapus quite unique. Does that translate over to the upcoming release of one of the most popular Linux distributions on the planet (currently listed as fourth on DistroWatch)? Let's find out. Read more

Quad-core Atom thin client offers hardened ThinLinux

Dell revealed a tiny "Wyse 3040" thin client that runs ThinOS or a hardened new ThinLinux on a quad-core Intel SoC, and supports Citrix, Microsoft, and VMware environments. Dell has launched its "lightest, smallest and most power-efficient thin client" yet, with a 101.6 x 101.6 x 27.9mm Wyse 3040 system that weighs 0.24kg and runs on under 5 watts. The device is powered by a quad-core, 1.44GHz Intel Atom x5-Z8350 "Cherry Trail" SoC, giving it 30 percent better performance than "previous generations," says Dell, presumably referring to the single-core Wyse 3010 and the dual-core 3020 and 3030. The power-efficient (2W SDP) SoC also runs on the UP board and UP Core SBCs. Read more