
Slackware Documentation Project


A while back, Eric Hameleers (Alien Bob), Niki Kovacs, and others in conversation at LinuxQuestions.org tossed around the idea of creating a wiki for Slackware similar to the excellent one the Arch Linux community maintains. The dream became a reality. Smile

Visit, learn, participate...

http://docs.slackware.com/

Regards,

V. T. Eric Layton (Nocturnal Slacker, vtel57)
Tampa, Florida, USA
http://vtel57.com

Subjectivity Rules

A lot of it is personal preference, Dr. Roy, as you probably know. My first foray into GNU/Linux was in 2006 with Ubuntu 6.06 Dapper Drake. From there, I tried boatloads of distributions over the next few weeks. I settled on Slackware because I liked its simplicity, stability, and attitude. A close second to Slackware for me would be pure Debian (not derivatives), for similar reasons, particularly the stability.

Were I running a server in a commercial or private implementation, I would have to run it with Slackware as the first option or Debian as the second. There actually are commercial distributions like Red Hat, but you can get CentOS for free, and the only real differences are the documentation and support. With RH, you get to talk to someone when you have an issue. With CentOS, you'll have to do a bit of research on your own.

As far as Arch goes, what mostly scares people away from that distribution is the fact that it doesn't really have an installer, per se. The entire installation process is a series of command-line commands and script/file edits. It's not that difficult, but if one isn't comfortable outside of the GUI realm, it can be scary. And while Arch is pretty stable as far as it goes, it's vulnerable to occasional breakage because it is a rolling-release distribution that stays very near the bleeding edge when it comes to the apps in its repos.

Gentoo? HA! Won't even go there. That much-loved (by its hardcore adherents) distribution is primarily for those who enjoy self-flagellation and other fun masochistic hobbies. Wink No, seriously... I respect Gentoo Linux people. I've tried it. It's not my cup o', but when done right, it can be a very stable and efficient operating system.

As the title of this post says, it's really just a personal choice. Folks do their homework (hopefully) and decide upon a distribution that works best for them in their circumstances. That's the wonderful thing about GNU/Linux and Open Source... FREEDOM OF CHOICE!

There you have it...

~Eric

Choice

vtel57 wrote:

A lot of it is personal preference, Dr. Roy, as you probably know. My first foray into GNU/Linux was in 2006 with Ubuntu 6.06 Dapper Drake. From there, I tried boatloads of distributions over the next few weeks. I settled on Slackware because I liked its simplicity, stability, and attitude. A close second to Slackware for me would be pure Debian (not derivatives), for similar reasons, particularly the stability.

I use Debian more and more, but not as my main distro. I like the simplicity of E18 for some things, whereas KDE is still the most functional (where resources permit). For me, Ubuntu started in 2004, but I had used other distros before it (Red Hat was my first). SUSE was a favourite before the Microsoft-Novell deal.

vtel57 wrote:

Were I running a server in a commercial or private implementation, I would have to run it with Slackware as the first option or Debian as the second. There actually are commercial distributions like Red Hat, but you can get CentOS for free, and the only real differences are the documentation and support. With RH, you get to talk to someone when you have an issue. With CentOS, you'll have to do a bit of research on your own.

CentOS powers Techrights and Tux Machines. I can cope with it fine, but it takes some learning if you come from a DEB world and must also adapt to third-party repos. The upgrades to CentOS 6 made things easier. CentOS 5 was getting long in the tooth.

vtel57 wrote:

As far as Arch goes, what mostly scares people away from that distribution is the fact that it doesn't really have an installer, per se. The entire installation process is a series of command-line commands and script/file edits. It's not that difficult, but if one isn't comfortable outside of the GUI realm, it can be scary. And while Arch is pretty stable as far as it goes, it's vulnerable to occasional breakage because it is a rolling-release distribution that stays very near the bleeding edge when it comes to the apps in its repos.

Arch is used by many people I know, but I just don't see the big advantage of it. I know the pros and cons and the latter outweighs the former. I want a simple binary distro with good, reliable, extensive repos.

vtel57 wrote:

Gentoo? HA! Won't even go there. That much-loved (by its hardcore adherents) distribution is primarily for those who enjoy self-flagellation and other fun masochistic hobbies. Wink No, seriously... I respect Gentoo Linux people. I've tried it. It's not my cup o', but when done right, it can be a very stable and efficient operating system.

Gentoo is for ricers, some say...

vtel57 wrote:

As the title of this post says, it's really just a personal choice. Folks do their homework (hopefully) and decide upon a distribution that works best for them in their circumstances. That's the wonderful thing about GNU/Linux and Open Source... FREEDOM OF CHOICE!

Which is spun as a negative by the proprietary software proponents.

Linux Is Linux Is Linux...

People often ask me what the major differences between distributions of Linux are. I tell them that Linux is Linux is Linux... meaning, the distributions are all GNU/Linux at their heart. The major differences between the distributions mostly have to do with methods of package management, along with some other minor differences like init methods, daemon handling, etc.

To learn Linux, I found it was best to try as many distributions as I could manage. At one time, I had machines in my shop or home with 20+ operating systems installed on them at any given time. If you learn the package management and the other minor things of each distro, you begin to develop a feel and a competence for dealing with any of them. Familiarity with the command line is a plus.

I remember a mentor of mine, Bruno Knaapen of Amsterdam - Senior All Things Linux Admin at Scot's Newsletter Forums, once told me that if I wanted to surf the net and read emails, run Ubuntu. If I wanted to learn Linux, run Slackware. I chose the latter path.

8 years later, I'm no guru, but I can command line my way out of a paper bag if I have to. Wink

P.S. I was always impressed with CentOS. Up until recently, there were almost always installations of CentOS and Debian on all my systems along with my primary OS, Slackware. Lately though, I've suspended experimentation for the most part. I'm just happy using my rock solid Slackware OS. The lessening of tinkering has also been a large part of the reason my writing output on my blog has diminished, unfortunately.

Blog focus

Yes, I recently took another look at the blog and realised it's no longer so GNU/Linux-centric.

Blog Evolution

Well, it was never really meant to be purely GNU/Linux. Nocturnal Slacker v1.0 is the direct descendant of my original blog from when I was writing on Chris Pirillo's LockerGnome site a few years back. It was technically oriented, but also had general topics.

When I left LockerGnome, after changes were made to that site, I split Nocturnal Slacker into two distinct blogs: v1.0 remains technical, while v2.0 covers purely general topics. The original Nocturnal Slacker blog is still available as an archive, though.

They can be accessed from my website --> http://vtel57.com

All techie stuff and no general topic rants make Eric a dull boy. Wink

LockerGnome and Pirillo

I have not seen anything from LockerGnome or even Pirillo for a long time. Did he collapse with Windows' demise?

Pirillo Still Kickin'

Chris is still around --> http://www.lockergnome.com/

Lockergnome has gone through some serious transformations and refocusing over the last few years, though. Chris seems to have moved almost entirely to his video channel on YouTube. The original Lockergnome site is still up, but it's much different than it was in the past.

Lockergnome

One sure thing is, Lockergnome is no longer influential.

I used to see many links to/articles in Lockergnome.

Last I spoke to Chris, it was about removing some USENET archives he had put there (which he did). That was a very long time ago.

Slackware

I used to want to move to Arch or its derivatives, but I found the documentation a bit daunting. The same goes for Slackware, and I used to stay away from Debian for similar reasons (until several years ago). Gentoo was out of the question, and it doesn't seem to be quite so active anymore (barely any releases).

What would be the advantage of using Slackware on a server or desktop at this stage?

More in Tux Machines

Programming: WebAssembly, Mozilla GFX, Qt and Python

  • WebAssembly for speed and code reuse

    Imagine translating a non-web application, written in a high-level language, into a binary module ready for the web. This translation could be done without any change whatsoever to the non-web application's source code. A browser can download the newly translated module efficiently and execute the module in the sandbox. The executing web module can interact seamlessly with other web technologies, with JavaScript (JS) in particular. Welcome to WebAssembly.

    As befits a language with assembly in the name, WebAssembly is low-level. But this low-level character encourages optimization: the just-in-time (JIT) compiler of the browser's virtual machine can translate portable WebAssembly code into fast, platform-specific machine code. A WebAssembly module thereby becomes an executable suited for compute-bound tasks such as number crunching.

    Which high-level languages compile into WebAssembly? The list is growing, but the original candidates were C, C++, and Rust. Let's call these three the systems languages, as they are meant for systems programming and high-performance applications programming. The systems languages share two features that suit them for compilation into WebAssembly. The next section gets into the details, which sets up full code examples (in C and TypeScript) together with samples from WebAssembly's own text format language.

  • Mozilla GFX: moz://gfx newsletter #47

    Hi there! Time for another Mozilla graphics newsletter. In the comments section of the previous newsletter, Michael asked about the relation between WebRender and WebGL; I'll try to give a short answer here.

    Both WebRender and WebGL need access to the GPU to do their work. At the moment both of them use the OpenGL API, either directly or through ANGLE, which emulates OpenGL on top of D3D11. They, however, each work with their own OpenGL context. Frames produced with WebGL are sent to WebRender as texture handles. WebRender, at the API level, has a single entry point for images, video frames, and canvases: in short, for every grid of pixels in some flavor of RGB format, be they CPU-side buffers or already in GPU memory, as is normally the case for WebGL. In order to share textures between separate OpenGL contexts we rely on platform-specific APIs such as EGLImage and DXGI.

    Beyond that there isn't any fancy interaction between WebGL and WebRender. The latter sees the former as an image producer, just like 2D canvases, video decoders, and plain static images.

  • The Titler Revamp: QML Producer in the making

    At the beginning of this month, I started testing the new producer, as I had a good, rough structure for the producer code and was only facing a few minor problems. Initially, I was unclear about how exactly the producer is going to be used by the titler, so I took a small step back and spent some time figuring out how kdenlivetitle, the producer currently in use, worked.

    I then faced integration problems (the ones you'd normally expect) when I tried to make use of the QmlRenderer library for rendering and loading QML templates, and most of them were resolved by a simple refactoring of the QmlRenderer library source code. To give an example, the producer traditionally stores the QML template in global variables, taken as a character pointer argument (which is, again, traditional C). The QmlRenderer library takes a QUrl as its parameter for loading the QML file, so to solve this problem all I had to do was overload the loadQml() method with one which could accommodate the producer's needs, which worked perfectly fine. As a consequence, I also had to compartmentalise (further) the rendering process, so now we have three methods which run sequentially when we want to render something using the library (initialiseRenderParams() -> prepareRenderer() -> renderQml()). [...]

    The problem was finally resolved (thank you JB), and it was not due to OpenGL but simply because I hadn't created a QApplication for the producer (which is necessary for Qt producers). The whole month's been a steep curve, definitely not easy, but I enjoyed it! Right now, I have a producer which is almost complete and which, with a little more tweaking, will hopefully be put to use. I'm still facing a few minor issues which I hope to resolve soon to get a working producer. Once we get that, I can start work on the Kdenlive side. Let's hope for the best!

  • How to Make a Discord Bot in Python

    In a world where video games are so important to so many people, communication and community around games are vital. Discord offers both of those and more in one well-designed package. In this tutorial, you’ll learn how to make a Discord bot in Python so that you can make the most of this fantastic platform.

  • Qt Visual Studio Tools 2.4 RC Released

    The Visual Studio Project System is widely used as the build system of choice for C++ projects in VS. Under the hood, MSBuild provides the project file format and build framework. The Qt VS Tools make use of the extensibility of MSBuild to provide design-time and build-time integration of Qt in VS projects — toward the end of the post we have a closer look at how that integration works and what changed in the new release.

    Up to this point, the Qt VS Tools extension managed its own project settings in an isolated manner. This approach prevented the integration of Qt in Visual Studio to fully benefit from the features of VS projects and MSBuild. Significantly, it was not possible to have Qt settings vary according to the build configuration (e.g. having a different list of selected Qt modules for different configurations), including Qt itself: only one version/build of Qt could be selected and would apply to all configurations, a significant drawback in the case of multi-platform projects.

    Another important limitation that users of the Qt VS Tools have reported is the lack of support for importing Qt-related settings from shared property sheet files. This feature allows settings in VS projects to be shared within a team or organization, thus providing a single source for that information. Up to now, this was not possible to do with settings managed by the Qt VS Tools.
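
On the Discord bot item in the list above: the tutorial it points to is Python-based, and the heart of such a bot is a single message-event handler. Below is a minimal sketch using the discord.py library (the `Intents`, `Client`, `on_message`, and `run` names are from that library's public API; the `!ping` command, the `DISCORD_TOKEN` environment variable, and the `make_reply` helper are illustrative choices of mine, not from the article):

```python
import os


def make_reply(content: str):
    """Pure command logic, testable without any network connection.

    Returns the bot's reply for a message, or None to stay silent.
    The '!ping' command is an illustrative choice, not from the article.
    """
    if content.strip() == "!ping":
        return "pong"
    return None


def run_bot():
    # discord.py wiring; imported lazily so the pure logic above can be
    # used (and tested) even when the library is not installed.
    import discord

    intents = discord.Intents.default()
    intents.message_content = True  # required to read message text
    client = discord.Client(intents=intents)

    @client.event
    async def on_message(message):
        if message.author == client.user:
            return  # never answer our own messages
        reply = make_reply(message.content)
        if reply is not None:
            await message.channel.send(reply)

    # The token is read from an environment variable here; how you
    # store yours is up to you.
    client.run(os.environ["DISCORD_TOKEN"])

# run_bot()  # uncomment to start the bot (needs discord.py and a token)
```

Keeping the reply logic in a plain function means it can be exercised without ever connecting to Discord.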

Screenshots/Screencasts: 10 GNU/Linux Distros (Screenshots) and New Screencast/Video of Endeavour OS 2019.08.17

  • 10 Linux distros: From different to dangerous

    One of the great benefits of Linux is the ability to roll your own. Throughout the years, individuals, organizations, and even nation states have done just that. In this gallery, we're going to showcase some of those distros. Be careful, though. You may not want to load these, or if you do, put them in isolated VMs. We're not kidding when we say they could be dangerous.

  • Endeavour OS 2019.08.17 Run Through

    In this video, we are looking at Endeavour OS 2019.08.17.

A Cycle of Renewal, Broken: How Big Tech and Big Media Abuse Copyright Law to Slay Competition

In 1950, a television salesman named Robert Tarlton put together a consortium of TV merchants in the town of Lansford, Pennsylvania to erect an antenna tall enough to pull down signals from Philadelphia, about 90 miles to the southeast. The antenna connected to a web of cables that the consortium strung up and down the streets of Lansford, bringing big-city TV to their customers — and making TV ownership for Lansfordites far more attractive. Though hobbyists had been jury-rigging their own "community antenna television" networks since 1948, no one had ever tried to go into business with such an operation. The first commercial cable TV company was born.

The rise of cable over the following years kicked off decades of political controversy over whether the cable operators should be allowed to stay in business, seeing as they were retransmitting broadcast signals without payment or permission and collecting money for the service. Broadcasters took a dim view of people using their signals without permission, which is a little rich, given that the broadcasting industry itself owed its existence to the ability to play sound recordings over the air without permission or payment. The FCC brokered a series of compromises in the years that followed, coming up with complex rules governing which signals a cable operator could retransmit, which ones they must retransmit, and how much all this would cost. The end result was a second way to get TV, one that made peace with — and grew alongside — broadcasters, eventually coming to dominate how we get cable TV in our homes.

By 1976, cable and broadcasters joined forces to fight a new technology: home video recorders, starting with Sony's Betamax recorders. In the eyes of the cable operators, broadcasters, and movie studios, these were as illegitimate as the playing of records over the air had been, or as retransmitting those broadcasts over cable had been. Lawsuits over the VCR continued for the next eight years. In 1984, the Supreme Court finally weighed in, legalizing the VCR, and finding that new technologies were not illegal under copyright law if they were "capable of substantial noninfringing uses."

Read more

Software, HowTos and Storage

  • Pause Music When Locking The Screen And Resume On Unlock For Spotify, Rhythmbox, Others

    When you lock your computer screen (without suspending the system), most desktop audio players continue playback in the background, sometimes without emitting any sound. Because of this, you may unintentionally skip parts of podcasts, songs in a playlist, etc. Enter pause-on-lock, a Bash script that pauses your music player when you lock the screen and resumes playback once the screen is unlocked. pause-on-lock works on the Unity, GNOME, Cinnamon and MATE desktop environments, and by default it supports Spotify and Rhythmbox. With the help of playerctl (a command-line controller for media players that support the MPRIS D-Bus interface), the script can extend its support to many other music players, including Audacious, VLC, Cmus, and more.

  • Easy Way to Screen Mirroring Android on Ubuntu!

    Screen mirroring is a feature found on smartphones, including Android. It displays the smartphone's screen on a computer, which is very useful, for example, when demoing applications you have made, or for other smartphone-related tasks. In Ubuntu, we can do screen mirroring with applications available for Android; one example is AirDroid, which can mirror the screen through a browser. But I found this quick method less than optimal: there is a lag between activity on the smartphone and what appears on the computer's monitor, and the results are subpar, perhaps because it runs through a browser over Wi-Fi. I looked for another screen-mirroring application for Ubuntu, and one very good option is Scrcpy. This application can mirror the screen without rooting the device.

  • Command line quick tips: Searching with grep
  • How to Install Cezerin on Debian 9
  • How to Create a Bootable USB Stick from the Ubuntu Terminal
  • How to Install Git on Debian 10
  • How to Copy/Move a Docker Container to Another Host
  • Six practical use cases for Nmap
  • The Next Stage of Flash Storage: Computational Storage
  • NAS upgrade

    At some point in the future I hope to spend a little bit of time on the software side of things, as some of the features of my set up are no longer working as they should: I can't remote-decrypt the main disk via SSH on boot, and the first run of any backup fails due to some kind of race condition in the systemd unit dependencies. (The first attempt does not correctly mount the backup partition; the second attempt always succeeds).

  • Storage Concepts And Technologies Explained In Detail
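
On the pause-on-lock item above: the original is a Bash script, but the underlying idea, watching for the desktop's lock signal and driving playerctl accordingly, is easy to sketch. Here is a rough Python equivalent, assuming a GNOME session (playerctl's `pause`/`play` subcommands and GNOME's `org.gnome.ScreenSaver` `ActiveChanged` D-Bus signal are real; the event names and helper functions are my own, and other desktops expose different interfaces):

```python
import subprocess


def command_for(lock_event):
    """Map a screen-lock event to a playerctl invocation.

    playerctl's 'pause' and 'play' subcommands are real; the event
    strings here are just this sketch's own convention.
    """
    if lock_event == "locked":
        return ["playerctl", "pause"]
    if lock_event == "unlocked":
        return ["playerctl", "play"]
    return None


def watch_gnome_screensaver():
    # Follow the session bus for GNOME's ActiveChanged signal.
    # dbus-monitor prints 'boolean true' when the screen locks and
    # 'boolean false' when it unlocks on a typical GNOME session.
    proc = subprocess.Popen(
        ["dbus-monitor", "--session",
         "type='signal',interface='org.gnome.ScreenSaver'"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        line = line.strip()
        if line == "boolean true":
            subprocess.run(command_for("locked"))
        elif line == "boolean false":
            subprocess.run(command_for("unlocked"))

# watch_gnome_screensaver()  # uncomment to run; blocks forever
```

Separating the event-to-command mapping from the D-Bus plumbing keeps the part worth testing free of any desktop dependency.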
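
On the Scrcpy item above: scrcpy finds the phone through adb, so a quick sanity check before mirroring is listing the devices adb can see. A small Python helper as a sketch (the `adb devices` output format and scrcpy's `-s` flag for selecting a device by serial are the external facts assumed; the function names are mine):

```python
import subprocess


def parse_adb_devices(output: str):
    """Parse `adb devices` output into a list of ready serials.

    adb prints a 'List of devices attached' header followed by
    '<serial>\t<state>' lines; only devices in the 'device' state
    (not 'unauthorized' or 'offline') are ready for scrcpy.
    """
    serials = []
    for line in output.splitlines():
        line = line.strip()
        if not line or line.startswith("List of devices"):
            continue
        parts = line.split()
        if len(parts) >= 2 and parts[1] == "device":
            serials.append(parts[0])
    return serials


def mirror_first_device():
    # List devices via adb, then hand the first ready one to scrcpy.
    out = subprocess.run(["adb", "devices"],
                         capture_output=True, text=True).stdout
    serials = parse_adb_devices(out)
    if serials:
        subprocess.run(["scrcpy", "-s", serials[0]])

# mirror_first_device()  # uncomment to run; needs adb and scrcpy installed
```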