Exploring the Future of Computing
Okay so I'm using this perfectly fine article as an excuse to bring something up, so bear with me here.
If you haven't been paying attention to the PC world lately, you might not have noticed that the lowly PC has seen a bit of a resurgence, with interesting designs and unique concepts. This was on full display at CES just a couple of months ago, where PC makers such as Dell, Lenovo, and HP all trotted out interesting laptop designs.
But the laptop isn't the only PC that's seen a design-focused revival. The lowly desktop PC has transformed from a boring beige or black box into a centerpiece of a modern desk space. An all-in-one computer in 2017 is both functional as a computer and beautiful to appreciate as a piece of design.
This is only slightly related, but it's something that has been bugging me for years, and since I was confronted with it again this past weekend, I might as well get it out of my system: why is nobody innovating anymore in the field of building your own computer? So many aspects of building your own computer are completely crazy when you think about it, and it seems like nobody is really doing anything to fix them.
For instance, why haven't we come up with a way to increase the power you can draw from a PCI-E slot, so that graphics cards don't have to be plugged into the PSU directly with unwieldy power cables, with connectors in the most boneheaded location on the graphics card?
Why are we still using those horrible internal 9/10-pin connectors for USB, the front panel, audio, and so on? These are absolutely dreadful connectors, spread out all over the motherboard in illogical places forcing you to route cabling in unnatural ways, and the pins can easily bend. This is terrible 80s technology that we should've fixed by now.
And the most idiotic connector of them all, which is huge, stiff, almost impossible to plug in, remove, or route properly: the ATX power plug from the PSU to the motherboard. This thing is probably one of the worst connectors you can possibly find inside any computer, and the slot on the motherboard is in an incredibly illogical place considering most case layouts. To make matters worse, the CPU power connector sits at the top-left (usually) of the motherboard, so that's another unwieldy connector and cable with an unnatural route that you have to deal with. It's just terrible.
I like the inside of my computer to look as neat and tidy as possible - not only because it looks nice and is easier to clean, but also because it improves airflow, something quite important with today's processors and graphics cards. However, aging standards with terrible designs and horrible usability that wouldn't look out of place in a 1960s mainframe make that quite the challenge.
We've seen some minor improvements already these past ten years or so, with the advent of modular PSUs and the death of the dreadfully terrible IDE cables and Molex connectors, but more work is definitely needed. We need a replacement for the aging ATX standard, which delivers enough power to the motherboard for the board itself, video cards, and the processors and fans, through a single cable with a modern, easy-to-use connector. It'd be great if a replacement for SATA could also carry power, so that we no longer need to route individual power cables to our hard drives. We need to get rid of 9/10-pin connectors for things like USB and the front panel, and replace them with easy-to-use USB-like connectors.
And last but certainly not least: put all of these things in locations that make sense for the vast majority of cases in use today, so we can reduce the length of cables, save money in the process, and end up with cleaner, easier-to-use computers.
Intel, AMD, NVIDIA, case makers, Microsoft, and whoever else is involved here - sit around a damn table for once, and hash this stuff out. ATX is outdated garbage, and needs a modern replacement. ATX was introduced in 1995 - do you still want to use Windows 95? OS/2 Warp? Version 1.2.0 of the Linux kernel? System 7.5.1? Floppies? CRTs? Of course you don't!
Then why the hell are we still using ATX?
Around May 2015, Andrea "Mancausoft" Milazzo got in touch with Jakub Filipowicz, a Polish guy involved in MERA-400 computer history research; Jakub was writing an emulator of this machine, but the operating system was missing and almost impossible to find (details on the mera400.pl website [Polish]).
Jakub found 5 magnetic tapes at the Warsaw Museum of Technology, hopefully containing copies of the CROOK operating system. The Museum was not able to read them. After some months, he managed to get hold of the tapes to attempt a data recovery and extract the operating system.
Fascinating story with tons of details, definitely a must-read. Interestingly enough - or sadly enough - I can't seem to find a whole lot of information on the MERA 400 in English, and since I don't speak or read Polish, I can't really give much more information than you can find in the source article. There is a Wikipedia page on the MERA 400's progenitor, the K-202.
Interesting little tidbit for the weekend: we now know what operating system the Nintendo Switch is running. Since it's basically an NVIDIA Shield, I kind of expected it to be running Android - heavily modded, of course - but it turns out it's running something else entirely: it's running FreeBSD.
Like Sony, Nintendo also opts for FreeBSD for its games console. This means of the four major gaming platforms, two run Windows, and two run FreeBSD. Fascinating.
In a paper out this week in Science, researchers Yaniv Erlich and Dina Zielinski report successfully using DNA to store and retrieve "a full computer operating system, movie, and other files".
DNA has the potential to provide large-capacity information storage. However, current methods have only been able to use a fraction of the theoretical maximum. Erlich and Zielinski present a method, DNA Fountain, which approaches the theoretical maximum for information stored per nucleotide. They demonstrated efficient encoding of information - including a full computer operating system - into DNA that could be retrieved at scale after multiple rounds of polymerase chain reaction.
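The "theoretical maximum" here follows from DNA's four-letter alphabet: two bits per nucleotide. DNA Fountain layers a fountain (Luby-transform) code on top of that basic mapping to get close to the ceiling despite biochemical constraints. A toy Python sketch of just the base mapping (the fountain-code layer is omitted, and the encoding shown is illustrative, not the paper's actual scheme):

```python
# Map bytes to DNA bases at 2 bits per nucleotide (the theoretical ceiling):
# 00 -> A, 01 -> C, 10 -> G, 11 -> T.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Encode bytes as a DNA sequence, four bases per byte."""
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):          # high bit-pair first
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Decode a DNA sequence back into the original bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)
```

Real schemes can't use this mapping directly - long runs of one base and extreme GC content are hard to synthesize and sequence - which is exactly the inefficiency the fountain-code layer is designed to route around.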
Which operating system? Turns out it's KolibriOS, the all-assembler, floppy-based x86 operating system originally based on MenuetOS.
So I'd like to tell you my version of the story of Firefox OS, from the birth of the Boot to Gecko open source software project as a mailing list post and an empty GitHub repository in 2011, through its commercial launch as the Firefox OS mobile operating system, right up until the "transition" of millions of lines of code to the community in 2016.
During this five year journey hundreds of members of the wider Mozilla community came together with a shared vision to disrupt the app ecosystem with the power of the open web. I'd like to reflect on our successes, our failures and the lessons we can learn from the experience of taking an open source browser based mobile operating system to market.
Apple is losing its grip on American classrooms, which technology companies have long used to hook students on their brands for life.
Over the last three years, Apple's iPads and Mac notebooks - which accounted for about half of the mobile devices shipped to schools in the United States in 2013 - have steadily lost ground to Chromebooks, inexpensive laptops that run on Google's Chrome operating system and are produced by Samsung, Acer and other computer makers.
Mobile devices that run on Apple's iOS and MacOS operating systems have now reached a new low, falling to third place behind both Google-powered laptops and Microsoft Windows devices, according to a report released on Thursday by Futuresource Consulting, a research company.
That's got to sting. Out of the many reasons why Chromebooks are way more successful than iPads in classrooms - they are cheaper, easier to manage, and so on - this is the one you're going to need to remember:
Then there is the keyboard issue. While school administrators generally like the iPad's touch screens for younger elementary school students, some said older students often needed laptops with built-in physical keyboards for writing and taking state assessment tests.
My oh my, I wonder what Apple could do to remedy this.
The Switch is a console sandwiched between a bar of success lowered by the disaster of the Wii U and the considerable ground Nintendo must make up.
Compared to the Wii U on its merits, the Switch is a slam dunk. It takes the basic concept of the Wii U, of a tablet-based console, and fulfills the promise of it in a way Nintendo simply wasn't capable of realizing in 2012. It's launching with a piece of software that, more than anything in the Wii U's first year, demonstrates its inherent capability of delivering what Nintendo says is one of the Switch's primary missions: a big-budget, AAA game that exists across a handheld device and a television-connected portable. The hardware lives up to its name in how easily and smoothly it moves between those two worlds, in how dead simple it all is to make something pretty magical happen.
I am genuinely excited by the Switch, and the prospects it brings to the table. I'm worried about the lineup of games - or lack thereof, really - so I'm not going to jump in straight away. The reviews of the device and its launch Zelda title are positive, though, so I'm looking forward to what Nintendo has in store for the Switch.
Android Studio 2.3 has been released.
We are most excited about the quality improvements in Android Studio 2.3, but you will find a small set of new features in this release that integrate into each phase of your development flow. When designing your app, take advantage of the updated WebP support for your app images, plus check out the updated ConstraintLayout library support and widget palette in the Layout Editor. As you are developing, Android Studio has a new App Link Assistant which helps you build and have a consolidated view of the URIs in your app. While building and deploying your app, use the updated run buttons for a more intuitive and reliable Instant Run experience. Lastly, while testing your app with the Android Emulator, you now have proper copy & paste text support.
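For those unfamiliar with the ConstraintLayout mentioned above: it lets a single flat layout position widgets through constraint attributes instead of nested containers. A minimal sketch of such a layout (the button's ID and text are illustrative placeholders, using the `android.support.constraint` package name current at the time):

```xml
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Centered horizontally, pinned to the bottom edge of the parent -->
    <Button
        android:id="@+id/done_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Done"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent" />

</android.support.constraint.ConstraintLayout>
```

The Layout Editor's widget palette generates attributes like these when you drag constraints between views.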
I hear a lot of negativity regarding Android Studio, but since I'm not a developer, I can't really make heads or tails of it. Is it really as bad as some people make it out to be?
The AMD Zen/Ryzen reviews and benchmarks are hitting the web (Ars has a review and a look at the Zen architecture, Tom's Hardware has a review, and there's bound to be more), but as always, the one you want is AnandTech's (they also have an interview with AMD's CEO):
For over two years the collective AMD vs Intel personal computer battle has been sitting on the edge of its seat. Back in 2014 when AMD first announced it was pursuing an all-new microarchitecture, old hands recalled the days when the battle between AMD and Intel was fun to be a part of, and users were happy that the competition led to innovation: not soon after, the Core microarchitecture became the dominant force in modern personal computing today. Through the various press release cycles from AMD stemming from that original Zen announcement, the industry is in a whipped frenzy waiting to see if AMD, through rehiring guru Jim Keller and laying the foundations of a wide and deep processor team for the next decade, can hold the incumbent to account. With AMD's first use of a 14nm FinFET node on CPUs, today is the day Zen hits the shelves and benchmark results can be published: Game On!
Gaming performance seems to lag behind Intel, while for workstation tasks, it has them beat. For me, an upgrade to Ryzen from my i5-4440 would amount to a total sum of about €900 (processor, motherboard, RAM, and cooling), so I'm going to wait it out for now - especially since gaming is what my processor is most used for. That being said - give it a year, and Ryzen will be up there on all fronts with Intel's best, but at a lower price point.
AMD is definitely back, and I'm very excited to see what competition will bring to the market.
The just released version 17.02 of the Genode OS framework comes with greatly enhanced virtual file-system capabilities, eases the creation of dynamic system compositions, and adds a new facility for processing user input. Furthermore, the components have become binary-compatible across kernel boundaries by default such that entire system scenarios can be moved from one kernel to another without recompiling the components.
Genode's virtual file-system (VFS) infrastructure has a twisted history. Originally created as a necessity for enabling command-line-based GNU programs to run within Genode's custom Unix runtime, the VFS was later extracted as a separate library. This library eventually became an optional and later intrinsic part of Genode's C runtime. It also happened to become the basis of a file-system-server component. If this sounds a bit confusing, it probably is. But the resulting design takes the notion of virtual file systems to a new level.
First, instead of providing a system-wide VFS like Unix does, in Genode each component can have its own VFS. Technically, it is a library that turns a number of Genode sessions into a file-system representation according to the component's configuration. Via those sessions, the component is able to access services provided by other components such as file systems, terminals, or block devices. Furthermore, several built-in file systems are provided locally from within the component. Since the VFS is local to each component, the view of the component's world can be shaped by its parent in arbitrary ways.
By default, each component runs in isolation. Whenever two components are meant to share a certain part of their VFS with one another, both mount a file-system session of the same server into their local VFS. This sharing is a deliberate decision by the component's common parent and thereby subjected to the parent's security policy. One particularly interesting file-system server is the so-called VFS server. It uses an arbitrarily configured VFS internally and exports its content as a file-system service, which can then be mounted in other components. This way, the VFS server can be used to emulate a "global" VFS, or to multiplex access to any file-system types supported by the VFS.
Speaking of supported file-system types, this is where the VFS becomes literally infinitely flexible. The VFS features a plugin interface that incorporates file system types provided in the form of shared libraries. If the VFS configuration refers to a file system type not known by the VFS, a corresponding plugin is loaded. For example, there exists a plugin for generating random numbers based on the jitter of CPU execution time. The file system, when mounted, hosts only a single read-only file that produces random numbers. But VFS plugins can become much more creative. Via the rump-kernel VFS plugin, one can incorporate the file systems of the NetBSD kernel into any VFS-using component. Genode 17.02 furthermore comes with a Plan-9-inspired VFS plugin that makes the Linux TCP/IP stack available as a file system. The C runtime then translates BSD-socket API calls to file-system operations on the socket file system, which, in turn, are handled by the Linux TCP/IP stack. The fascinating part is that this all happens within a single component. Such a component is in fact quite similar to a unikernel.
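To make the mechanism a bit more concrete, a per-component VFS is declared in the component's configuration. A rough sketch of what mounting built-in file systems alongside the Linux TCP/IP plugin could look like - note that the exact node names (`<log/>`, `<lxip>`, the directory layout) are assumptions based on the release notes, not verbatim Genode syntax:

```xml
<config>
	<vfs>
		<!-- built-in file system provided locally by the component -->
		<dir name="dev"> <log/> </dir>
		<!-- Linux TCP/IP stack exposed as a socket file system -->
		<dir name="socket"> <lxip dhcp="yes"/> </dir>
	</vfs>
	<libc stdout="/dev/log"/>
</config>
```

Because the parent supplies this configuration, it fully controls which services the component sees at which paths.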
If two applications ought to share the same TCP/IP stack, the VFS server comes in handy. The Linux TCP/IP stack is then mounted once in the VFS server, which, in turn, provides file-system sessions to the applications. Each application then accesses the TCP/IP stack indirectly through those file-system sessions. In this scenario, the VFS server suddenly becomes a network multiplexer.
The VFS is not the only topic of the current release. Another highlight is the introduction of an application binary interface that makes all components binary compatible across kernel boundaries by default. Combined with the new kernel-independent build directories, it has become possible to move complete system scenarios between kernels as different as L4, NOVA, seL4, or Linux in a matter of seconds. Further improvements of Genode 17.02 are the addition of a generic input-event processor, new SD-card drivers, the update to version 0.8 of the Muen separation kernel, and a new mechanism for managing dynamic subsystems. All the improvements are described in detail in the release documentation.
Tim Cook, during a shareholder meeting, when asked about a possible future convergence of macOS and iOS:
"Expect us to do more and more where people will view [the iPad] as a laptop replacement, but not a Mac replacement - the Mac does so much more," he said. "To merge these worlds, you would lose the simplicity of one, and the power of the other."
Oh really now.
The Republican-controlled FCC on Thursday suspended the net neutrality transparency requirements for broadband providers with fewer than 250,000 subscribers. Critics called the decision anticonsumer.
The transparency rule, waived for five years in a 2-1 party-line vote Thursday, requires broadband providers to explain to customers their pricing models and fees as well as their network management practices and the impact on broadband service.
The commission had previously exempted ISPs with fewer than 100,000 subscribers, but Thursday's decision expands the number of ISPs not required to inform customers. Only about 20 U.S. ISPs have more than 250,000 subscribers.
What could possibly go wrong?
The five-year waiver may be moot, however. FCC Chairman Ajit Pai and Republicans in Congress are considering ways to scrap a large chunk of the net neutrality regulations approved by the agency just two years ago.
Is it just me, or is the undoing of the opposing party's policies every 4-8 years a really terrible way to run a country?
From the EFF:
On February 23rd, a joint team from the CWI Amsterdam and Google announced that they had generated the first ever collision in the SHA-1 cryptographic hashing algorithm. SHA-1 has long been considered theoretically insecure by cryptanalysts due to weaknesses in the algorithm design, but this marks the first time researchers were actually able to demonstrate a real-world example of the insecurity. In addition to being a powerful Proof of Concept (POC), the computing power that went into generating the proof was notable.
So what's the big deal?
Unfortunately, the migration away from SHA-1 has not been universal. Some programs, such as the version control system Git, have SHA-1 hard-baked into their code. This makes it difficult for projects which rely on Git to ditch the algorithm altogether. The encrypted e-mail system PGP also relies on it in certain places.
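Git's exposure is easy to see: it names every object by the SHA-1 of a short type header followed by the content, so a practical collision could in principle let two different files share one object ID. A minimal Python sketch reimplementing the blob hash for illustration (this is the documented object format, but the helper name is our own):

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute the ID Git assigns a blob: sha1(b"blob <size>\\0" + content)."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The same content always yields the same 40-hex-digit object ID,
# which is exactly why a SHA-1 collision undermines Git's object model.
print(git_blob_sha1(b"hello\n"))
```

The collision published by CWI and Google targeted two PDFs, not Git objects, but it demonstrates that producing two inputs with the same SHA-1 is now within reach of a well-resourced attacker.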