Mirantis Broadens OpenStack Training, Certification

Thursday 22nd of January 2015 04:02:52 PM

Mirantis, focused on the OpenStack cloud computing platform, has expanded its ambitious Mirantis Training for OpenStack course collection with two new courses and a Certificate Verification portal. Mirantis' training platform has been running since 2011 and is differentiated from some other training platforms in that the coursework is OpenStack distribution-agnostic. According to Mirantis, eighty-eight percent of students rate it as better than other professional industry training offerings thanks to the quality of its instructors, its hands-on format, and a curriculum free of vendor bias.

In an interview with OStatic, Mirantis co-founder Boris Renski emphasized that the number of players in the OpenStack arena to take seriously is narrowing down, and that Mirantis gets an advantage from being purely focused on OpenStack:

"We see only four credible vendors left standing in the OpenStack market: Mirantis, Red Hat, VMware and HP. Our competitive advantage at Mirantis is our pure-play focus. We only do OpenStack. We have no other product or services agenda to upsell, cross-sell or lock-in any customers. This OpenStack only approach lets us provide a better solution for both kinds of customers we serve - the developers that use Mirantis OpenStack and the infrastructure team that has to run it. Unlike our competitors, we are not burdened by other products in our solutions portfolio. We focus on supporting those OpenStack configurations that customers really need, not those that pull other products in our list of offerings."

In fact, Renski is adamant that much of the meaningful consolidation on the OpenStack scene has happened:

"The consolidation has already happened, I predicted it in December 2013. CloudStaling, Metacloud, and eNovance were acquired. Rackspace and StackOps pivoted to focus their business on managed hosting. MorphLabs seem to have gone away altogether. Piston and Nebula are still around, but seem to be in a niche that doesn't directly compete with Mirantis' OpenStack distribution. It is us, Red Hat, VMware and HP...and that's it." 

Members of the OpenStack community report that OpenStack improves their resumes and is a highly valued job skill. Job tracking site Indeed has more than 2,000 career listings that specifically request OpenStack skills and shows that system administrators with OpenStack skills in the U.S. enjoy a $32,000 annual salary premium over their peers. This number will only grow; the BSA Global Cloud Scorecard 2013 predicts an estimated 14 million cloud jobs will be created by 2015.

All of that is behind Mirantis' decision to expand its training platform. In addition to its pre-existing OpenStack Bootcamp I (OS100) training for IT professionals, Mirantis now offers OpenStack Fundamentals (OS50), a one-day course for business professionals, and OpenStack Bootcamp II (OS200), a course for students with an extensive background in OpenStack. Mirantis Certification is billed as the only vendor-agnostic certification for OpenStack. Mirantis Certified professionals will also now be listed in Mirantis' new Certificate Verification portal. The portal lets potential employers search for Mirantis Certified professionals and verify their credentials with their certification number.

“Mirantis Training is a powerful engine behind the industry’s adoption of OpenStack,” said Lee Xie, Head of OpenStack Training Services, Mirantis. “We have given more than 5,000 students hands-on experience in standing up and managing an OpenStack environment. Of these, an estimated 30 percent* report successfully deploying OpenStack in their organizations as a result of their training.”

Fedora's 32-Bit Scare

Thursday 22nd of January 2015 04:16:06 AM

On Monday, Stephen Smoogen proposed that Fedora drop the 32-bit architecture with version 23 or 24 to see what folks might think. Eighty-three comments and at least one strongly worded blog post later, Smoogen had his answer. Today he posted an apology and retraction. In other news, KDE 5.3 promises to be faster and GNOME 3.15 may be safer.

Last Monday, Fedora developer and steering committee member Stephen Smoogen posted, "I am going to make the uncomfortable and ugly proposal to drop 32 bit in Fedora 23 and only look at 64 bit architectures as primary architectures." The post made a bit of news and sparked discussion, drawing 83 comments on its own.

FossForce's Larry Cafiero even weighed in today with his outrage, saying "the incalculable enormity of bad in this proposal" is "immeasurable." Cafiero believes there are lots of folks still using 32-bit, especially in poorer countries. Cafiero and other commenters accused Smoogen and Red Hat/Fedora of being "first-world thinkers." On one side of the comments are folks stating that they, or someone somewhere, are still using a 32-bit machine in one way or another. The other side says the architecture is obsolete and should go the way of the dinosaur.

In any case, Smoogen was shamed into submission and today posted an apology and retraction. He says his original post was "meant to be absurd" and that he made things worse by trying to defend his original argument. He added that he was actually worried about those still using 32-bit "living on borrowed time."

In other headlines today:

* Martin Gräßlin: KWin on Speed

* Matthias Clasen: Sandboxed applications for GNOME

* Are Linux Graphic Apps Ready for Professionals?

* What the heck are Ubuntu Unity's Scopes?

Targeted Tools Proliferate in the Hadoop, Big Data Ecosystems

Wednesday 21st of January 2015 04:19:27 PM

People in the Big Data and Hadoop communities are becoming increasingly interested in tools that are forming an ecosystem around Hadoop. These tools have specialized ways of enhancing the insights into data that organizations can get from Hadoop, and they range from Elastic Search to Qubole, which offers analytics on Hadoop data as a service (HaaS), to Apache Spark, an open source data analytics cluster computing framework originally developed in the AMPLab at UC Berkeley. 

Here is a look at some of the new and interesting tools that orbit Hadoop in the Big Data ecosystem.

Qubole. Many new kinds of storage software applications are arising in the Big Data arena. Qubole is an interesting example. It can be used for managing on-demand elastic clusters in the cloud, and can remove the need for Hadoop cluster skills. Find out more about it from The Register.

Spark. Apache Spark, an open source data analytics cluster computing framework, can arm developers and software engineers with resources to build complete, unified applications that combine batch, streaming, and interactive analytics. "Spark offers clear benefits for realizing sophisticated analytics and is quickly becoming the future of data processing on Hadoop," said Sarah Sproehnle, vice president, Education Services, Cloudera. "For example, Spark Streaming enables businesses to process live data as it arrives in the enterprise data hub, rather than having to wait to batch-process it later." You can find out more about Spark here.
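
To make the streaming idea concrete, here is a minimal PySpark Streaming sketch of the kind of live processing Sproehnle describes. It assumes a local Spark 1.x installation with PySpark available, and a plain text stream on localhost port 9999 (for example, one started with netcat); the host, port, and batch interval are illustrative choices, not prescribed by Spark.

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    # One local Spark context and a streaming context with 5-second micro-batches.
    sc = SparkContext("local[2]", "StreamingWordCount")
    ssc = StreamingContext(sc, batchDuration=5)

    # Treat each line arriving on the socket as live data and count words per batch.
    lines = ssc.socketTextStream("localhost", 9999)
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    counts.pprint()  # print each batch's counts as the data arrives

    ssc.start()
    ssc.awaitTermination()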

Drill and Falcon.  Falcon, among the most promising technologies in the Hadoop  ecosystem, has just become a top-level project of the Apache Software Foundation. As Silicon Angle notes, Falcon can help serve billions of data requests with wildly varying requirements. It provides a smart framework for implementing automated controls to manage the flow of information.

Meanwhile, Apache Drill has graduated from the Apache Incubator to become a Top-Level Project (TLP). It's billed as the world's first schema-free SQL query engine that delivers real-time insights by removing the constraint of building and maintaining schemas before data can be analyzed. The Hadoop community is embracing Drill, and we have a complete interview available about it here.

Hadoop Search Tools. In addition to Elastic Search, many organizations are leveraging other search tools, as we covered here. Hive interactive query capabilities and tools that leverage Apache Solr are all worth looking into, with much more on them here and here.

A New Service Discovery Tool for Use with Apache Mesos

Wednesday 21st of January 2015 03:56:14 PM

Recently, Mesosphere has been covered here on OStatic in a series of posts, including an interview with the company's Ben Hindman, in which he discusses the need for a "data center operating system." Mesosphere's data center operating system is built on the open source Apache Mesos project, which is being leveraged by many organizations for distributed resource and network management.

Now Mesosphere has contributed a new, related project to open source: Mesos-DNS, a stateless DNS server for Mesos. The tool provides service discovery as an essential building block to connect applications and services.

According to a post on Mesos-DNS:

"At its simplest, service discovery is the mechanism by which an application or service can "discover" where other applications and services are located so that they can be connected. In a datacenter managed by Mesos, service discovery is especially important because applications and services are placed on machines (and sometimes moved) based on real time scheduling decisions as Mesos scales them out or restarts them after a machine failure. In such a dynamic environment, it is difficult for applications and services to find and keep up with the location of other applications and services they rely on."

 "Until now, each user of Mesos was required to choose their own service discovery mechanism, or to use a patchwork of mechanisms supplied by different frameworks. Mesos-DNS offers a service discovery system purposely built for Mesos. It allows applications and services running on Mesos to find each other with DNS, similarly to how services discover each other throughout the Internet. Applications launched by Marathon or Aurora are assigned names like search.marathon.mesos or log-aggregator.aurora.mesos. Mesos-DNS translates these names to the IP address and port on the machine currently running each application. To connect to an application in the Mesos datacenter, all you need to know is its name. Every time a connection is initiated, the DNS translation will point to the right machine in the datacenter."

As Ben Hindman explained in our recent interview with him about the power of Mesos: "[Mesos] provides the basic primitives of the DCOS, including starting and stopping applications - and the bridge between applications and the hardware. If you’re in the cloud, you might be buying 8 core machines but only using 2 cores. Your cloud provider is really the one benefiting from virtualized resources, not you! The datacenter operating system enables you to more fully utilize your machines by automating the placement of your applications across your machines, using as many resources as it can per machine."

You can find out more about Mesos-DNS here.
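
To illustrate the naming scheme described in the Mesos-DNS post, here is a hedged Python sketch of how a client inside the datacenter might resolve one of these names. It assumes the dnspython package, that Mesos-DNS is configured as the cluster's resolver, and that an application named "search" was launched by Marathon; the SRV record form shown is an assumption based on the naming pattern the post describes.

    import socket
    import dns.resolver  # dnspython

    # A record: the name maps to the IP of the machine currently running the task.
    ip = socket.gethostbyname("search.marathon.mesos")

    # SRV record: also carries the port assigned to the task (assumed record form).
    answers = dns.resolver.query("_search._tcp.marathon.mesos", "SRV")
    for record in answers:
        print("connect to %s:%d (A record resolves to %s)" % (record.target, record.port, ip))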

 

Getting Linux Adopted and Fedora 22 Previewed

Wednesday 21st of January 2015 04:39:24 AM

Today in Linux news Matt Hartley has the key to getting Linux adopted. Christian Schaller discusses some of the coming attractions of Fedora 22 and Phoronix.com is reporting that KDE 5 may also be coming to Fedora 22. Elsewhere, Jamie Watson gives Tumbleweed a roll and Softpedia.com is reporting that Steam is safe for Linux again.

Matt Hartley today said that if folks want to see greater Linux adoption there are certain things that should be done. He said that an online presence isn't enough, that we need "boots on the ground providing demonstrations, setup assistance and, some hands on help when it's needed." He figures Mom & Pop shops and PC repair techs are the best place to start. Instead of restoring Windows over and over again, perhaps the repair tech could offer to install and support Linux instead. Mom & Pop shops should offer Linux computers or installs and support.

Jamie Watson said he was interested in using Tumbleweed, openSUSE's rolling version, because he needs some of the drivers in the latest and greatest kernels. So he began with stable openSUSE 13.2 and upgraded to Tumbleweed via zypper, although install images are available. This provided him and his cranky laptop with the drivers in Linux 3.18 instead of Linux 3.16.

Last Friday I linked to an article reporting on a bug in Steam scripts that deletes all user files on Linux under certain conditions. It became a much larger concern in the days that followed as more users heard about it, and today Steam addressed the issue. Softpedia.com is reporting that a new Steam client has been released, and is miffed that a bug that could recursively and forcefully remove all files warranted a mere line in a changelog.

In other news:

* Planning for Fedora Workstation 22

* Fedora 22 Will Aim To Use Plasma 5 For Its KDE Desktop Experience

* Smart things powered by snappy Ubuntu Core on ARM and x86

* The European Space Agency Builds A Private Cloud Platform With Red Hat

The Linux Foundation Delivers 2015 Guide to the Open Cloud

Tuesday 20th of January 2015 04:07:43 PM

The Linux Foundation has issued its second annual "Guide to the Open Cloud: Open Cloud Projects Profiled," which provides a comprehensive look at the state of open cloud computing. The foundation created the guide in response to calls it received from investors trying to understand which projects matter.

This year's report adds many new projects and technology categories that have gained importance in the past year. It covers well-known projects like Cloud Foundry, OpenStack, Docker and Xen Project, and up-and-comers such as Apache Mesos, CoreOS and Kubernetes. The purpose of the guide is to serve as a starting point for users considering which projects to use in building and deploying their own open clouds. Taking a deeper look into cloud infrastructure, the paper includes storage, provisioning and platform projects. New categories outline emerging cloud operating systems, Software-defined Networking (SDN) and Network Functions Virtualization (NFV) technologies.

To download the full report, you can visit The Linux Foundation’s Publications website at: https://www.linuxfoundation.org/publications/linux-foundation/guide-to-the-open-cloud

 You can also review the entire list in the online Open Cloud Directory on Linux.com at: http://www.linux.com/directory/open-cloud

“Our new ‘Guide To the Open Cloud’ is a helpful primer for any organization beginning a migration to the cloud or moving toward web-scale IT,” said Amanda McPherson, chief marketing officer at The Linux Foundation. “Open source and collaboration are clearly advancing the cloud faster than ever before. Just consider the many OpenStack distributions and ecosystem emerging around Linux containers that didn’t even exist a year ago. Yet, as the open source cloud evolves so quickly, it can sometimes be difficult for enterprises to identify the technologies that best fit their needs.”

There are several projects included in the guide that were hardly talked about in the last iteration, such as Docker.

You can use the guide to turn up lots of details on open cloud projects. For example, you can look up key contributors. In the case of OpenStack, the top contributors are listed as Cisco, HP, IBM, Mirantis, NEC, Rackspace, Red Hat and SUSE. There are many other in-depth statistics to take in.

For ease of reading, each category includes fewer than 10 projects, evaluated by maturity, number and diversity of contributions, number and frequency of commits, exposure, demonstrated enterprise use, and opinions from open source authorities.

Interview: Mirantis Co-Founder Boris Renski Talks OpenStack

Tuesday 20th of January 2015 03:54:52 PM

Earlier this month, Mirantis announced the launch of Mirantis OpenStack 6.0, the latest version of its OpenStack cloud computing distribution. According to the company, it is based on OpenStack Juno, and version 6.0 is the first OpenStack distribution to let partners write plugins that install and run their products automatically.

OStatic has been running a series of interviews with movers and shakers on the cloud computing and Big Data scenes, and you may have seen this week’s interview with Ben Hindman from Mesosphere. In this latest interview, we caught up with Boris Renski, CMO and Co-Founder of Mirantis, to talk about the company’s latest OpenStack distribution, cloud computing, and more. Here are his thoughts.

How did you first get involved with Mirantis, and how did the company initially take shape?

The company was founded a decade ago by Alex Freedland, who serves as the company chairman today. At that time it was a contract software engineering company focused on solving hardcore algorithmic problems for the EDA industry. I joined in 2006, when Mirantis bought my company of 50 people doing very similar work. We did a complete reboot and pivoted the company to OpenStack in January 2011 and have grown almost 10x since then. 

Some people feel like the OpenStack arena is getting crowded. Within it, what is Mirantis' competitive advantage?

We see only four credible vendors left standing in the OpenStack market: Mirantis, Red Hat, VMware and HP. Our competitive advantage at Mirantis is our pure-play focus. We only do OpenStack. We have no other product or services agenda to upsell, cross-sell or lock-in any customers. This OpenStack only approach lets us provide a better solution for both kinds of customers we serve - the developers that use Mirantis OpenStack and the infrastructure team that has to run it. Unlike our competitors, we are not burdened by other products in our solutions portfolio. We focus on supporting those OpenStack configurations that customers really need, not those that pull other products in our list of offerings. This applies to the infrastructure level (under OpenStack) and platform / developer tools level (on top of OpenStack). For instance, you can run Mirantis OpenStack on CentOS, with Ceph storage, Juniper Contrail for SDN and Pivotal's distribution of Cloud Foundry on top - a combination very commonly desired by OpenStack adopters but impossible for any of our big competitors to deliver and support.

Do you think the OpenStack scene is headed for consolidation, with a few big companies scooping up smaller companies?

The consolidation has already happened, I predicted it in December 2013. Cloudscaling, Metacloud, and eNovance were acquired. Rackspace and StackOps pivoted to focus their business on managed hosting. MorphLabs seem to have gone away altogether. Piston and Nebula are still around, but seem to be in a niche that doesn't directly compete with Mirantis' OpenStack distribution. It is us, Red Hat, VMware and HP...and that's it.

Do you think some of the companies currently backing OpenStack may not be so committed to completely open standards and open support strategies? Are some clearly more committed to openness than others?

I am sure that everybody has good intentions and would love to be committed to open standards. It's just a matter of legacy. If you have a multi-billion dollar business and you are a public company, you simply can't ship an OpenStack configuration that would cannibalize your existing business (see Clayton Christensen and the “Innovator’s Dilemma”). If you are a CEO of a public company and you do something like this, you'll be out the next day. Red Hat can't ship OpenStack on Ubuntu with Cloud Foundry on top. VMware can't ship OpenStack on KVM. EMC can't ship OpenStack with Ceph. But infrastructure and ops guys want Ceph and KVM and developers want Cloud Foundry and Docker. 

With your most recent 6.0 OpenStack release, customers can write or leverage plugins for the Fuel deployment manager and add to OpenStack's functionality. How would you position this, and open-sourcing the Fuel library, as competitive advantages?

With Fuel plug-ins our partner vendors can now get Mirantis OpenStack to run with their storage and networking out-of-the-box. The big problem with a complicated piece of software like OpenStack is installation and management. Every vendor in the infrastructure space wants to have an OpenStack story. But you don't quite get this story, until you enable your customers to deploy and scale OpenStack with ease. Just writing an OpenStack driver is not enough because you have to then manually configure it.  Few operations people can do this successfully (which is why we have a huge education business at Mirantis training people how to use OpenStack, more than 5,000 students to date). With installer plug-ins for Fuel, now any storage or networking vendor can get that complete OpenStack story. Fuel effectively becomes an InstallShield for OpenStack. And because Fuel is completely open and free, you don't even have to tie yourself to Mirantis OpenStack. You can write a Fuel plug-in and tinker with Fuel to get it to deploy your own OpenStack distro.

A lot of people considering or deploying OpenStack aren't familiar with tools like the OpenStack Tempest test suite, or Rally, used for validation and more. What can OpenStack users get out of these and the various certification and validation offerings that Mirantis has?

After you deploy OpenStack and before you unleash it onto your development teams (and risk looking like an idiot when everything breaks), you want to validate that OpenStack works via a series of sanity and load tests. Fuel already includes a health check, which leverages Tempest, an OpenStack testing project. Rally takes that to the next step and allows you to script custom load scenarios for OpenStack and profile the behavior of the cloud under load. We don't bundle Rally in our OpenStack distribution, but it is very popular in the community to help guide architectural decisions. We use Rally with some of our larger customers.

Red Hat has proven that offering support and training for open source tools can be an outstanding business model. How does that model compare to your vision for Mirantis?

It is an outstanding model and we have been very successful embracing much of this approach. But there are many important variations of this model. In the case of Red Hat, you need to pay them money first and then you'll get the right to use the commercial version of the product. In the case of Mirantis, we don't really have a commercial version. What you download from our website is our commercial product, too. What you get with support is an SLA on case resolution and SLA on fixes and patches to various bugs. You can think of Mirantis as the Hortonworks of OpenStack in terms of business model. But, in general, it is not very different from Red Hat.

 

Bodhi Founder Returning as Ubuntu Heads to Mars

Tuesday 20th of January 2015 04:58:46 AM

The Bodhi Linux founder, who recently resigned from the project, has announced that he's decided to return. Accompanying that news was the announcement of Bodhi Linux 3.0 RC2. Elsewhere, Gary Newell briefly recaps the top 10 distributions of 2014 and Phoronix.com is reporting that Fedora 23 is likely to default to Wayland. Adam Williamson introduces Updatrex™ in response to a PackageKit bug and Softpedia.com said today that Ubuntu will probably be the first operating system on Mars.

The top story today was the brief announcement by Jeff Hoogland that he is officially returning to his former position as lead developer and manager of the Bodhi Linux project he founded nearly four years ago. He didn't explain his decision to return in that post (or anywhere I can find), but one wonders if his interview last week with Christine Hall had anything to do with it; all that reminiscing about the unexpected success of the one-man distro. In that January 12 interview Hoogland said nothing of returning and, in fact, reiterated the then-current structure. Plans for his future included lending a hand to the project from time to time, which is what Hoogland said at the time of his departure.

In today's announcement, Hoogland also announced the availability of Bodhi Linux 3.0 RC2. It shipped with Enlightenment 19.2, Linux 3.16, and is based on Ubuntu 14.04 LTS Core. See the announcement for download links. Bodhi Linux 3.0 removes the user theme configuration at the start of first boot and will instead incorporate a release-wide uniformity. Hoogland and colleagues have also been working on the Bodhi Linux website and forums, so excuse the cones.

Softpedia.com is today reporting that Ubuntu may soon be heading to Mars. Silviu Stahie said that NASA is hoping to put boots and operating systems on Mars by 2025, and since Ubuntu is Bas Lansdorp's favorite, it'll probably be the first one there. "This would actually make a lot of sense, if you want to get something stable and capable of running for years on end, without breaking."

Adam Williamson today blogged about a mysterious new bug in the Fedora 21 PackageKit stack that causes some seriously annoying issues. Williamson says they're working on the problems and have an early update in the repos. His post gives more details and a how-to.

In other Fedora news, Phoronix.com today reported that Fedora 23 could be the version that switches Fedora's graphical system from Xorg to Wayland. "By the Fedora 23 release due out before the end of the calendar year we could see Wayland-by-default on this major tier-one Linux distribution." Additionally, Kevin Fenzi today posted about recent Fedora Infrastructure database dumps.

In other news:

* Analysis Of The Top 10 Linux Distributions Of 2014

* Manjaro 0.8.11 - The lonely goatherd

* TrackingPoint 338TP, the Linux rifle that's accurate up to a mile

* What’s new in SUSE LINUX 12?

* Linux Mint 17.1: Best KDE Spin Ever!

* DistroWatch Weekly, Issue 593, 19 January 2015

* Security problems need to be made public: Linus Torvalds

Interview: Mesosphere's Ben Hindman on the Need for a Data Center OS

Monday 19th of January 2015 04:50:17 PM

One of the most interesting new companies leveraging an open source Apache project has to be Mesosphere, which OStatic covered in a recent post. The company offers a “data center operating system” (DCOS) built on the open source Apache Mesos project, and has announced a recent round of $36M in Series B funding. New investor Khosla Ventures led the round, with additional investments from Andreessen Horowitz, Fuel Capital, SV Angel and others.

According to Mesosphere’s leaders, the tech industry now needs a new type of operating system to automate the various tools used in the agile IT era. They argue that developers and operators don’t need to focus on individual virtual or physical machines but can easily build and deploy applications and services that span entire datacenters.

OStatic caught up with former Twitter lead engineer and Apache Mesos co-creator Ben Hindman, who is now leading the design of Mesosphere’s DCOS, for an in-depth interview. Here are his thoughts.

What advantages can organizations get from a data center operating system?

The biggest advantages come from automating manual operations. The number of machines that most enterprises are working with is growing, and so is the variety of services and frameworks they’re trying to run. Organizations are under immense pressure to deliver software faster, with more “agility”. The combination of these has made static partitioning and human-scale management of machines and applications impractical.

Humans will always have a role in the datacenter, but things should be more automated with common services. Automation enables us to be smarter about scheduling and resource allocation, helping us drive up utilization (which drives down costs) and better handle machine and hardware failures.

Higher utilization is a key advantage of a datacenter operating system. If you’re in the cloud, you might be buying 8 core machines but only using 2 cores. Your cloud provider is really the one benefiting from virtualized resources, not you! The datacenter operating system enables you to more fully utilize your machines by automating the placement of your applications across your machines, using as many resources as it can per machine.

Dealing with failures gets much easier with a datacenter operating system too. When you are running 2-3 machines dealing with failures is a pain, but you can usually track down and fix any issues within a small amount of time. But when you begin to scale to tens, then hundreds, then thousands of machines, dealing with failures becomes an expensive manual operation.

Finally, a datacenter operating system enables developers, who traditionally have had to  interface with humans for access to machines, to develop and run their applications directly against datacenter resources via an API. Whether they’re claiming resources for existing applications or building new frameworks, the abstraction layer of the datacenter operating system makes it easier to build applications and share those applications across organizations.

Obviously Mesosphere's platform is based on Apache Mesos, but it's more complex than just Mesos. Tell us about the guts of the platform and how it was developed.  

The guts really are Mesos, which acts as the kernel for the distributed operating system. It provides the basic primitives of the DCOS, including starting and stopping applications - and the bridge between applications and the hardware.

What was built around Mesos and packaged into the Mesosphere DCOS are the other components that you would expect of an operating system. For example, the DCOS includes Marathon which acts as the distributed “init” system. Marathon uses the Mesos kernel to automatically launch and scale specific system and user applications. In addition to Marathon, the Mesosphere DCOS includes Chronos which provides distributed cron, i.e., the ability to launch applications on a regularly scheduled basis. The Mesosphere DCOS includes a Service Discovery component as well - a way of naming and finding specific applications as they are moved around in your datacenter or cloud.

There are a number of other components we’ve built in related to storage, managing containers, and other functionality that we view as key for running the next generation of distributed applications. And as with any other successful operating system, a huge focus for its evolution will be expanding the library of applications and frameworks that are natively supported.
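
As a rough editorial aside illustrating the Marathon component Hindman describes, the sketch below posts an application definition to Marathon's REST API using the requests library. The Marathon URL, the application id, and the command are placeholders; /v2/apps is Marathon's standard application endpoint.

    import requests

    MARATHON = "http://marathon.example.com:8080"  # placeholder Marathon endpoint

    app = {
        "id": "/log-aggregator",                   # placeholder application id
        "cmd": "python -m SimpleHTTPServer 8000",  # placeholder command to run
        "cpus": 0.5,
        "mem": 256,
        "instances": 3,                            # Marathon keeps three copies running
    }

    # Marathon schedules the instances onto the cluster through the Mesos kernel
    # and restarts them after a machine failure.
    resp = requests.post(MARATHON + "/v2/apps", json=app)
    resp.raise_for_status()
    print(resp.json())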

In 2015, what do you think are the major trends we'll see in data centers?

Operators will stop thinking in terms of individual servers, and more in terms of reasoning across pools of resources and running distributed applications.

Some particularly interesting distributed applications will fall under the domain of “stateful services”, which is a challenging application to run in the cloud today and is ripe for innovation in the next few years.

There will be a lot of interesting work using machine learning to better automate and manage applications as well. Humans are notoriously bad at figuring out how many resources they need, and this will ultimately be handled completely via software.

From the hardware side of things I think we’ll start to hear more about concepts like disaggregated racks - where racks become like a big single computer. But we also see a trend towards the completely disaggregated datacenter. There are a number of scenarios where transporting the compute makes little sense, where you want to instead do local processing. Cell towers might have a mini datacenter, so you don’t have to get it back to the cloud, for example.

I've heard you talk about some data centers needing to do things like run multiple instances of Hadoop, and other tools. Why would such needs arise?

Primarily, you want to run multiple instances because you have different organizations in your company and you want to create isolation. Most organizations who have run multiple instances do so by creating a whole other cluster, which they have to set up - and then run independently. The problem here, however, is that you’ll often have large pools of idle resources in one cluster while another cluster might be completely overloaded. Using something like Mesos lets you run those two instances of Hadoop on the same hardware!

Another reason organizations will have multiple instances of Hadoop is when they want to upgrade from one version of Hadoop to another, which usually is performed in a completely new cluster. This is an expensive way to upgrade a Hadoop cluster, but there aren’t many other options out there!

Can you provide some anecdotal detail about a particular organization that is benefiting from Mesosphere's DCOS? How are efficiencies being captured there?

The Mesosphere DCOS was just launched, and we’ll be sharing lighthouse customer usage success stories in 2015. But I think a really good example of a compelling Mesos story is how eBay was able to pool its Jenkins instances. That’s an example of an organization that had to run multiple instances of a framework (Jenkins) and leveraged Mesos to collocate Jenkins on a single cluster.

ToleranUX, Torvalds Walk-back, and Mageia's Badluck

Saturday 17th of January 2015 04:09:17 AM

Yesterday ArsTechnica.com quoted Linus Torvalds saying a focus on diversity is distracting, and apparently it didn't sit well with some folks. Today Torvalds emailed ArsTechnica.com in an effort to explain what he meant more precisely. Elsewhere, a mock distribution seems to be poking fun at feminists and the diversity crowd. In other news, Mageia 5 Beta 2 is out after a bit of bad luck that may delay the final.

My favorite story of the day was the release of Mageia 5 Beta 2. Unfortunately, it was a good news/bad news thing. Good news, Beta 2 is here; bad news, Final has been delayed until March 10. Today's post said "various difficulties with EFI boot, grub 2 and a few other things" have caused the delay. The errata has been updated as of today although it still refers to Beta 1 in places. The announcement specifically asked for folks with UEFI machines to "test Beta 2 and report your experiences." Beta 2 shipped with KDE 4.14.3, GNOME 3.14.2, Linux 3.18.2, and systemd 217. Beta 3 is now scheduled for February 3 and the Release Candidate for February 24.

Yesterday Linus Torvalds made news by saying he didn't have time to be distracted by worrying about diversity quotas. He only cares about the technology and that works out well for Linux users. However, he must have got a lot of negative feedback on that statement because today he wrote ArsTechnica.com to try and explain. Torvalds told ArsTechnica.com, "What I wanted to say [at the keynote]—and clearly must have done very badly—is that one of the great things about open source is exactly the fact that different people are so different. It's not a religion. It's not an 'us vs them' thing." For him, it's about technology, not ideology.

Torvalds' statements didn't come soon enough for some pranksters on Github today. Under the guise and nom de plume "Feminist Software Foundation," someone opened a Github account and page for ToleranUX. ToleranUX is said to be "the world's first tolerant UNIX-like kernel." The commandline prompt reads "smash patriarch" and SystemV is the init. ArsTechnica.com isn't amused and goes into quite a bit more detail. For example, ToleranUX on coding:

Absolutely no coding experience is necessary: all code are equal in the eyes of the Feminist Software Foundation. There is no objective way to determine whether one person's code is better than another's. In light of this fact, all submitted code will be equally accepted. However, marginalized groups, such as wom*n and trans* will be given priority in order to make up for past discrimination. Simply submit a pull request for any submission, whether code, artwork, or even irrelevant bits — nothing is irrelevant in the grand struggle for a Truly Tolerant UNIX-like Kernel!

In other news:

* Interview with systemd creator Lennart Poettering

* AntiX MX-14.3 review

* MakuluLinux Xfce 7.0 Review

* Review: Linux Mint 17.1 "Rebecca" Xfce

* Fedora 21 review: Linux’s sprawliest distro finds a new focus

* Moving steam's .local folder deletes all user files on Linux

DigitalOcean Launches FreeBSD Cloud Servers, Answering Developers

Friday 16th of January 2015 04:02:48 PM

As countless user surveys have shown, Linux is a predominant platform for many cloud deployments, with Ubuntu reigning king. In fact, many surveys show that more than half of OpenStack deployments are built on Ubuntu.

However, there are other open source operating systems of interest for cloud deployments, and that is the space that open source SSD cloud host DigitalOcean plays in. This week, the company announced FreeBSD hosting to complement its Linux offerings.

FreeBSD has a long and storied history, and companies ranging from Apple to Yahoo have built around it as an essential platform tool. DigitalOcean has announced that FreeBSD is now available as the first non-Linux operating system on its platform. FreeBSD is a Unix-like open-source operating system that, like other BSD releases, is derived from Unix. The OS gives developers the flexibility to compile their software from source rather than follow the packaged binary paradigm that Linux typically uses.

“DigitalOcean will continue to give developers more options,” said Moisey Uretsky, co-founder and chief product officer of DigitalOcean, in a statement. “FreeBSD differs from Linux in its history and philosophy; we want to continue shipping products our users are asking for.”

DigitalOcean officials said they pursued FreeBSD because it "was highly requested by developers using the DigitalOcean platform."

“The internal structure of DigitalOcean’s engineering team has evolved significantly over time due to the dynamic growth of the company," Neal Shrader, Senior Software Engineer at DigitalOcean, said. "What began as a couple of guys coding furiously in a room in Brooklyn has ballooned to a 100+ person organization serving hundreds of thousands of users around the globe. There has been a continued focus on improving how we approach, prioritize and execute this work – the FreeBSD Image is a testament to successful alignment.”

DigitalOcean claims that users can create a cloud server in 55 seconds with its platform, and pricing plans start at $5 per month for 512MB of RAM, 20GB SSD, 1 CPU, and 1TB of transfer. You can find out more here.
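
As a hedged sketch of what spinning up one of these FreeBSD servers looks like programmatically, the snippet below creates a droplet through DigitalOcean's v2 REST API with the requests library. The image slug, region, and size values are assumptions matching the $5 plan described above; in practice you would pick values returned by the API's /v2/images and /v2/sizes listings.

    import os
    import requests

    API = "https://api.digitalocean.com/v2"
    headers = {
        "Authorization": "Bearer %s" % os.environ["DO_TOKEN"],  # personal access token
        "Content-Type": "application/json",
    }

    droplet = {
        "name": "freebsd-test",
        "region": "nyc3",              # placeholder region
        "size": "512mb",               # assumed slug for the $5/month tier
        "image": "freebsd-10-1-x64",   # assumed FreeBSD image slug
    }

    resp = requests.post(API + "/droplets", headers=headers, json=droplet)
    resp.raise_for_status()
    print("created droplet id:", resp.json()["droplet"]["id"])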

Three Essential OpenStack Deployment and Validation Tools

Friday 16th of January 2015 03:48:15 PM

As predicted, 2015 is turning out to be the year when many IT departments are moving from evaluation stage to deployment stage for OpenStack cloud instances.

What many first-time OpenStack users don't realize, though, is that numerous tools have been developed in tandem with OpenStack to ease the process of testing and overall orchestration. In this post, you'll find three essential examples worth knowing about.

Heat.  Heat is an OpenStack orchestration project, and according to its project wiki: "The mission of the OpenStack Orchestration program is to create a human- and machine-accessible service for managing the entire lifecycle of infrastructure and applications within OpenStack clouds. Heat is the main project in the OpenStack Orchestration program. It implements an orchestration engine to launch multiple composite cloud applications based on templates in the form of text files that can be treated like code. A native Heat template format is evolving, but Heat also endeavours to provide compatibility with the AWS CloudFormation template format, so that many existing CloudFormation templates can be launched on OpenStack. Heat provides both an OpenStack-native ReST API and a CloudFormation-compatible Query API."

You can find many good online guides to using Heat. Arthur Berezin takes you through the essentials in his guide to getting started with Heat, which covers running it from the command line, using it with applications and much more.
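
As a small, hedged illustration of the template-driven approach the project wiki describes, the sketch below assembles a minimal HOT template in Python and writes it out as YAML. The image and flavor names are placeholders; a real template would reference images and flavors that exist in your cloud, and the resulting file would typically be launched with the Heat command-line client.

    import yaml  # PyYAML

    # A single-resource stack: one Nova server described as data, treated like code.
    template = {
        "heat_template_version": "2013-05-23",
        "description": "Minimal single-server stack for illustration",
        "resources": {
            "web_server": {
                "type": "OS::Nova::Server",
                "properties": {
                    "image": "cirros-0.3.2",  # placeholder image name
                    "flavor": "m1.small",     # placeholder flavor name
                },
            },
        },
    }

    with open("single_server.yaml", "w") as f:
        yaml.safe_dump(template, f, default_flow_style=False)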

Tempest. Tempest is a set of integration tests to be run against a live OpenStack cluster. Tempest has batteries of tests for OpenStack API validation, Scenarios, and other specific tests useful in validating an OpenStack deployment. It can be run against any OpenStack cloud, be it a one node devstack install, a 20 node lxc cloud, or a 1000 node kvm cloud. You can find lots of online guides on using Tempest. 

Rally. Some people find Tempest to be a complex tool, and there is a project that helps demystify it: Rally, a project that creates a framework for validating, performance testing and benchmarking OpenStack at scale with Tempest.  Mirantis has an excellent set of resources on Rally.

In this one, the Mirantis team notes: "Rally automatically installs and configures Tempest, and automates running Tempest tests. In contrast to Tempest, which is installed and executed individually on each cluster, Rally can verify a huge number of clouds — just add clouds as deployments to Rally, then easily switch between them. Rally’s benchmarking engine can automatically perform tests under simulated user loads. Results of these tests and benchmarks are saved in Rally’s database. You can review them."
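
For a sense of what a Rally benchmark looks like, here is a hedged sketch that writes out a task definition for one of Rally's standard scenarios, NovaServers.boot_and_delete_server. The flavor and image names are placeholders for whatever exists in the cloud under test, and the resulting file would be handed to the rally command-line tool.

    import json

    task = {
        "NovaServers.boot_and_delete_server": [{
            "args": {
                "flavor": {"name": "m1.small"},  # placeholder flavor
                "image": {"name": "cirros"},     # placeholder image
            },
            # Boot and delete a server 10 times, 2 at a time, to simulate user load.
            "runner": {"type": "constant", "times": 10, "concurrency": 2},
            "context": {"users": {"tenants": 1, "users_per_tenant": 1}},
        }],
    }

    with open("boot_and_delete.json", "w") as f:
        json.dump(task, f, indent=2)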

Hopefully, these tools can make your use of OpenStack easier and more dependable. For many more resources of interest, Opensource.com has a brand new guide to deployment resources, found here, and you can check out our roundup of OpenStack training resources here.

Torvalds Only Cares about the Kernel

Friday 16th of January 2015 03:53:23 AM

Linus Torvalds is back in the news today after his keynote speech at Linux.conf.au. ArsTechnica.com covered the Q&A session with some of Torvalds' most notable quotes. In other news, Phoronix.com previews GNOME 3.16 and Clement Lefebvre today announced the new tiny PC "MintBox Mini." Lastly, Nick Heath covers Munich's Linux trials and travails.

After speaking at the Linux.conf.au Conference in New Zealand today, Torvalds took questions from the audience. ArsTechnica.com quoted Torvalds saying, "Some people think I'm nice and are shocked when they find out different. I'm not a nice person, and I don't care about you. I care about the technology and the kernel—that's what's important to me." On the subject of diversity Torvalds said, "The most important part of open source is that people are allowed to do what they are good at. All that [diversity] stuff is just details and not really important."

Michael Larabel recently compiled a list of features to look out for in the upcoming GNOME 3.16. Larabel said 3.16 is due in March 2015 and development versions have been released for those wanting a sneak peek. Those trying the betas might find that GTK+ now supports OpenGL, which will lead to a more feature-rich and prettier Nautilus, Mutter, and themes. Other improvements are less flashy but welcome.

Clem Lefebvre today blogged about the new Mint mini PC dubbed "MintBox Mini." The mini is less than an inch thick and completely silent. It will feature an AMD A4 6400T, Radeon R3 GPU, 4GB RAM, 64GB SSD, Wi-Fi and wired networking, lots of USB ports, and two HDMI sockets. It will likely cost $295 and will be available in Spring 2015.

Nick Heath today said, "The mayor of a German city that swapped Windows for Linux must stop publicly criticising the way IT is run at the council or risk worsening an ongoing staffing crisis." The IT department is currently running at about 80% staffing, and spokesmen say they'll continue trying to fill the vacancies. Beyond that, a survey is planned to gauge workers' adoption issues and satisfaction. The results will be "used to draw up a definitive list of issues users have with IT at the council and potential ways to resolve them." But all in all, the Microsoft-loving mayor is making things harder than they have to be.

Leverage MapR's Resources for Getting Big Data Right

Thursday 15th of January 2015 04:26:03 PM

As the Big Data trend marches forward in enterprises and as Hadoop becomes a true open source star driving the trend, MapR Technologies doesn't get quite as much attention as some other players. However, the company offers a slew of informative and helpful posts, videos and educational offerings that can help any enterprise get smart about leveraging Big Data tools, including many free, open source applications. 

Here are just a few resources from MapR to know about.

MapR's CEO has offered up a series of interesting predictions for Big Data in 2015. According to his predictions:

"In 2015, IT will embrace self-service Big Data to allow developers, data scientists and data analysts to directly conduct data exploration. Previously, IT would be required to establish centralized data structures. This is a time consuming and expensive step. Hadoop has made the enterprise comfortable with structure-on-read for some use cases. Advanced organizations will move to data bindings on execution and away from a central structure to fulfill ongoing requirements. This self service speeds organizations in their ability to leverage new data sources and respond to opportunities and threats."

"As organizations move quickly beyond experimentation to serious adoption in the data center, enterprise architects move front and center into the Big Data adoption path. IT leaders will be vital in determining the underlying architectures required to meet SLAs, deliver high availability, business continuity and meet mission-critical needs. In 2014 the booming ecosystem around Hadoop was celebrated with a proliferation of applications, tools and components. In 2015 the market will concentrate on the differences across platforms and the architecture required to integrate Hadoop into the data center and deliver business results."

MapR officials have also been predicting that there is going to be a lot of consolidation in the Hadoop space this year.

As far as resources that companies can leverage to get going with Big Data tools, MapR's site offers many good ones to know about. MapR offers a very useful and flexible Sandbox for Hadoop.  It provides tutorials, demo applications, and browser-based user interfaces to let developers and administrators get started quickly with Hadoop. It is actually a fully functional Hadoop cluster running in a virtual machine. You can try the Sandbox now. It is free and available as a VMware or VirtualBox VM.

MapR's CTO, M.C. Srivas, recently delivered a keynote address at the 2014 Strata Conference + Hadoop World conference in New York City. You can watch a video of his address here, and it focuses on the future of Big Data and how organizations can lay out logical Big Data plans.

MapR also has a huge set of case studies on how customers have leveraged its own and other Big Data tools. And, you can find a library of free webinars on Big Data here.

"Hadoop continues to show significant evidence of how companies are achieving measurable ROI from storing, processing, analyzing and sharing Big Data," said MapR CEO and Cofounder, John Schroeder. “This is the year that organizations move Big Data deployments beyond initial batch implementations and into real time. This will be driven by the huge strides that existing industry leaders and soon-to-be new leaders have already made by incorporating new Big Data platforms into their operations and integrating analytics with 'in-flight' data to impact business as it happens.”

 

Google Opens Up Cloud Monitoring Service to Developers

Thursday 15th of January 2015 04:02:06 PM

Featuring full integration of the technology from Google’s acquisition of Stackdriver last year, Google Cloud Monitoring has arrived. It's a tool that developers can leverage to monitor the performance of application components. If you're a Google Cloud Platform customer you can try it out for free beginning immediately. Here are more details.

Eight months ago, Google brought Stackdriver into its bag of technology tools. The company announced Stackdriver’s initial Google Cloud Platform integration at Google I/O in June 2014 and made the service available to a limited set of alpha users. Since then, the team has been working to make operations easier for Google Cloud Platform and Amazon Web Services customers, and hundreds of companies are now using the service for that purpose.

Now, Google has announced the beta availability of Google Cloud Monitoring. All Google Cloud Platform customers can now use Cloud Monitoring to gain insight into the performance, capacity and uptime of Google App Engine, Google Compute Engine, Cloud Pub/Sub, and Cloud SQL.

According to a post on the Google Cloud Platform blog:

"Cloud Monitoring streamlines operations by unifying infrastructure monitoring, system/OS monitoring, service/uptime monitoring, charting and alerting into a simple and powerful hosted service. Customers can use Cloud Monitoring to gain insight into:
- Overall Health: Use resource groups to create aggregate views of your key environments and systems. Incorporate application or business statistics using custom metrics. Create and share custom dashboards to provide your team with a unified perspective.

- Usage: Get core metrics and dashboards to understand capacity and utilization of Google Cloud Platform services.

- Uptime: Configure endpoint checks to test functionality and notify team members when web servers, APIs, and other Internet-facing resources become unavailable for end users.

- Performance: View latency, error rates and other key metrics for Google Cloud Platform services, and common web/application serving, database, messaging and load balancing platforms. Configure alerting policies to be notified when metrics are outside of acceptable ranges.

- Incidents: Receive notifications via multiple communication channels when alerting policies are violated."

The new tool lets you configure alerts to notify your team when specified conditions are met, such as when the request latency for your App Engine module exceeds a certain threshold. These alerts can be configured to notify you via email, SMS, PagerDuty, Campfire, Slack, HipChat and webhook.
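
To make the endpoint-check-plus-webhook idea concrete, here is a minimal illustration of the concept in Python; it is not the Cloud Monitoring API itself, and the URLs and latency threshold are placeholders. The sketch probes a resource and, when it is unavailable or too slow, delivers an alert payload to a webhook.

    import time
    import requests

    ENDPOINT = "https://example.com/healthz"     # resource being watched (placeholder)
    WEBHOOK = "https://hooks.example.com/alert"  # where alerts are delivered (placeholder)
    LATENCY_THRESHOLD = 2.0                      # seconds

    def check_endpoint():
        start = time.time()
        try:
            resp = requests.get(ENDPOINT, timeout=10)
            latency = time.time() - start
            healthy = resp.ok and latency < LATENCY_THRESHOLD
        except requests.RequestException:
            latency, healthy = None, False
        if not healthy:
            # Notify the team: a webhook receiver could fan this out to email or chat.
            requests.post(WEBHOOK, json={"endpoint": ENDPOINT,
                                         "latency": latency,
                                         "status": "unhealthy"})

    check_endpoint()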

Finally and very notably, Cloud Monitoring also features native integration with common open source services, such as MySQL, Nginx, Apache, MongoDB, RabbitMQ and many more. For example, you can use the Cassandra plugin to gain deep visibility into the performance of your distributed key-value store.

Gentoo Needs Focus, and Distros We'll Never See

Thursday 15th of January 2015 03:49:33 AM

Today's tiptoe through the newsfeeds found a list of distributions we'll never see. Elsewhere, Phoronix.com said Fedora leadership is still planning to release version 22 on time. Bruce Byfield has the advantages and disadvantages of popular Linux desktops and Jon maddog Hall shares his road to Open Source success. Over in Gentooland, developer Donnie Berkholz says Gentoo needs focus to stay relevant and Andreas Hüttel has started a new blog series highlighting Gentoo derivatives.

For folks thinking of switching desktops, Bruce Byfield today offered the pros and cons of each popular desktop. For example, Cinnamon is a classic desktop lacking drag and drop, according to Byfield. The main advantage of GNOME is its "seemingly endless choice of desktop configurations," but:

GNOME consists of two modes: one in which you work, and an overview in which you launch applications and position them on virtual desktops. This arrangement might work on a phone, where the screen is small, but it is a nuisance on a laptop or workstation, especially since you can only launch one application at a time from the overview.

Byfield also said KDE is innovative but the desktop configuration is convoluted and confusing. He also looked at Xfce, Unity, and MATE, so check that out.

Gentoo was once at the top of popularity charts and polls, but the last ten years or so haven't been as exciting. Today Gentoo developer Donnie Berkholz said, "If we want to have any relevance, we need to have focus. Everything for everybody is a guarantee that you'll be nothing for nobody." He thinks three core areas are the ones to focus on. These are the developer, those requiring flexibility, and those who wish to learn how Linux works. He concluded, "We've gotten overly deadened to how people want to use Linux, and this is my proposal as to how we could regain it."

In related news, Andreas Hüttel today launched a new blog series highlighting "Gentoo-derived products." He began his series with one of his personal favorites SystemRescueCD. Hüttel said SystemRescueCD is "the Swiss army knife" of Linux distros.

Ever needed a powerful Linux boot CD with all possible tools available to fix your system? You switched hardware and now your kernel hangs on boot? You want to shrink your Microsoft Windows installation to the absolute minimum to have more space for your penguin picture collection? Your Microsoft Windows stopped booting but you still need to get your half-finished PhD thesis off the hard drive? Or maybe you just want to install the latest and greatest Gentoo Linux on your new machine?

The best post today has to be Larry Cafiero's Linux Distros We’ll Never See. First up is William Shatner Linux, based on Arch with recent codenames Kirk, Denny Craine, and Khaaaaaaaaaaaaaan. It overperforms sometimes and is very stiff at times. Its motto? "You. Need this. Operating. System."

Another distro I guess we'll never see is Bill and Ted’s Excellent Linux whose motto is "Whoa!" and was recently released under the codename "Royal Ugly Dudes." The motto of Samuel L. Jackson Linux is... well, I bet you can guess.

In other news:

* What did maddog study for a career in computer science?

* FESCo Makes A Bold Move To Try To Release Fedora 22 On Time

* Allwinner Accused of Breaking Linux License Rules

* Deepin Linux: A Polished Distro That's Easy to Install and Use

* My business was saved by Zorin OS

Study Shows Amazon Still Offers Big Cost Savings in the Cloud

Wednesday 14th of January 2015 04:06:08 PM

While open source cloud computing platforms are all the rage, with OpenStack grabbing lots of headlines, there may still be advantages to some leading proprietary platforms. So says a new report from infrastructure as a service (IaaS) performance monitoring analysts at Cloud Spectator. The full findings from their report are available now (registration required) here. Among the findings, the report found that Amazon EC2 offers significant cost savings as a long-term investment.

For some who are leveraging the cloud, the cost of block storage is a major issue. Notably, the Cloud Spectator report found that Microsoft Azure offers the least expensive block storage. Rackspace's storage was found to be more expensive although the report notes that it may offer advantages for some customers.

The Cloud Spectator report focused on 10 vendors: AWS, CenturyLink, DigitalOcean, Google, HP, Joyent, Microsoft Azure, Rackspace, SoftLayer and Verizon.

SoftLayer was found to be the least expensive choice in the cloud for large  Windows deployments. Among other winning metrics, Amazon was found to have the lowest costs for leveraging Linux virtual machines.

The new report joins several others coming out in the cloud space. According to a report from WANTED Analytics: "There are 3.9 million jobs in the U.S. affiliated with cloud computing today with 384,478 in IT alone. The median salary for IT professionals with cloud computing experience is $90,950 and the median salary for positions that pay over $100,000 a year is $116,950." In addition, a new KPMG study, 2014 Cloud Survey Report: Elevating Business in the Cloud, shows that executives are rapidly changing how they think about the cloud.

The KPMG study is done annually and involves responses from C-Level executives. The good news is that 73% of them are seeing improved business performance after implementing cloud-based applications and strategies. And, notably, 35 percent of enterprises adopting cloud computing platforms are interested in business analytics.

That last finding implies that we could see more convergence between the cloud and Big Data tools such as Hadoop.

The KPMG study is available for download here.

IDG Enterprise also recently released results from a survey of 1,672 IT decision-makers, which show that cloud adoption of all kinds continues apace.

The results showed:

"More than two-thirds (69%) of companies have already made cloud investments. The rest plan to do so within the next three years. Companies appear to be moving steadily: Respondents anticipate their cloud usage will expand, on average, by 38% in the next 18 months. At the end of 2015, companies expect to be operating an average of 53% of their IT environments in the cloud."

 

 

SOASTA Delivers Workbench Tool for Platform, Mobile Analytics

Wednesday 14th of January 2015 03:52:49 PM

Whether you're involved with a cloud computing deployment or have a role in the operation of a website, you're probably hungry for better analytics and performance metrics. With that in mind, SOASTA, which specializes in platform performance measurement and analytics, has announced the availability of its Data Science Workbench, a query and analysis environment for gaining insight from complex user experience performance data. The Workbench simplifies the exploration and analysis of current and historical web and mobile user performance data.

Business analysts, performance engineers, data scientists and others are increasingly in need of detailed performance metrics. SOASTA's analytics let them understand historical user conversion rates correlated with response time, view the most profitable click-paths by region, trend peak user traffic by operating system, or query the data in many other ways.

So how does the Workbench relate to open source? SOASTA sells its offering as a service, but it uses cloud-based data storage to retain user experience data from a tool called mPulse, which leverages an open source computing language. The Workbench may also be of interest to those running cloud or mobile deployments who need deep metrics, and SOASTA's tools can be used for load testing, monitoring and much more.

"Big data is billions of zeros and ones until you apply analytics and visualization that mean something to someone,” said Henry Morris, Senior Vice President for IDC and Executive Lead for Big Data and Analytics Research.  “Taking critical steps to enable simplified big data analytics to help customers combine data collected for specific purposes – like testing or monitoring – and creating analytics tools to pull business decision-making intelligence from them is a key, fast emerging area.”

 “Understanding today’s Web and mobile user behavior and impact on digital business is extremely complex and time-consuming,” said Tom Lounibos, CEO of SOASTA, in a statement. “With SOASTA’s Data Science Workbench, users can quickly analyze meaningful business insights without mining and sifting through enormous amounts of performance-related data. For the leading brands in the world, SOASTA has become synonymous with the art of Performance Analytics. Our new solution will provide an even deeper understanding into their users’ experience.”

Data Science Workbench is available immediately as an annual service package that includes data transformation and access, visual tools, and data scientist assistance with data access and support as needed. For more information, visit www.soasta.com/products/data-science-workbench.

The Workbench platform puts a strong emphasis on data visualizations.

 

Chrome, Contributing Made Easy, and Linux Kills

Wednesday 14th of January 2015 04:23:30 AM

Today in Linux news, Jim Mendenhall discusses whether Chrome OS is a Linux distribution. In other news, Konrad Zapałowicz said contributing to the Linux kernel is easier than one might imagine, and another Linus quote is making headlines. Elsewhere, Danny Stieben compares Linux to BSD and OpenSource.com is wondering which distro you use.

Chrome OS was declared a Linux distribution a while back, with most pointing to the Linux kernel it is built on as evidence. Nevertheless, some still cannot think of it as Linux. Yet today Jim Mendenhall said, "Chrome OS's simplistic, restrictive nature, does not mean that it is not a Linux distro," adding that even Richard Stallman agrees that it is. Mendenhall said that not only is Chrome OS a Linux distro, it is probably the most widely used variant. "Only time will tell where Chrome OS will fit in the long and varied history of GNU/Linux," Mendenhall concluded.

Danny Stieben today posted a comparison of Linux versus BSD over at Makeuseof.com. Stieben said Linux and BSD are both open source, "Unix-like," look similar, and run a lot of the same programs. "So when you only try to look for big, noticeable differences, you’re not going to find any." But dig underneath and you'll probably find some, said Stieben. First up, he said, Linux is just a kernel while each BSD is an entire operating system, and BSD kernels have proven superior to Linux in some ways. The licenses are very different, as is vendor support in some cases. Still, Stieben concludes that folks should stick with Linux if they're looking for a desktop system.

Linux.com reposted an article by Konrad Zapałowicz today saying that contributing to the Linux kernel is not nearly as difficult as one might think. He lists four main myths he has encountered and tries to debunk each one. Zapałowicz said kernel programming is actually "fairly easy," and that if you look at the kernel as a collection of smaller parts, it's not as overwhelming. There are plenty of other things one might do as well: "Not everyone has to redesign kernel core modules, there is plenty of other work that needs to be done. For example, the very popular newbie task is to improve the code quality by fixing either the code style issues or compiler warnings." No special hardware required.
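For the curious, the kernel tree ships its own style checker, scripts/checkpatch.pl, and the cleanups it asks for are usually mechanical. Below is a minimal, hypothetical sketch in C of that kind of fix; the function is made up for illustration and is not real kernel code, but the corrected form follows the conventions the kernel's coding-style document describes (tabs for indentation, spaces around binary operators, and a function's opening brace on its own line).

/*
 * Hypothetical example only -- not taken from the kernel tree.  It shows
 * the sort of mechanical style fixes scripts/checkpatch.pl flags.
 *
 * Before (would draw several checkpatch warnings):
 *
 *   int sum_values( int *vals,int n ) {
 *       int i,total=0;
 *       for(i=0;i<n;i++) total+=vals[i];
 *       return total;
 *   }
 */

/* After: tabs for indentation, spaces around operators, kernel brace style. */
int sum_values(int *vals, int n)
{
	int i, total = 0;

	for (i = 0; i < n; i++)
		total += vals[i];

	return total;
}

In practice, a newcomer would run a candidate patch through checkpatch.pl and address whatever it reports before posting the patch to the relevant mailing list.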

In other news:

* Torvalds: Apple's HFS+ is the worst file system ever

* Poll: What is your favorite Linux distribution?

* Linux kills, 3D blooms: Last look from Vegas

* New GNU Sharable Badges

* Top 10 Linux Foundation Videos of 2014

Reports: Google Stops Patching Old Android Browser Vulnerabilities

Tuesday 13th of January 2015 03:21:18 PM

Around the time that Google went into the handset business itself, there were a lot of questions about how the company would treat Android in terms of protecting its own competitive advantages with Android devices while preserving Android as an open platform for others to leverage. Some suggested that Google devices would get the advantage of newer releases of Android, while other devices would have to wait.

In a different spin on these issues, security researchers are raising red flags over the fact that Google will seemingly no longer fix security flaws in the browser on the oldest versions of Android. According to Tod Beardsley, a security researcher at Rapid7, older versions of Android WebView, the component the stock Android browser and many apps use to render webpages, are insecure.

The issue pertains to Android WebView as shipped on Android 4.3 and below. That version of the component was replaced with a newer implementation when Android 4.4 arrived.

Still, Android is such a prevalent platform now that Google is bound to face the same issues that Microsoft faced as it phased out support for widely used versions of Windows over the years.

Beardsley writes:

"Google will no longer be providing security patches for vulnerabilities reported to affect only versions of Android's native WebView prior to 4.4. In other words, Google is now only supporting the current named version of Android (Lollipop, or 5.0) and the prior named version (KitKat, or 4.4). Jelly Bean (versions 4.0 through 4.3) and earlier will no longer see security patches for WebView from Google, according to incident handlers at security@android.com...When asked for further clarification, the Android security team did confirm that other pre-KitKat components, such as the multi-media players, will continue to receive back-ported patches...Google's reasoning for this policy shift is that they 'no longer certify 3rd party devices that include the Android Browser,' and 'the best way to ensure that Android devices are secure is to update them to the latest version of Android."

People are bound to disagree about these changes. After all, if you use iOS or iTunes, don't they nag you to upgrade to the latest version for best results? Adobe's tools are pretty diligent about doing that as well.

When it comes to security issues, though, the issue is the size of the vector affected by a vulnerability, and it sounds like there are going to be a lot of vulnerable Android devices out there.

You can read more about Beardsley's findings here.

 

More in Tux Machines

Pro tip: Find tons of open-source Android software with F-Droid

If you're looking for truly open source software for the Android platform, you don't have to do a ton of searching or check through licenses from within the Google Play Store. All you have to do is download a simple tool called F-Droid. With it, you can download and install apps (from quite a large listing) as easily as you can from the Google Play Store. You won't, however, find F-Droid in the Google Play Store itself. Instead, you have to download the .apk file and install it manually. Once it's installed, the rest is just a matter of searching for an app and tapping to install.