OStatic

ApacheCon Shaping Up to Be One of the Best Events of the Year

Wednesday 18th of March 2015 03:06:21 PM

The Apache Software Foundation is putting together what looks like it will be one of the better open source events of the year: ApacheCon North America, to be held in Austin, Texas, April 13th-16th. Austin is a fun place to visit, and the agenda for ApacheCon looks excellent. You can register by March 21st to take advantage of the early bird pricing, and here are more details on the event.

The Linux Foundation and The Apache Software Foundation are again joining forces to advance and support open source development by co-producing this year's ApacheCon events in North America and Europe.

ApacheCon is meant to reinforce the Apache Software Foundation's core tenet of 'Community Over Code,' with sessions that focus on open source projects, including Apache projects Cassandra, Cordova, CloudStack, CouchDB, Geronimo, Hadoop, Hive, HTTP Server, Lucene, OpenOffice, Struts, Subversion and Tomcat, among others. Overall, ApacheCon will feature 500+ Apache project community developers and users.

ApacheCon keynotes include:

Brian Behlendorf, Co-Founder of the Apache Software Foundation, Entrepreneur and Technologist

Chip Childers, Tech Chief of Staff for CloudFoundry.org

 Nikita Ivanov, CTO at GridGain

Jay Schmelzer, Director of Program Management at Microsoft and .NET Foundation President

Andy Terrel, Chief Computational Scientist at Continuum Analytics

Session highlights include:

Profiting From Apache Projects Without Losing Your Soul: Shane Curcuru, Apache Software Foundation

Using cloud based VMs to build community: Ross Gardler, Apache Software Foundation and Microsoft Open Technologies

 Using Apache Brooklyn and Docker to simulate your production environments in the Cloud: Andrew Kennedy, Cloudsoft

 From the Incubator to TLP: a case study of community metrics for Apache Aurora and Apache Mesos: David Lester, Twitter

How Apache gets GoT to your iPad: Phillip Sorber, Comcast

To view the full ApacheCon schedule, visit: http://events.linuxfoundation.org/events/apachecon-north-america/program/schedule.

Cisco Deepens OpenStack Commitment with Deutsche Telekom

Wednesday 18th of March 2015 02:56:09 PM

The convergence of OpenStack-based cloud computing and the telecom industry is continuing apace. We've reported on Red Hat's partnership with Telefonica to drive Network Functions Virtualization (NFV) and telecommunications technology into OpenStack. And we've covered Canonical and Juniper Networks' partnership to oversee co-development of a carrier-grade OpenStack solution.

Now, Deutsche Telekom and Cisco are announcing a number of newly developed Intercloud-based services. However, these services won't be focused on telco datacenters, but rather on small- and medium-sized businesses and enterprise customers. 

Deutsche Telekom and Cisco are currently setting up the necessary infrastructure for a redundant Intercloud node in Deutsche Telekom's high-performance data centers in Magdeburg and Biere near Berlin, Germany. Deutsche Telekom will deliver services throughout the European region. With the initial Infrastructure as a Service (IaaS) offering, Cisco and Deutsche Telekom will build around OpenStack.

According to the announcement:

"[OpenStack] allows the transition of all virtualized workloads into the cloud at the highest scalability and with significantly improved cost efficiency, while helping to ensure data sovereignty. Moreover, standardized APIs, open standards, virtualization and application policy libraries will be added. This allows true workload mobility between private enterprise clouds, virtual private clouds of Deutsche Telekom, public cloud infrastructures of different providers and the Intercloud node; greatly simplifying cloud access for customers."

The two companies say they will also work with ISVs on building out relevant applications. And, it's worth noting this: "Special focus will be given to cloud applications enabling the Internet of Everything (IoE), collaboration, cloud-based virtual managed services and cloud-based systems management solutions."

The Return of SCO, a Debian Retrospective, & GNU is 30

Wednesday 18th of March 2015 03:33:14 AM

Last I heard SCO was all but bankrupt, but apparently five years later a claim against IBM for $5 billion is still pending. Elsewhere, Bruce Byfield discusses how Debian has changed over the years and if that was for the good. In other gnews, the GNU Manifesto turns 30 this month.

It's ba-ack. I bet you thought you'd never have to hear the words SCO again, but here it is. The Salt Lake Tribune today reported that a lawsuit was filed in U.S. District Court in Salt Lake City by SCO claiming $5 billion in damages from IBM. SCO filed for bankruptcy in 2007 and lost its case against Novell in 2010. This case is the last bit of business for SCO but IBM has claims against them in return as well, so this still isn't the last of SCO. I know there's a zombie metaphor here somewhere, I just can't put my finger on it.

Bruce Byfield today posted a really nice piece on The Changing Face of Debian these last fifteen years. He said back then the developers were "becoming legends" and Debian was a "major power in free software," fearless and ambitious. Enter Ubuntu and Debian's rough years, but now here on the other side, "Debian transformed itself into an upstream distribution. On Distrowatch, the top two distributions for page hits are based on Debian, with Debian itself a respectable third. In addition, 130 of the distributions listed on Distrowatch are based on Debian and its influence is stronger than ever before."

This month the Free Software Foundation and Richard Stallman will be celebrating 30 years of the GNU Manifesto, which Stallman published in Dr. Dobb’s Journal of Software Tools in March 1985. The New Yorker covered the occasion today, quoting some of the manifesto:

"[A] user who needs changes in the system will always be free to make them himself, or hire any available programmer or company to make them for him. Users will no longer be at the mercy of one programmer or company which owns the sources and is in [the] sole position to make changes." The document is also funny, in keeping with the playful traditions of early hackers. For instance, GNU (pronounced "guh-NOO," with a hard "g") is a recursive acronym, spelling out "GNU's Not Unix."

Writer Maria Bustillos explained that "the free in free software refers to freedom, not cost," which is "key to understanding Stallman's career." Bustillos spoke with Stallman by phone, and he said, "Proprietary software was the norm when I started the GNU project in 1983. It was because you could no longer get a computer that you could run with free software." GNU is a set of free and Open Source tools that are still essential to the implementation of Linux, which is why many prefer the term GNU/Linux to just Linux - as in Debian GNU/Linux. See Bustillos' article for much more on Stallman and GNU. Sam Varghese today added that we, the users and developers of free and Open Source software, "owe him a massive debt. He had a dream and it has come to pass."

Other interesting titles today:

* 7 Leading Applications for GNOME

* Seven killer Linux apps that will change how you work

* 7 Irish open source developers you should know

* ROSA Desktop Fresh R5 GNOME Silently Released, Here’s What’s New

Survey Finds That Enterprises Are Ramping Up Big Data Spending

Tuesday 17th of March 2015 03:13:15 PM

How much is the average company going to spend on data analytics initiatives over the next six months? A new IDG Enterprise survey finds that companies will spend an average of $7.4 million on data-related initiatives, with enterprises investing $13.8 million and small & medium businesses (SMBs) investing $1.6 million.

Even more notable: Eighty percent of enterprises and 63 percent of small & medium businesses (SMBs) already have deployed or will deploy big data projects in the next twelve months. Here are more of the Big Data findings from the survey.

Healthcare is the leader among all industries that are implementing, planning or evaluating data-driven projects over the next year, the IDG study found. And larger companies are much more likely to have a data analytics initiative in the planning stages.

Additionally, 83 percent of organizations are prioritizing structured data initiatives as critical or high priority in 2015, and 36 percent have plans to increase their spend for data-driven initiatives in 2015.

Many of the key findings from the survey, which included 1,139 respondents who reported their organizations are currently implementing, planning or considering big data projects, are summarized here, and they include these conclusions:

Over the past year, the number of organizations with deployed/implemented data-driven projects has increased by 125%.

Organizations place greater priority on structured data initiatives compared to unstructured data, as 32% of organizations state that managing unstructured data is not on their to-do list.

Confidence in security solutions and products for company data rises, increasing from 49% in 2014 to 66% this year.

While security confidence increases, organizations also realize they must restrict access to sensitive data.

Docker Inc.'s Acquisitions Aim for Ease of Use and Portability

Tuesday 17th of March 2015 02:58:29 PM

Docker, Inc., the corporate sponsor of the popular container technology toolset, has been in the news recently for its acquisitions. Last month, Docker acquired startup SocketPlane, and said that SocketPlane could help add standard networking interfaces to Docker to make multi-container distributed apps easily portable. And Docker, Inc. has also acquired Canadian startup Kitematic and its eponymous, popular open source software tool. "The Docker experience is enhanced through Kitematic and its graphical user interface (GUI)-driven workflow that automatically installs Docker on a user’s Mac to build, ship and run Docker containers in just minutes," says the announcement.

In both acquisitions, Docker, Inc. is focusing on interface and portability measures that can make Docker more friendly for typical administrators.

“We are thrilled to be bringing the Kitematic team on board to help expand our efforts in creating tooling that enriches the developer experience,” said Solomon Hykes, chief architect of the Docker Project and founder and CTO of Docker, Inc. “Kitematic reflects our original vision for Docker which was to make a great tooling for developers, who in turn would then use it to make great tools. Adding talented people like the Kitematic team to Docker is how we ensure that we keep focused on tackling new challenges for developers as they arise. We are also proud to announce that the Kitematic tool remains open source and free.” 

Apparently, Kitematic does make Docker very accessible for Mac users. According to Docker, Inc.:

"The first step of the Docker journey to distributed applications starts with building and running a container and through Kitematic that initial step happens in less than five minutes – including the time to download the Kitematic package. Kitematic leverages Docker Machine, one of Docker’s three orchestration tools, to configure a developer’s Mac as a “Docker host” and then subsequently install and run the Docker Engine. The developer is then presented with a catalog of curated content which includes images for Nginx, Minecraft, Redis, and more that they can build, ship and run as Docker containers on their laptop."

The acquisitions from Docker, Inc. come at a time when things are about to get competitive for Docker. The folks behind CoreOS, a very popular Linux flavor for use in cloud deployments, are developing their own container technology, dubbed Rocket, which will be competitive.

Rocket is a new container runtime, designed for composability, security, and speed, according to the CoreOS team.

According to a post on Rocket:

“When Docker was first introduced to us in early 2013, the idea of a “standard container” was striking and immediately attractive: a simple component, a composable unit, that could be used in a variety of systems. The Docker repository included a manifesto of what a standard container should be. This was a rally cry to the industry, and we quickly followed. We thought Docker would become a simple unit that we can all agree on.”

“Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform.”

“We still believe in the original premise of containers that Docker introduced, so we are doing something about it. Rocket is a command line tool, rkt, for running App Containers. An ‘App Container’ is the specification of an image format, container runtime, and a discovery mechanism.”

You can learn more about Kitematic here.

Strange Bedfellows and Linux Reviews

Tuesday 17th of March 2015 03:47:54 AM

Christine Hall at FOSS Force today wrote that Canonical's deal with the devil may signal Ubuntu's swan song, topping today's Linux news. Linux Tycoon Bryan Lunduke reviewed the Dell M3800 with Ubuntu, and Jamie Watson tested six pre-release distributions. To top that off, we have four reviews and a Linux Mint Debian teaser.

Last week Canonical announced an extension of its partnership with Microsoft to certify Canonical's Metal-as-a-Service on Microsoft cloud servers. Canonical says MAAS is a tool to set up physical servers as cloud and other services with the push of a button.

Canonical’s MAAS brings the dynamism of cloud computing to the world of physical provisioning and Ubuntu. Connect, commission and deploy physical servers in record time, re-allocate nodes between services dynamically, and keep them up to date and in due course, retire them from use.

Canonical already contracts with Microsoft to sell Ubuntu Docker server implementations on the Azure Marketplace and Christine Hall at FOSS Force implied this latest move may signal a "swan song" for Canonical. She said Novell's deal with Microsoft contributed to its downfall and eventual sell-off and wonders if Canonical may start "to play fast and loose with open source licenses." She said if nothing else, it's gearing up to be a big "brouhaha in the FOSS user community" quoting one commenter saying "Bye-bye Ubuntu." Even Red Hat's hands aren't completely clean in this "Bermuda Triangle."

Speaking of Ubuntu, Bryan Lunduke test-drove a Dell M3800 with Ubuntu recently and said "this beast" is not a desktop replacement, it's a "desktop destroyer." Then he said Ubuntu "works great" with the "beast's" hardware. He even tested several other distributions on it, which also "ran great with absolutely zero issues." He did wonder why it has no Ethernet port and said he didn't love the trackpad, but the keyboard is "fantastic." It gets about four hours of battery life for all its heavy metal and would be great for just about anybody. Lunduke concluded that the $2200 price tag was well worth it.

In a brief teaser today, Clement Lefebvre said that Linux Mint Debian Edition 2 just passed QA and was approved for a Release Candidate. He said an official announcement would be forthcoming in the next "couple of days." LMDE adopted a frozen release cycle in February to promote stability after being introduced as a rolling-release distro.

Elsewhere:

* KaOS 2015.02 Review: Delivers a Pure KDE Plasma 5.0 Desktop

* Thoughts on using Linux Mint Cinnamon Edition 17.1

* Bodhi Linux 3.0.0 Review: Minimalist distro with superb performance

* First Impressions of Ubuntu MATE 14.10

* Building a pre-release Linux testbed with openSuSE, Fedora, Ubuntu, and more

Interview: The Team Behind Grappa Discusses Next-Gen Big Data Analytics

Monday 16th of March 2015 02:53:47 PM

There are a lot of folks out there working on new ways to cull meaningful insights from data stores, and many of them are working with data found on clusters and, often, on commodity hardware. That puts a premium on affordable data-centric approaches that can improve on the performance and functionality of MapReduce, Spark and many other tools.

One of the most interesting new tools in this area is the open source Grappa project, which scales data-intensive applications on commodity clusters and offers a new type of abstraction that can beat classic distributed shared memory (DSM) systems. In fact, Rich Wolski, founder of the Eucalyptus cloud project, enthusiastically pointed to Grappa as a very interesting project in our recent interview with him.

We caught up with some of the leaders behind Grappa, who are based at the University of Washington, for an interview: Luis Ceze, Jacob Nelson, and Mark Oskin. Here are their thoughts.

How did Grappa come to be?

Univ. Washington Team: One of our colleagues here worked at Cray for a while and worked closely with a large-scale shared memory machine about three years ago. What that machine was capable of was interesting but also very expensive. So we asked whether we could take some of the ideas that the Cray machine raised, and the analytics it was capable of, and make them work on off-the-shelf commodity hardware. That’s how Grappa started.

What does Grappa do, and what kind of organization can benefit from it?

Univ. Washington Team: Grappa helps accelerate in-memory analytics computation. Specifically, we’re exploring techniques that improve worst-case performance for these applications. Applications like graph analytics have a lot of low-locality access patterns. If you just use standard analytics approaches on commodity clusters, you’ll end up with a lot of overhead for each step you take as you perform analytics. Grappa tries to handle the random access that goes on there better. It also provides an easier programming model for distributed memory machines.

How else does it differ from traditional distributed shared memory systems?

Univ. Washington Team: We’re providing some of the same abstractions that you see in distributed shared memory systems, but taking a very different approach. They have taken the standard technique of exploiting locality for performance, and depended on that. We take the opposite approach. Rather than simple approaches to caching data, we’ve reduced the cost of migrating operations around the cluster.

In the 1990s there was a lot of research on shared memory systems. The way those worked was that you would hook into the default data-handling mechanisms of the processors and then, in software, move pages around, leveraging a software-based cache. On a cluster, for those things to have any kind of performance, you had to have very high locality in your applications. Grappa’s philosophy is to avoid hooking into the processor in the way just described and instead use modern language abstractions. When we go to access remote memory, we’re able to intelligently context-switch between tasks.

Do you think people should be thinking beyond MapReduce, Spark and other powerful tools being used in the data analytics space?

Univ. Washington Team: What we’ve observed when we look at systems like Hadoop and Spark is that they end up building their own optimized stack all the way from the operating system up to whatever the user programming abstraction is. We’ve spent some time observing how Grappa can be used as a kind of optimized substrate for these programming models. 

Specifically, we’ve looked at implementing a subset of MapReduce on top of Grappa, implementing a subset of Spark, focusing on the query processing platform. Basically, we’ve looked at providing familiar programming abstractions on top of Grappa, making it easier to work with new approaches to analytics.

Grappa is a square peg, but there are a lot of round holes out there. When software developers encounter problems with MapReduce, they can end up implementing approaches that start to crumble. What we can do is give software developers a MapReduce abstraction that allows them to use their familiar models to leverage some of the powerful results that Grappa can help them get.

Editor's Note: This story is the latest in a series of interview pieces with project leaders working on the cloud, Big Data, and the Internet of Things. The series has included talks with Rich Wolski who founded the Eucalyptus cloud project, Ben Hindman from Mesosphere, Sam Ramji from Cloud Foundry Foundation, Tomer Shiran of the Apache Drill project, Philip DesAutels who oversees the AllSeen Alliance, Tomer Shiran on MapR and Hadoop, and co-founder of Mirantis Boris Renski.

Report on Oracle Asks Questions About its Cloud Strategy

Monday 16th of March 2015 02:42:33 PM

Oracle has steadily upgraded its Oracle OpenStack for Oracle Linux distribution, and has been loudly beating the war drums on the OpenStack front. As I have noted, it seems inevitable that there will be an OpenStack market shakeout soon, and the really big players like Oracle and HP may remain standing as that happens.

Research and Markets has announced the addition of the "Oracle 4Q14 Report - Betting on a Holistic Cloud Stack" company profile, which reaches the conclusion that despite early success expanding cloud adoption, Oracle’s long-term success is not necessarily guaranteed. Here are details.

According to the report:

"Oracle’s reformed go-to-market strategy is clear: The company is relying on the movement to cloud and engineered systems to support growth. Oracle’s approach in both areas remains the same: Message the value proposition of adopting an all-Oracle stack and leaving the development and integration of different solutions to Oracle, which result in reduced overall IT spend."

But there is a large ecosystem of free and open tools surrounding OpenStack now, so the "all Oracle stack" argument may become more difficult to defend.

"Oracle’s diverse portfolio shows spots of brilliance, as cloud developments focusing on core strengths to keep the vendor competitive," Research and Markets points out in the report.

In particular, SaaS applications may be the pieces of the company's cloud computing portfolio that many analysts are undervaluing.  However, many enterprises are deploying OpenStack precisely because they want an open platform and don't want to be locked into proprietary technologies and any one vendor's IP strategy. 

Oracle, of course, has a history of fiercely defending its patented technologies. For example, it filed a complaint for patent and copyright infringement against Google, regarding parts of the Java code found in Google's Android mobile OS. That suit drew many interpretations, but one thing that seemed very clear about it was that Oracle was doing exactly what developers were hoping it wouldn't do as it swallowed up Sun Microsystems. The move made clear that Oracle's strategy wasn't entirely open.

In fact, some writers, such as Dana Blankenhorn, argued that the suit "challenged the whole open source establishment."

Smaller companies like Mirantis and Red Hat are pursuing very open strategies in the cloud, and that could have a lot of meaning to enterprises in the long run, especially the ones deploying OpenStack. It will be interesting to read the tea leaves as Oracle moves forward with its cloud-focused strategy. One thing that is for sure is that it already has sway with a whole lot of enterprises who depend on the Oracle software stack.

Personal Linux Stories and Best Distros for Newbies

Saturday 14th of March 2015 03:21:36 AM

The newsfeeds weren't overflowing this evening, but there were a few bright spots. First up, Tecmint.com is running a new series called My Linux Story, featuring folks sharing their journeys to Linux. Elsewhere, Justin Pot asked whether we can really trust Linux, and Computer Business Review today listed its choices of distributions for new users.

Tecmint.com has been running a series called My Linux Story, and so far they've posted five installments. The storytellers have been diverse and not your ordinary new users: one is a physics professor, another an IT executive, and another the founder of OpusVL. Each had their own reasons for switching, but all have one thing in common - a love of Linux. Bookmark the category, as new stories are added once or twice a week.

Justin Pot today asked, "Can We Really Trust Linux?" Of course, it was just the setup for explaining precisely why we can. He noticed a lot of folks equate "free" with "ad-supported," so he thought he could help. Pot begins:

So it needs to be said: Linux, and other open source software, isn’t free in the sense that Facebook or Gmail are – that is, it’s not an ad-supported endeavour offered by hugely profitable corporations (though many large companies do contribute to open source projects). Instead Linux is a collaborative project that the whole world can participate in, should they decide to do so. Yes: even you.

And finally today, we have yet another top five distros for newbies post. And yes, you guessed it, Linux Mint and Ubuntu top the list. Can you guess the other three? You might think so, but poster Jimmy Nicholls actually chose Debian as one of those recommended for new users. Not that I personally disagree, but others might; Debian is rarely recommended for new users. He doesn't explain why he thinks it belongs, just that it's one of the oldest.

A few other interesting posts:

* Microsoft: Dying PC or Just Bad OS?

* Plasma is my new favorite desktop

* The Difference Between Window Managers and Desktop Environments

Sam Ramji Discusses Cloud Foundry and Open Source Opportunities

Friday 13th of March 2015 02:44:28 PM

Cloud Foundry Foundation, positioned as a global standard creator for open Platform-as-a-Service (PaaS) and cloud applications, announced its launch as an independent nonprofit foundation late last year, and recently named a very well-known open source leader as its CEO: Sam Ramji. Ramji has worn several hats in the open source community, and we covered him previously when he headed up Microsoft’s open source initiatives.

Cloud Foundry is managed as a Linux Foundation Collaborative Project and operates under a system of open governance created by a team of open source experts from founding Platinum Members EMC, HP, IBM, Intel, Pivotal, SAP and VMware. Ramji has a big job with Cloud Foundry Foundation, where he can help drive many meaningful open source projects forward. OStatic recently caught up with him for  his thoughts on his new role.

Ramji emphasizes that organizations are effectively becoming cloud service and application providers.

“Companies everywhere are having to make software and cloud services part of their core competencies,” Ramji said. “Every one of them needs to leverage applications from a limited supply of application developers. So for an open source project to efficiently gather good developers to create code that they can all share —that’s a big core benefit.”

In fact, Ramji noted that Cloud Foundry and the applications and collaboration that it helps to nurture can contribute to “the art of scaling infrastructure” for many organizations.

The Cloud Foundry Foundation is very focused on these kinds of development and collaboration efficiencies. It has implemented an approach to open source development called Dojo, which is derived from the Pivotal Labs Dojo program. This offers developers what Ramji characterizes as a unique “fast track” for commit rights, development resources and more.  Ramji notes that it can typically take more than a year for a developer to gain committer status on a given open source project. In some cases, it can take longer.

Through Dojo, Ramji said, skilled, and often very proven engineers from Cloud Foundry’s community can participate in development alongside committers on a Cloud Foundry project team.

In addition, Cloud Foundry benefits from its connection to The Linux Foundation, which has a proven record as a steward of open source projects and collaboration.

“The Linux Foundation is a parent that has already gathered a strong collection of developers and resources,” Ramji said. “It’s a not-for-profit foundation that is very helpful in providing resources for collaboration. It will provide help and resources to Cloud Foundry on an ongoing basis.”

“[Cloud Foundry] has seen some real momentum including customer wins with two of the top three U.S. telcos and seven Global 500 manufacturing companies,” Forbes recently noted. Sam Ramji is a proven quantity who promises to drive Cloud Foundry Foundation toward many further successes.

 Editor's Note: This story is the latest in a series of interview pieces with project leaders working on the cloud, Big Data, and the Internet of Things. The series has included talks with Rich Wolski who founded the Eucalyptus cloud project, Ben Hindman from Mesosphere, Tomer Shiran of the Apache Drill project, Philip DesAutels who oversees the AllSeen Alliance, Tomer Shiran on MapR and Hadoop, and co-founder of Mirantis Boris Renski.

Get More Out of GitHub

Friday 13th of March 2015 02:39:45 PM

In only a few short years, GitHub has emerged as an absolutely key hub for posting and getting a hold of open source software. If you remember when SourceForge and a small collection of disorganized repositories were the only alternatives, you appreciate GitHub.

Not everyone gets everything they can out of it, though. In this post, you'll find pointers to a couple of very handy guides that help you maximize what you get out of GitHub.

Post It and They Will Come. Many of us retrieve software tools from GitHub, but what if you are involved with an open source project and want to actually post a repository? If that's you, check out Hello World's simple guide to posting a repository, found here.  It's a truly easy process, and the most complicated step is writing up an effective description of your project.

Meanwhile, Hello World also has tutorials up on other aspects of GitHub, such as opening up an issue that other community members can attend to. You can find out how to do that and more here.
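If you would rather script that kind of housekeeping than click through the web interface, GitHub also exposes issue creation through its public REST API. Below is a minimal, hypothetical Java sketch that POSTs a new issue to the v3 endpoint; the owner, repository, token, and issue text are placeholders, and error handling is left out.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class OpenIssue {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your own account, repository, and personal access token.
        String owner = "your-user";
        String repo = "your-repo";
        String token = System.getenv("GITHUB_TOKEN");

        // GitHub v3 REST endpoint for creating an issue in a repository.
        URL url = new URL("https://api.github.com/repos/" + owner + "/" + repo + "/issues");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "token " + token);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Minimal JSON body: a title and a description for the new issue (example text only).
        String body = "{\"title\":\"Docs: clarify install steps\","
                    + "\"body\":\"The README skips the dependency setup.\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // 201 Created indicates the issue was opened successfully.
        System.out.println("Response code: " + conn.getResponseCode());
    }
}
```

With a valid personal access token the call should return 201 Created, and the issue shows up in the repository's tracker just as if it had been opened through the Hello World workflow.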

InfoWorld's Quick Guide.  Some people use the command line (Git) and some people use the client (GitHub), but InfoWorld has an exhaustive guide to using both. It's actually a collection of 20 tips for using Git and GitHub and is well worth spending some time with.

Finally, don't forget that GitHub itself has an Explore module where you can browse open source projects. It's a good resource for discovering projects you may not have heard of, and it's found here.

Debian 8 on its Way

Friday 13th of March 2015 03:49:04 AM

The number of release-critical bugs in the upcoming Debian 8.0 has decreased to a small, manageable number in recent weeks, making an April final release possible. Elsewhere, Jeremy Eder shared a bit on performance tuning of Red Hat Enterprise Linux 7, and Fudzilla said Linux is still not ready for prime time.

In today's Debian Project News, the number of release-critical bugs left in the Debian 8.0 development branch was said to be "roughly 67." The total number to be addressed remains approximately 112, but 30 of those are already fixed in unstable, 14 are tagged patched, and one other is done but still needs to go through unstable. That leaves just 67. The other day Niels Thykier wrote that an April release is "a possibility." That depends, of course, on getting those 67 bugs squashed in time.

Red Hat's Jeremy Eder today shared some of the tricks of the trade in tuning Red Hat Linux for "workload-specific" profiles. Eder said it really all began back in Red Hat 5 with ktune but it was Red Hat 6 where tuned was introduced to the public. Eder said this newer profile mechanism can "boost performance in the double-digit percent range." By Red Hat 7, the team had a good start-up default performance profile that worked for "most workloads."

Fudzilla's Nick Farrell today said that Linux is still not ready for the desktop after all these years. He said he tries Linux every few years and it doesn't seem to get any better in three key areas. He allowed Ubuntu to represent Linux as a whole and, apparently, he's not completely sold. He said the Software Center is limited because of issues with proprietary software installations. Beyond that, a confusing interface and incompatibility with Windows programs round out his major complaints. If he's waiting for compatibility with Windows programs, he might as well quit testing. However, it's Linux's fault that Windows programs don't work on it, according to Farrell, who said "Linux has not attempted to fix any of the problems which have effectively crippled it as a desktop product."

Elsewhere:

* 7 Neat Linux Tricks That Newbies Need to Know

* Lightweight Desktop For Linux: What’s the Best One for You?

* Leading Applications for KDE Plasma and Plasma 5.3 wallpaper contest

ownCloud Offers Support Subscriptions for its Open Source Community Edition

Thursday 12th of March 2015 03:01:33 PM

With the rise of cloud computing, ownCloud has been getting a lot of attention for its flexibility, and because interest in private clouds is on the rise. There is a huge community of contributors surrounding the open source version of ownCloud, and ownCloud Inc. continues to serve enterprise users.

Now, ownCloud Inc. has announced new support offerings, including a new support subscription for its open source community edition. In addition, the company introduced new functionality that brings greater control and an improved user experience to ownCloud Enterprise users.

Customers now have a choice between two product offerings: ownCloud 8 Server (formerly referred to as ownCloud Community Edition) and ownCloud 8 Enterprise Subscription, which includes ownCloud Enterprise Edition. ownCloud 8 Server was released last month and provides base-level file sync and share capabilities. It includes a wide variety of server apps, including activity, anti-virus, encryption, external storage, LDAP/AD, federated cloud sharing, provisioning, versions, and mobile apps available in the ownCloud app store.

"We're excited about the new offerings because ownCloud can now offer the right level of product, support and extensibility for our evolving needs," said Siggi Langauf, IT Group Leader at Stuttgarter Lebensversicherung a.G. "ownCloud's ongoing commitment to improving the user experience has helped us more easily and effectively provide access to enterprise files to our workers wherever they are, without sacrificing ownership and control of our data."

According to the ownCloud blog:

"The new ownCloud subscriptions will benefit a wide range of ownCloud users. ownCloud Server, formerly known as ownCloud Community Edition, is for those interested in pure file sync and share and the new Standard Subscription provides this capability in a highly competitive package. This subscription includes an 8×5 email support option, support for Active Directory and LDAP integrations, as well as expected enterprise file sync and share features like selective sync, versions, un-delete, activity streams, and more. This subscription is designed for businesses seeking support for core file sync and share without the need of deep enterprise integration."

"If the Standard Subscription isn’t sufficient, customers can always select or upgrade to the Enterprise Subscription. This subscription ensures that you keep your modifications private and that you don’t have to provide them back to the community. The Enterprise subscription also provides ownCloud Enterprise capabilities such as SharePoint and Windows Network Drive integration and Swift or S3 object store integration, as well as mobile and desktop app site labeling, and both email and phone support in a larger support window. Beyond the Enterprise Subscription, we also offer a range of custom items, including up to 24×7 support, custom services, deployment support and much more."

For more information, you can get pricing details here, or visit www.owncloud.com.

Google's New Cloud Storage Experiment is Priced Low

Thursday 12th of March 2015 02:49:37 PM

In this day and age, you would expect most users of computing and mobile digital tools to back up their data regularly, but there are reams of data that show that most people don't do so. The good news is that cloud-based storage and backup services are very cheap nowadays.

And, advancing that trend, Google has just announced Google Cloud Storage Nearline, which is a quick-in, quick-out data backup system that the company casts as a "simple, low-cost, fast-response storage service."

Nearline Storage is officially in a beta release for the moment. According to Google:

"Nearline Storage enables you to store data that is long-lived but infrequently accessed. Nearline data has the same durability and comparable availability as Standard storage but with lower storage costs. You can read more about it in the Nearline Storage White Paper."

"Nearline Storage is appropriate for storing data in scenarios where slightly lower availability and slightly higher latency (typically just a few seconds) is an acceptable trade-off for lowered storage costs."

A Google blog post adds:

"The amount of data being produced around the world is staggering and continues to grow at an exponential rate. Given this growing volume of data, it’s critical that you store it in the right way – keeping frequently accessed data easily accessible, keeping cold data available when needed, and being able to move easily between the two. Organizations can no longer afford to throw data away, as it’s critical to conducting analysis and gaining market intelligence. But they also can’t afford to overpay for growing volumes of storage."

"Today, we're excited to introduce Google Cloud Storage Nearline, a simple, low-cost, fast-response storage service with quick data backup, retrieval and access. Many of you operate a tiered data storage and archival process, in which data moves from expensive online storage to offline cold storage. We know the value of having access to all of your data on demand, so Nearline enables you to easily backup and store limitless amounts of data at a very low cost and access it at any time in a matter of seconds."

Capacity pricing is set low at one cent per gigabyte per month, which is well below Google's standard storage price of 2.6 cents per GB per month. You can find out more about Nearline here.
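For anyone who wants to try the service from code, the Cloud Storage JSON API can create a bucket with the Nearline storage class directly. The following is a rough, hypothetical Java sketch of that call; the project ID, bucket name, and OAuth access token are placeholders, and error handling is omitted.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CreateNearlineBucket {
    public static void main(String[] args) throws Exception {
        // Placeholders: your Cloud project ID, a globally unique bucket name,
        // and an OAuth 2.0 access token with a storage scope.
        String project = "my-project-id";
        String bucket = "my-nearline-backups";
        String accessToken = System.getenv("GCP_ACCESS_TOKEN");

        // Bucket insert endpoint of the Cloud Storage JSON API.
        URL url = new URL("https://www.googleapis.com/storage/v1/b?project=" + project);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Request the NEARLINE storage class instead of the STANDARD default.
        String body = "{\"name\":\"" + bucket + "\",\"storageClass\":\"NEARLINE\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("Response code: " + conn.getResponseCode());
    }
}
```

Objects written to that bucket are then billed at the Nearline rate described above and read back through the same object APIs as standard storage, subject to the few-seconds latency Google mentions.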

Community Sticks Up for Linus

Thursday 12th of March 2015 03:32:06 AM

The big headline today should be how many in the community are sticking up for the creator and lead developer of the Linux kernel, Linus Torvalds. Of course, there were voices on the other side as well, but it seems most of the support is for the man who gifted the world with Linux. Elsewhere, FOSS Force is running a favorite distribution poll, and several distro reviews need mentioning.

In response to the Code of Conflict merged into the Linux kernel with The Linux Foundation's support and blessing, community members have expressed their support for Linus Torvalds. Jim Lynch today featured several comments from around the Web. He quoted one saying, "Clear language rules, and efficient programming needs clear language. Let Linus do his thing!" Another said, "Really folks, this isn't minor league stuff. If you cry, pout, and want to take your ball and go home when someone throws a high and tight fastball you shouldn't be playing in the majors." One other said, "Seriously, this anti-Linus campaign smells violently as a mean to reduce his control over the Linux project. I've seen people saying that Linus' behaviour is not how you lead a successful project... What?? Linux is the most successfull collaborative software project that ever existed!! People should study how Linus is leading it, and COPY his behaviour!"

The folks over at the Linux Voice asked if we need a Code of Conflict and opinions were mixed there. One 'nix user said, "I think that the code of conduct will be fine as long as it is not misused." Another submitted, "Just dismayed. I like Linus because he's a d**k that doesn't care what people think. And he's usually right."

And, of course, Slashdot had a lively discussion as well. The first post got in with, "I guess Linus needs a new job." Another answered, "Linus' leadership role is on its way out, I fear. Linux is done, too. It's suffering from the same disease that has affected GNOME, Firefox and Debian: technological correctness taking a backseat to political correctness." One of the funniest had to be, "Two code fragments enter... one fragment leaves." To which someone replied, "Linux 4.0 should be named Thunderdome?"

The folks over at FOSS Force are running a poll asking "What Linux Distro Do You Use?" Choices include Linux Mint, Ubuntu, Debian, openSUSE, Fedora, Mageia, CentOS, Arch, elementary, and Manjaro. These were chosen because they have proven popular at Distrowatch.com and the poll accepts "write-ins" as well. The poll just started today so the current rankings might end up completely different, but Mint is in the lead with 25% of the vote followed by Ubuntu with 19%. Go post your vote or see the full results at FOSSForce.com.

And finally today are several distro reviews. Jesse Smith reviewed Korora 21 Monday and Sabayon 15.02 last week. Jack M. Germain today said, "Evolve OS Is a Clean and Light Work in Progress" and Dedoimedo.com wrote LXLE Linux 14.04.1 is "Champagne without bubbles."

SUSE OpenStack Cloud 5 Serves Private Clouds, Can Onboard Hadoop

Wednesday 11th of March 2015 03:21:37 PM

While it doesn't grab as many headlines as other players, SUSE has steadily announced new versions of its SUSE Cloud platform, which has been its OpenStack distribution for building Infrastructure-as-a-Service private clouds. SUSE has especially focused on data centers where administrators want to take advantage of multiple types of computing environments, and has offered full support for VMware vSphere through integration with VMware vCenter Server.

Now, SUSE has announced the general availability of SUSE OpenStack Cloud 5, which is the new name for SUSE Cloud. SUSE OpenStack Cloud 5 is based on the latest OpenStack release (Juno) and provides increased networking flexibility and improved operational efficiency to simplify private cloud infrastructure management. It also provides "as-a-service" capabilities to enable development and big data analytic teams to rapidly deliver business solutions along with integration with the new SUSE Enterprise Storage and SUSE Linux Enterprise Server 12 data center solutions.

According to Donna Scott, Gartner vice president and distinguished analyst, and Arun Chandrasekaran, Gartner research director, by 2019, OpenStack enterprise deployments will grow tenfold, up from just hundreds of production deployments today, due to increased maturity and growing ecosystem support.*

"Furthering the growth of OpenStack enterprise deployments, SUSE OpenStack Cloud makes it easier for customers to realize the benefits of a private cloud, saving them money and time they can use to better serve their own customers and business," said Nils Brauckmann, president and general manager of SUSE. "Automation and high availability features translate to simplicity and efficiency in enterprise data centers."

SUSE claims that OpenStack Cloud 5 offers the following to enterprise customers:

- Enhanced networking flexibility – SUSE OpenStack Cloud 5 provides additional networking functionality and additional support for third-party OpenStack networking plug-ins. In particular, it provides for the implementation of distributed virtual routing, which enables individual compute nodes to handle routing tasks individually or as clusters. Configuring distributed virtual routing as part of a SUSE OpenStack Cloud installation increases scalability, performance and availability by enabling the network to automatically expand as compute nodes are added, reduce traffic through central routers, and decrease exposure to a single point of network failure.

- Increased operational efficiency – The SUSE OpenStack Cloud installation framework has been enhanced to seamlessly incorporate existing servers running outside of the private cloud into the cloud environment. In addition, SUSE OpenStack Cloud 5 centralizes log collection and search, giving cloud administrators a single view into cloud operations and improving problem resolution speed.

- Integrated with SUSE Enterprise Storage and SUSE Linux Enterprise Server 12 – SUSE OpenStack Cloud 5 includes support for SUSE Linux Enterprise Server 12 as compute nodes within the cloud, giving customers the most current versions of KVM and Xen. SUSE Linux Enterprise Server 12 nodes can exist alongside SUSE Linux Enterprise Server 11 SP3 nodes. SUSE OpenStack Cloud 5 also integrates the recently announced SUSE Enterprise Storage, powered by Ceph. This provides an enhanced platform for object, block and image storage within the SUSE OpenStack Cloud, while retaining the same ease of installation of Ceph components that was available in earlier releases of SUSE OpenStack Cloud.

- Simplified services deployment – Since many workloads require additional services, standardization in an "as-a-service" model simplifies and speeds installation by eliminating the need for users to manage and configure these services. Simplified services deployment makes it easy to deploy private clouds tailored for development and big data.

SUSE OpenStack Cloud 5 also includes a data processing project, "Sahara," which provides a means to provision a data-intensive application cluster like Hadoop or Spark on top of OpenStack. SUSE and MapR have teamed to provide support for MapR Enterprise running on SUSE OpenStack Cloud using the MapR Sahara plugin.

"As OpenStack has matured and grown in enterprise adoption, more and more users want to leverage spare capacity for Hadoop data services," said Tomer Shiran, vice president of product management for MapR Technologies. "With Sahara support in SUSE OpenStack Cloud 5 and the MapR Distribution including Hadoop, users can provision dynamic Hadoop clusters in just a few minutes for frequent development and test use cases. This is all backed up with enterprise support from MapR and SUSE."

Maria Olson, vice president of Global & Strategic Alliances for NetApp, said, "Together, SUSE and NetApp are collaborating to build OpenStack-powered private and public clouds that deliver high-performing, efficient and scalable cloud services with enterprise-class storage and data management capabilities. As a charter member and gold level sponsor of the OpenStack Foundation, NetApp has a rich history of upstream OpenStack community contributions and is the most widely used commercial storage option for Cinder deployments."

Canonical Deepens Partnership with Microsoft, Advances Metal-as-a-Service

Wednesday 11th of March 2015 03:11:56 PM

There are a lot of announcements coming out of the Open Compute Project U.S. Summit this week. HP has announced new Cloudline servers that will sell for low prices and eschew the proprietary technology that the company uses in its Proliant servers. They may especially find a home in organizations standardizing on HP's Helion cloud platform.

And, also coming out of the summit, Canonical and Microsoft announced a partnership extension and demonstrated Canonical’s Metal-as-a-Service (MAAS) deployment in an open computing environment. Ubuntu’s MAAS allows users to treat physical servers like virtual machines in the cloud, turning bare metal into an elastic resource. New support means that Windows and Linux (Ubuntu, CentOS, SUSE) operating systems and application software can be one-touch provisioned on OCS hardware. Together, the two companies claim they will create a more scalable, OCP-compliant architecture to make open source deployments easier for enterprises and telecoms providers.

Canonical has also announced the addition of QCT (Quanta Cloud Technology), a global datacenter solution provider, to its Ubuntu Cloud Partner Program. The two companies have been working to offer a line of integrated OpenStack cloud solutions, including SKUs for Proof-of-Concept, Production-Ready HA, and Production HA architecture.

In 2014, Microsoft joined the Open Compute Project, as did many other companies, to encourage collaboration on the designs of the servers it uses in its large data centers. Collaborating companies in the Open Compute Project can reduce costs and increase the range of hardware options available.

Meanwhile, Canonical is also conducting an "OpenStack Roadshow," where Mark Shuttleworth holds court on the company's cloud offerings. According to the company, at the roadshow events you can:

"...See the world’s most popular Openstack implementation in action, gain insights from the world’s largest OpenStack deployments in telco, finance, media and government environments, meet the architects of Canonical’s OpenStack strategy, and exchange views on requirements for enterprise private clouds."

Canonical is still best known for Ubuntu, but the company is rapidly changing its business model, emphasizing OpenStack, and forging ahead with partnerships with Microsoft and many other companies. Change is in the air.

Fedora 22 Alpha, Bodhi 3.0 Review, & Ubuntu 15.04 Wallpapers

Wednesday 11th of March 2015 03:39:21 AM

The newsfeeds were a virtual cornucopia today with several exciting headlines. First up, Fedora 22 Alpha was announced today and word has it it's in "great shape." Ubuntu switched to systemd and made their community wallpaper choices. Jim Lynch reviewed Bodhi 3.0 and Christine Hall spoke with Jeff Hoogland about the release. Justin Pot identified seven signs you may be ready to switch to Linux and Paul Venezia demonstrated how cool Bash still is.

Dennis Gilmore today announced the release of Fedora 22 Alpha saying, "Please take some time to download and try out the Alpha and make sure the things that are important to you are working well. If you find a bug, please report it." The Alpha is available in the three standard versions and several handy spins.

Some of the fresh updates include a redesigned notification system and a Wayland-using login screen. Fedora is moving away from X and the login screen is just the start. The GNOME version brings new themes and the KDE spin now features Plasma 5. The Beta is due April 14 and the Final is scheduled for May 19. See the announcement for individual download links.

Michael Larabel said today that he tested Fedora 22 Alpha and his experience was "very good." He added that it feels "very clean and an evolutionary step over Fedora 21." Larabel ran down a few upcoming features and included screenshots, so check that out.

Ubuntu was the subject of several headlines today as well. The Register reported on Ubuntu's switch to systemd yesterday saying that Ubuntu 15.04 will ship with it by default. The Reg is expecting users to storm the castle with torches and pitchforks, but Ubuntu is replacing their own homebrew init anyway. I suspect most Ubuntu users won't notice or care.

The Ubuntu Portal and Softpedia are reporting on the Ubuntu community wallpaper contest winners. Softpedia listed the winning details and The Portal posted thumbnail links. They are all available in a single file from Launchpad. In related news, Matt Hartley yesterday posted 15 Must Have Ubuntu Enhancements.

Elsewhere, Bodhi 3.0 got some press. Jim Lynch today reviewed it saying it is wonderful for those who are "true minimalists." He especially liked the Enlightenment window manager Bodhi includes. Over at FOSS Force Christine Hall yesterday posted her recent conversation with founder and lead developer Jeff Hoogland on Bodhi 3.0. He said Bodhi 3.0 is a big success and is well liked by loyal users and newcomers alike. Hoogland added he plans on developing more original native Enlightenment apps for future Bodhi releases.

Other interesting tidbits:

* Bash is more powerful than you think

* 7 Warning Signs That You’re Meant to Switch to Linux

* Netrunner 15 Review: Looks fantastic

Apache Tajo Update Offers Open, Relational Big Data Warehousing Solution

Tuesday 10th of March 2015 02:58:30 PM

Now here is an interesting open source project that has been flying under the radar: The Apache Software Foundation (ASF), which stewards more than 350 open source projects and initiatives, announced the availability of Apache Tajo v0.10.0, the latest version of the advanced open data warehousing system in Apache Hadoop.

Apache Tajo is used for low-latency and scalable ad-hoc queries, online aggregation, and ETL (extract-transform-load process) on large data sets stored on HDFS (Hadoop Distributed File System) and other data sources. "By supporting SQL standards and leveraging advanced database techniques, Tajo allows direct control of distributed execution and data flow across a variety of query evaluation strategies and optimization opportunities," notes the announcement from Apache.

Although it doesn't grab a lot of headlines, Tajo is in use at numerous organizations worldwide, including Gruter, Korea University, Melon, NASA JPL Radio Astronomy and Airborne Snow Observatory projects, and SK Telecom for processing Web-scale data sets in real time.

"Tajo has evolved over the last couple of years into a mature 'SQL-on-Hadoop' engine," said Hyunsik Choi, Vice President of Apache Tajo. "The improved JDBC driver in this release allows users to easily access Tajo as if users use traditional RDBMSs. We have verified new JDBC driver on many commercial BI solutions and various SQL tools. It was easy and works successfully."

Tajo v0.10.0 reflects dozens of new features and improvements, including:

- Oracle and PostgreSQL catalog store support

- Direct JSON file support

- HBase storage integration (allowing users to directly access HBase tables through Tajo)

- Improved JDBC driver for easier use of JDBC application

- Improved Amazon S3 support

A complete overview of all new enhancements can be found in the project release notes at https://dist.apache.org/repos/dist/dev/tajo/tajo-0.10.0-rc1/relnotes.html

 "I'm very happy with that Tajo has rapidly developed in recent years," said Jihoon Son, member of the Apache Tajo Project Management Committee. "One of the most impressive parts is the improved support on Amazon S3. Thanks to the EMR bootstrap, users can exploit Tajo's advanced SQL functionalities on AWS with just a few clicks."

Apache Tajo software is released under the Apache License v2.0 and is overseen by a self-selected team of active contributors to the project. A Project Management Committee (PMC) guides the Project's day-to-day operations, including community development and product releases. For downloads, documentation, and ways to become involved with Apache Tajo, visit http://tajo.apache.org/ and https://twitter.com/ApacheTajo

HP Does Open Hardware with New Cloudline Servers

Tuesday 10th of March 2015 02:47:37 PM

It was only summer of last year when HP began making a lot of noise about its commitment to cloud computing overall, and the OpenStack platform in particular. Now, the company is moving its cloud strategy into high gear. It announced the HP Helion brand in 2014, and pledged to commit $1 billion over the next two years on products and services surrounding OpenStack, under Helion's branded umbrella.

Now, the company is betting big on open hardware designs with new Cloudline servers that will sell for low prices and eschew the proprietary technology that the company uses in its Proliant servers.

The Cloudline servers are optimized for HP's Helion OpenStack, which the company says works well with the architecture and benefits of the new servers. The servers are based on standardized specifications set out by the Open Compute Project, founded by Facebook three years ago, and the Open Networking Foundation, which was launched in 2011.

Not all of the servers have pricing set yet, but prices are to be low. There is a Cloudline CL1100 server, which is an inexpensive two-socket server for Web hosting. Then, CL2100 and CL2200 servers are two-socket systems that have more memory and storage. The servers will ship beginning March 30.

According to HP:

"HP Cloudline was built on the premise of Open Infrastructure – which means hardware and software products based on open source technologies and specifications. The new HP Cloudline portfolio is built with an open design philosophy, using open components to increase adaptability and facilitate IT integration. And HP Cloudline will provide Open Infrastructure based on Open Compute Project specifications."

You can read more about the new servers here.


More in Tux Machines

HandyLinux 2.0 Beta Now Available for Download, Based on Debian 8 Jessie - Screenshot Tour

The availability of the Beta version of the upcoming HandyLinux 2.0 computer operating system has been announced today, March 30, on the distribution’s website, which has been redesigned to match the look and feel of the OS. Read more

DebEX Barebone Is the First Debian 8 Jessie Live CD with Xfce 4.12

Arne Exton had the pleasure of informing Softpedia earlier today, March 29, about the immediate availability for download of a new build (150329) of his DebEX Barebone computer operating system derived from the upcoming Debian GNU/Linux 8 Jessie distribution and built around the recently released Xfce 4.12 desktop environment. Read more

Linus Torvalds Announces Linux Kernel 4.0 RC6, Final Version to Be Released Soon

Linus Torvalds had the pleasure of announcing today, March 29, the immediate availability for download and testing of the sixth Release Candidate (RC) version of forthcoming Linux 4.0 kernel. Apparently, some important bugs have been squashed, which means that the final Linux kernel 4.0 will be released sooner than expected. Read more

Mesa's Android Support Is Currently In Bad Shape

While Mesa is talked about as being able to be built for Google's Android operating system to run these open-source graphics drivers on Android devices with OpenGL ES support, in reality there's a lot left to be desired. Over the years there's been a handful of developers working on Android Mesa support to let the popular open-source graphics drivers run over there -- from the Intel driver now that they're using HD Graphics within their low-power SoCs (rather than PowerVR), AMD has made a few steps toward Android netbook/laptop devices with Radeon graphics, and we're starting to see Gallium3D drivers for Qualcomm Adreno (Freedreno) and the Raspberry Pi (VC4) where there's interest from Android users. This year as part of Google Summer of Code we also might see a student focused on Freedreno Android support. Read more