

Servers/Back End: Kubeflow, Kubernetes and EdgeX Foundry

Filed under
Server
  • Designing an open source machine learning platform for autonomous vehicles

    Self-driving cars are one of the most notable technology breakthroughs of recent years. The progress that has been made from the DARPA challenges in the early 2000s to Waymo’s commercial tests is astounding. Despite this rapid progress, much still needs to be done to reach full autonomy without humans in the loop – an objective also referred to as SAE Level 5. Infrastructure is one of the gaps that need to be bridged to achieve full autonomy.

    Embedding the full compute power needed to fully automate vehicles may prove challenging. On the other hand, relying on the cloud at scale would pose latency and bandwidth issues. Vehicle autonomy is therefore a case for edge computing. But how do we distribute and orchestrate AI workloads, data storage, and networking at the edge for such a safety-critical application? We propose an open-source architecture that addresses these questions.

    [...]

    To implement an open-source machine learning platform for autonomous vehicles, data scientists can use Kubeflow: the machine learning toolkit for Kubernetes. The Kubeflow project is dedicated to making deployments of machine learning workflows simple, portable and scalable. It consists of various open-source projects which can be integrated to work together, including Jupyter notebooks and the TensorFlow ecosystem. Since the Kubeflow project is growing very fast, its support will soon expand to other open-source projects, such as PyTorch, MXNet, Chainer, and more.

    Kubeflow allows data scientists to utilize all the base machine learning algorithms: regression, pattern recognition, clustering, and decision-making algorithms. With Kubeflow, data scientists can easily implement tasks which are essential for autonomous vehicles, including object detection, identification, recognition, classification, and localisation.
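    To make one of those "base" algorithm families concrete, here is a toy k-means clustering routine in pure Python. It is a sketch for intuition only, not Kubeflow-specific code; on Kubeflow you would run a real library such as TensorFlow or scikit-learn inside a pipeline step.

```python
# Toy k-means clustering: one of the "base" algorithm families mentioned above.
# Pure stdlib Python, for illustration only.
import random

def kmeans(points, k, iterations=20, seed=0):
    """Cluster 2-D points into k groups; returns (centroids, assignments)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assignments = [0] * len(points)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        for i, (x, y) in enumerate(points):
            assignments[i] = min(
                range(k),
                key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2,
            )
        # Update step: move each centroid to the mean of its assigned points.
        for c in range(k):
            members = [p for p, a in zip(points, assignments) if a == c]
            if members:
                centroids[c] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return centroids, assignments

# Two well-separated blobs should end up in two different clusters.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (9.0, 9.1), (9.2, 9.0), (9.1, 8.9)]
centroids, labels = kmeans(data, k=2)
```

    In a real autonomous-vehicle pipeline, the same algorithm would run at much larger scale as one containerized step among many, which is exactly the orchestration problem Kubeflow targets.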

  • Kubernetes communication, SRE struggles, and more industry trends

    As part of my role as a senior product marketing manager at an enterprise software company with an open source development model, I publish a regular update about open source community, market, and industry trends for product marketers, managers, and other influencers. Here are five of my and their favorite articles from that update.

  • Introducing a Tech Preview of Containerized Ceph on Kubernetes

    We have been hard at work to bring a containerized version of Ceph to Kubernetes, and we are very excited to announce that we are releasing a technical preview of our project to run SUSE Enterprise Storage (powered by Ceph) on SUSE CaaS Platform (powered by Kubernetes). We leverage the most modern, powerful application management framework to make Ceph lifecycle management easier, and we provide an easy way for SUSE CaaS Platform users to get Kubernetes-native persistent storage for their Kubernetes cluster backed by enterprise-grade software-defined storage.

    [...]

    The good news is that work on Rook and Ceph-Rook integration is a concentrated effort upstream. There are many eyes—and many fingers—working to make Ceph better on Kubernetes. We at SUSE are in a good position to make sure that Ceph and Rook work upstream will meet the unique needs of our customers, and we are thrilled that our customers and their needs are able to make upstream better.

  • Making The IoT More Open: A Common Framework For IoT Edge Computing With EdgeX Foundry

    The internet of things (IoT) is a diverse space, but it’s also fragmented by design, whether it’s consumer IoT or industrial IoT. In 2015, Dell started working on a project called Project Fuse to weave together the diverse and fragmented world of IoT. The idea was to build the right architecture for IoT and edge computing.

    The team working on the project quickly realized that they needed to extend the cloud-native principles — things like microservice-based architectures and platform independence — as close as possible to the device edge so that there would be more flexibility in how solutions are devised. In order to succeed, the project needed to be vendor-neutral, interoperable and open.

Servers, Security and DRM Leftovers

Filed under
Server
Security
  • DevNation Live Bengaluru: Sail into cloud — An introduction to Istio

    Our first DevNation Live regional event was held in Bengaluru, India in July. This free technology event focused on open source innovations, with sessions presented by elite Red Hat technologists.

    In this session, Kamesh Sampath provides an overview of Envoy and Istio, two open source projects that will change the way you write cloud-native Java applications on Kubernetes. We’ll show how to download and set up Istio on your local laptop, then deploy Java microservices as part of the Istio service mesh with Istio sidecar proxy.

  • Pogo Linux Launches New Modular Intel Servers to Address IT Evolution in Data Services

    Pogo Linux (https://www.pogolinux.com), a leading supplier of rackmount servers for the modern data center, today announced the immediate availability of a new product line of Intel®-based servers. Based on the newest Intel® server processor platform, Intrepid Modular Server System users can upgrade a single server with forward-compatible technology add-ons instead of buying a new server. The new Intrepid product line is integrated with 2nd Gen Intel® Xeon® Scalable processors and is shipping in volume across 1U through 2U form factors.

    Since 1999, Pogo Linux has delivered custom-built, high-performance server hardware to IT departments of all sizes to serve as the compute backbone of traditional on-premise and data center applications. To support new business opportunities in the new digital and data services economy, including artificial intelligence (AI), machine learning and predictive analytics, technology departments will need to make new investments in IT infrastructure to stay competitive. As this data transformation touches all aspects of business, modern server hardware must evolve to help IT users support more connected users.

  • Report finds cyberattacks on critical utility operating systems are increasing

    A new study published Friday finds that cyberattacks on the operational technology (OT) involved in running critical utilities are increasing and says these attacks have the potential to cause “severe” damage.

    The report, compiled by the manufacturing company Siemens and the Ponemon Institute, is based on survey responses from 1,700 utility professionals worldwide and focuses on cyber risks to electric utilities with gas, solar, or wind assets, and water utilities.

  • Yes, Apple just killed iTunes — here's what that means for your library of music, movies, and TV shows

    That means that rather than renting movies and TV shows through iTunes on your Mac, you'll watch everything through the Apple TV app.

Server: Decentralisation, SUSE and Red Hat

Filed under
Red Hat
Server
SUSE
  • Decentralizing the Data Center: Hybrid Cloud, Multi-Cloud and more

    But how did we get to cloud computing in the first place? While these are not the only reasons, cost, availability and disaster recovery were a large part of what motivated companies to transition from on-prem [-only] deployments to cloud or hybrid approaches. Now, let us fast forward to the present and we are seeing something entirely new: a complete decentralization of the data center.

    But what does that mean? Once upon a time, companies transitioning or starting their operations in the cloud shopped around and found a public cloud service that best suited their needs. The final decision typically boiled down to cost and services. I would know. I used to work in a division of one of these large cloud providers, and we were always going neck and neck with the other major players on precisely these points.

  • Quarks – New Building Blocks for Deploying on Kubernetes

    At the recent Cloud Foundry Summit EU in the Netherlands, Mario Manno of SUSE and Enrique Encalada of IBM gave a presentation about two popular platforms for deploying your cloud-native applications – Kubernetes and Cloud Foundry. Kubernetes is great for its flexibility and control over your application, and is a great container orchestrator. Cloud Foundry is the go-to platform when you don’t want to worry about your infrastructure, networking, scaling, and routing. It also has the best developer experience in the industry. With Quarks, deployment is simplified using BOSH features while keeping the flexibility of Kubernetes. Believing that Quarks is the next buzzword for Cloud Foundry conferences, they described and demonstrated the new framework and its building blocks for deploying cloud-native applications, combining the best features of the two worlds.

  • SLE 12 SP5 Release Candidate 2 is out!

    Service Pack 5 is a consolidation Service Pack release.

  • Red Hat Streamlines Operating System Update Cycle

    CentOS is a distribution of Linux based on a fork of Red Hat Enterprise Linux (RHEL). The team that oversees CentOS operates independently of Red Hat. That team, in collaboration with Red Hat, is making available an additional distribution dubbed CentOS Stream, through which a continuous stream of content will be updated several times daily.
    Mike McGrath, senior director for Linux engineering at Red Hat, said those innovations eventually will find their way into RHEL, but until then developers who want to build applications using those features as they become available can use CentOS Stream.
    This latest distribution of Linux from Red Hat is intended to act as a bridge between Fedora, a distribution of Linux through which Red Hat makes available experimental technologies, and RHEL, he said.

  • Happy Halloween (Packages Not In EPEL-8 yet)

    It is October, and in the US it means that all the decorations for Halloween are going up. This is a time of year I love because you get to dress up in a costume and give gifts to people. In the spirit of Halloween, I am going to make various packages available in a COPR to add onto the EPEL-8 repositories.

    There are a lot of packages which are in EPEL-6 or EPEL-7 but are not in EPEL-8 yet. Some of these may not be possible due to missing -devel packages; others may just need someone interested in maintaining a branch for EPEL-8. To try to get a push on this, I wanted to see what packages could be built and made ready at some point. I also wanted to make it possible that, if you really need one of these packages, it could be available.

  • CentOS 8 Stream Install Guide – CentOS 8 Installation Screenshots

Virtualmin CPanel – Free & Open Source Web Hosting Panel

Filed under
Server
OSS

As the name suggests, a server control panel lets you control your server graphically: you can view important server statistics and manage websites, databases, email accounts, and more right in your browser without having to type long commands.

You can do pretty much everything from the control panel. It makes handling complex and time-consuming server tasks extremely easy.

In this series, I will cover open source, free, and paid Linux control panels. If you need more features, you may need to support the development by giving a few dollars per year.

Read more

Databases: MongoDB, ArangoDB and KarelDB

Filed under
Server
OSS

Drupal and WordPress News

Filed under
Server
OSS
Drupal
  • Acquia Acquired for $1B, WordPress 5.3 on the Horizon, More Open Source News

    Acquia has announced an agreement to receive a majority investment from Vista Equity Partners, which essentially translates into the investment company purchasing Acquia for a colossal $1 billion. The investment will enable the open-source digital experience company to continue growing its presence in the digital experience platform space. “Vista shares our belief that the DXP market is ripe for disruption and we are excited to partner with them to accelerate our plans,” said Michael Sullivan, Acquia CEO.

    Acquia’s press release noted that Acquia will “continue to operate independently”.

    This announcement came shortly after Acquia was named to the 2019 Forbes Cloud 100 for the fourth consecutive year and acquired the first enterprise-grade, low-code Drupal website builder.

  • Daily Buzz: Drupal's Big Buyout
  • WordCamp Philly returns this weekend in all its open-source, community-powered glory

    In an age where the internet’s attention is hyper-focused on the most recent tweet, only to be distracted the next minute, WordPress’ decade-long staying power can be attributed to its diverse and dedicated open-source community.

    WordPress values and strives to grow its community, and one of the ways it does that is through WordCamps. Philadelphia is home to one of the oldest WordCamps in the United States, and the annual daylong event is returning this weekend, Oct. 5 and 6, at the Pennsylvania Academy of the Fine Arts.

  • People of WordPress: Alice Orru

    Alice Orru was born in Sardinia, an island in the middle of the Mediterranean Sea. As a child, she dreamt of becoming a flight attendant, traveling the world, and speaking many foreign languages.

    Unable to meet the height requirements of her chosen profession, Orru ended up choosing a different path in life, following the Italian mantra: “You have to study something that will guarantee a stable and secure job for life.”

    The unemployment rate in Sardinia is very high, a challenge shared throughout the surrounding islands. In addition to that, Alice wasn’t that keen on having the same job all her life, as her parents had.

    When Orru was 22 she moved to Siena, Tuscany, to finish her studies. That is when she created her first personal blog. The website was built on an Italian platform named Tiscali, which she later migrated to WordPress.com.

    After two years in Tuscany, Orru moved to Strasbourg, France. She studied French and worked several jobs while living there. Her first serious job was in Milan – working 40 hours/week in the marketing department of a large, international company. She found herself surrounded by ambitious colleagues and a boss who constantly requested extra, unpaid working hours.

Percona Database News

Filed under
Server
OSS
  • Percona Tunes Monitoring Platform For ‘Living Breathing’ Databases

    Modern enterprises run on information in the form of data, so they buy databases. Databases are nice solid chunky pieces of software that, once installed, neatly store away all the company’s operational and transactional information in easy-to-find predesignated areas… so after initial deployment, they pretty much look after themselves.

    Unfortunately, that’s not quite true. Databases are living breathing things that need to change and adapt to a variety of factors all the time. Here’s a selection of eight popular reasons that your firm’s information backbone might need to change...

  • Percona customers talk about database challenges

    At Percona Live in Amsterdam, the Open Source database company has released details from its latest customer survey. The results are interesting and suggest that the database market is less rigid, stable and predictable than you might think. They also show a propensity for larger customers to have more database instances than staff.

  • Percona details ‘state’ of open source data management

    Open source database management and monitoring services company Percona has laid down its state of open source data management software survey for 2019.

    Surveys are surveys and are generally custom-constructed to be self-serving in one sense or another and so convey a message set in their ‘findings’ that the commissioning body (or in this case company) has wanted to table to media, customers, partners and other related bodies.

    This central truth being so, should we give any credence to Percona’s latest market assessment?

  • Percona packages PostgreSQL alongside existing MySQL and MongoDB products

    PostgreSQL is among the most popular database management systems, but market share is a slippery thing to measure, depending on whether you mean revenue, developer activity, or actual deployed databases.

    The developer-focused StackOverflow puts PostgreSQL second after MySQL, with Microsoft SQL Server third and Oracle way down in 8th. DB-Engines, on the other hand, which measures general discussion, puts Oracle top, followed by MySQL, Microsoft SQL Server, and PostgreSQL in 4th.

    Open-source company Percona's distribution, announced at its Percona Live event in Amsterdam, is based on PostgreSQL 11.5, supplemented by several extensions. The pg_repack extension reorganises tables with minimal locks. The PostgreSQL Audit Extension (pgaudit) provides tools for audit logs to meet compliance requirements. And backup and restore is provided by the pgBackRest extension.

  • Database Diversity: The Dirt, the Data

    Companies are using an increasingly eclectic mix of databases, a survey of 836 enterprise database users from around the world conducted by Percona reveals — with the vast majority of respondents using more than one type of open-source database.

    The survey comes as the overall database market – worth some $46 billion at the end of 2018 – continues to fragment: there are now over 40 companies with revenues of $100 million-plus in the commercial open-source ecosystem.

  • The state of open source databases in 2019: Multiple Databases, Clouds, and Licenses

    The Open Source Data Management Software Survey was undertaken by Percona, a company offering services for open source databases, to capture usage patterns and opinions of the people who use open source databases. The survey, unveiled today at Percona's Open Source database conference in Amsterdam, included 836 of them from 85 countries, which means it's a good way to get insights.

  • Percona Announces Enhanced Version of Award-Winning Open Source Database Monitoring and Management Platform, For Faster Performance Issue Resolution

Servers: IBM/Red Hat, CentOS, CNCF and SUSE

Filed under
Server
  • Microservices, and the Observability Macroheadache

    Moving to a microservice architecture, deployed on a cloud platform such as OpenShift, can have significant benefits. However, it does make understanding how your business requests are being executed, across the potentially large numbers of microservices, more challenging.

    If we wish to locate where problems may have occurred in the execution of a business request, whether due to performance issues or errors, we are potentially faced with accessing metrics and logs associated with many services that may have been involved. Metrics can provide a general indication of where problems have occurred, but not specific to individual requests. Logs may provide errors or warnings, but cannot necessarily be correlated to the individual requests of interest.

    Distributed tracing is a technique that has become indispensable in helping users understand how their business transactions execute across a set of collaborating services. A trace instance documents the flow of a business transaction, including interactions between services, internal work units, relevant metadata, latency details and contextualized logging. This information can be used to perform root cause analysis to locate the problem quickly.
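    The span/trace model described above can be sketched in a few lines of Python. This is a toy illustration of the concepts only (a shared trace ID, parent/child spans, per-span latency, and a crude root-cause walk); it is not the API of any real tracer such as Jaeger or an OpenTelemetry SDK, and the service names are made up.

```python
# Toy model of a distributed trace: a tree of timed spans sharing one trace ID.
import time
import uuid

class Span:
    def __init__(self, name, trace_id=None, parent=None):
        self.name = name
        self.trace_id = trace_id or uuid.uuid4().hex  # shared by the whole trace
        self.parent = parent
        self.children = []
        self.start = self.end = None

    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        self.end = time.perf_counter()

    def child(self, name):
        # Child spans inherit the trace ID, which is what lets a backend
        # reassemble one business request across many services.
        span = Span(name, trace_id=self.trace_id, parent=self)
        self.children.append(span)
        return span

    def latency_ms(self):
        return (self.end - self.start) * 1000

    def slowest_leaf(self):
        """Walk the tree to the slowest leaf span -- a crude root-cause hint."""
        if not self.children:
            return self
        return max((c.slowest_leaf() for c in self.children), key=Span.latency_ms)

# Simulate a business request fanning out across two services.
with Span("checkout") as root:
    with root.child("inventory-service"):
        time.sleep(0.01)
    with root.child("payment-service"):
        time.sleep(0.05)   # the slow hop

culprit = root.slowest_leaf()
```

    A real tracer adds context propagation across process boundaries and exports spans to a collector, but the root-cause analysis the paragraph describes is essentially this walk over a tree of timed spans.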

  • 5 AI fears and how to address them

    Most people don’t know what microservices architecture is, for example, even if some of the apps they use every day were built in decoupled fashion. But technical evolutions like microservices don’t tend to cause the kinds of emotional responses that AI does around potential social and economic impacts. Nor have microservices been immortalized in popular culture: No one is lining up at the box office for "Terminator: Rise of the Cloud-Native Apps."

    This speaks mainly to fears about AI’s nebulous future, and it can be tough to evaluate their validity when our imaginations run wild. That’s not particularly useful for IT leaders and other execs trying to build a practical AI strategy today. Yet you will encounter fears – many of them well-founded. The trick is to focus on these real-world concerns, not the time-traveling robot assassins. For starters, they’re much easier to defeat – er, address – because they’re often based in current reality, not futuristic speculation.

    “The types of fears [people have about AI] depend on the type of AI that we are talking about,” says Keiland Cooper, a neuroscience research associate at the University of California Irvine and co-director of ContinualAI. “The more theoretical and far off ‘general AI’ – a computer that can do all the things that humans can do – will raise more fears than those from a more realistic AI algorithm like we see being commonly used today.”

    Let’s look at five legitimate concerns about AI today – and expert advice for addressing them so that they don’t derail your AI plans.

  • CentOS 8 "Gnome Desktop" overview | The community enterprise operating system

    In this video, I am going to show an overview of CentOS 8.0.1905 "Gnome" and some of the applications pre-installed. 

  • How deep does the vDPA rabbit hole go?

    In this post we will be leading you through the different building blocks used for implementing the virtio full HW offloading and the vDPA solutions. This effort is still in progress, so some bits may change in the future; however, the governing building blocks are expected to stay the same.

    We will be discussing VFIO, vfio-pci and vhost-vfio, all intended to provide userspace access to devices both in the guest and the host. We will also be discussing MDEV, vfio-mdev, vhost-mdev and the virtio-mdev transport API used in constructing the vDPA solutions.

    The post is a technical deep dive and is intended for architects, developers and those who are passionate about understanding how all the pieces fall into place. If you are more interested in understanding the big picture of vDPA, the previous post "Achieving network wirespeed in an open standard manner - introducing vDPA" is strongly recommended instead (we did warn you). 

  • CNCF’s Envoy report card shows Google, Lyft are top of contributing class

    The CNCF has delivered a report card on Envoy, the open source edge and service proxy which is usually mentioned alongside the words Kubernetes or service mesh.

    The report comes a year after Envoy graduated from the CNCF incubation process, and the headline scores are 1,700 contributors, who have made 10,300 code commits, 5,700 pull requests and 51,000 contributions overall.

    Envoy was initially developed in 2016 at Lyft, the ridesharing giant which isn’t Uber, and this is reflected in the CNCF report. Lyft still accounts for 30.4 per cent of the Envoy code, though Google is the biggest contributor overall, with 42.8 per cent of the code.

  • Highly Automated and Secured Multi-Tenancy Using SUSE CaaS Platform 4

    I notice that when it comes to using an XaaS solution, clients and solution architects are typically concerned about multi-tenancy. This post attempts to decipher why that is and how SUSE CaaS Platform helps make this a reality.

Databases: Databases in Linux Plumbers Conference (LPC) and Using PostgreSQL as a Cache

Filed under
Server
  • Better guidance for database developers

    At the inaugural Databases microconference at the 2019 Linux Plumbers Conference (LPC), two developers who work on rather different database systems had similar complaints about developing for Linux. Richard Hipp, creator of the SQLite database, and Andres Freund from the PostgreSQL project both lamented the lack of definitive documentation on how to best use the kernel's I/O interfaces, especially for corner cases. Both of the sessions, along with others in the microconference, pointed to a strong need for more interaction between user-space and kernel developers.

  • Using PostgreSQL as a cache?

    In the article on his blog, Peter asks “How much faster is Redis at storing a blob of JSON compared to PostgreSQL?”. Answer: PostgreSQL was 14x slower.

    Seems about right. Usually Redis is about 4x faster for a simple query like that compared to using PostgreSQL as a cache in my experience. It's why so many people use Redis as a cache. But I'd suggest PostgreSQL is good enough to act as a cache for many people.

    Django is pretty slow at fetching from PostgreSQL compared to other Python options, so this could explain part of the 14x vs. 4x difference.

    [...]

    Of course you should probably just cache the views at the CDN/web proxy level, or even at the Django view or template level. So you probably won't even hit the Django app most times.
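    The cache pattern the post alludes to is essentially a key-value table with an upsert and an expiry check. The sketch below shows that pattern using Python's stdlib sqlite3 so it runs anywhere; the schema, key names, and TTL are illustrative assumptions. On PostgreSQL you would typically use an UNLOGGED table (to skip WAL overhead) and `INSERT ... ON CONFLICT (key) DO UPDATE` for the upsert.

```python
# A minimal key-value cache on top of a SQL table. Shown with sqlite3 for
# portability; on PostgreSQL the same idea would use an UNLOGGED table and
# INSERT ... ON CONFLICT (key) DO UPDATE instead of INSERT OR REPLACE.
import json
import sqlite3
import time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value TEXT, expires REAL)")

def cache_set(key, obj, ttl=300):
    # Upsert the JSON blob along with an absolute expiry timestamp.
    db.execute(
        "INSERT OR REPLACE INTO cache (key, value, expires) VALUES (?, ?, ?)",
        (key, json.dumps(obj), time.time() + ttl),
    )

def cache_get(key):
    # Only return entries that have not yet expired.
    row = db.execute(
        "SELECT value FROM cache WHERE key = ? AND expires > ?",
        (key, time.time()),
    ).fetchone()
    return json.loads(row[0]) if row else None

cache_set("user:42", {"name": "Ada", "plan": "pro"})
```

    Because the cache is just another table, it shares the database's transactions and backups, which is part of the argument for using PostgreSQL as a "good enough" cache despite Redis being faster at raw key-value operations.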

No Matter What You’ve Heard, the Docker Container Ship Is Not Sinking

Filed under
Server

According to some press reports around a leaked memo from Docker's CEO to its employees, the open source company that all but invented the container technology boosting cloud growth for the last five years or so is facing hard times. Although the memo does indicate that the company needs some cash to tide it over or help it expand, the situation doesn't seem to indicate that the Docker container can't weather the storm.

"As shared at the last all hands (meeting), we have been engaging with investors to secure more financing to continue to execute on our strategy," Rob Bearden, Docker's CEO since May, wrote in an email sent to company employees. "I wanted to share a quick update on where we stand. We are currently in active negotiations with two investors and are working through final terms. We should be able to provide you a more complete update within the next couple of weeks."

Read more
