
Open Up: Contributions and Collaborations

Thursday 11th of November 2021 09:34:47 PM

Hello fellow space travelers! It’s been a while since we catalogued all of our goings-on here on Starship Pop!_OS, so we thought it might be a good time to highlight what our upstream contributions have looked like over the last couple of years. We’ve been logging some major light years! Have a look.

GNOME 

Keyboards Settings Panel and Keyboard Shortcuts

We worked with the GNOME design team to modernize and improve the GNOME Settings keyboard panel and keyboard shortcuts, and engineered the new designs that ship today.


Laptop Battery Thresholds

This one is a work-in-progress that started a year ago. We’re seeing some recent movement happening here, so hopefully we can get this one in soon.

https://gitlab.freedesktop.org/upower/upower/-/merge_requests/49

GNOME Settings General and Responsiveness

We think Settings is an important window into the capabilities and options of an OS experience. As such, we tend to spend a lot of time helping to improve the Settings experience on GNOME. And with Pop!_OS auto-tiling, responsiveness is important to us as well. Our QA team tests every panel for responsiveness, and we provide patches to enable it or fix issues as they arise.

Contributions to other GNOME Projects

  • GNOME Disk Utility
  • NVIDIA Optimus application launching
  • GNOME Shell
  • GTK
Open Firmware
  • Collaboration with Intel and AMD to enable their platforms in open firmware

Coreboot

System76 has ported a wide variety of our laptops to coreboot and upstreamed the majority of these changes. Below is a list of changes that improve coreboot for all vendors.

  • Improvements for CometLake CPUs
  • Add support for PCIe hot plug
  • Add driver for TI smart amplifier
  • Update Intel Microcode
  • Fix SMMSTORE clear command
  • Improvements for TigerLake CPUs
  • Enabled TigerLake-H CPUs (many changes in one topic)
  • In the process of adding generic NVIDIA hybrid graphics support
  • Enable flashrom on TGL-H chipsets

Many more changes made by System76 can be seen at the link below. There are five pages, so make sure to view them all:

https://review.coreboot.org/q/owner:jeremy%2540system76.com+OR+owner:tcrawford%2540system76.com


Fwupd
NVIDIA Testing Collaboration

We’ve established a new testing relationship brought on by the recent G-SYNC regression. We’ll now be testing pre-release NVIDIA drivers in the QA lab across a wide variety of hardware to catch regressions and bugs prior to release to the Linux community.


Elementary App Center
Fedora 

Additional Upstream Collaboration
Rust Community
Linux Kernel

  • System76 ACPI driver for coreboot-based systems
  • Audio quirks for several Clevo boards


System76 Projects as Upstream

All in all, it’s been a couple of busy trips around the sun! Thanks for joining us on this journey through our productivity, and keep your view screens on for our upcoming contributions!

Robert Bunn is developing an AI to prevent preterm births

Thursday 4th of November 2021 04:31:00 PM

As part of System76’s Unleash Your Potential Program, AI researcher Robert Bunn has been using the Oryx Pro to develop artificial intelligence to predict and prevent preterm births. The Oryx Pro laptop has accelerated Robert’s work on the AI, which is set to become the first product of his new startup, Ultrasound AI. Read on for more details!

Give us an overview of your AI project.

About three years ago, I found out that preterm birth is the number one killer of children under five worldwide and the biggest problem in women’s health. It kills a million children every year and costs America over $30 billion annually. I asked myself: can I do anything to help? Solve it in any way? And so, I spent about two-and-a-half years working to create an artificial intelligence solution.

There’s a lot of static in ultrasound images, so humans have difficulty seeing and understanding the image through all that static, but an AI can learn to do what we can’t and see things that we can’t see. Most people wouldn’t even consider this possible. In fact, many doctors, when I told them I was going to do this, said it couldn’t be done. They look at ultrasounds all the time, and they said what you’re doing is not possible, don’t waste your time. But I guess I’m a bad listener…and after two-and-a-half years, it finally worked.

I needed to get this to the world, so I created a startup called Ultrasound AI. Now we are getting ready to apply for our Breakthrough Device designation with the FDA, which is basically for any device with life-saving implications that the FDA will fast-track. After that, we’ll apply for FDA marketing clearance so that doctors can use it. I should make it clear: we still need to get clearance from the FDA, and we still must work with early-adopter doctors and hospitals to validate this in the real world. The current results are just from the large dataset that we started with.

What indicators of a premature birth has your AI been trained to detect?

It’s well known in the obstetric community that the cervix is an indicator of preterm birth. It’s not that accurate a predictor the way it’s currently used, though. The AI is certainly much more accurate than these particular measurements alone. The uterus and the boundary between the placenta and the uterus are indicators as well.

Some discoveries are completely new to medical science, but I can’t reveal those until our patent is issued. In some cases, it can even predict when the doctor must induce a pregnancy because of a life-threatening condition to the mother.

Can you quantify its accuracy?

We can predict the number of days early that a baby will be born. Not just whether it will be born preterm, but that it will be born 114 days early, for example. If you can predict that a baby will be born preterm, there are things you can do to prevent it, or prepare for it if you can’t stop it.

In many cases, you can do something to extend the pregnancy, so a baby stays in utero longer. Say you’ve been told the baby will come about 40 days early; the accuracy on that is about plus or minus six days. A preterm birth is normally at least 21 days early, so even with that margin of error you’re still preterm either way.
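To make that arithmetic concrete, here is a tiny sketch using only the figures quoted above (a hypothetical illustration, not Ultrasound AI’s actual output):

```python
# Margin-of-error check using the figures quoted above (illustrative only).
PRETERM_THRESHOLD_DAYS = 21      # a birth at least 21 days early is preterm

predicted_days_early = 40        # example prediction from the interview
margin_days = 6                  # quoted accuracy: plus or minus 6 days

# Even in the best case (the baby arrives 6 days later than predicted),
# the birth is still past the preterm threshold.
best_case = predicted_days_early - margin_days    # 34 days early
print(best_case >= PRETERM_THRESHOLD_DAYS)        # True: preterm either way
```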

There’s also the research this AI makes possible because it can identify the anatomical regions that are driving the problems, where right now that’s not really known. If doctors know exactly what areas are causing problems, they can sometimes try to improve these situations over time, since they were alerted six months ahead in some cases.

It also seems to be able to identify miscarriages before they occur, and perhaps that will help prevent those or at least allow more extensive research to be done. Right now, there are no visual indicators or evidence that a miscarriage might happen. There are some blood tests that can be done for preterm birth in general, but they’re expensive. So, it’s still early, but we believe this will open an enormous number of avenues for research.

Where does the Oryx Pro come into play?

Typically, I use the Oryx Pro when I’m doing a smaller experiment, and then I move it over to the big behemoth I built. I don’t want to burn out the GPU on my laptop by running it for days and days, which laptops aren’t really designed for, but it’s great because I can quickly run experiments to see if I’m going in the right direction. When I see that something’s productive, I can switch to the cloud or the desktop.

What software are you using? On what OS?

The AI platform we’re using is PyTorch for fundamental deep learning. I use something on top of PyTorch called fast.ai, which is a good platform that’s great for prototyping and testing ideas without writing a ton of boilerplate code. AWS also has a ton of resources for deep learning.
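As a rough illustration of that stack, here is a generic fast.ai image-classification sketch. The dataset path and labels are hypothetical, our own assumption for the example; Robert’s actual model, data, and prediction targets are not public:

```python
# Minimal PyTorch + fast.ai (v2) sketch of the kind of workflow described
# above. The dataset path and labels are hypothetical.
from fastai.vision.all import *

path = Path("ultrasound_images")   # hypothetical folder: one subfolder per class

# fast.ai assembles the PyTorch DataLoaders, train/validation split, and
# resizing in a single call -- the "no boilerplate" appeal mentioned above.
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42,
                                   item_tfms=Resize(224))

# Transfer learning from a pretrained ResNet; fine_tune() handles the
# freeze/unfreeze schedule automatically.
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(3)
```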

I use Ubuntu because I wanted to start with something I’m familiar with, but honestly, I’m thinking about switching to Pop!_OS. I’m always worried about using a new Linux operating system because if you ever want to do anything, you want to be able to search for problems. Pop!_OS obviously isn’t going to be as good a search term as Ubuntu, right? So, my problem might not appear, and I didn’t know how different it was going to be. But apparently, Pop!_OS is built on Ubuntu and is designed for the laptops you guys have. I like that it turns off the GPU when it’s not in use to save power, so I think it will be a lot more convenient to use Pop!_OS.

How has your experience been with the computer overall?

It’s shockingly impressive. I think this is a really great computer. I can use it whenever I have some downtime. It’s helped me a lot because I can’t take my behemoth computer everywhere, and running that thing is expensive. So, with the Oryx Pro I can whittle down my experiments and make sure I’m only putting the most productive ones up on the cloud, so it’s cost efficient—though we’ve also had tremendous support from AWS and NVIDIA, as well as from you guys. Having the laptop has been a great enhancement of my productivity, and I think the project has accelerated quite quickly because of your support.

Usually with Linux, it’s a problem getting things to work because it’s not plug-and-play like Mac or Windows where everyone has drivers for literally everything. Drivers are unfortunately not as good in the Linux ecosystem. However, you guys have done a great job of making sure all the various parts that come with the computer, and all the things you might want to connect to the computer, work. It’s plug-and-play here too, and very close to the experience on Mac for ease of use. I can figure things out myself of course, but it’s been great not having to figure things out because that wastes a lot of time. So, a laptop with all that ready to go was just fabulous.

I’m also going to be working with some of the tech staff at System76 to get my external GPU working on the laptop so it can run at a higher performance for days without wearing out the laptop. Our plan is to get that working and hopefully write up some instructions for other people. I think these types of laptops are going to become more common among AI researchers now that they’re powerful enough to use for real-world artificial intelligence research.

You mentioned your goal is to end preterm births worldwide. How do you ensure your patented technology achieves high adoption?

I wasn’t in it for the money when building this AI; otherwise I would’ve quit a long time ago. As I mentioned, my primary goal was to make a great scientific achievement with a lasting impact on the world. It took a long time and a lot of effort to get this thing working. If our vision is to end preterm birth, then this must be available to everybody. Everybody must be able to have access to it, and it must be affordable.

Learn more about Robert’s project at preterm.ai. Stay tuned for more updates on Robert’s work and cool projects from our Unleash Your Potential Program winners!

Marquita Wiggins is Developing her Open Source Graphic Design Program: Designy

Thursday 7th of October 2021 03:19:10 PM

The Unleash Your Potential Program provides a System76 computer to six winners to accelerate the completion of their next projects. This week, we interviewed Marquita Wiggins, who is in the early stages of developing her open source Canva alternative, Designy.

What prompted you to want to create Designy?

I like Canva, but because it’s owned by a company that keeps the software closed down, there’s no ability for people who know how to code to be like, “Oh, I want this. Let’s add it and make Canva even better.” To my knowledge, there aren’t any free tools out there that give the Canva Pro treatment. So I’d like to make a tool that’s better, and also free.

You mentioned you had heavy experience using Canva. What’s your background with it?

I work in marketing for WBEZ, a public radio station. I’ve been doing that for about three years. A good portion of my work involves designing, so I’m always in either Canva or Illustrator.

I like the ease of Canva because I can work on designs from my work laptop, or I can use someone else’s laptop and log in if I’m somewhere else. And then with Illustrator, you can expand artboards as much as you want.

What sorts of improvements are you implementing in your open source alternative?

When you’re working on a design in Canva, it’s very linear. Let’s say I am working on a poster, and I just started it, and I just want to keep iterating on small changes. In order to do that, you have to locate the artboard that you’re working on, and you can’t view them all on the board at the same time. The reason I like Illustrator is I like to have eight different artboards up at the same time, and I can zoom out and see all my iterations at the same time, and then zoom into the one I want to make changes on. That is my number one feature that I love about Illustrator, and that’s what I want to bring to Designy.

I’d also give Designy the ability to create templates and share them with other people on the same software. If you create a template, you can then put it on the template board for other people to use. In Canva, you can’t just put templates up in the marketplace. Canva creates your templates, and those are the only ones you’re able to see unless you know somebody who also uses Canva, and they send you the template to use.

Do you have a background in coding?

Not really. In my last job I sent out all the emails for the organization, and I also managed the website, so I did use HTML and CSS for that, but I was never an expert in it. That said, I was an expert Googler. I was able to make massive changes to the website by Googling what I needed to do and then figuring out the code for it.

I’ve been interested in the computer programming space for a while, and I’ve always dabbled in it and learned more about HTML and CSS. When I saw this program pop up, I felt that this was my opportunity to learn a lot more, and also be able to create something that would be useful to myself.

What software are you using to develop it?

I’m going to be using JavaScript for the front end, Java for the back end, and likely MongoDB for the database. I’m almost done learning JavaScript now, and it’s a lot! So after that, I’ll start building the front end of the site, and then learn Java, connect it to the back end, and then MongoDB for the database.

This was the perfect opportunity to get the momentum going on learning how to do this, because now I can’t stop until it’s done!

Why did you choose JavaScript?

When Canva was created, they built it using JavaScript, so I figured why not use the same language that they originally used? I think right now they’ve moved on to something else, but when they originally started they used JavaScript.

What are your initial thoughts on Pop!_OS?

I never used Linux until I got this laptop, so it was a bit of a learning curve to figure out how to do certain things. I haven’t really downloaded that much—I only really use Visual Studio Code and Firefox, and I also downloaded the Brave browser on it—but I like the navigation. I like that I can open up Visual Studio Code and then open up Firefox and the auto-tiling will automatically arrange the windows. I wish more companies would develop that feature.

How has your experience been with the Oryx Pro so far?

It’s great! It has a huge screen, so I don’t even have to use an external monitor. I have it on a riser with an external keyboard. I haven’t had any issues so far.

Did you encounter any challenges in setting up your system out of the box?

It was super smooth. I don’t even know if it took 15 minutes from unboxing it to actually being able to use it. I also like that I’m able to secure my data with encryption before I log into my account.

You mentioned Designy will have a beta. What’s the plan for that currently?

I’m thinking the beta phase will start in March when it’s all done, where I’m sharing it with other people, getting feedback, and making changes. I’ll be using Reddit a lot to get folks to try it out and let me know what they think. It’ll also be up on GitHub, so people will be able to push updates if they have a change they want me to make.

I’m going to finish the front end of the site in November, and the back end of the site will be done in January. The database connection will be done in February. I know there may be a lot of weird bugs and whatnot that other people will find, so the beta helps me work all that out. The goal is to put this out to the public and then iterate on it, so maybe down the line it’ll transition from JavaScript to something else.

Is there anything we didn’t ask about that you wanted to share?

A random fact is I have a dog named Mr. President. People seem to get a kick out of that.

Stay tuned for further updates from Marquita Wiggins’ Designy and other cool projects from our UYPP winners!

Massimo Pascale and his Lemur Pro Explore Dark Matter Substructure with the Sunburst Arc

Thursday 9th of September 2021 03:03:10 PM

Unleash Your Potential Program winner Massimo Pascale is a graduate student studying astrophysics at the University of California, Berkeley. Using his Lemur Pro, he’s studying early galaxies and dark matter in the Sunburst Arc, a distant galaxy magnified through a phenomenon called gravitational lensing. Read the whole interview for more details on the project and his experience with the Lemur Pro!

Give readers a rundown on what your project entails.

A galaxy cluster is a conglomeration of many galaxies that ends up weighing 10^14 solar masses. It’s incomprehensibly massive. Mass is not only able to gravitationally attract objects, but it’s also able to deflect the path of light, and the more massive it is, the more it can deflect that light. This is what’s called gravitational lensing. When you have a massive galaxy cluster, and somewhere behind that galaxy cluster is another galaxy, the light from that galaxy can get deflected due to the mass of that galaxy cluster. Gravity causes the light to get stretched, sheared, and even magnified because of the way that it retains surface brightness, so these objects end up being a lot brighter than they would ever be if we didn’t have this galaxy cluster in front of them.
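For a rough sense of scale (our addition, not Massimo’s numbers): a point-mass lens bends light on an angular scale set by the Einstein radius, θ_E = √(4GM/c² · D_ls/(D_l·D_s)). A quick sketch with the 10^14-solar-mass figure quoted above and hypothetical gigaparsec-scale distances:

```python
# Back-of-the-envelope Einstein radius for a 1e14 solar-mass lens.
# The distances are hypothetical round numbers, not the Sunburst Arc's
# measured geometry; this only illustrates the scale of the effect.
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                # speed of light, m/s
M_SUN = 1.989e30           # solar mass, kg
GPC = 3.086e25             # gigaparsec in meters

M = 1e14 * M_SUN                             # cluster mass quoted above
D_l, D_s, D_ls = 1 * GPC, 2 * GPC, 1 * GPC   # lens, source, lens-to-source

# theta_E = sqrt(4GM/c^2 * D_ls / (D_l * D_s)), in radians
theta_E = math.sqrt(4 * G * M / c**2 * D_ls / (D_l * D_s))
print(theta_E * 206265)    # ~20 arcseconds: cluster-scale arcs are this big
```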

We’re using an arc of light called the sunburst arc. If we take our telescope and look at that galaxy cluster, we actually see that background galaxy all stretched out, and it appears as if it’s in the foreground. So truly we’re using this galaxy cluster as a natural telescope in the sky. And there are many, many scientific impacts that we get from that.

If you want to see some of the earliest galaxies in the universe—we can say the most distant galaxies are the earliest galaxies because it takes time for that light to travel to us—this might be a good opportunity because you have this natural telescope of this massive galaxy cluster.

When we look at these beautiful arcs of light, these beautiful stretched out background galaxies in the galaxy cluster, we can actually use that as evidence to reverse engineer the mass distribution of the galaxy cluster itself. You can think of it as looking at a footprint in the sand and reconstructing what the shape and weight of that foot must’ve been to make that footprint.

Something I’m personally very interested in is how we can probe dark matter in this galaxy cluster. Visible matter interacts with light, and that’s why we can see it. The light bounces off and goes to our eyes, and that tells our eyes, “okay, there’s an object there.” Dark matter doesn’t interact with light in that way. It still does gravitationally, still deflects that light. But we can’t see what that dark matter is, and that makes it one of the most mysterious things in the universe to us.

So I’m very interested in exploring that dark matter, and specifically the substructure of that dark matter. We’re using the evidence of the sunburst arc to try and discover not only what the mass distribution of the overall galaxy cluster is, but also to get a greater insight into the dark matter itself that makes up that galaxy cluster, and dark matter as a whole.

Where did the idea to do this come from?

I’ll have to admit that it’s not my original idea entirely. I work with an advisor here at UC Berkeley, where I’m attending as a graduate student, Professor Liang Dai, who previously was looking at the effects of microlensing in this galaxy cluster. He’s an expert when it comes to doing a lot of these microlensing statistics. And I had previously worked on cluster-scale modeling of a number of clusters as part of my undergraduate work. So it was a really nice pairing when we found this common interest, and we can both use our expertise to solve the problems in this cluster, specifically the sunburst arc.

What kind of information are you drawing from?

Very generally, in astronomy we are lucky to be funded, usually through various governments as well as various philanthropists, to build these great telescopes. If you have a cluster or any object in the sky that you’re very interested in, there’s usually some formal channel through which you can write a proposal and propose your project. Luckily for us, these objects had already been observed by the Hubble Space Telescope. The big benefit with Hubble is that it doesn’t have to worry about the atmosphere messing up the observations.

Because a lot of these telescopes are publicly funded, we want to make sure this information gets to the public. Usually when you observe you get a few months where that’s only your data—that way no one else can steal your project—but then after that it goes up into an archive. So all of this data that we’re using is publicly available, and we’re able to reference other astronomers that studied it in their previous works, and see what information we’re able to glean from the data and build off of that. What’s so great about astronomy is you’re always building off of the shoulders of others, and that’s how we come to such great discoveries.

That sounds very similar to our mission here.

Yeah exactly. I see a lot of parallels between System76 and the open source community as a whole, and how we operate here in astronomy and the rest of the sciences as well.

How do you determine the age of origin based on this information?

We can estimate the general age of the object based off the object’s light profile. We do something called spectroscopy and we look at the spectrum of the object through a slit. Have you ever taken a prism and held it outside, and seen the rainbow that’s shown on the ground through the light of the sun? We do that, but with this very distant object.

Based off of the light profile, we can figure out how far away it is, because the universe is ever-expanding and things that are further away from us are receding faster. The object effectively gets redshifted by the Doppler effect, so the light becomes redder. By looking at how reddened it’s become, we can figure out the distance of the object. We usually refer to it by its redshift. You can do this with any object, really.
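A concrete example of that measurement: if a spectral line with a known rest wavelength shows up at a longer wavelength in the slit spectrum, the ratio gives the redshift. The numbers below are made up for illustration, not from the Sunburst Arc’s actual spectrum:

```python
# Hypothetical redshift measurement from a single spectral line.
# Lyman-alpha is emitted at about 121.6 nm; suppose the spectrum shows
# that same line observed at 364.8 nm.
lambda_rest = 121.6    # rest-frame wavelength, nm
lambda_obs = 364.8     # observed (reddened) wavelength, nm

# Redshift: z = (lambda_obs - lambda_rest) / lambda_rest
z = (lambda_obs - lambda_rest) / lambda_rest
print(z)   # 2.0 -- a higher redshift means more distant, hence earlier
```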

Based off of the distance from the lensed object, which we find through spectroscopy, and the objects in the cluster, which we also find through spectroscopy, we can then figure out what the mass distribution of the cluster must be. Those are two important variables for us to know in order to do our science.

How do you divide the work between the Lemur Pro and the department’s supercomputer?

A lot of what I do is MCMC, or Markov chain Monte Carlo, work, so usually I’m trying to explore some sort of parameter space. The models that I make might have anywhere from six to two dozen parameters that I’m trying to fit for at once, which all represent different parts of this galaxy cluster. The parameters can be something like the orientation of a specific galaxy, things like that. This can end up being a lot of parameters, so I do a lot of shorter runs first on the Lemur Pro, which is a great workhorse for that, and then I ssh into a supercomputer and use what I got from those shorter runs to do one really long run to get an accurate estimate.

We’re basically throwing darts at a massive board that represents the different combinations of parameters, where every dart lands on a specific set of parameters, and we’re testing how those parameters work via a formula that determines the likelihood of their accuracy. It can take 10-plus runs just to test out a single idea or a single new constraint, so it’s easier to do short runs where I test out different ranges. After that, I move to the supercomputer. If I’ve done my job well, it’s just one really long run where I throw lots of darts, but in a very concentrated area. It doesn’t always end up that way, since sometimes I have to go back to the drawing board and repeat them.

What software are you using for this project?

Almost all of what I do is in Python, and I’m using an MCMC package called emcee that’s written by another astronomer. It’s seen great success even outside the field of astronomy; it’s a really great program, and it’s completely open source and available to the public. Most of the other stuff is code that I’ve written myself. Every once in a while I’ll dabble in C if I need something to be faster, but for the most part I’m programming in Python and using packages made by other astronomers.
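To give a flavor of that workflow, here is a toy sketch assuming emcee 3.x. The two-parameter Gaussian below is a stand-in for a real lens model, which, as described above, would have six to two dozen parameters:

```python
# Toy emcee workflow: a short exploratory run, then a longer "production"
# run seeded from it -- mirroring the laptop-then-supercomputer split
# described above. The Gaussian log-probability is a stand-in model.
import numpy as np
import emcee

def log_prob(theta):
    # Stand-in log-likelihood: a Gaussian centered on (1.0, -0.5).
    return -0.5 * np.sum((theta - np.array([1.0, -0.5])) ** 2)

ndim, nwalkers = 2, 32
rng = np.random.default_rng(42)
p0 = rng.normal(size=(nwalkers, ndim))     # scattered starting "darts"

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)

# Short run to locate the promising region of parameter space...
state = sampler.run_mcmc(p0, 500)
sampler.reset()

# ...then one long run started from where the short run left off.
sampler.run_mcmc(state, 5000)
print(sampler.get_chain(flat=True).mean(axis=0))   # near (1.0, -0.5)
```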

How has your experience been with the Lemur Pro overall?

It’s been really fantastic. I knew going in that it was going to be a decently powerful machine, but I’m surprised by how powerful it is. The ability to get the job done is the highest priority, and it knocked it out of the park with that.

Mobility is really important to me. It’s so light and so small, I can really take it wherever I need to go. It’s just really easy to put in my bag until I get to the department. And being a graduate student, I’m constantly working from home, or working from the office, or sometimes I like to go work at the coffee shop, and I might have to go to a conference. These are all things you can expect that the average astronomer will be doing, especially one that’s a graduate student like me.

I’ve had to travel on a plane twice since I’ve had it, and it was actually a delight to be able to do. Usually I hate working on planes because it’s so bulky, and you open the laptop and it starts to hit the seat in front of you, you don’t know if you can really put it on the tray table, maybe your elbows start pushing up against the person next to you because the computer’s so big, but this was the most comfortable experience I’ve had working on a plane.

What will findings on dark matter and early galaxies tell us about our universe?

First let’s think about the galaxy that’s getting magnified. This is a background galaxy behind the cluster, and the mass from the cluster is stretching out its light and magnifying it so that it appears as an arc to us. Through my MCMC I figure out what the mass distribution of the galaxy cluster is. And using that, I can reconstruct the arc into what it really looked like before it was stretched and sheared out, because I know now how it was stretched and sheared.

A lot of people are interested in looking at the first galaxies. How did the first galaxies form? What were the first galaxies like? Looking at these galaxies gives us insight into the early parts of the universe, because the more distant a galaxy is, the earlier in the universe it’s from. We’re seeing back in time, effectively.

Second, we don’t know much about dark matter. By getting an idea of dark matter substructure from looking at these arcs, we can get insight and test different theories of dark matter and what its makeup might be. If you learned that 80 percent of all mass in your universe was something that you couldn’t see and understood nothing about, I’m sure you would want to figure out something about it too, right? It’s one of the greatest mysteries not just of our generation, but of any generation. I think it will continue to be one of the greatest mysteries of all time.

The third prong of this project is that we can also figure out more about the galaxy cluster itself, and how galaxy clusters form. We can get the mass distribution of this cluster, and by comparing it to things like the brightness of the galaxies in the cluster or their speed, we can get an idea of where the cluster is in its evolution. Clusters weren’t always clusters; it’s mass that caused them to merge together in these violent collisions to become clusters. If you know the mass distribution, which we get from this gravitational lensing, as well as a couple of other things about the galaxies, you can figure out how far along the cluster is in this process.

There’s a big moral impact on humanity from doing this sort of thing, because everybody can get behind it. When everybody looks up and sees that we came up with the first image of a black hole, I think that brings everybody together, and that’s something that everybody can be very interested in and want to explore.

Stay tuned for further updates from Massimo Pascale’s exploration of dark matter and the sunburst arc, as well as cool projects from our other UYPP winners!
