Planet Mozilla - https://planet.mozilla.org/

Cameron Kaiser: Chrome murders FTP like Jeffrey Epstein

Saturday 17th of August 2019 02:36:05 PM
What is it with these people? Why can't things that are working be allowed to still go on working? (Blah blah insecure blah blah unused blah blah maintenance blah blah web everything.)

This leaves an interesting situation where Google has, in its very own search index, HTML pages served by FTP its own browser won't be able to view:

At the top of the search results, even!

Obviously those FTP HTML pages load just fine in mainline Firefox, at least as of this writing, and of course TenFourFox. (UPDATE: This won't work in Firefox either after Fx70, though FTP in general will still be accessible. Note that it references Chrome's announcements; as usual, these kinds of distributed firing squads tend to be self-reinforcing.)

Is it a little ridiculous to serve pages that way? Okay, I'll buy that. But it works fine and wasn't bothering anyone, and they must have some relevance to be accessible because Google even indexed them.

Why is everything old suddenly so bad?

Tantek Çelik: IndieWebCamps Timeline 2011-2019: Amsterdam to Utrecht

Friday 16th of August 2019 09:21:00 PM

While not a post directly about IndieWeb Summit 2019, this post provides a bit of background and is certainly related, so I’m including it in my series of posts about the Summit. Previous post in this series:

At the beginning of IndieWeb Summit 2019, I gave a brief talk on State of the IndieWeb and mentioned that:

We've scheduled lots of IndieWebCamps this year and are on track to schedule a record number of different cities as well.

I had conceived of a graphical representation of the growth of IndieWebCamps over the past nine years, both in number and across the world, but with everything else involved with setting up and running the Summit, I ran out of time. However, the idea persisted, and finally this past week, with a little help from Aaron Parecki re-implementing Dopplr’s algorithm for turning city names into colors, I was able to put together something pretty close to what I’d envisioned:

[Timeline visualization: IndieWebCamp cities (Istanbul, Amsterdam, Utrecht, Nürnberg, Düsseldorf, Berlin, Edinburgh, Oxford, Brighton, New Haven, Baltimore, Cambridge, New York, Austin, Bellingham, Los Angeles, San Francisco, Portland) charted across the years 2011–2019]

I don’t know of any tools to take something like this kind of locations vs years data and graph it as such. So I built an HTML table with a cell for each IndieWebCamp, as well as cells for the colspans of empty space. Each colored cell is hyperlinked to the IndieWebCamp for that city for that year.
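As an aside, the city-to-color mapping mentioned above can be sketched roughly like this; this is only a guess at how a Dopplr-style algorithm works (hash the city name and reuse the first six hex digits as an RGB color), not Aaron’s actual implementation:

// Hypothetical sketch of a Dopplr-style "city name to color" mapping (Node.js).
const { createHash } = require("node:crypto");

function cityColor(city) {
  // Hash the normalized name and keep the first six hex digits as #RRGGBB.
  const digest = createHash("md5").update(city.toLowerCase()).digest("hex");
  return "#" + digest.slice(0, 6);
}

console.log(cityColor("Amsterdam")); // a stable color per city name
console.log(cityColor("Portland"));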

2011-2018 and over half of 2019 are IndieWebCamps (and Summits) that have already happened. 2019 includes bars for four upcoming IndieWebCamps, which are fully scheduled and open for sign-ups.

The table markup is copy pasted from the IndieWebCamp wiki template where I built it, and you can see the template working live in the context of the IndieWebCamp Cities page. I’m sure the markup could be improved, suggestions welcome!

Julien Vehent: The cost of micro-services complexity

Thursday 15th of August 2019 03:45:23 PM

It has long been recognized by the security industry that complex systems are impossible to secure, and that pushing for simplicity helps increase trust by reducing assumptions and increasing our ability to audit. This is often captured under the acronym KISS, for "keep it stupid simple", a design principle popularized by the US Navy back in the 60s. For a long time, we thought the enemy were application monoliths that burden our infrastructure with years of unpatched vulnerabilities.


So we split them up. We took them apart. We created micro-services where each function, each logical component, is its own individual service, designed, developed, operated and monitored in complete isolation from the rest of the infrastructure. And we composed them ad vitam æternam. Want to send an email? Call the REST API of micro-service X. Want to run a batch job? Invoke lambda function Y. Want to update a database entry? Post it to A which sends an event to B consumed by C stored in D transformed by E and inserted by F. We all love micro-services architecture. It’s like watching dominoes fall down. When it works, it’s visceral. It’s when it doesn’t that things get interesting. After nearly a decade of operating them, let me share some downsides and caveats encountered in large-scale production environments.


High operational cost

The first problem is operational cost. Even in a devops cloud automated world, each micro-service, serverless or not, needs setup, maintenance and deployment. We never fully got to the holy grail of completely automated everything, so humans are still involved with these things. Perhaps someone sold you on the idea that devs could do the ops work in their free time, but let’s face it, that’s a lie, and you need dedicated teams of specialists to run the stuff the right way. And those folks don’t come cheap.

The more services you have, the harder it is to keep up with them. First you’ll start noticing delays in getting new services deployed. A week. Two weeks. A month. What do you mean you need a three months notice to get a new service setup?

Then, it’s the deployments that start to take time. And as a result, services that don’t absolutely need to be deployed, well, aren’t. Soon they’ll become outdated, vulnerable, running on the old version of everything, and deploying a new version means a week’s worth of work to get it back to the current standard.


QA uncertainty

A second problem is quality assurance. Deploying anything in a micro-services world means verifying everything still works. Got a chain of 10 services? Each one probably has its own dev team, QA specialists and ops people that need to be involved in, or at least notified of, every deployment of any service in the chain. I know it’s not supposed to be this way. We’re supposed to have automated QA, integration tests, and synthetic end-to-end monitoring that can confirm that a butterfly flapping its wings in us-west-2 triggers a KPI update on the leadership dashboard. But in the real world, nothing’s ever perfect and things tend to break in mysterious ways all the time. So you warn everybody when you deploy anything, and require each intermediate service to rerun their own QA until the pain of getting 20 people involved with a one-line change really makes you wish you had a monolith.

The alternative is that you don’t get those people involved, because, well, they’re busy, and everything is fine until a minor change goes out, all testing passes, and then two days later, in a different part of the world, someone’s product is badly broken. It takes another 8 hours for them to track it back to your change, another 2 to roll it back, and 4 to test everything by hand. The post-mortem of that incident has 37 invitees, including 4 senior directors. Bonus points if you were on vacation when that happened.

Huge attack surface

And finally, there’s security. We sure love auditing micro-services, with their tiny codebases that are always neat and clean. We love reviewing their infrastructure too, with those dynamic security groups and clean dataflows and dedicated databases and IAM controlled permissions. There’s a lot of security benefits to micro-services, so we’ve been heavily advocating for them for several years now.

And then, one day, someone gets fed up with having to manage API keys for three dozen services in flat YAML files and suggests using OAuth for service-to-service authentication. Or perhaps Jean-Kevin drank the mTLS Kool-Aid at the FoolNix conference and made a PKI prototype on the flight back (side note: do you know how hard it is to securely run a PKI over 5 or 10 years? It’s hard). Or perhaps compliance mandates that every server, no matter how small, must run a security agent.

Even when you keep everything simple, this vast network of tiny services quickly becomes a nightmare to reason about. It’s just too big, and it’s everywhere. Your cross-IAM role assumptions keep you up at night. 73% of services are behind on updates and no one dares touch them. One day, you ask if anyone has a diagram of all the network flows and Jean-Kevin sends you a dot graph he generated using some hacky python. Your browser crashes trying to open it, the damn thing is 158MB of SVG.

Most vulnerabilities happen at the seams of things. API credentials will leak. Firewalls will open. Access controls will get mismanaged. The more of them you have, the harder it is to keep everything locked down.


Everything in moderation

I’m not anti micro-services. I do believe they are great, and that you should use them, but, like a good bottle of Lagavulin, in moderation. It’s probably OK to let your monolith do more than one thing, and it’s certainly OK to extract the one functionality that several applications need into a micro-service. We did this with autograph, because it was obvious that handling cryptographic operations should be done by a dedicated micro-service, but we don’t do it for everything. My advice is to wait until at least three services want a given thing before turning it into a micro-service. And if the dependency chain becomes too large, consider going back to a well-managed monolith, because in many cases, it actually is the simpler approach.

Hacks.Mozilla.Org: Using WebThings Gateway notifications as a warning system for your home

Thursday 15th of August 2019 02:49:58 PM

Ever wonder if that leaky pipe you fixed is holding up? With a trip to the hardware store and a Mozilla WebThings Gateway you can set up a cheap leak sensor to keep an eye on the situation, whether you’re home or away. Although you can look up detector status easily on the web-based dashboard, it would be better to not need to pay attention unless a leak actually occurs. In the WebThings Gateway 0.9 release, a number of different notification mechanisms can be set up, including emails, apps, and text messages.

Leak Sensor Demo https://hacks.mozilla.org/files/2019/08/Leak-Sensor-Demo.mp4

         

In this post I’ll show you how to set up gateway notifications to warn you of changes in your home that you care about. You can set each notification to one of three levels of severity–low, normal, and high–so that you can identify which are informational changes and which alerts should be addressed immediately (fire! intruder! leak!). First, we’ll choose a device to worry about. Next, we’ll decide how we want our gateway to contact us. Finally, we’ll set up a rule to tell the gateway when it should contact us.

Choosing a device

First, make sure the device you want to monitor is connected to your gateway. If you haven’t added the device yet, visit the Gateway User Guide for information about getting started.

Now it’s time to figure out which things’ properties will lead to interesting notifications. For each thing you want to investigate, click on its splat icon to get a full view of all its properties.

You may also want to log properties of various analog devices over time to see what values are “normal”. For example, you can monitor the refrigerator temperature for a couple of days to help determine what qualifies as an abnormal temperature. In this graph, you can see the difference between baseline power draw (around 20 watts) and charging (up to 90 watts).

Charger Power Consumption Graph

In my case, I’ve selected a leak sensor so I won’t need to log data in advance. It’s pretty clear that I want to be notified when the leak property of my sensor becomes true (i.e., when a leak is detected). If instead you want to monitor a smart plug, you can look at voltage, power, or on/off state. Note that the notification rules you create will let you combine multiple inputs using “and” or “or” logic. For example, you might want to be alerted if indoor motion is detected “and” all of the family smartphone “presence” states are “inactive” (i.e., no one in your family is home, so what caused motion?). Whatever your choice, keep the logical states of your various sensors in mind while you set up your notifier.

Setting up your notifier

The 0.9 WebThings Gateway release added support for notifiers as a specific form of add-on. Thanks to the efforts of the community and a bit of our own work, your gateway can already send you notifications over email, SMS, Telegram, or specialized push notification apps with new add-ons released every week. You can find several notification add-on options by clicking “+” on the Settings > Add-ons page.

The easiest-to-use notifiers are email and SMS since there are fewer moving parts, but feel free to choose whichever approach you prefer. Follow the configuration instructions in your chosen notifier’s README file. You can get to the README for your notifier by clicking on the author’s name in the add-on list then scrolling down.

You’ll find a complete guide to the email notifier here: https://github.com/mozilla-iot/email-sender-adapter#email-sender-adapter.

Creating a rule

Finally, let’s teach our gateway how and when it should yell for attention. We can set this up in a simple drag-and-drop rule. First, drag your device to the left as a trigger and select the “Leak” property.

Next, drag your notification channel to the right as an effect and configure its title, body, and level as desired.

Your rule is now set up and ready to go!

The finished rule!

You can now manually test it out. For a leak sensor you can just spill a little water on it to make sure you get a text, email, or other notification warning you about a possible scary flood. This is also a perfect time to start experimenting. Can you set up a second, louder notification for when you’re asleep? What about only notifying when you’re at home so you can deal with the leak immediately?

A more advanced rule

Notifications are just one small piece of the WebThings Gateway ecosystem. We’re trying to build a future where the convenience of a connected life doesn’t require giving up your security and privacy. If you have ideas about how the WebThings Gateway can better orchestrate your home, please comment on Discourse or contribute on GitHub. If your preferred notification channel is missing and you can code, we love community add-ons! Check out the source code of the email add-on for inspiration. Coming up next, we’ll be talking about how you can have a natural spoken dialogue with the WebThings Gateway without sending your voice data to the cloud.

The post Using WebThings Gateway notifications as a warning system for your home appeared first on Mozilla Hacks - the Web developer blog.

The Rust Programming Language Blog: Announcing Rust 1.37.0

Thursday 15th of August 2019 12:00:00 AM

The Rust team is happy to announce a new version of Rust, 1.37.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.37.0 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.37.0 on GitHub.

What's in 1.37.0 stable

The highlights of Rust 1.37.0 include referring to enum variants through type aliases, built-in cargo vendor, unnamed const items, profile-guided optimization, a default-run key in Cargo, and #[repr(align(N))] on enums. Read on for a few highlights, or see the detailed release notes for additional information.

Referring to enum variants through type aliases

With Rust 1.37.0, you can now refer to enum variants through type aliases. For example:

type ByteOption = Option<u8>;

fn increment_or_zero(x: ByteOption) -> u8 {
    match x {
        ByteOption::Some(y) => y + 1,
        ByteOption::None => 0,
    }
}

In implementations, Self acts like a type alias. So in Rust 1.37.0, you can also refer to enum variants with Self::Variant:

impl Coin {
    fn value_in_cents(&self) -> u8 {
        match self {
            Self::Penny => 1,
            Self::Nickel => 5,
            Self::Dime => 10,
            Self::Quarter => 25,
        }
    }
}

To be more exact, Rust now allows you to refer to enum variants through "type-relative resolution", <MyType<..>>::Variant. More details are available in the stabilization report.

Built-in Cargo support for vendored dependencies

After being available as a separate crate for years, the cargo vendor command is now integrated directly into Cargo. The command fetches all your project's dependencies, unpacking them into the vendor/ directory, and shows the configuration snippet required to use the vendored code during builds.

There are multiple cases where cargo vendor is already used in production: the Rust compiler rustc uses it to ship all its dependencies in release tarballs, and projects with monorepos use it to commit the dependencies' code in source control.

Using unnamed const items for macros

You can now create unnamed const items. Instead of giving your constant an explicit name, simply name it _ instead. For example, in the rustc compiler we find:

/// Type size assertion where the first parameter
/// is a type and the second is the expected size.
#[macro_export]
macro_rules! static_assert_size {
    ($ty:ty, $size:expr) => {
        const _: [(); $size] = [(); ::std::mem::size_of::<$ty>()];
        //    ^ Note the underscore here.
    }
}

static_assert_size!(Option<Box<String>>, 8); // 1.
static_assert_size!(usize, 8); // 2.

Notice the second static_assert_size!(..): thanks to the use of unnamed constants, you can define new items without naming conflicts. Previously you would have needed to write static_assert_size!(MY_DUMMY_IDENTIFIER, usize, 8);. Instead, with Rust 1.37.0, it now becomes easier to create ergonomic and reusable declarative and procedural macros for static analysis purposes.

Profile-guided optimization

The rustc compiler now comes with support for Profile-Guided Optimization (PGO) via the -C profile-generate and -C profile-use flags.

Profile-Guided Optimization allows the compiler to optimize code based on feedback from real workloads. It works by compiling the program to optimize in two steps:

  1. First, the program is built with instrumentation inserted by the compiler. This is done by passing the -C profile-generate flag to rustc. The instrumented program then needs to be run on sample data and will write the profiling data to a file.
  2. Then, the program is built again, this time feeding the collected profiling data back into rustc by using the -C profile-use flag. This build will make use of the collected data to allow the compiler to make better decisions about code placement, inlining, and other optimizations.

For more in-depth information on Profile-Guided Optimization, please refer to the corresponding chapter in the rustc book.

Choosing a default binary in Cargo projects

cargo run is great for quickly testing CLI applications. When multiple binaries are present in the same package, you have to explicitly declare the name of the binary you want to run with the --bin flag. This makes cargo run not as ergonomic as we'd like, especially when a binary is called more often than the others.

Rust 1.37.0 addresses the issue by adding default-run, a new key in Cargo.toml. When the key is declared in the [package] section, cargo run will default to the chosen binary if the --bin flag is not passed.

#[repr(align(N))] on enums

The #[repr(align(N))] attribute can be used to raise the alignment of a type definition. Previously, the attribute was only allowed on structs and unions. With Rust 1.37.0, the attribute can now also be used on enum definitions. For example, the following type Align16 would, as expected, report 16 as the alignment whereas the natural alignment without #[repr(align(16))] would be 4:

#[repr(align(16))]
enum Align16 {
    Foo { foo: u32 },
    Bar { bar: u32 },
}

The semantics of using #[repr(align(N))] on an enum are the same as defining a wrapper struct AlignN<T> with that alignment and then using AlignN<MyEnum>:

#[repr(align(N))]
struct AlignN<T>(T);

Library changes

In Rust 1.37.0 there have been a number of standard library stabilizations:

Other changes

There are other changes in the Rust 1.37 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.37.0

Many people came together to create Rust 1.37.0. We couldn't have done it without all of you. Thanks!

New sponsors of Rust infrastructure

We'd like to thank two new sponsors of Rust's infrastructure who provided the resources needed to make Rust 1.37.0 happen: Amazon Web Services (AWS) and Microsoft Azure.

  • AWS has provided hosting for release artifacts (compilers, libraries, tools, and source code), serving those artifacts to users through CloudFront, preventing regressions with Crater on EC2, and managing other Rust-related infrastructure hosted on AWS.
  • Microsoft Azure has sponsored builders for Rust’s CI infrastructure, notably the extremely resource intensive rust-lang/rust repository.

Mozilla Localization (L10N): L10n Report: August Edition

Wednesday 14th of August 2019 06:00:07 PM

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers:

  • Mohsin of Assamese (as) is committed to rebuilding the community and has been contributing to several projects.
  • Emil of Syriac (syc) joined us through the Common Voice project.
  • Ratko and Isidora of Serbian (sr) have been prolific contributors to a wide range of products and projects since joining the community.
  • Haile of Amharic (am) joined us through the Common Voice project, and is busy localizing and recruiting more contributors so he can rebuild the community.
  • Ahsun Mahmud of Bengali (bn) focuses his interest on Firefox.

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added
  • Maltese (mt)
  • Romansh Vallader (rm-vallery)
  • Syriac (syc)

New content and projects

What’s new or coming up in Firefox desktop

We’re quickly approaching the deadline for Firefox 69. The last day to ship your changes in this version is August 20, less than a week away.

A lot of content targeting Firefox 70 has already landed and is available in Pontoon for translation, with more to come in the following days. Here are a few of the areas you should focus your testing on.

about:logins

This is the new password manager for Firefox. If you don’t plan to store the passwords in your browser, you should at least create a new profile to test the feature and its interactions (adding logins, editing, removing, etc.).

Enhanced Tracking Protection (ETP) and Protection Panels

This is going to be the main focus for Firefox 70:

  • New protection panel displayed when clicking the shield icon in the address bar.
  • Updated preferences.
  • New about:protections page. The content of this page will be exposed for localization in the coming days.

With ETP there will be several new terms to define for your language, like “Cross-Site Tracking Cookies” or “Social Media Trackers”. Make sure they’re translated consistently across the products and websites.

The deadline to ship localization for Firefox 70 will be October 8.

What’s new or coming up in mobile

It’s summer vacation time in mobile land, which means most projects are following the usual course of things.

Just like for Desktop, we’re quickly approaching the deadline for Firefox Android v69. The last day to ship your changes in this version is August 20.

Another thing to note is that we’ve exposed strings for Firefox iOS v19 (deadline TBD soon).

Other projects are following the usual continuous localization workflow. Stay tuned for the next report as there will be novelties then for sure!

What’s new or coming up in web projects

Firefox Accounts

A lot of strings landed earlier this month. If you need to prioritize what to localize first, look for string IDs containing `delete_account` or `sync-engines`. Expect more strings to land in the coming weeks.

Mozilla.org

The following files were added or updated since the last report.

  • New: firefox/adblocker.lang and firefox/whatsnew_69.lang (due on 26 August)
  • Update: firefox/new/trailhead.lang

The navigation.lang file has been made available for localization for some time. This is a shared file, and its content is on production whether the file is fully localized or not. If it is not fully translated yet, make sure to give this file higher priority so it can be completed soon.

What’s new or coming up in Foundation projects

More content from foundation.mozilla.org will be exposed to localization in de, es, fr, pl, pt-BR over the next few weeks! Content is exposed in stages, because the website is built using different technologies, which makes it challenging for localization. The main pages will be available in the Engagement project, and a new tag will help you find them. Other template strings will be exposed in a new project later.

donate.mozilla.org is getting an update too! The website is being rebuilt from the ground up with a new system that will make it easier to maintain. The UI won’t change much, so the copy will mostly remain the same. However, it won’t be possible to migrate the current translations to the new system; instead, we will rely heavily on Pontoon’s translation memory.
Once the new website is ready, the current project in Pontoon will be set to “read only” mode during a transition period and a new project will be enabled.

Please make sure to review any pending suggestions over the next few weeks, so that they get properly added to the translation memory and are ready to be reused in the new project.

What’s new or coming up in SuMo

Newly published articles:

What’s new or coming up in Pontoon

The Translate.Next work moves on. We hope to have it wrapped up by the end of this quarter (i.e., end of September). Help us test by turning on Translate.Next from the Pontoon translation editor.

Newly published localizer facing documentation

Events
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)
Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers, and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla VR Blog: WebXR category in JS13KGames!

Tuesday 13th of August 2019 05:34:30 PM

Today starts the 8th edition of the annual js13kGames competition and we are sponsoring its WebXR category with a bunch of prizes including Oculus Quest headsets!

Like many other game development contests, the main goal of the js13kGames competition is to make a game based on a given theme within a specific amount of time. This year’s theme is "BACK" and the time you have to work on your game is a whole month, from today to September 13th.
There is, of course, another important rule you must follow: the zip containing your game should not weigh more than 13kb. (Please follow this link for the complete set of rules). Don’t let the size restriction discourage you. Previous competitors have done amazing things in 13kb.

This year, as in the previous editions, Mozilla is sponsoring the competition, with special emphasis on the WebXR category, where, among other prizes, the best three games will get an Oculus Quest headset!

Frameworks allowed

Last year you were allowed to use A-Frame and Babylon.js in your game. This year we have been working with the organization to include three.js on that list!
Because these frameworks weigh far more than 13kb, the requirements for this category have been softened. The size of the framework builds won’t count as part of the final 13kb limit. The allowed links for each framework to include in your game are the following:


If you feel you can present a WebXR game without using any third-party framework and still keep the 13kb limit for the whole game, you are free to do so and I’m sure the judges will value that fact.

You may use any kind of input system (gamepad, gazer, 3DoF or 6DoF controllers), and we will still be able to test your game on different VR devices. Please indicate in the description what the device/input requirements are for your game.
If you have a standalone headset, please make sure you try your game on Firefox Reality because we plan to feature the best games of the competition on the Firefox Reality homepage.

Resources

Here are some useful links if you need some help or want to share your progress!

Enjoy and good luck!

This Week In Rust: This Week in Rust 299

Tuesday 13th of August 2019 04:00:00 AM

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is topgrade, a command-line program to upgrade all the things.

Thanks to Dror Levin for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

270 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in final comment period.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific, Europe, North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

For me, acquiring a taste for rustfmt-style seems worthwhile to 'eliminate broad classes of debate', even if I didn't like some of the style when I first looked. I've resisted the temptation to even read about how to customise.

Years ago, I was that person writing style guides etc. I now prefer this problem to be automated-away; freeing up time for malloc-memcpy-golf (most popular sport in the Rust community).

@dholroyd on rust-users

Thanks to troiganto for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Mozilla VR Blog: Custom elements for the immersive web

Monday 12th of August 2019 07:28:22 PM

We are happy to introduce the first set of custom elements for the immersive web we have been working on: <img-360> and <video-360>

On the Mixed Reality team, we keep working on improving the content creator experience: building new frameworks, tools and APIs, tuning performance, and so on.
Most of these projects are based on the assumption that the users have a basic knowledge of 3D graphics and want to go deep on fully customizing their WebXR experience (e.g. using A-Frame or three.js).
But there are still a lot of use cases where content creators just want very simple interactions and don’t have the knowledge or time to create and maintain a custom application built on top of a WebXR framework.

With this project we aim to address the problems these content creators have by providing custom elements with simple, yet polished features. One could be just a simple 360 image or video viewer, another one could be a tour allowing the user to jump from one image to another.

Custom elements provide a standard way to create HTML elements that offer simple functionality matching the expectations of content creators without knowledge of 3D, WebXR or even Javascript.

How does this work?

Just include the Javascript bundle on your page and you can start using both elements in your HTML: <img-360> and <video-360>. You just need to provide them with a 360 image or video and the custom elements will do the rest, including detecting WebVR support. Here is a simple example that adds a 360 image and video to a page; all of the interaction controls are generated automatically:
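The original example isn’t reproduced here, but a rough sketch of the idea follows; the attribute name and file paths are assumptions for illustration, so check the GitHub documentation for the actual API:

// Hypothetical usage sketch: create the custom elements from script and
// point them at 360 media. Attribute names are assumed; see the README.
const panorama = document.createElement("img-360");
panorama.setAttribute("src", "media/my-360-photo.jpg");

const tour = document.createElement("video-360");
tour.setAttribute("src", "media/my-360-video.mp4");

document.body.append(panorama, tour); // the elements render their own controls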

You can try a demo here and find detailed information on how to use them on Github.

Next steps

Today we are releasing just these two elements but we have many others in mind and would love your feedback. What new elements would you find useful? Please join us on GitHub to discuss them.
We are also excited to see other companies working hard on providing quality custom elements for the 3D and XR web, such as Google with their <model-viewer> component, and we hope others will follow.

Mozilla Reps Community: Reps OKRs for second half of 2019

Monday 12th of August 2019 04:29:54 PM

Here is the list of the OKRs (Objectives and Key Results) that the Reps Council has set for the second half of 2019.

Objective 1: By the end of 2019, Reps are feeling informed and are more confident to contribute to Mozilla initiatives

  • KR1: More activities related to MDM campaigns are reported on the Reps portal (30% more reporting)
  • KR2: 10% of inactive Reps are getting reactivated via the campaigns
  • KR3: 3 communities that haven’t participated in campaigns before are now joining campaigns regularly
  • KR4: Reps report feeling more involved in the program (a 20% increase)
  • KR5: More than 80% of the Reps are reporting that they know what MDM is about
  • KR6: More than 70% of Reps are voting in the autumn elections
  • KR7: More than 50% of Reps are sharing feedback on surveys about the program

 

Objective 2: By the end of 2019, Reps have skills that allow them to be local leaders

  • KR1: Due to the skills that the Reps have obtained, they now contribute to a 20% increase in campaign contributions
  • KR2: 80% of mentors are reporting that they are ready to lead their mentees thanks to the new mentor training they received (⅘ satisfaction rate)
  • KR3: 90% of the newly onboarded Reps are reporting that they are ready to become local leaders in their community thanks to their onboarding training

 

Objective 3: By the end of 2019, MDMs recognize Reps as local community builders / helpers

 

  • KR1: 10% more bugs reported for budget/swag (filed on behalf of the community)
  • KR2: [on hold] When the MDM portal is ready, 80% of the leaders of the communities will join Reps

Let us know what you think by leaving feedback on the comments.

Wladimir Palant: Recognizing basic security flaws in local password managers

Monday 12th of August 2019 07:12:37 AM

If you want to use a password manager (as you probably should), there are literally hundreds of them to choose from. And there are lots of reviews, weighing in features, usability and all other relevant factors to help you make an informed decision. Actually, almost all of them, with one factor suspiciously absent: security. How do you know whether you can trust the application with data as sensitive as your passwords?

Unfortunately, it’s really hard to see security or lack thereof. In fact, even tech publications struggle with this. They will talk about two-factor authentication support, even when discussing a local password manager where it is of very limited use. Or worse yet, they will fire up a debugger to check whether they can see any passwords in memory, completely disregarding the fact that somebody with debug rights can also install a simple key logger (meaning: game over for any password manager).

Judging security of a password manager is a very complex task, something that only experts in the field are capable of. The trouble: these experts usually work for competing products and badmouthing competition would make a bad impression. Luckily, this still leaves me. Actually, I’m not quite an expert, I merely know more than most. And I also work on competition, a password manager called PfP: Pain-free Passwords which I develop as a hobby. But today we’ll just ignore this.

So I want to go with you through some basic flaws which you might encounter in a local password manager. That’s a password manager where all data is stored on your computer rather than being uploaded to some server, a rather convenient feature if you want to take a quick look. Some technical understanding is required, but hopefully you will be able to apply the tricks shown here, particularly if you plan to write about a password manager.

Our guinea pig is a password manager called Password Depot, produced by the German company AceBit GmbH. What’s so special about Password Depot? Absolutely nothing, except for the fact that one of their users asked me for a favor. So I spent 30 minutes looking into it and noticed that they’ve done pretty much everything wrong that they could.

Note: The flaws discussed here have been reported to the company in February this year. The company assured that they take these very seriously but, to my knowledge, didn’t manage to address any of them so far.


Understanding data encryption

First let’s have a look at the data. Luckily for us, with a local password manager it shouldn’t be hard to find. Password Depot stores its data in self-contained database files with the file extension .pswd or .pswe, the latter being merely a ZIP-compressed version of the former. XML format is being used here, meaning that the contents are easily readable:

The good news: the <encrypted> flag here clearly indicates that the data is encrypted, as it should be. The bad news: this flag shouldn’t be necessary, as “safely encrypted” should be the only supported mode for a password manager. As long as some form of unencrypted database format is supported, there is a chance that an unwitting user will use it without knowing. Even a downgrade attack might be possible, an attacker replacing the passwords database by an unencrypted one when it’s still empty, thus making sure that any passwords added to the database later won’t be protected. I’m merely theorizing here, I don’t know whether Password Depot would ever write unencrypted data.

The actual data is more interesting. It’s a base64-encoded blob, when decoded it appears to be unstructured binary data. Size of the data is always a multiple of 16 bytes however. This matches the claim on the website that AES 256 is used for encryption, AES block size being 16 bytes (128 bits).

AES is considered secure, so all is good? Not quite, as there are various block cipher modes which could be used and not all of them are equally good. Which one is it here? I got a hint by saving the database as an outdated “mobile password database” file with the .pswx file extension:

Unlike with the newer format, here various fields are encrypted separately. What sticks out are two pairs of identical values. That’s something that should never happen, identical ciphertexts are always an indicator that something went terribly wrong. In addition, the shorter pair contains merely 16 bytes of data. This means that only a single AES block is stored here (the minimal possible amount of data), no initialization vector or such. And there is only one block cipher mode which won’t use initialization vectors, namely ECB. Every article on ECB says: “Old and busted, do not use!” We’ll later see proof that ECB is used by the newer file format as well.
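To illustrate why identical ciphertexts give the game away, here is a small Node.js sketch (an illustration only, unrelated to Password Depot’s actual code) showing that AES-256 in ECB mode turns identical plaintext blocks into identical ciphertext blocks:

// AES-256-ECB maps identical 16-byte plaintext blocks to identical
// ciphertext blocks, leaking the structure of the data.
const crypto = require("node:crypto");

const key = crypto.randomBytes(32);              // 256-bit key
const block = Buffer.from("sixteen byte blk");   // exactly 16 bytes
const plaintext = Buffer.concat([block, block]); // two identical blocks

const cipher = crypto.createCipheriv("aes-256-ecb", key, null);
cipher.setAutoPadding(false);
const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);

// Both ciphertext blocks are identical: the tell-tale sign of ECB.
console.log(ciphertext.subarray(0, 16).equals(ciphertext.subarray(16, 32))); // true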

Note: If initialization vectors were used, there would be another important thing to consider. Initialization vectors should never be reused, depending on the block cipher mode the results would be more or less disastrous. So something to check out would be: if I undo some changes by restoring a database from backup and make changes to it again, will the application choose the same initialization vector? This could be the case if the application went with a simple incremental counter for initialization vectors, indicating a broken encryption scheme.

Data authentication

It’s common consensus today that data shouldn’t merely be encrypted, it should be authenticated as well. It means that the application should be able to recognize encrypted data which has been tampered with and reject it. Lack of data authentication will make the application try to process manipulated data and might, for example, allow conclusions about the plaintext from its reaction. Given that there are multiple ideas of how to achieve authentication, it’s not surprising that developers often mess up here. That’s why modern block cipher modes such as GCM integrated this part into the regular encryption flow.

Note that even without data authentication you might see an application reject manipulated data. That’s because the last block is usually padded before encryption. After decryption the padding will be verified, if it is invalid the data is rejected. Padding doesn’t offer real protection however, in particular it won’t flag manipulation of any block but the last one.

So how can we see whether Password Depot uses authenticated encryption? By changing a byte in the middle of the ciphertext of course! Since with ECB every 16 byte block is encrypted separately, changing a block in the middle won’t affect the last block where the padding is. When I try that with Password Depot, the file opens just fine and all the data is seemingly unaffected:

In addition to proving that no data authentication is implemented, that’s also a clear confirmation that ECB is being used. With ECB only one block is affected by the change, and it was probably some unimportant field – that’s why you cannot see any data corruption here. In fact, even changing the last byte doesn’t make the application reject the data, meaning that there are no padding checks either.
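The lack of authentication is easy to demonstrate in isolation as well. In the following sketch (again just an illustration of the concept, not Password Depot’s code), flipping a byte in one ciphertext block corrupts only that block, and decryption completes without any error:

// Without a MAC, tampering with an ECB ciphertext goes unnoticed:
// only the modified block decrypts to garbage, everything else is intact.
const crypto = require("node:crypto");
const key = crypto.randomBytes(32);

const encrypt = (buf) => {
  const c = crypto.createCipheriv("aes-256-ecb", key, null);
  return Buffer.concat([c.update(buf), c.final()]);
};
const decrypt = (buf) => {
  const d = crypto.createDecipheriv("aes-256-ecb", key, null);
  return Buffer.concat([d.update(buf), d.final()]);
};

const ciphertext = encrypt(Buffer.from("a secret entry..and another one."));
ciphertext[3] ^= 0xff;                       // flip one byte in the first block
console.log(decrypt(ciphertext).toString()); // first block garbled, no error raised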

What about the encryption key?

As with so many products, the website of Password Depot stresses the fact that a 256 bit encryption key is used. That sounds pretty secure but leaves out one detail: where does this encryption key come from? While the application can accept an external encryption key file, it will normally take nothing but your master password to decrypt the database. So it can be assumed that the encryption key is usually derived from your master password. And your master password is most definitely not 256 bit strong.

Now a weaker master password isn’t a big deal as long as the application came up with reasonable bruteforce protection. This way anybody trying to guess your password will be slowed down, and this kind of attack would take too much time. Password Depot developers indeed thought of something:

Wait, no… This is not reasonable bruteforce protection. It would make sense with a web service or some other system that the attackers don’t control. Here however, they could replace Password Depot by a build where this delay has been patched out. Or they could remove Password Depot from the equation completely and just let their password guessing tools run directly against the database file, which would be far more efficient anyway.

The proper way of doing this is using an algorithm to derive the password which is intentionally slow. The baseline for such algorithms is PBKDF2, with scrypt and Argon2 having the additional advantage of being memory-hard. Did Password Depot use any of these algorithms? I consider that highly unlikely, even though I don’t have any hard proof. See, Password Depot has a know-how article on bruteforce attacks on their website. Under “protection” this article mentions complex passwords as the solution. And then:

Another way to make brute-force attacks more difficult is to lengthen the time between two login attempts (after entering a password incorrectly).

So the bullshit protection outlined above is apparently considered “state of the art,” with the developers completely unaware of better approaches. This is additionally confirmed by the statement that attackers should be able to generate 2 billion keys per second, not something that would be possible with a good key derivation algorithm.

There is still one key derivation aspect here which we can see directly: key derivation should always depend on an individual salt, ideally a random value. This helps slow down attackers who manage to get their hands on many different password databases, the work performed bruteforcing one database won’t be reusable for the others. So, if Password Depot uses a salt to derive the encryption key, where is it stored? It cannot be stored anywhere outside the database because the database can be moved to another computer and will still work. And if you look at the database above, there isn’t a whole lot of fields which could be used as salt.

In fact, there is exactly one such field: <fingerprint>. It appears to be a random value which is unique for each database. Could it be the salt used here? Easy to test: let’s change it! Changing the value in the <fingerprint> field, my database still opens just fine. So: no salt. Bad database, bad…
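For contrast, here is roughly what a sound key derivation scheme could look like (a sketch of the general approach, not something Password Depot does): a random salt stored alongside the database and a deliberately slow function such as PBKDF2 turning the master password into the 256-bit key.

// Password-based key derivation done properly (illustration only):
// a unique random salt plus a slow KDF to make brute force expensive.
const crypto = require("node:crypto");

const salt = crypto.randomBytes(16); // stored with the database, unique per database
const iterations = 600000;           // deliberately slow to hinder password guessing
const masterPassword = "correct horse battery staple"; // hypothetical user input

const key = crypto.pbkdf2Sync(masterPassword, salt, iterations, 32, "sha256");
console.log(key.length * 8); // 256-bit encryption key derived from the master password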

Browser integration

If you’ve been reading my blog, you already know that browser integration is a common weak point of password managers. Most of the issues are rather obscure and hard to recognize however. Not so in this case. If you look at the Password Depot options, you will see a panel called “Browser.” This one contains an option called “WebSockets port.”

So when the Password Depot browser extension needs to talk to the Password Depot application, it will connect to this port and use the WebSockets protocol. If you check the TCP ports of the machine, you will indeed see Password Depot listening on port 25109. You can use the netstat command line tool for that or the more convenient CurrPorts utility.

Note how this lists 0.0.0.0 as the address rather than the expected 127.0.0.1. This means that connections aren’t merely allowed from applications running on the same machine (such as your browser) but from anywhere on the internet. This is a completely unnecessary risk, but it is overshadowed by the much bigger issue here.

Here is something you need to know about WebSockets first. Traditionally, when a website needed to access some resource, browsers would enforce the same-origin policy. So access would only be allowed for resources belonging to the same website. Later, browsers had to relax the same-origin policy and implement additional mechanisms in order to allow different websites to interact safely. Features conceived after that, such as WebSockets, weren’t bound by the same-origin policy at all and had more flexible access controls from the start.

The consequence: any website can access any WebSockets server, including local servers running on your machine. It is up to the server to validate the origin of the request and to allow or to deny it. If it doesn’t perform this validation, the browser won’t restrict anything on its own. That’s how Zoom and Logitech ended up with applications that could be manipulated by any website to name only some examples.

So let’s say your server is supposed to communicate with a particular browser extension and wants to check request origin. You will soon notice that there is no proper way of doing this. Not only are browser extension origins browser-dependent, at least in Firefox they are even random and change on every install! That’s why many solutions resort to somehow authenticating the browser extension towards the application with some kind of shared secret. Yet arriving at that shared secret in a way that a website cannot replicate isn’t trivial. That’s why I generally recommend staying away from WebSockets in browser extensions and using native messaging instead, a mechanism meant specifically for browser extensions and with all the security checks already built in.
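For reference, the bare minimum a local WebSockets server should do looks roughly like this (a sketch using the third-party ws package; the allowed origin is a made-up placeholder, and as explained above even this check is fragile compared to native messaging):

// Minimal origin checking for a local WebSockets service (illustration only).
const { WebSocketServer } = require("ws");

// Hypothetical extension origin; real origins differ per browser and,
// in Firefox, per installation, which is exactly the problem described above.
const ALLOWED_ORIGINS = new Set(["chrome-extension://abcdefghijklmnop"]);

const wss = new WebSocketServer({
  host: "127.0.0.1", // listen on localhost only, never on 0.0.0.0
  port: 25109,
  verifyClient: (info) => ALLOWED_ORIGINS.has(info.origin),
});

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // handle messages from the (supposedly) authenticated extension here
  });
});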

But Password Depot, like so many others, chose to go with WebSockets. So how does their extension authenticate itself upon connecting to the server? Here is a shortened code excerpt:

var websocketMgr = {
  _ws: null,
  _connected: false,
  _msgToSend: null,
  initialize: function(msg) {
    if (!this._ws) {
      this._ws = new WebSocket(WS_HOST + ':' + options.socketPortNumber);
    }
    this._msgToSend = msg;
    this._ws.onopen = () => this.onOpen();
  },
  onOpen: function() {
    this._connected = true;
    if (this._msgToSend) {
      this.send(this._msgToSend);
    }
  },
  send: function(message) {
    message.clientVersion = "V12";
    if (this._connected && (this._ws.readyState == this._ws.OPEN)) {
      this._ws.send(JSON.stringify(message));
    } else {
      this.initialize(message);
    }
  }
}

You cannot see any authentication here? Me neither. But maybe there is some authentication info in the actual message? With several layers of indirection in the extension, the message format isn’t really obvious. So to verify the findings there is no way around connecting to the server ourselves and sending a message of our own. Here is what I’ve got:

let ws = new WebSocket("ws://127.0.0.1:25109");
ws.onopen = () => {
  ws.send(JSON.stringify({clientVersion: "V12", cmd: "checkState"}));
};
ws.onmessage = event => {
  console.log(JSON.parse(event.data));
};

When this code is executed on any HTTP website (not HTTPS because an unencrypted WebSockets connection would be disallowed) you get the following response in the console:

Object { cmd: “checkState”, state: “ready”, clientAlive: “1”, dbName: “test.pswd”, dialogTimeout: “10000”, clientVersion: “12.0.3” }

Yes, we are in! And judging by the code, with somewhat more effort we could request the stored passwords for any website. All we have to do for this is to ask nicely.

To add insult to injury, from the extension code it’s obvious that Password Depot can communicate via native messaging, with the insecure WebSockets-based implementation only kept for backwards compatibility. It’s impossible to disable this functionality in the application however, only changing the port number is supported. This is still true six months and four minor releases after I reported this issue.

More oddities

If you look at the data stored by Password Depot in the %APPDATA% directory, you will notice a file named pwdepot.appdata. It contains seemingly random binary data and has a size that is a multiple of 16 bytes. Could it be encrypted? And if it is, what could possibly be the encryption key?

The encryption key cannot be based on the master password set by the user because the password is bound to a database file, yet this file is shared across all of the current user’s databases. The key could be stored somewhere, e.g. in the Windows registry or the application itself. But that would mean that the encryption here is merely obfuscation relying on the attacker being unable to find the key.

As far as I know, the only way this could make sense is by using Windows Data Protection API. It can encrypt data using a user-specific secret and thus protect it against other users when the user is logged off. So I would expect either CryptProtectData or the newer NCryptProtectSecret function to be used here. But looking through the imported functions of the application files in the Password Depot directory, there is no dependency on NCrypt.dll and only unrelated functions imported from Crypt32.dll.

So here we have a guess again, one that I managed to confirm when debugging a related application however: the encryption key is hardcoded in the Password Depot application in a more or less obfuscated way. Security through obscurity at its best.

Summary

Today you’ve hopefully seen that “encrypted” doesn’t automatically mean “secure.” Even if it is “military grade encryption” (common marketing speak for AES), the block cipher mode matters as well, and using ECB is a huge red warning flag. Also, any modern application should authenticate its encrypted data, so that manipulated data results in an error rather than attempts to make sense of it somehow. Finally, an important question to ask is how the application arrives at an encryption key.

In addition to that, browser integration is something where most vendors make mistakes. In particular, a browser extension using WebSockets to communicate with the respective application is very hard to secure, and most vendors fail even when they try. There shouldn’t be open ports expecting connections from browser extensions, native messaging is the far more robust mechanism.

IRL (podcast): The 5G Privilege

Monday 12th of August 2019 07:05:20 AM

‘5G’ is a new buzzword floating around every corner of the internet. But what exactly is this hyped-up cellular network, often referred to as the next technological evolution in mobile internet communications? Will it really be 100 times faster than what we have now? What will it make possible that has never been possible before? Who will reap the benefits? And, who will get left behind?

Mike Thelander at Signals Research Group imagines the wild ways 5G might change our lives in the near future. Rhiannon Williams hits the street and takes a new 5G network out for a test drive. Amy France lives in a very rural part of Kansas — she dreams of the day that true, fast internet could come to her farm (but isn’t holding her breath). Larry Irving explains why technology has never been provided equally to everyone, and why he fears 5G will leave too many people out. Shireen Santosham, though, is doing what she can to leverage 5G deployment in order to bridge the digital divide in her city of San Jose.

IRL is an original podcast from Firefox. For more on the series go to irlpodcast.org

Read more about Rhiannon Williams' 5G tests throughout London.

And, find out more about San Jose's smart city vision that hopes to bridge the digital divide.

Cameron Kaiser: And now for something completely different: Making HTML 4.0 great again, and relevant Mac sightings at Vintage Computer Festival West 2019

Monday 12th of August 2019 02:43:57 AM
UPDATE: Additional pictures are up at Talospace.

Vintage Computer Festival West 2019 has come and gone, and I'll be posting many of the pictures on Talospace hopefully tonight or tomorrow. However, since this blog's audience is both Mozilla-related (as syndicated on Planet Mozilla) and PowerPC-related, I've chosen to talk a little bit about old browsers for old machines (if you use TenFourFox, you're using a relatively recent browser on an old machine), as that was part of my exhibit this year, as well as about some of the Apple-related exhibits that were present.

This exhibit I christened "RISCy Business," a collection of various classic RISC-based portables and laptops. The machines I had running for festival attendees were a Tadpole-RDI UltraBook IIi (UltraSPARC IIi) running Solaris 10, an IBM ThinkPad 860 (166MHz PowerPC 603e, essentially a PowerBook 1400 in a better chassis) running AIX 4.1, an SAIC Galaxy 1100 (HP PA-7100LC) running NeXTSTEP 3.3, and an RDI PrecisionBook C160L (HP PA-7300LC) running HP/UX 11.00. I also brought my Sun Ultra-3 (Tadpole Viper with a 1.2GHz UltraSPARC IIIi), though because of its prodigious heat issues I didn't run it at the show. None of these machines retailed for less than ten grand, if they were sold commercially at all (the Galaxy wasn't).

Here they are, for posterity:

The UltraBook played a Solaris port of Quake II (software-rendered) and Firefox 2, the ThinkPad ran AIX's Ultimedia Video Monitor application (using the machine's built-in video capture hardware and an off-the-shelf composite NTSC camera) and Netscape Navigator 4.7, the Galaxy ran the standard NeXTSTEP suite along with some essential apps like OmniWeb 2.7b3 and Doom, and the PrecisionBook ran the HP/UX ports of the Frodo Commodore 64 emulator and Microsoft Internet Explorer 5.0 SP1. (Yes, IE for Unix used to be a thing.)

Now, of course, period-correct computers demand a period-correct website viewable on the browsers of the day, which is the site being displayed on screen and served to the machines from a "back office" Raspberry Pi 3. However, devising a late 1990s site means a certain, shall we say, specific aesthetic and careful analysis of vital browser capabilities for maximum impact. In these enlightened times no one seems to remember any of this stuff and what HTML 4.01 features worked where, so here is a handy table for your next old workstation browser demonstration (using a <table>, of course):

Browser                              frames   animated GIF   <marquee>   <blink>
Mozilla Suite 1.7                    yes      yes            yes         yes
Firefox 2                            yes      yes            yes         yes
Netscape Navigator 4.7               yes      yes            yes         yes
Internet Explorer for UNIX 5.0 SP1   yes      yes            yes         no
Firefox 52                           yes      yes            yes         no
OmniWeb 2.7b3                        yes      yes            no          no

Basically I ended up looting oocities and my old files for every obnoxious animated GIF and background I could find. This yielded a website that was surely authentic for the era these machines inhabited, and demonstrated exceptionally good taste.

By popular request, the website the machines are displaying is now live on Floodgap (after a couple minor editorial changes). I think the exhibit was pretty well received:

Probably the star of the show and more or less on topic for this blog was the huge group of Apple I machines (many, if not most, still in working order). They were under Plexiglas, and given that there was seven-figures'-worth of fruity artifacts all in one place, a security guard impassively watched the gawkers.

The Apple I owners' club is there to remind you that you, of course, don't own an Apple I.

A working Xerox 8010, better known as the Xerox Star and one of the innovators of the modern GUI paradigm (plus things like, you know, Ethernet), was on display along with an emulator. Steve Jobs saw one at PARC and we all know how that ended.

One of the systems there, part of the multi-platform Quake deathmatch network exhibit, was a Sun Ultra workstation running an honest-to-goodness installation of the Macintosh Application Environment emulation layer. Just for yuks, it was simultaneously running Windows on its SunPCI x86 side-card as well:

The Quake exhibitors also had a Daystar Millenium in a lovely jet-black case, essentially a Daystar Genesis MP+. These were some of the few multiprocessor Power Macs (and clones at that) before Apple's own dual G4 systems emerged. This system ran four 200MHz PowerPC 604e CPUs, though of course only application software designed for multiprocessing could take advantage of them.

A pair of Pippins, Apple's infamous attempt to turn the Power Mac into a home console platform and fresh off being cracked, were present at the exhibit next to the Quake guys':

A couple of Apple Newtons (an eMate and several MessagePads) also showed up so you could find out if the handwriting recognition was as bad as they said it was.

There were also a couple Apple II systems hanging around (part of a larger exhibit on 6502-based home computers, hence the Atari 130XE next to it).

I'll be putting up the rest of the photos on Talospace, including a couple other notable historical artifacts and the IBM 604e systems the Quake exhibit had brought along, but as always it was a great time and my exhibit was not judged to be a fire hazard. You should go next year.

The moral of this story is the next time you need to make a 1990s web page that you can actually view on a 1990s browser, not that phony CSS and JavaScript crap facsimile they made up for Captain Marvel, now you know what will actually show a blinking scrolling marquee in a frame when you ask for one. Maybe I should stick an <isindex>-powered guestbook in there too.

(For some additional pictures, see our entry at Talospace.)

Mozilla VR Blog: A Summer with Particles and Emojis

Friday 9th of August 2019 04:00:00 PM

This summer I was very lucky to join the Hubs by Mozilla team as a technical artist intern. Over the 12 weeks that I was at Mozilla, I worked on two different projects.
My first project was about particle systems, something I have always had a great interest in. I developed the particle system feature for Spoke, the 3D editor with which you can easily create a 3D scene and publish it to Hubs.

Particle systems are a technique used in a wide range of game physics, motion graphics and computer graphics related fields. They are usually composed of a large number of small sprites or other objects used to simulate chaotic systems or natural phenomena. Particles can make a huge impact on the visual result of an application, and in virtual and augmented reality they can greatly deepen the feeling of immersion.

Particle systems can be incredibly complex, so for this version we wanted to avoid the heavy behaviour controls found in particle systems in native game engines and keep only the basic attributes that are needed. The Spoke particle system can be separated into two parts: the particles and the emitter. Each particle has a texture/sprite, lifetime, age, size, color, and velocity as its basic attributes. The emitter is simpler, as it only has properties for its width and height and information about the particle count (how many particles it can emit per life cycle).
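As a rough illustration of that data model (the real implementation lives in Spoke's JavaScript code, so this Python sketch and its field names are just assumptions for explanation):

# Illustrative sketch of the attributes described above; not the Spoke source.
from dataclasses import dataclass, field

@dataclass
class Particle:
    age: float = 0.0                    # starts negative, see the life cycle below
    lifetime: float = 1.0               # actual lifetime of this particle
    size: float = 1.0
    color: tuple = (1.0, 1.0, 1.0)
    opacity: float = 1.0
    velocity: tuple = (0.0, 0.0, 0.0)   # per-axis velocity

@dataclass
class Emitter:
    width: float = 1.0
    height: float = 1.0
    particle_count: int = 100           # particles emitted per life cycle
    sprite_url: str = "snowflake.png"   # texture/sprite, e.g. a URL or local asset
    particles: list = field(default_factory=list)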

By changing the particle count and the emitter size, users can easily customize a particle system for different uses, like to create falling snow in a wintry scene or add a small water splash to a fountain.

Changing the emitter size


Changing the number of particles from 100 to 200

You can also change the opacities and the colors of the particles. The actual color and opacity values are interpolated between start, middle and end colors/opacities.

And for the main visuals, you can change the sprite to the image you want by using a URL to an image or by choosing from your local assets.

What does a particle’s life cycle look like? Let’s take a look at this chart:

Every particle is born with a random negative initial age, which can be adjusted through the Age Randomness property. After it's born, its age keeps growing as time goes by. When its age is bigger than the total lifetime (formed by Lifetime and Lifetime Randomness), the particle dies immediately, is re-assigned a negative initial age, and then starts over again. The Lifetime here is not the actual lifetime that every particle will live; in order not to have all particles disappear at the same time, the Lifetime Randomness attribute varies the actual lifetime of each particle. The higher the Lifetime Randomness, the larger the variation among the actual lifetimes across the whole particle system. There is another attribute called Age Randomness, which is similar to Lifetime Randomness. The difference is that Age Randomness varies the negative initial ages to create variation at the birth of the particles, while Lifetime Randomness creates variation at the end of their lives.
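The same life cycle can be written down as a few lines of Python (again just a sketch of the behaviour described here, not the actual implementation):

# Sketch of the particle life cycle described above.
import random

LIFETIME = 2.0             # the base Lifetime property
LIFETIME_RANDOMNESS = 0.5  # spread of the actual lifetimes
AGE_RANDOMNESS = 1.0       # spread of the negative initial ages

def initial_age():
    # Born with a random negative age so particles do not all appear at once.
    return -random.uniform(0.0, AGE_RANDOMNESS)

def total_lifetime():
    # Actual lifetime varies per particle so they do not all die at once.
    return LIFETIME + random.uniform(0.0, LIFETIME_RANDOMNESS)

age, lifetime = initial_age(), total_lifetime()
for _ in range(600):                # simulate 600 frames
    age += 1 / 60                   # the age keeps growing as time goes by
    if age > lifetime:              # the particle dies...
        age, lifetime = initial_age(), total_lifetime()   # ...and starts over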

Every particle also has velocity properties along the x, y and z axes. By adjusting the velocity in three dimensions, users get better control over the particles' behaviour, for example to simulate simple phenomena such as gravity or wind.

With angular velocity, you can also control the rotation of the particles to achieve a more natural and dynamic result.

The velocity, color and size properties all have the option to use different interpolation functions between their start, middle and end stages.
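A minimal sketch of that three-point interpolation (linear easing assumed; Spoke lets you pick other interpolation functions) could look like this:

# Sketch: interpolate a property between its start, middle and end values over
# the particle's normalised life t in [0, 1]. Linear easing is assumed here.
def lerp(a, b, t):
    return a + (b - a) * t

def sample(start, middle, end, t):
    if t < 0.5:
        return lerp(start, middle, t / 0.5)    # first half of the life
    return lerp(middle, end, (t - 0.5) / 0.5)  # second half of the life

# For example, an opacity that fades in and then back out:
assert sample(0.0, 1.0, 0.0, 0.0) == 0.0
assert sample(0.0, 1.0, 0.0, 0.5) == 1.0
assert sample(0.0, 1.0, 0.0, 0.75) == 0.5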


The particle system is officially out on Spoke, so go try it out and let us know what you think!

Avatar Display Emojis

My other project was the avatar emoji display screen in Hubs. I did the design of the emoji images, the UI/UX design, and the actual implementation of this feature. It was a fairly straightforward project: I needed to figure out the style of the emoji display on the chest screen, do some graphic design at the interface level, make decisions on the interaction flow, and implement it in Hubs.


Evolution of the display emoji design.

We ultimately decided to have the smooth edge emoji with some bloom effect.

Final version of the display emoji design


Icon design for the menu user interface


Interaction design using Hubs display styles

Demo:

When you enter pause mode on Hubs, the emoji box will show up, replacing the chat box, and you can change your avatar’s screen to one of the emojis offered.

I want to say thank you to Hubs for having me this summer. I learned a lot from all the talented people in Hubs, especially Robert, Jim, Brian and Greg who helped me a lot to overcome the difficulties I came across. The encouragement and support from the team is the best thing I got this summer. Miss you guys already!

Chris H-C: My StarCon 2019 Talk: Collecting Data Responsibly and at Scale

Thursday 8th of August 2019 12:51:11 PM

 

Back in January I was privileged to speak at StarCon 2019 at the University of Waterloo about responsible data collection. It was a bitterly-cold weekend with beautiful sun dogs ringing the morning sun. I spent it inside talking about good ways to collect data and how Mozilla serves as a concrete example. It’s 15 minutes short and aimed at a general audience. I hope you like it.

I encourage you to also sample some of the other talks. Two I remember fondly are Aaron Levin’s “Conjure ye File System, transmorgifier” about video games that look like file systems and Cory Dominguez’s lovely analysis of Moby Dick editions in “or, the whale“. Since I missed a whole day, I now get to look forward to fondly discovering new ones from the full list.

:chutten

Mike Hoye: Ten More Simple Rules

Wednesday 7th of August 2019 11:53:33 PM

The Public Library of Science‘s Ten Simple Rules series can be fun reading; they’re introductory papers intended to provide novices or non-domain-experts with a set of quick, evidence-based guidelines for dealing with common problems in and around various fields, and it’s become a pretty popular, accessible format as far as scientific publication goes.

Topic-wise, they’re all over the place: protecting research integrity, creating a data-management plan and taking advantage of Github are right there next to developing good reading habits, organizing an unconference or drawing a scientific comic, and lots of them are kind of great.

I recently had the good fortune to be co-author on one of them that’s right in my wheelhouse and has recently been accepted for publication: Ten Simple Rules for Helping Newcomers Become Contributors to Open Projects. They are, as promised, simple:

  1. Be welcoming.
  2. Help potential contributors evaluate if the project is a good fit.
  3. Make governance explicit.
  4. Keep knowledge up to date and findable.
  5. Have and enforce a code of conduct.
  6. Develop forms of legitimate peripheral participation.
  7. Make it easy for newcomers to get started.
  8. Use opportunities for in-person interaction – with care.
  9. Acknowledge all contributions, and
  10. Follow up on both success and failure.

You should read the whole thing, of course; what we’re proposing are evidence-based practices, and the details matter, but the citations are all there. It’s been a privilege to have been a small part of it, and to have done the work that’s put me in the position to contribute.

Support.Mozilla.Org: Community Management Update

Wednesday 7th of August 2019 02:11:08 PM

Hello SUMO community,

I have a couple announcements for today. I’d like you all to welcome our two new community managers.

First off, Kiki has officially joined the SUMO team as a community manager. Kiki has been filling in with Konstantina and Ruben on our social support activities, and we had an opportunity to bring her onto the SUMO team full time starting last week. She will be transitioning out of her responsibilities on the Community Development Team and will continue her work on the social program as well as managing SUMO days going forward.

In addition, we have hired a new SUMO community manager to join the team. Please welcome Giulia Guizzardi to the SUMO team.

You can find her on the forums as gguizzardi. Below is a short introduction:

Hey everyone, my name is Giulia Guizzardi, and I will be working as a Support Community Manager for Mozilla. 

I am currently based in Berlin, but I was born and raised in the north-east of Italy. I studied Digital Communication in Italy and Finland, and worked for half a year in Poland.

My greatest passion is music, I love participating in festivals and concerts along with collecting records and listening to new releases all day long. Other than that, I am often online, playing video games (Firewatch at the moment) or scrolling Youtube/Reddit.

I am really excited for this opportunity and happy to work alongside the community!

Now that we have two new community managers we will work with Konstantina and Ruben to transition their work to Kiki and Giulia. We’re also kicking off work to create a community strategy which we will be seeking feedback for soon. In the meantime, please help me welcome Kiki and Giulia to the team.

Henrik Skupin: Example in how to investigate CPU spikes in Firefox

Wednesday 7th of August 2019 12:40:11 PM

Note: This article is based on Firefox builds as available for download at least until August 7th, 2019. In case you want to go through those steps on your own, I cannot guarantee that it will lead to the same effects if newer builds are used.

So a couple of months ago, when I was looking for some new, interesting and challenging sport events which I could participate in to reach my own limits, I was made aware of the Mega Hike event. It sounded like fun, and it was also good to see that one particular event has been organized annually in my own city since 2018. So I signed up together with a friend, and we had an amazing day. But hey… that’s not what I actually want to talk about in this post!

The thing I was actually more interested in while reading content on this website was the high CPU load of Firefox while the page was open in my browser. Once the tab got closed the CPU load dropped back to normal numbers, and went up again once I reopened the tab. Given that I didn’t have much time to investigate this behavior further, I simply logged bug 1530071 to make people aware of the problem. Sadly the bug got lost in my incoming queue of daily bug mail, and I missed responding, which meant that no further progress was made.

Yesterday I stumbled over the website again, and by chance was made aware of the problem again. Nothing seemed to have changed, and Firefox Nightly (70.0a1) was still using around 70% of CPU even with the tab’s content not visible, i.e. moved to a background tab. Given that this is a serious performance and power related issue, I thought that an investigation might be pretty helpful for developers.

In the following sections I want to lay out the steps I did to nail down this problem.

Energy consumption of Firefox processes

While at first glance the Activity Monitor of macOS is helpful for getting an impression of the memory usage and CPU load of Firefox, it’s a bit hard to see how much each and every open tab is actually using.

You could try to match the listed process ids with a specific tab in the browser by hovering over the appropriate tab title, but the displayed tooltip only contains the process id in Firefox Nightly builds, not in beta or final releases. Furthermore, multiple tabs will currently share the same process, and as such the displayed value in the Activity Monitor is shared between them.

To further drill down the CPU load to a specific tab, Firefox has the about:performance page, which can be opened by typing that address into the location bar. It’s basically an internal task manager to inspect the energy impact and memory consumption of each tab.

Even more helpful is the option to expand the view for sub-frames, which are usually used to embed external content. In the case of the Megamarsch page there are three of those, and one actually stands out, consuming nearly all the energy used by the tab. As such there is a good chance that this particular iframe from YouTube, which embeds a video, is the problem.

To verify that, the integrated Firefox Developer Tools can be used. The Page Inspector in particular will help us; it allows searching for specific nodes, CSS classes, and more, and then interacting with them. To open it, check the Tools > Web Developer sub-menu of the main menu.

Given that the URI of the iframe is known, let’s search for it in the Inspector:

When running the search the expected iframe will not be the first result found, so continue until it is highlighted in the Inspector pane. Now that we have found the embedded content, let’s delete the node by opening the context menu and clicking Delete Node. If it was the problem, the CPU load should be back to normal.

Sadly, as you will notice when doing it yourself, that’s not the case, which means something else on that page is causing it. The easiest way to figure out which node really causes the spike is to simply delete more nodes on that page. Start at a higher level and delete the header, footer, or any sidebars first. While doing that, always keep an eye on the Activity Monitor and check whether the CPU load has dropped. Once that is the case, undo the last step so that the causing node gets inserted again. Then remove all of its sibling nodes, so only the causing node remains. Now drill down even further until no more child nodes remain.

As a tip, don’t forget to change the update frequency so that values are updated each second, and revert it back after you are done.

In our case the following node, which is related to the cart icon, remains:

So some kind of loading indicator seems to trigger Firefox to repaint a specific area of the screen. To verify that, remove the extra CSS class definitions. Once the icon-web-loading-spinner class has been removed, the CPU load is fine again. Note that when hovering over the node while the class is still set, a spinning rectangle, which is a placeholder for the real element, can even be seen.

Checking the remaining stylesheets which get included, the one which remains (after removing all others without a notable effect) is from assets.jimstatic.com. And for the particular CSS class it holds the following animation:

@keyframes spinit {
  0% { -webkit-transform: rotate(0deg); transform: rotate(0deg); }
  to { -webkit-transform: rotate(360deg); transform: rotate(360deg); }
}

More interesting is that this specific class defines opacity: 0, which basically means that the node shouldn’t be visible at all, and no re-painting should happen until the node has been made visible.

With this information in hand I updated the aforementioned bug with all the newly found details, and handed it over to the developers. Everyone who wants to follow the progress of fixing it can subscribe to the CC list and will be automatically notified by Bugzilla about updates.

If you found this post useful please let me know, and I will write more of them in the future.

Eric Shepherd: The Tall-Tale Clock: The myth of task estimates

Tuesday 6th of August 2019 09:04:24 PM

One of my most dreaded tasks is that of estimating how long tasks will take to complete while doing sprint planning. I have never been good at this, and it has always felt like time stolen away from the pool of hours available to do what I can’t help thinking of as “real work.”

While I’m quite a bit better at the time estimating process than I was a decade ago—and perhaps infinitely better at it than I was 20 years ago—I still find that I, like a lot of the creative and technical professionals I know, dread the process of poring over bug and task lists, project planning documents, and the like in order to estimate how long things will take to do.

This is a particularly frustrating process when dealing with tasks that may be nested, have multiple—often not easily detected ahead of time—dependencies, and may involve working with technologies that aren’t actually as ready for prime time as expected. Add to that the fact that your days are filled with distractions, interruptions, and other tasks you need to deal with, and predicting how long a given project will take can start to feel like a guessing game.

The problem isn’t just one of coming up with the estimates. There’s a more fundamental problem of how to measure time. Do you estimate projects in terms of the number of work hours you’ll invest in them? The number of days or weeks you’ll spend on each task? Or some other method of measuring duration?

Hypothetical ideal days

On the MDN team, we have begun over the past year to use a time unit we call the hypothetical ideal day or simply ideal day. This is a theoretical time unit in which you are able to work, uninterrupted, on a project for an entire 8-hour work day. A given task may take any appropriate number of ideal days to complete, depending on its size and complexity. Some tasks may take less than a single ideal day, or may otherwise require a fractional number of ideal days (like 0.5 ideal days, or 1.25 ideal days). We generally round to a quarter of a day.

There obviously isn’t actually any such thing as an ideal, uninterrupted day (hence the words “hypothetical” and “theoretical” earlier in this paragraph). Even on one’s best day, you have to stop to eat, to stretch, and to do any number of other things that you have to do during a day of work. But that’s the point of the ideal day unit: by building right into the unit the understanding that you’re not explicitly accounting for these interruptions in the time value, you can reinforce the idea that schedules are fragile, and that every time a colleague or your manager (or anyone else) causes you to be distracted from your planned tasks, the schedule will slip.

That means that each ideal day may actually last anywhere from one to three or more days in the real world, depending on what’s going on. Every meeting, phone call, sidetracking onto an unscheduled task, bad night’s sleep, high pollen day, or bad news day can impact the amount of real-life time required to complete one ideal day’s worth of work.

Ideal days in sprint planning

The goal, then, during sprint planning is to do your best to leave room for those distractions when mapping ideal days to the actual calendar. Our sprints on the MDN team are 12 business days long. When selecting tasks to attempt to accomplish during a sprint, we start by having each team member count up how many of those 12 days they will be available for work. This involves subtracting from that 12-day sprint any PTO days, company or local holidays, substantial meetings, and so forth.

When calculating my available days, I like to subtract a rough number of partial days to account for any appointments that I know I’ll have. We then typically subtract about 20% (or a day or two per sprint, although the actual amount varies from person to person based on how often they tend to get distracted and how quickly they rebound), to allow for distractions and sidetracking, and to cover typical administrative needs. The result is a rough estimate of the number of ideal days we’re available to work during the sprint.

With that in hand, each member of the team can select a group of tasks that can probably be completed during the number of ideal days we estimate they’ll have available during the sprint. But we know going in that these estimates are in terms of ideal days, not actual business days, and that if anything unanticipated happens, the mapping of ideal days to actual days we did won’t match up anymore, causing the work to take longer than anticipated. This understanding is fundamental to how the system works; by going into each sprint knowing that our mapping of ideal days to actual days is subject to external influences beyond our control, we avoid many of the anxieties that come from having rigid or rigid-feeling schedules.

For your consideration

For example, let’s consider a standard 12-business-day MDN sprint which spans my birthday as well as Martin Luther King, Jr. Day, which is a US Federal holiday. During those 12 days, I also have two doctor appointments scheduled which will have me out of the office for roughly half a day total, and I have about a day’s worth of meetings on my schedule as of sprint planning time. Doing the math, then, we find that I have 8.5 days available to work.
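Written out as a tiny calculation (treating the birthday as a day off, as the example implies, and with the optional ~20% distraction buffer from the planning step as a parameter), that looks like this:

# Sketch of the ideal-day availability math from the example above.
def available_ideal_days(sprint_days, holidays, appointments, meetings, buffer=0.0):
    return (sprint_days - holidays - appointments - meetings) * (1 - buffer)

# 12 business days, minus MLK Day and a birthday off, half a day of doctor
# appointments, and a day of meetings:
print(available_ideal_days(12, holidays=2, appointments=0.5, meetings=1))  # 8.5

# The same sprint with the ~20% buffer applied during planning: roughly 6.8.
print(available_ideal_days(12, holidays=2, appointments=0.5, meetings=1, buffer=0.20))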

Knowing this, I then review the various task lists and find a total of around 8 to 8.5 days worth of work to do. Perhaps a little less if I think the odds are good that more time will be occupied with other things than the calendar suggests. For example, if my daughter is sick, there’s a decent chance I will be too in a few days, so I might take on just a little less work for the sprint.

As the sprint begins, then, I have an estimated 8 ideal days worth of work to do during the 12-day sprint. Because of the “ideal day” system, everyone on the team knows that if there are any additional interruptions—even short ones—the odds of completing everything on the list are reduced. As such, this system not only helps make it easier to estimate how long tasks will take, but also helps to reinforce with colleagues that we need to stay focused as much as possible, in order to finish everything on time.

If I don’t finish everything on the sprint plan by the end of the sprint, we will discuss it briefly during our end-of-sprint review to see if there’s any adjustment we need to make in future planning sessions, but it’s done with the understanding that life happens, and that sometimes delays just can’t be anticipated or avoided.

On the other hand, if I happen to finish before the sprint is over, I have time to get extra work done, so I go back to the task lists, or to my list of things I want to get done that are not on the priority list right now, and work on those things through the end of the sprint. That way, I’m able to continue to be productive regardless of how accurate my time estimates are.

I can work with this

In general, I really like this way of estimating task schedules. It does a much better job of allowing for the way I work than any other system I’ve been asked to work within. It’s not perfect, and the overhead is a little higher than I’d like, but by and large it does a pretty good job. That’s not to say we won’t try another, possibly better, way of handling the planning process in the future.

But for now, my work days are as ideal as can be.

Bryce Van Dyk: Building GeckoView/Firefox for Android under Windows Subsystem for Linux (WSL)

Tuesday 6th of August 2019 05:11:08 PM

These are notes on my recent attempts to get Android builds of Firefox working under WSL 1. After tinkering with this I ultimately decided to do my Android builds in a full-blown VM running Linux, but figure these notes may prove useful to myself or others.

This was done on Windows 10 using a Debian 9 WSL machine. The steps below assume an already cloned copy of mozilla-unified or mozilla-central.

Create a .mozconfig, ensuring that LF line endings are used; CRLF seems to break parsing of the config under WSL:

# Build GeckoView/Firefox for Android:
ac_add_options --enable-application=mobile/android

# Targeting the following architecture.
# For regular phones, no --target is needed.
# For x86 emulators (and x86 devices, which are uncommon):
ac_add_options --target=i686
# For newer phones.
# ac_add_options --target=aarch64

# Write build artifacts to:
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/../mozilla-builds/objdir-droid-i686-opt

Bootstrap via ./mach bootstrap. After the bootstrap I found I still needed to install yasm in my package manager.

Now you should be ready to build with ./mach build. However, note that the object directory being built into needs to live on the WSL drive, i.e. mk_add_options MOZ_OBJDIR= should point to somewhere like ~/objdir and not /mnt/c/objdir.

This is because the build system expects files to be handled in a case sensitive manner and will create files like String.h and string.h in the same directory. Windows doesn't do this outside of WSL by default, and that causes issues with the build. I've got a larger discussion of the nuts and bolts of this, as well as a hacky workaround, below if you're interested in the details.

At this stage you should have an Android build. It can be packaged via ./mach package and then moved to the Windows mount. Alternatively, if you have an Android emulator running under Windows, you can simply use ./mach install; this required me to run ~/.mozbuild/android-sdk-linux/platform-tools/adb kill-server and then ~/.mozbuild/android-sdk-linux/platform-tools/adb start-server after enabling debugging on my emulated phone to get my WSL adb to connect.

For other commands, your mileage may vary. For example ./mach crashtest <crashtest> fails, seemingly due to being unable to call su as expected under WSL.

Case sensitivity of files under Windows

When attempting to build Firefox for Android into an objdir on my Windows C drive I ended up getting a number of errors due to files including String.h. This was a little confusing, as I recognize string.h, but the uppercase-S version not so much.

The cause is that the build system contains a list of headers, and there are several cases of headers with the same name differing only by an uppercase initial letter, including the above string ones. In fact, there are 3 cases in that file: String.h, Strings.h, and Memory.h, and in my builds they can be safely removed to allow the build to progress.

I initially thought this happened because the NTFS file system doesn't support case sensitive file names, whilst whatever file system was being used by WSL did. However, the reality is that NTFS does support case sensitivity; Windows itself is the one imposing case insensitivity.

Indeed, Windows now exposes functionality to set case sensitivity on directories. Under WSL all directories are created as case sensitive by default, but fsutil can be used to set the flag on directories outside WSL.
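If you want to check how a particular directory behaves, a quick probe like the following (an illustrative sketch, not part of the build system) shows the difference; on a case-insensitive directory the second write hits the same file and only one entry remains:

# Quick probe: does this directory treat String.h and string.h as one file?
import sys
import tempfile
from pathlib import Path

def is_case_sensitive(directory: Path) -> bool:
    with tempfile.TemporaryDirectory(dir=directory) as tmp:
        (Path(tmp) / "String.h").write_text("upper")
        (Path(tmp) / "string.h").write_text("lower")
        return len(list(Path(tmp).iterdir())) == 2

target = Path(sys.argv[1]) if len(sys.argv) > 1 else Path.cwd()
print(f"{target}: case sensitive = {is_case_sensitive(target)}")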

In fact, using fsutil to flag dirs as case sensitive allows for working around the issue of building to an objdir outside of WSL. For example, I was able to run fsutil.exe file setCaseSensitiveInfo ./dist/system_wrappers in the root of my objdir and then perform my build from WSL to outside WSL without issue. This isn't particularly ergonomic for normal use, though, because Firefox's build system will destroy and recreate that dir, which drops the flag. So I'd either need to manually restore it each time or modify the build system.

The case sensitivity handling of files on Windows is interesting in a software archeology sense, and I plan to write more on it, but want to avoid this post (further) going off on a tangent around Windows architecture.

More in Tux Machines

Events: LibreOffice Conference 2020, MariaDB's Thomas Boyd and Upcoming Linux Foundation’s Open Source Summit

  • LibreOffice Conference 2020 Proposals

    The Document Foundation has received two different proposals for the organization of LibOCon 2020 from the Turkish and German communities. When this has happened in the past, in 2012 (Berlin vs Zaragoza) and 2013 (Milan vs Montreal), TDF Members have been asked to decide by casting their vote. This document provides an outline of the two proposals, which are attached in their original format.

  • Thomas Boyd Discusses Which Open Source Database is the Best Fit for the Business

    The world's largest and most innovative businesses are turning to enterprise open source databases for mission-critical applications, with the most popular open source relational databases being MariaDB, MySQL, and Postgres. However, while all three of these databases are open source, mature, and available in enterprise editions, there are significant differences between them — both in terms of application development as well as database administration and operations. DBTA recently held a webinar featuring Thomas Boyd, director of technical marketing, MariaDB Corporation, who discussed the differences between MariaDB, MySQL, and Postgres. [...] EnterpriseDB is heap only while MySQL and MariaDB offer InnoDB, Columnar, Aria, MyRocks, and more.

  • Open Source Summit welcomes Platform9 experts

    Cloud-native experts share tips and practical learnings for Kubernetes in the enterprise, Kubernetes on bare metal or with stateful MySQL databases, and optimizing the cost and performance of Serverless applications.

  • Transform Your Career: Attend Open Source Summit North America this August in San Diego

    For the last decade, The Linux Foundation’s Open Source Summit has proven to be invaluable for attendees.  A 2018 participant recently wrote an article on OpenSource.com stating “Last August, I arrived at the Vancouver Convention Centre to give a lightning talk and speak on a panel at Open Source Summit North America 2018. It’s no exaggeration to say that this conference—and applying to speak at it—transformed my career.” We encourage you to read the article and discover why attending Open Source Summit can be a game changer for you as well.

OSS Leftovers

  • Intervalometerator: Open Source Code for a Remote Timelapse DSLR

    Want to set up a remote DSLR for shooting a time-lapse? The Intervalometerator (AKA ‘intvlm8r’) is an open-source intervalometer that can help you do so at minimal hardware cost (as long as you’re comfortable tinkering with hardware and software). Created by Sydney-based coder Greig Sheridan and his photographer partner Rocky over the course of a year, the Intervalometerator is designed to be both cheap and easy to build with familiar tools and using Raspberry Pi and Arduino microcontrollers. “My partner and I have been working for over twelve months now on an intervalometer in order to shoot a DSLR-based time-lapse of the construction of our friends’ home in NZ,” Sheridan tells PetaPixel. “It was at the time a seemingly clever idea for a house-warming present, but it grew like tribbles to consume an incredible amount of effort).

  • Open Source Tools & Framework: Microservices Perspective
  • Open Source flexiWAN SD-WAN Software Beta Ships
  • Agile and open source can complement each other

    Despite the growing popularity of both Agile development and open-source practices, it’s not often that they come up in the same conversation. When these two concepts do intersect, it’s often to highlight the contradicting viewpoints that these two models supposedly represent. While there are core differences, Agile doesn’t have to be the enemy of open source—in fact, I would argue the opposite.

  • SD Times Open-Source Project of the Week: Twilio CLI

    In an effort to help its developers be more productive, Twilio has announced the beta version of Twilio CLI. It is an open-source command line interface that enables developers to access Twilio through their command prompt. “It’s hard to beat the flexibility and power that a CLI provides at development time. Until now, there was no CLI designed for typical communications requirements,” Ashley Roach, the product manager for developer interfaces at Twilio, wrote in a post.

  • Using open source in your enterprise? What to look out for

    According to Statista, the open source market was valued at $11.4 billion in 2017 and is estimated to grow to $32.95 billion by 2022, showing it has no intention of slowing down anytime soon. Founded on the belief that collaboration and cooperation build better software, open source sounds closer to a utopian dream than to the cold digital world of programming. Research showed that open source code takes over proprietary one in applications at 57%. This has numerous benefits, such as speeding up the software development process or creating more effective and innovative software. For example, open source frontend development frameworks, such as Angular, are often found in custom web apps, which allows companies to get their products to market at ever-increasing rates. In addition, companies tend to engage open source when at the cusp of technological innovation, especially when it comes to AR, blockchain, IoT, and AI.

  • Open Source Technology: What's It All About?

    To understand how open source works, it is important to appreciate where it all began. The very idea behind its inception isn’t exactly a new one. It’s been adopted by scientists for decades. Let’s imagine a scientist working on a project to develop a cure for an illness. If this scientist only published the results and kept the methods a secret, this would undoubtedly inhibit scientific discovery and further research in this area. On the other hand, teaming up with other researchers and making results and methodologies visible allows for greater and faster innovation. This is the premise from which open source was originally born. Open source refers to software that has an open source code so it can be viewed, modified for a particular need, and importantly, shared (under license). One of the first well known open source initiatives was developed in 1998 by Netscape, which released its Navigator browser as free software and demonstrated the benefits of taking an open source approach. Since then, there have been a number of pivotal moments in open source history that have shaped the technology industry as we know it today. Nowadays, some of the latest technology you use on a daily basis, like your smartphone or laptop, will have been built using open source software. [...] Recent research found that 60 percent of organizations are already using open source software. Many businesses are realizing the benefits that the technology can bring in relation to driving innovation and reducing costs. This in turn is seeing a growing number of organizations integrate open source into their IT operations or even building entire businesses around it. With emerging technologies such as cloud, AI and machine learning only driving this adoption further, open source will continue to play a central and growing role throughout the technology landscape.

  • How to Take Your Open Source Project from Good to Great

    Whether or not you expect anyone to contribute to your project, you should be prepared for the possibility of others wanting to help your cause. And when that happens, your contributing guide will show those helpers exactly how they can get involved. This guide, usually in the form of a CONTRIBUTING.md file, should include information on how one should submit a pull request or open an issue for your project and what kinds of help you’re looking for (bug fixes, design direction, feature requests, etc.).

  • ForgeRock Delivers Open Source IoT Edge Controller for Device Identity

    According to a recent announcement, ForgeRock, a platform provider of digital identity management solutions, has launched its IoT Edge Controller, which is designed to provide consumer and industrial manufacturers the ability to deliver trusted identity at the device level.

  • Browser Settings Too Complex? Let Firefox Handle That for You

    Firefox SVP David Camp doesn't want internet users wasting time 'understanding how the internet is watching you.'

  • Exclusive: Automattic CEO Matt Mullenweg on what’s next for Tumblr

    It’s been a long and winding road for Tumblr, the blogging site that launched a thousand writing careers. It sold to Yahoo for $1.1 billion in 2013, then withered as Yahoo sold itself to AOL, AOL sold itself to Verizon, and Verizon realized it was a phone company after all. Through all that, the site’s fierce community hung on: it’s still Taylor Swift’s go-to social media platform, and fandoms of all kinds have homes there. Verizon sold Tumblr for a reported $3 million this week, a far cry from the billion-dollar valuation it once had. But to Verizon’s credit, it chose to sell Tumblr to Automattic, the company behind WordPress, the publishing platform that runs some 34 percent of the world’s websites. Automattic CEO Matt Mullenweg thinks the future of Tumblr is bright. He wants the platform to bring back the best of old-school blogging, reinvented for mobile and connected to Tumblr’s still-vibrant community, and he’s retaining all 200 Tumblr employees to build that future. It’s the most exciting vision for Tumblr in years. Matt joined Verge reporter Julia Alexander and me on a special Vergecast interview episode to chat about the deal, how it came together, what Automattic’s plans for Tumblr look like, and whether Tumblr might become an open-source project, like WordPress itself. (“That would be pretty cool,” said Matt.) Oh, and that porn ban.

Apache: Self Assessment and Security

  • The Apache® Software Foundation Announces Annual Report for 2019 Fiscal Year

    The Apache® Software Foundation (ASF), the all-volunteer developers, stewards, and incubators of more than 350 Open Source projects and initiatives, announced today the availability of the annual report for its 2019 fiscal year, which ended 30 April 2019.

  • Open Source at the ASF: A Year in Numbers

    332 active projects, 71 million lines of code changed, 7,000+ committers… The Apache Software Foundation has published its annual report for fiscal 2019. The hub of a sprawling, influential open source community, the ASF remains in rude good health, despite challenges this year including the need for “an outsized amount of effort” dealing with trademark infringements, and “some in the tech industry trying to exploit the goodwill earned by the larger Open Source community.” [...] The ASF names 10 “platinum” sponsors: AWS, Cloudera, Comcast, Facebook, Google, LeaseWeb, Microsoft, the Pineapple Fund, Tencent Cloud, and Verizon Media

  • Apache Software Foundation Is Worth $20 Billion

    Yes, Apache is worth $20 billion by its own valuation of the software it offers for free. But what price can you realistically put on open source code? If you only know the name Apache in connection with the web server then you are missing out on some interesting software. The Apache Software Foundation ASF, grew out of the Apache HTTP Server project in 1999 with the aim of furthering open source software. It provides a licence, the Apache licence, a decentralized governance and requires projects to be licensed to the ASF so that it can protect the intellectual property rights.

  • Apache Security Advisories Red Flag Wrong Versions in Patching Gaffe

    Researchers have pinpointed errors in two dozen Apache Struts security advisories, which warn users of vulnerabilities in the popular open-source web app development framework. They say that the security advisories listed incorrect versions impacted by the vulnerabilities. The concern from this research is that security administrators in companies using the actual impacted versions would incorrectly think that their versions weren’t affected – and would thus refrain from applying patches, said researchers with Synopsys who made the discovery, Thursday. “The real question here from this research is whether there remain unpatched versions of the newly disclosed versions in production scenarios,” Tim Mackey, principal security strategist for the Cybersecurity Research Center at Synopsys, told Threatpost. “In all cases, the Struts community had already issued patches for the vulnerabilities so the patches exist, it’s just a question of applying them.”

Google and Android Code

  • Google releases source code for I/O 2019 app with Android Q gesture nav, dark theme

    The Google I/O companion app for Android often takes advantage of the latest design stylings and OS features. It demoed Android Q’s gesture navigation and dark theme this year, with the company today releasing the I/O 2019 source code.

  • Introducing Coil, an open-source Android image loading library backed by Kotlin Coroutines

    Yesterday, Colin White, a Senior Android Engineer at Instacart, introduced Coroutine Image Loader (Coil). It is a fast, lightweight, and modern image loading library for Android backed by Kotlin.

  • Google open-sources Live Transcribe’s speech engine

    Google today open-sourced the speech engine that powers its Android speech recognition transcription tool Live Transcribe. The company hopes doing so will let any developer deliver captions for long-form conversations. The source code is available now on GitHub. Google released Live Transcribe in February. The tool uses machine learning algorithms to turn audio into real-time captions. Unlike Android’s upcoming Live Caption feature, Live Transcribe is a full-screen experience, uses your smartphone’s microphone (or an external microphone), and relies on the Google Cloud Speech API. Live Transcribe can caption real-time spoken words in over 70 languages and dialects. You can also type back into it — Live Transcribe is really a communication tool. The other main difference: Live Transcribe is available on 1.8 billion Android devices. (When Live Caption arrives later this year, it will only work on select Android Q devices.)