Blender 2.80

Thanks to the new modern 3D viewport, you will be able to display a scene optimized for the task you are performing. A new Workbench render engine was designed for getting work done in the viewport, supporting tasks like scene layout, modeling and sculpting. The engine also features overlays, providing fine control over which utilities are visible on top of the render.

Overlays also work on top of Eevee and Cycles render previews, so you can edit and paint the scene with full shading.

Read more
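
For scripters, these viewport settings are exposed through Blender's Python API. Here is a minimal sketch (run inside Blender 2.80, e.g. from its built-in Python console; `bpy` exists only there) that switches a 3D viewport to rendered shading while keeping overlays visible:

```python
# Sketch: drive the new 2.80 viewport from Blender's Python API.
import bpy

for area in bpy.context.screen.areas:
    if area.type == 'VIEW_3D':              # find a 3D viewport
        space = area.spaces.active
        space.shading.type = 'RENDERED'     # live Eevee/Cycles preview
        space.overlay.show_overlays = True  # keep overlays on top
```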

Also: Blender 2.80 Officially Released With Its Revamped UI, Eevee PBR Renderer

By Corbet from LWN

Blender 2.80 is out

  • Blender 2.80 is out, a major advancement for this FOSS 3D creation suite

    Hot on the heels of the announcements of both Epic Games and Ubisoft supporting further Blender development, the massive Blender 2.80 release is now available.

    It's an incredible step up for the project, including a much-needed revamp of the user interface, along with a new dark theme and modern icon set. There's also "Eevee", a new physically based real-time renderer, with support for advanced features like volumetrics, screen-space reflections and refractions, subsurface scattering, soft and contact shadows, depth of field and more.

Blender 2.80 Released!

Zohaib Ahsan's coverage of Blender 2.80 today

  • Blender 2.80 officially released with whole new Workspace and 3D Viewport

    Now available for download is the new Blender 2.80 that comes with a redesigned user interface and a whole bunch of new tools.

    Before we get into what the new Blender has to offer, let’s see what this free software is all about! Blender is a complete 3D creation suite that deals with all elements of the 3D pipeline, such as modeling, simulation, animation, rendering, and motion tracking. It is also worth mentioning that people from different walks of life have contributed to the development of this software, so the project really emphasizes YOU, as can be seen from its tag line: ‘Blender, made by you’.

    Not that Blender 2.79 was lacking anything, but the new Blender brings a lot of new stuff to the table. “What new stuff?”, you might ask. Well, let’s have a look!

Blender 2.8 at LJ

Blender 2.80 is Here, And It Blows the Pants Off Any Release Before It

  • Blender 2.80 is Here, And It Blows the Pants Off Any Release Before It

    A brand new version of the free 3D graphics software Blender is here — and I’ll be honest: it looks amazing.

    Am I skilled enough in the intricacies of 3D modelling, CGI, and visual effects work to the point that I can provide you with enlightened insight into the improvements — and boy are there improvements — on offer in this release?

    Heck no! I can barely navigate the real world, much less a CGI one.

Blender 2.80 released with new features and improvements

  • Blender 2.80 released with new features and improvements

    Blender is a cross-platform community-driven project under the GNU General Public License (GPL) and runs equally well on Linux, Windows and Macintosh computers. Its interface uses OpenGL to provide a consistent experience. It provides a broad spectrum of modeling, texturing, lighting, animation and video post-processing functionality in one package. Through its open architecture, Blender provides cross-platform interoperability, extensibility, an incredibly small footprint, and a tightly integrated workflow. Blender is one of the most popular Open Source 3D graphics applications in the world. It supports the entirety of the 3D pipeline—modeling, rigging, animation, simulation, rendering, compositing and motion tracking, even video editing and game creation.

    Blender also exposes a Python API for advanced users who know Python scripting, allowing them to customize the application and write specialized tools; these contributed tools are often included in later releases of Blender. Blender has no price tag, but you can invest, participate, and help to advance a powerful collaborative tool: Blender is your own 3D software.
    Blender is being actively developed by hundreds of volunteers from all around the world. These volunteers include artists, VFX experts, hobbyists, scientists, and many more. All of them are united by an interest in furthering a completely free and open source 3D creation pipeline. You can get involved with this awesome project and contribute in any way you like.
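
As a small taste of that API, here is a hedged sketch for Blender 2.80 (run it with `blender --background --python script.py` or paste it into the Python console; the output path is just an example):

```python
# Minimal Blender 2.80 Python API sketch: add an object, select the
# new Eevee renderer, and render a single frame.
import bpy

bpy.ops.mesh.primitive_cube_add(size=2.0, location=(0.0, 0.0, 1.0))
bpy.ops.object.shade_smooth()

scene = bpy.context.scene
scene.render.engine = 'BLENDER_EEVEE'    # the new real-time renderer
scene.render.filepath = '/tmp/cube.png'  # example output path
bpy.ops.render.render(write_still=True)
```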

Blender 2.8 Has Been Released! Say Goodbye to Low-Spec Computers

  • Blender 2.8 Has Been Released! Say Goodbye to Low-Spec Computers!

    One of the requirements for running version 2.8 is a computer with OpenGL 3.3 support. Yes, this is bad news for me and other users of low-specification computers, because we cannot run Blender 2.8 on machines whose OpenGL support falls below the standard Blender specifies. So if you want to run Blender 2.8, it may be better to upgrade your hardware, or perhaps buy a new PC with more capable specifications.
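
If you are not sure where your machine stands, one rough way to check on Linux is to parse glxinfo output. A sketch, assuming the mesa-utils package and an X11/GLX setup:

```python
# Rough check of the reported OpenGL version against Blender 2.8's
# OpenGL 3.3 requirement (needs glxinfo from mesa-utils).
import re, subprocess

out = subprocess.run(["glxinfo"], capture_output=True, text=True, check=True)
m = (re.search(r"OpenGL core profile version string: (\d+)\.(\d+)", out.stdout)
     or re.search(r"OpenGL version string: (\d+)\.(\d+)", out.stdout))

major, minor = int(m.group(1)), int(m.group(2))
if (major, minor) >= (3, 3):
    print(f"OpenGL {major}.{minor} detected: Blender 2.8 should run")
else:
    print(f"OpenGL {major}.{minor} detected: below the required 3.3")
```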

The beast of 3D editors, version 2.80, is out in the wild

  • The beast of 3D editors, version 2.80, is out in the wild!

    Blender is the free and open source 3D creation suite. Its rich feature set has made it a fierce competitor to commercial tools, and it is considered one of the most important pieces of open source software.
    Recently, everyone interested in 3D tools has been closely watching development news for version 2.80 and awaiting its release date.

More in Tux Machines

LWN: Spectre, Linux and Debian Development

  • Grand Schemozzle: Spectre continues to haunt

    The Spectre v1 hardware vulnerability is often characterized as allowing array bounds checks to be bypassed via speculative execution. While that is true, it is not the full extent of the shenanigans allowed by this particular class of vulnerabilities. For a demonstration of that fact, one need look no further than the "SWAPGS vulnerability" known as CVE-2019-1125 to the wider world or as "Grand Schemozzle" to the select group of developers who addressed it in the Linux kernel.

    Segments are mostly an architectural relic from the earliest days of x86; to a great extent, they did not survive into the 64-bit era. That said, a few segments still exist for specific tasks; these include FS and GS. The most common use for GS in current Linux systems is for thread-local or CPU-local storage; in the kernel, the GS segment points into the per-CPU data area. User space is allowed to make its own use of GS; the arch_prctl() system call can be used to change its value.

    As one might expect, the kernel needs to take care to use its own GS pointer rather than something that user space came up with. The x86 architecture obligingly provides an instruction, SWAPGS, to make that relatively easy. On entry into the kernel, a SWAPGS instruction will exchange the current GS segment pointer with a known value (which is kept in a model-specific register); executing SWAPGS again before returning to user space will restore the user-space value. Some carefully placed SWAPGS instructions will thus prevent the kernel from ever running with anything other than its own GS pointer. Or so one would think.
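
The user-space side of this is easy to poke at. A sketch (x86-64 Linux only; the syscall number and constants below are taken from <asm/prctl.h>) that reads and then repoints the GS base via arch_prctl(), i.e. exactly the user-controlled value that SWAPGS keeps the kernel from trusting:

```python
# Sketch: user space setting its own GS base with arch_prctl(2).
import ctypes, ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

SYS_arch_prctl = 158  # __NR_arch_prctl on x86-64
ARCH_SET_GS = 0x1001  # from <asm/prctl.h>
ARCH_GET_GS = 0x1004

gs = ctypes.c_ulong()
libc.syscall(SYS_arch_prctl, ARCH_GET_GS, ctypes.byref(gs))
print(f"GS base initially: {gs.value:#x}")  # typically 0

buf = ctypes.create_string_buffer(64)  # some memory of our own
libc.syscall(SYS_arch_prctl, ARCH_SET_GS,
             ctypes.c_ulong(ctypes.addressof(buf)))
libc.syscall(SYS_arch_prctl, ARCH_GET_GS, ctypes.byref(gs))
print(f"GS base now:       {gs.value:#x}")  # points at our buffer
```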

  • Long-term get_user_pages() and truncate(): solved at last?

    Technologies like RDMA benefit from the ability to map file-backed pages into memory. This benefit extends to persistent-memory devices, where the backing store for the file can be mapped directly without the need to go through the kernel's page cache. There is a fundamental conflict, though, between mapping a file's backing store directly and letting the filesystem code modify that file's on-disk layout, especially when the mapping is held in place for a long time (as RDMA is wont to do). The problem seems intractable, but there may yet be a solution in the form of this patch set (marked "V1,000,002") from Ira Weiny.

    The problems raised by the intersection of mapping a file (via get_user_pages()), persistent memory, and layout changes by the filesystem were the topic of a contentious session at the 2019 Linux Storage, Filesystem, and Memory-Management Summit. The core question can be reduced to this: what should happen if one process calls truncate() while another has an active get_user_pages() mapping that pins some or all of that file's pages? If the filesystem actually truncates the file while leaving the pages mapped, data corruption will certainly ensue. The options discussed in the session were to either fail the truncate() call or to revoke the mapping, causing the process that mapped the pages to receive a SIGBUS signal if it tries to access them afterward. There were passionate proponents for both options, and no conclusion was reached.

    Weiny's new patch set resolves the question by causing an operation like truncate() to fail if long-term mappings exist on the file in question. But it also requires user space to jump through some hoops before such mappings can be created in the first place. This approach comes from the conclusion that, in the real world, there is no rational use case where somebody might want to truncate a file that has been pinned into place for use with RDMA, so there is no reason to make that operation work. There is ample reason, though, for preventing filesystem corruption and for informing an application that gets into such a situation that it has done something wrong.
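
The user-visible half of that conflict is easy to demonstrate with an ordinary shared mapping (a Linux-only sketch; plain mmap() stands in here for the long-term pins that get_user_pages() creates):

```python
# Sketch: touching a mapped page after the file behind it has been
# truncated kills the process with SIGBUS -- the fate the "revoke the
# mapping" option would hand to RDMA users.
import mmap, os, signal, tempfile

f = tempfile.TemporaryFile()
f.truncate(4096)                  # one page of backing store
mm = mmap.mmap(f.fileno(), 4096)  # shared, writable mapping

pid = os.fork()
if pid == 0:          # child plays the role of the pinned-pages user
    f.truncate(0)     # the filesystem drops the backing store...
    mm[0]             # ...so this access raises SIGBUS
    os._exit(0)       # never reached

_, status = os.waitpid(pid, 0)
assert os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGBUS
print("child killed by SIGBUS, as expected")
```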

  • Hardening the "file" utility for Debian

    In addition, [Christoph Biedl] had already encountered problems with file running in environments with non-standard libraries that were loaded using the LD_PRELOAD environment variable. Those libraries can (and do) make system calls that the regular file binary does not make; the system calls were disallowed by the seccomp() filter.

    Building a Debian package often uses fakeroot to run commands in a way that makes it appear that they have root privileges for filesystem operations—without actually granting any extra privileges. That is done so that tarballs and the like can be created containing files with owners other than the user ID running the Debian packaging tools, for example. Fakeroot maintains a mapping of the "changes" made to owners, groups, and permissions for files so that it can report those to other tools that access them. It does so by interposing a library ahead of the GNU C library (glibc) to intercept file operations.

    In order to do its job, fakeroot spawns a daemon (faked) that is used to maintain the state of the changes that programs make inside of the fakeroot. The libfakeroot library that is loaded with LD_PRELOAD will then communicate with the daemon via either System V (sysv) interprocess communication (IPC) calls or by using TCP/IP. Biedl referred to a bug report in his message, where Helmut Grohne had reported a problem with running file inside a fakeroot.
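
The effect of that interposition is easy to observe from outside (a sketch; it assumes the fakeroot package is installed): inside the fakeroot, chown appears to succeed and stat reports the faked owner, while the file on disk is untouched.

```python
# Sketch: fakeroot's LD_PRELOAD interposition in action.
import os, subprocess, tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

out = subprocess.run(
    ["fakeroot", "sh", "-c", f"chown root:root {path} && stat -c %u {path}"],
    capture_output=True, text=True, check=True)
print("uid inside fakeroot:", out.stdout.strip())    # 0 (root), faked
print("real uid on disk:   ", os.stat(path).st_uid)  # still your own uid
os.unlink(path)
```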

Flameshot is a brilliant screenshot tool for Linux

The default screenshot tool in Ubuntu is alright for basic snips, but if you want a really good one you need to install a third-party screenshot app. Shutter is probably my favorite, but I decided to give Flameshot a try. Packages are available for various distributions including Ubuntu, Arch, openSUSE and Debian. You can find installation instructions on the official project website.

Read more

Android Leftovers

IBM/Red Hat and Intel Leftovers

  • Troubleshooting Red Hat OpenShift applications with throwaway containers

    Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container? If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code? We’ll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands.

  • What piece of advice had the greatest impact on your career?

    I love learning the what, why, and how of new open source projects, especially when they gain popularity in the DevOps space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that go across a broad range of challenges—from message bus to monitoring and back again. There is always something new to explore, install, spin up, and explore.

  • How DevOps is like auto racing

    When I talk about desired outcomes or answer a question about where to get started with any part of a DevOps initiative, I like to mention NASCAR or Formula 1 racing. Crew chiefs for these race teams have a goal: finish in the best place possible with the resources available while overcoming the adversity thrown at you. If the team feels capable, the goal gets moved up a series of levels to holding a trophy at the end of the race. To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome. [...]

    Race teams practice pit stops all week before the race. They do weight training and cardio programs to stay physically ready for the grueling conditions of race day. They are continually collaborating to address any issue that comes up. Software teams should also practice software releases often. If safety systems are in place and practice runs have been going well, they can release to production more frequently. Speed makes things safer in this mindset. It’s not about doing the “right” thing; it’s about addressing as many blockers to the desired outcome (goal) as possible and then collaborating and adjusting based on the real-time feedback that’s observed. Expecting anomalies and working to improve quality and minimize the impact of those anomalies is the expectation of everyone in a DevOps world.

  • Deep Learning Reference Stack v4.0 Now Available

    Artificial Intelligence (AI) continues to represent one of the biggest transformations underway, promising to impact everything from the devices we use to cloud technologies, and reshape infrastructure, even entire industries. Intel is committed to advancing the Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development. From our extensive work developing AI solutions, Intel understands how complex it is to create and deploy applications for deep learning workloads. That's why we developed an integrated Deep Learning Reference Stack, optimized for Intel Xeon Scalable processors, and released the companion Data Analytics Reference Stack. Today, we're proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

  • Clear Linux Releases Deep Learning Reference Stack 4.0 For Better AI Performance

    Intel's Clear Linux team on Wednesday announced their Deep Learning Reference Stack 4.0 during the Linux Foundation's Open-Source Summit North America event taking place in San Diego. Clear Linux's Deep Learning Reference Stack continues to be engineered to show off the most features and maximum performance for those interested in AI / deep learning and running on Intel Xeon Scalable CPUs. This optimized stack allows developers to more easily get going with a tuned deep learning stack that should already be offering near-optimal performance.