AirsideVR image courtesy of Rubix Limited

Unreal Engine 5 offers significant new potential for the simulation industry

April 28, 2022
The release of Unreal Engine 5 is making waves across many industries. Its potential to change the face of next-generation game development is clear, while creators in film and television, live events, architecture, automotive, and more all have much to celebrate. But groundbreaking new toolsets for generating highly realistic, high-accuracy, massive Open Worlds are equally applicable to the simulation industry. Here, we’ll take a look at just some of the highlights and examine what they mean to this community.

Bigger, more accurate Open Worlds

With UE5, the sky doesn’t have to be the limit. A new World Partition system changes how levels are managed and streamed, making it possible to handle much bigger worlds that would not otherwise fit into memory, or would require very long load times. Using World Partition, the world exists as a single persistent level that is automatically divided into a grid. In the Unreal Editor, you can select the area of interest to work on in a new World Partition Editor window. At runtime, only the necessary cells are streamed based on distance.
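
In a simulation, you often want streaming to follow an entity other than the player’s camera, such as an ownship or a sensor platform. Here’s a minimal sketch of how that might look, assuming UE5’s UWorldPartitionStreamingSourceComponent (the AOwnshipActor class is a hypothetical example):

```cpp
// A hypothetical "ownship" actor that acts as a World Partition streaming
// source, so grid cells load and unload around it as it moves.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "WorldPartition/WorldPartitionStreamingSourceComponent.h"
#include "OwnshipActor.generated.h"

UCLASS()
class AOwnshipActor : public AActor
{
    GENERATED_BODY()

public:
    AOwnshipActor()
    {
        // Cells near this actor are streamed in at runtime, in addition to
        // those around any other registered streaming sources.
        StreamingSource = CreateDefaultSubobject<UWorldPartitionStreamingSourceComponent>(TEXT("StreamingSource"));
    }

    UPROPERTY(VisibleAnywhere)
    TObjectPtr<UWorldPartitionStreamingSourceComponent> StreamingSource;
};
```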
 

Working hand in hand with World Partition, a new One File Per Actor (OFPA) system means that multiple developers can work on the same level simultaneously without stepping on each other’s toes, making for faster, more collaborative workflows. Meanwhile, Data Layers enable you to have multiple variations of the same level—such as daytime and nighttime versions, or intact and destroyed assets—as layers that exist in the same space, and which can be enabled or disabled during runtime by Blueprints.
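
Blueprints are the typical way to toggle Data Layers, but the same can be done from C++. A minimal sketch, assuming UE 5.0’s experimental UDataLayerSubsystem API and hypothetical “Daytime”/“Nighttime” layer labels:

```cpp
// Switch a level from its daytime variation to its nighttime variation at
// runtime. The layer labels are hypothetical; the subsystem and enum names
// are from UE 5.0's experimental API and may differ in later versions.
#include "Engine/World.h"
#include "WorldPartition/DataLayer/DataLayerSubsystem.h"

void SwitchToNight(UWorld* World)
{
    if (UDataLayerSubsystem* DataLayers = World->GetSubsystem<UDataLayerSubsystem>())
    {
        DataLayers->SetDataLayerRuntimeStateByLabel(TEXT("Nighttime"), EDataLayerRuntimeState::Activated);
        DataLayers->SetDataLayerRuntimeStateByLabel(TEXT("Daytime"), EDataLayerRuntimeState::Unloaded);
    }
}
```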

In Unreal Engine 4, one of the limitations to the size of the world you could create was precision. Unreal Engine 5 introduces support for Large World Coordinates (LWC), which enables double-precision floating-point data for a wide range of systems. This greatly improves Actor placement accuracy and orientation precision, and lays the groundwork for creating absolutely massive worlds, without the need for rebasing or other tricks. In addition to core data types, 64-bit precision is enabled for HLSL, Niagara visual effects, and Chaos physics. The latter of these is now also able to run in its own separate thread on a fixed tick interval for more predictable, networkable simulations.
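
To make the gain concrete, here’s a minimal sketch of what LWC means in practice (the runway actor is a hypothetical example):

```cpp
// With Large World Coordinates, FVector is double-precision, so an actor can
// sit hundreds of kilometers from the origin without rebasing tricks.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

void PlaceFarFromOrigin(AActor* RunwayActor)
{
    // ~400 km east of the origin, in centimeters (Unreal units).
    const FVector FarLocation(40000000.0, 0.0, 0.0);
    RunwayActor->SetActorLocation(FarLocation);

    // Under UE4's 32-bit floats, positions this far out quantized to steps of
    // several centimeters; with doubles, precision stays far below a millimeter.
}
```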
These new toolsets augment existing Unreal Engine features for bringing real-world data to your real-time application, such as the Georeferencing plugin that enables you to associate locations in an Unreal Engine level with locations in physical space. In addition, there’s amazing support coming from the Unreal Engine ecosystem, including Cesium for Unreal with its 3D Tiles integration, the ArcGIS Maps SDK for Unreal Engine, and the SimBlocks.io CDB Datasmith Exporter.
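
As an example of how that association is used, here’s a hedged sketch of converting a geographic coordinate into engine space with the Georeferencing plugin (class and method names are from the UE 5.0 plugin and may differ in other versions):

```cpp
// Convert longitude/latitude/altitude into an Unreal Engine world position
// using the level's configured AGeoReferencingSystem.
#include "GeoReferencingSystem.h"

FVector GeographicToEngineLocation(UObject* WorldContext, double Lon, double Lat, double Alt)
{
    FVector EngineLocation = FVector::ZeroVector;
    if (AGeoReferencingSystem* GeoRef = AGeoReferencingSystem::GetGeoReferencingSystem(WorldContext))
    {
        FGeographicCoordinates Coords;
        Coords.Longitude = Lon;
        Coords.Latitude  = Lat;
        Coords.Altitude  = Alt;
        GeoRef->GeographicToEngine(Coords, EngineLocation);
    }
    return EngineLocation;
}
```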

And the future holds more promise. We’re already starting to see AI solutions that augment captured geographic data, such as Blackshark, whose mission is to provide a semantic, photorealistic 3D digital twin of the entire planet as a plugin for Unreal Engine, and AVES Reality, which creates dedicated VR twins of parts of the world as virtual test environments. Equally important work is being done in the community on semantically rich, large-world open standards like 3D Tiles Next, which carry not only geometry but also attributes that enable a simulation to interact more intelligently with its environment.

Photorealism in real time

What use is a world that’s big, but not believable? In Unreal Engine 5, a number of new systems combine to enable you to create stunningly detailed immersive worlds that are hard to distinguish from reality.

With Nanite, UE5’s new virtualized micropolygon geometry system, you can directly import source assets such as highly detailed CAD models of devices, vehicles, or buildings, and multi-million polygon photogrammetry scans of terrain and environments, without any need to degrade them. Nanite lets you create amazingly detailed scenes with no need to bake details to normal maps or worry about draw call constraints. It works by intelligently streaming and processing only the detail you can perceive.
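
Enabling Nanite on an imported mesh is a checkbox in the editor, and it can also be scripted. A minimal editor-only sketch, assuming UE 5.0’s UStaticMesh::NaniteSettings member (later versions expose this differently):

```cpp
#if WITH_EDITOR
#include "Engine/StaticMesh.h"

// Turn on Nanite for a high-poly mesh instead of baking normal maps or
// hand-authoring LOD chains.
void EnableNanite(UStaticMesh* Mesh)
{
    Mesh->NaniteSettings.bEnabled = true;
    Mesh->PostEditChange();   // rebuilds the mesh, generating its Nanite representation
    Mesh->MarkPackageDirty(); // ensures the change is saved with the asset
}
#endif
```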

To show very detailed geometry at its best, you need very detailed shadows. That’s where Virtual Shadow Maps (VSMs) come in. Essentially very high-resolution shadow maps, VSMs split the shadow maps into tiles, which are allocated and rendered only as needed to shade on-screen pixels based on an analysis of the depth buffer—much like Nanite.

A lot of a scene’s realism comes from the way it is lit. It’s long been possible to create acceptable real-time lighting and reflections with a lot of elbow grease—authoring light map UVs, baking light maps, and placing reflection captures, for example. In Unreal Engine 5, Lumen changes all of that. With the fully dynamic global illumination and reflections system, indirect lighting immediately reacts to changes to direct lighting or geometry—for example, changing the sun’s angle with the time of day, turning on a flashlight, or opening an exterior door. The system renders diffuse interreflection with infinite bounces and indirect specular reflections in huge, detailed environments, at scales ranging from kilometers to millimeters. There’s even support for emissive materials.
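
Lumen and Virtual Shadow Maps are normally switched on in Project Settings, but as a sketch, the same toggles can be flipped from code via their console variables (cvar names as of UE 5.0; treat them as assumptions):

```cpp
// Enable Lumen GI, Lumen reflections, and Virtual Shadow Maps by console
// variable, mirroring what the Project Settings UI writes to config.
#include "HAL/IConsoleManager.h"

void EnableDynamicLightingStack()
{
    auto SetCVar = [](const TCHAR* Name, int32 Value)
    {
        if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(Name))
        {
            CVar->Set(Value);
        }
    };

    SetCVar(TEXT("r.DynamicGlobalIlluminationMethod"), 1); // 1 = Lumen
    SetCVar(TEXT("r.ReflectionMethod"), 1);                // 1 = Lumen
    SetCVar(TEXT("r.Shadow.Virtual.Enable"), 1);           // Virtual Shadow Maps
}
```
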
Since Quixel joined the Epic Games family in 2019, the entire Quixel Megascans library—the world's largest library of AAA, cinema-quality assets based on real-world scan data—has been free for all use with Unreal Engine. In UE5, we went a step further. Quixel Bridge is now built right into the Unreal Editor, putting thousands of extremely high-quality assets—including materials, buildings, environments, props, plants, and most recently trees—at your fingertips. It’s simply a matter of drag and drop.

Can’t find exactly what you’re looking for in the library? Now you can scan whatever you want yourself. RealityScan is a new free 3D scanning app developed by CapturingReality—another recent addition to the Epic Games family—and Quixel. RealityScan is currently in limited beta, with Early Access to follow later this year.

When Lumen, Nanite, VSMs, and data captured from reality are combined and used alongside existing features like Volumetric Clouds and the Water System, you have the potential to create worlds of astonishing realism that will completely immerse your end users.

Semantically rich experiences

So now it’s big, and it’s beautiful. But is it smart? After all, how smart does a game engine need to be?

Time to leave the misconceptions at the door. Unreal Engine 5 launches with a remarkable selection of Beta and Experimental features that are paving the way for you to build the next generation of scenario generators or exercise configurators to train both humans and machines.
 

Artificial intelligence (AI) and logic 

Take artificial intelligence. New features in UE5 give you the ability to create more believable AI agents than ever before. MassEntity provides a framework for data-oriented calculations that can be used where performance is key—including the simulation of tens of thousands of AI agents in the scene. In addition, there are Smart Objects—a collection of objects placed in a level that AI agents and players can interact with. This system is easily configurable and can add an unprecedented level of interactivity to your scene.
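
To give a flavor of that data-oriented style, here’s a minimal sketch of a Mass fragment and processor, assuming UE 5.0’s experimental MassEntity API (which has changed in later releases); FVelocityFragment and the movement step are hypothetical, and processor registration is omitted:

```cpp
// A data-oriented fragment plus a processor that advances thousands of agents
// per frame, iterating chunk by chunk for cache-friendly access.
#include "MassProcessor.h"
#include "MassCommonFragments.h"
#include "SimAgentMovementProcessor.generated.h"

USTRUCT()
struct FVelocityFragment : public FMassFragment
{
    GENERATED_BODY()
    FVector Velocity = FVector::ZeroVector;
};

UCLASS()
class USimAgentMovementProcessor : public UMassProcessor
{
    GENERATED_BODY()

protected:
    virtual void ConfigureQueries() override
    {
        // Only entities carrying both fragments are processed.
        EntityQuery.AddRequirement<FTransformFragment>(EMassFragmentAccess::ReadWrite);
        EntityQuery.AddRequirement<FVelocityFragment>(EMassFragmentAccess::ReadOnly);
    }

    virtual void Execute(UMassEntitySubsystem& EntitySubsystem, FMassExecutionContext& Context) override
    {
        EntityQuery.ForEachEntityChunk(EntitySubsystem, Context, [](FMassExecutionContext& Ctx)
        {
            const TArrayView<FTransformFragment> Transforms = Ctx.GetMutableFragmentView<FTransformFragment>();
            const TConstArrayView<FVelocityFragment> Velocities = Ctx.GetFragmentView<FVelocityFragment>();

            for (int32 i = 0; i < Ctx.GetNumEntities(); ++i)
            {
                // Simple Euler step over this chunk of agents.
                Transforms[i].GetMutableTransform().AddToTranslation(
                    Velocities[i].Velocity * Ctx.GetDeltaTimeSeconds());
            }
        });
    }

    FMassEntityQuery EntityQuery;
};
```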
 
There are also significant improvements to AI agent navigation, with features such as Mass Avoidance and Zone Graph. Mass Avoidance provides high-performance avoidance for any Entity using the MassEntity system, and Zone Graph provides efficient long-distance navigation via specific navigation flows.
 
Then, there’s State Tree, Unreal Engine’s scalable and general-purpose hierarchical state machine that combines the Selectors from behavior trees with States and Transitions from state machines. With it, you can create highly performant logic that stays flexible and organized.

Machine learning (ML)

Autonomous systems that use Unreal Engine to generate ground-truth data are usually paired with their own neural networks. UE5 introduces Neural Network Inference (NNI), a native plugin for evaluating neural networks in real time inside Unreal Engine, enabling developers to directly integrate models from standard ML training frameworks.

This plugin provides the foundation for features such as the ML Deformer system, which enables very high-resolution vertex offset data to be compressed through an ML network and played back in real time, as well as for many other ML-based approaches to solving development challenges, including animation, ML-based AI, camera tracking, and much more.

NNI supports the industry-standard ONNX model format and can run any model exported as ONNX from standard ML training frameworks such as PyTorch, TensorFlow, and MXNet. This enables users to take their ML models from anywhere and run them directly in the engine. The team collaborated closely with Microsoft to use their ONNX Runtime project as the core of the NNI plugin’s inference system.
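
A minimal sketch of that workflow, assuming the experimental UNeuralNetwork API that ships with the NNI plugin in UE 5.0 (the path, device choice, and tensor shapes are hypothetical):

```cpp
// Load an ONNX model and run synchronous inference, returning the first
// output tensor as a flat float array.
#include "NeuralNetwork.h"

TArray<float> RunOnnxModel(const FString& OnnxPath, const TArray<float>& InputTensor)
{
    UNeuralNetwork* Network = NewObject<UNeuralNetwork>();
    if (!Network->Load(OnnxPath))
    {
        return {};
    }

    Network->SetDeviceType(ENeuralDeviceType::GPU); // or CPU where GPU isn't supported

    Network->SetInputFromArrayCopy(InputTensor); // copy input, sized to the model's input shape
    Network->Run();                              // blocking, real-time evaluation
    return Network->GetOutputTensor().GetArrayCopy<float>();
}
```
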
These are just some of the highlights of Unreal Engine 5 that bring new potential to the simulation industry. You can view all of the new features in the release notes.

Want to discuss what you’ve seen and how it applies to your upcoming projects? We’d love to chat. Get in touch below, and let’s start talking.
