Images courtesy of Digital Twin Cities Centre

Visualizing the invisible: the sound of real and virtual worlds

Ken Pimentel | March 8, 2021
Real-time technology is most often associated with the instantaneous rendering of visuals: the stunning backdrops of The Mandalorian or photorealistic digital humans, for example.

But game engines like Unreal Engine are also a powerful tool for simulating and visualizing an invisible energy: sound. This capability can be applied in interesting ways for the AEC industry.

Most audio simulation efforts use specialized tools that can take hours of processing to produce a result. These are not real-time simulations: any tweak to the parameters requires hours of reprocessing. It's analogous to offline rendering, where a path tracer computing the bounces of light in a scene can take minutes or hours per frame.

In a similar way, sound simulators compute how sound waves are reflected and refracted by the materials they encounter. The results from these simulators can be brought into Unreal Engine so the soundscape can be explored using the engine's graphics and audio capabilities together. Cundall is one company currently exploring Unreal Engine's advanced sound capabilities for real-time simulation.
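To make that concrete, here is a toy sketch in C++ of the core step an acoustic ray tracer performs: bouncing a ray off a surface and attenuating its energy by the material's absorption coefficient. It is purely illustrative and not drawn from any of the tools mentioned in this article.

```cpp
// A toy sketch of the core step in acoustic ray tracing; purely
// illustrative, not taken from any simulator mentioned in this article.
struct Vec3 { float x, y, z; };

// Mirror an incoming direction D about a surface normal N (assumed unit
// length), using the standard reflection formula R = D - 2(D.N)N.
static Vec3 Reflect(const Vec3& D, const Vec3& N)
{
    const float Dot = D.x * N.x + D.y * N.y + D.z * N.z;
    return { D.x - 2.0f * Dot * N.x,
             D.y - 2.0f * Dot * N.y,
             D.z - 2.0f * Dot * N.z };
}

// One bounce: the surface absorbs a fraction Alpha of the ray's energy
// (Alpha is the material's absorption coefficient: 0 = fully reflective,
// 1 = fully absorbent); the remainder continues in the reflected direction.
static float Bounce(float Energy, float Alpha,
                    Vec3& Direction, const Vec3& SurfaceNormal)
{
    Direction = Reflect(Direction, SurfaceNormal);
    return Energy * (1.0f - Alpha);
}
```

Production simulators repeat this operation for thousands of rays, per frequency band, and also model diffraction and scattering, which is why offline tools need hours of processing.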

It won't be long before real-time audio simulations can be used to demonstrate how different surface finishes affect speech intelligibility in a lecture theater, how music sounds in an auditorium, or how the choice of finishes in a shopping mall promotes a sense of calm.

At the Digital Twin Cities Centre (DTCC), hosted by Chalmers University of Technology, researchers are interested in visualizing the invisible. For them, visualizing the behavior of sound waves can help improve urban planning by making complex phenomena understandable through powerful visualization methods.

Real-time technology for real-world acoustics 

Cundall is a global consultancy that offers engineering services across a wide range of sectors, from residential to healthcare to aviation. The list of services it provides is extensive, encompassing civil engineering, smart buildings, sustainability, and acoustic engineering, to name a few.
Image courtesy of ©Tim Cornbill (Left) / Image courtesy of ©Mark Forrer (Right)
Andrew Parkin is Partner and Global Head of Acoustics at the company. As an acoustics consultant, his role is to help clients and stakeholders understand how sound will behave in spaces that are often yet to be built or refurbished. “By optimizing the accuracy of acoustic predictions during a building design process, we’re able to minimize the disconnect between a client’s expectations and eventual reality—game engines have a key part to play, as they help us do this,” he explains. 

When Cundall is designing a space, its advice covers everything from internal finishes to the geometry of the room. Clients and stakeholders need to understand the reasoning behind these decisions before they can buy into Cundall's recommendations.

Recommendations based on technical data, such as numerical reverberation time, mean little to anyone except acousticians or particularly well-informed laypeople. Being able to communicate in an experiential way instead is a huge step towards full engagement.
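For a sense of what a number like that means, acousticians often estimate reverberation time (RT60) with the classic Sabine formula, RT60 ≈ 0.161 × V / A, where V is the room volume and A its total absorption. The quick calculation below is a textbook illustration, not Cundall's prediction method, and the room dimensions and absorption coefficients are invented for the example.

```cpp
// Textbook Sabine estimate of reverberation time; the room and the
// absorption coefficients below are made up for illustration.
#include <cstdio>

int main()
{
    // A 12 m x 8 m x 3 m lecture room.
    const double Volume = 12.0 * 8.0 * 3.0; // m^3

    // Total absorption A: sum of surface area x absorption coefficient.
    const double Absorption =
        96.0  * 0.30 +  // carpeted floor
        96.0  * 0.02 +  // painted concrete ceiling
        120.0 * 0.05;   // plastered walls

    // Time for sound to decay by 60 dB after the source stops.
    const double RT60 = 0.161 * Volume / Absorption;
    std::printf("RT60 = %.2f s\n", RT60); // about 1.3 s
    return 0;
}
```

An acoustician reads 1.3 seconds instantly as too reverberant for a lecture room; for everyone else, hearing the space is far more persuasive, which is exactly the gap Cundall's system targets.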

This is where Cundall's real-time experiential system, Virtual Acoustic Reality (VAR), comes in. The company's original system for simulating audio uses CATT-Acoustic for audio prediction, the CATT-Walker plugin for walkthroughs, a game engine for front-end graphics, and an Oculus Rift for visual output, with a high-quality sound card and noise-cancelling headphones for audio.
While the audio simulations this system produces are highly accurate, the system itself is clunky, slow, and very processor-hungry. It requires pre-analysis of a space based on a fixed set of internal finishes. Every time a change is made to the design, the model needs to be taken offline and recalculated, which takes significant time and effort; in the world of consultancy, time is money.

This is fine for presenting a final design, but it is not conducive to an iterative design process in which changes are made often and various options need to be explored and discounted quickly. Being able to review how those changes will sound in a dynamic, accurate fashion can greatly enhance the design and engagement process.

Dynamic audio simulations are so complex, however, that it is not currently possible to run them accurately in real time, hence the requirement to pre-bake. By moving away from traditional acoustic modelling software to a more gamified solution, Cundall hopes to streamline its processes and optimize processor use, enabling more to be done in real time.

The ultimate goal is to get real-time simulations as close in accuracy to proprietary software as possible, with minimal or zero pre-baking, along with the ability to make changes to the model on the fly and get results instantly, rather than up to 24 hours later, as is the case at present.

It's with this in mind that Cundall has been experimenting with a new system based on Unreal Engine. “We would ideally like to streamline the system so that it uses a single platform—Unreal Engine—which will make it quicker and more stable,” says Parkin.

The Unreal Marketplace was Cundall’s gateway into the engine. “The Marketplace was the instigating element that brought us into the Unreal world as a business, initially utilizing third-party plugins to research the potential of the engine for development of the VAR system,” explains Parkin.

Parkin's team are acousticians rather than computer programmers, and the Blueprint visual scripting system has given them the ability to use virtually the full range of concepts and tools generally available only to programmers. “The Blueprint visual scripting system has proven very intuitive to understanding the mechanics of creating interactive content,” he says. “This is especially important for acoustic consultants such as ourselves that may not have a coding background.”

Cundall’s experimentation with Unreal Engine as a platform for acoustic simulation is only just beginning. Early tests have shown huge benefits in speed and performance, with further work needed to match the levels of accuracy achieved with the old system. “The ultimate goal for us would be to have a virtual model of a building that accurately portrays how a building or room will sound when in the design stage,” says Parkin. “Being then able to press a button and have the internal finishes, or room shape change, with a corresponding instant change in acoustic conditions, would be awesome.”

Visualizing sound for urban planning

Real-time technology not only provides opportunities for assessing sound in internal spaces; it can also be used to visualize the behavior of sound waves in external environments.
Images courtesy of Digital Twin Cities Centre
The Digital Twin Cities Centre (DTCC) in Sweden is one of the institutions at the forefront of smart city research. Its main focus is on creating tools and methods that enable a wide range of applications, including for the AEC industry. Data is visualized in its Digital Twin City Platform, which is built on Unreal Engine.

One type of data the DTCC team analyzes is sound. “As sound is invisible to the eye, these visualizations can help stakeholders understand potential issues related to the sound environment, which can be used by planners in urban planning processes,” says Fabio Latino, Design Lead at the DTCC.

How sound behaves might not be the first consideration that comes to mind when you think about designing city spaces—but acoustics could play a big part in building better urban environments in the future. “Acoustics have an impact on our health and wellbeing, although it tends to fall low on the priority list of urban developers, and its consequences are often not fully understood,” explains Latino.

In conversations with stakeholders involved in urban planning, the ability to visualize how different transport solutions could change the urban soundscape helps them understand the impact of those solutions on the environment and human health. This can in turn contribute to better design decisions.
Images courtesy of Digital Twin Cities Centre
Sanjay Somanath is a PhD student at Chalmers University of Technology. His research focuses on developing digital tools for neighbourhood planning, and his work with the DTCC involves processing GIS data into Unreal Engine. “The goal is to visualize noise-analysis data for different urban scenarios,” says Somanath. “We created a custom material pipeline within the landscape material to accept a color scale and a texture to then visualize this 2D data onto our digital twin model within Unreal Engine.”
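As a rough sketch of what driving such a pipeline from code might look like, the snippet below uses Unreal Engine's dynamic material instance API to feed a 2D result texture and a color scale into a material at runtime. The function and the parameter names (“NoiseMap”, “ColorScale”) are hypothetical; the DTCC's actual material graph and naming aren't public.

```cpp
// A hypothetical sketch, not the DTCC's code: push 2D noise-analysis
// results into a material at runtime via a dynamic material instance.
#include "Components/PrimitiveComponent.h"
#include "Engine/Texture2D.h"
#include "Materials/MaterialInstanceDynamic.h"

void ApplyNoiseOverlay(UPrimitiveComponent* Terrain,
                       UTexture2D* NoiseData,
                       FLinearColor ColorScale)
{
    if (!Terrain || !NoiseData)
    {
        return;
    }

    // Wrap the terrain's material in a dynamic instance so its parameters
    // can be changed at runtime without recompiling the material.
    UMaterialInstanceDynamic* MID = Terrain->CreateDynamicMaterialInstance(0);
    if (!MID)
    {
        return;
    }

    // The texture holds the simulated sound levels for one urban scenario;
    // the color scale tells the material graph how to map levels to a
    // gradient. Parameter names must match those in the material graph.
    MID->SetTextureParameterValue(TEXT("NoiseMap"), NoiseData);
    MID->SetVectorParameterValue(TEXT("ColorScale"), ColorScale);
}
```

Keeping the mapping in material parameters means a new scenario is just a texture swap: no geometry changes and no re-import.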

The researchers explored a number of options before they landed on Unreal Engine as their real-time platform of choice. “Apart from the state-of-the-art technical features Unreal Engine provides, Epic has put a lot of work into making the software a truly unique ecosystem that caters to the needs of the AEC industry,” says Vasilis Naserentin, Lead Developer at the DTCC. “We sit in the interface between research and development for AEC, so it is of great importance to be able to tend to both worlds.”

Like Cundall, the researchers at Chalmers make good use of Blueprint. The visual scripting system acted as the glue between the different systems they built, enabling those systems to communicate with each other. “As an immediate result, we’re able to test out new ideas and develop prototypes to better convey information to clients without spending too much development time,” explains Orfeas Eleftheriou, Unreal Engine Developer at the DTCC.
Images courtesy of Digital Twin Cities Centre
The team was also drawn to the support Unreal Engine offers for various tools used for architectural design. Using Datasmith, the researchers were able to quickly and easily import data of different types and sources. “Datasmith enabled us to focus solely on building experiences, instead of worrying about how to handle different types of data,” says Eleftheriou.

The ability to render high-resolution screenshots and videos out of the box appealed to the researchers, as it enabled them to showcase work as soon as possible. “Sequencer has played a significant role in our projects,” confirms Eleftheriou. “When we wanted to test out the accuracy of our tools and showcase various features, we were able to render out high-fidelity videos, which helped us establish better communication with people of different levels of expertise.”

Finally, real-time technology enables the DTCC to prototype and iterate on ideas faster. “I have mostly worked with static images as an initial design tool to envision ideas for interactive environments,” says Beata Stahre Wästberg, Associate Professor and Senior Researcher at the Department of Computer Science and Engineering at Chalmers. “Unreal Engine has enabled us to include the interactive aspects early in the design process, which has helped us develop our ideas and be more effective.”

Improving auditory experiences for the future

Cundall and the DTCC at Chalmers University of Technology are two organizations at the cutting edge of sound simulation and visualization. Their work illustrates how real-time technology can be a powerful tool to improve immersion in virtual worlds, and enhance auditory experiences in the real one.

As pioneers like these further explore the potential of real-time technology for creating auditory experiences, we’re sure to see more immersive virtual worlds and better-designed urban spaces in the future. 
