<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Artificial Audio</title><link>https://artificial-audio.github.io/</link><atom:link href="https://artificial-audio.github.io/index.xml" rel="self" type="application/rss+xml"/><description>Artificial Audio</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 24 Oct 2022 00:00:00 +0000</lastBuildDate><image><url>https://artificial-audio.github.io/media/icon_hu8650941249725304243.png</url><title>Artificial Audio</title><link>https://artificial-audio.github.io/</link></image><item><title>Example Event</title><link>https://artificial-audio.github.io/event/example/</link><pubDate>Sat, 01 Jun 2030 13:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/event/example/</guid><description>&lt;p>Slides can be added in a few ways:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Create&lt;/strong> slides using Wowchemy&amp;rsquo;s &lt;a href="https://docs.hugoblox.com/managing-content/#create-slides" target="_blank" rel="noopener">&lt;em>Slides&lt;/em>&lt;/a> feature and link to them using the &lt;code>slides&lt;/code> parameter in the front matter of the talk file&lt;/li>
&lt;li>&lt;strong>Upload&lt;/strong> an existing slide deck to &lt;code>static/&lt;/code> and link to it using the &lt;code>url_slides&lt;/code> parameter in the front matter of the talk file&lt;/li>
&lt;li>&lt;strong>Embed&lt;/strong> your slides (e.g. Google Slides) or presentation video on this page using &lt;a href="https://docs.hugoblox.com/writing-markdown-latex/" target="_blank" rel="noopener">shortcodes&lt;/a>.&lt;/li>
&lt;/ul>
&lt;p>Further event details, including page elements such as image galleries, can be added to the body of this page.&lt;/p></description></item><item><title>Reverberation Enhancement System</title><link>https://artificial-audio.github.io/portfolio/res/</link><pubDate>Fri, 19 Dec 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/portfolio/res/</guid><description>&lt;p>Combining digital signal processing with acoustic feedback to transform the acoustics of any space.&lt;/p>
&lt;hr>
&lt;h1 id="concept">Concept&lt;/h1>
&lt;p>A reverberation enhancement system is an active system capable of controlling the room acoustics of a physical space. Microphones capture the sound present in the room, a digital signal processor enhances the signals, and loudspeakers reproduce them back into the room. This pipeline simulates changes in the geometry and absorption properties of the original space.&lt;/p>
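&lt;p>As a toy illustration of this capture&amp;ndash;process&amp;ndash;reproduce loop (a sketch with illustrative parameters, not the actual system), a single-channel model in which the processor is reduced to a delay and a gain shows how the loop gain controls the added reverberation:&lt;/p>

```python
# Toy single-channel reverberation enhancement loop (illustrative only):
# microphone signal -> delay and gain (the "DSP") -> loudspeaker -> back to mic.
def enhanced_tail(loop_gain, delay=100, n=4000):
    """Impulse response of the feedback loop y[t] = x[t] + g * y[t - delay]."""
    y = [0.0] * n
    y[0] = 1.0  # unit impulse played into the room
    for t in range(delay, n):
        y[t] += loop_gain * y[t - delay]
    return y

def tail_energy(y, start=1000):
    """Energy remaining in the late part of the response."""
    return sum(v * v for v in y[start:])

# A higher loop gain sustains the recirculating sound longer,
# i.e. the system adds reverberation.
assert tail_energy(enhanced_tail(0.8)) > tail_energy(enhanced_tail(0.4))
```

&lt;p>Loop gains approaching 1 sustain the sound indefinitely and eventually drive the loop unstable, which is why stability control is central to real reverberation enhancement systems.&lt;/p>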
&lt;video controls width="100%">
&lt;source src="https://dl.dropboxusercontent.com/s/1ti44nyluih25nr/RES_MozartVideo_Long_02.mp4?dl=0" type="video/mp4">
Your browser does not support the video tag.
&lt;/video>
&lt;p>The video was shot in the acoustics lab &lt;em>Mozart&lt;/em> at the Fraunhofer IIS, Erlangen, Germany. &lt;br>
Additional information regarding the specific reverberation enhancement system adopted can be found &lt;a href="https://www.sebastianjiroschlecht.com/project/reverberationenhancement/#fn:1" target="_blank" rel="noopener">here&lt;/a>.&lt;/p>
&lt;hr>
&lt;h1 id="music">Music&lt;/h1>
&lt;p>Beyond engineering and room acoustics, reverberation enhancement systems unlock new possibilities for artistic expression.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;a href="https://vimeo.com/783198885?fl=pl&amp;amp;fe=vl" target="_blank" rel="noopener">&lt;strong>Kaikuja Säiliöstä&lt;/strong>&lt;/a>&lt;/p>
&lt;p>Corresponding author: Andrea Mancianti.&lt;/p>
&lt;p>A collection of site-specific immersive sound studies for brass ensemble and live electronics, written for Öljysäiliö 468, a decommissioned oil tank in East Helsinki.&lt;/p>
&lt;p>Additional information regarding the musical piece and the creative process can be found &lt;a href="https://andreamancianti.com/project/kaikuja-sailiosta/" target="_blank" rel="noopener">here&lt;/a>.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://www.youtube.com/watch?v=9yNQbnhjIkk" target="_blank" rel="noopener">&lt;strong>Paradosso&lt;/strong>&lt;/a>&lt;/p>
&lt;p>Corresponding author: Eduard Tampu.&lt;/p>
&lt;p>An exploration of the influence of reverberation enhancement systems in the production and performance of electroacoustic music.&lt;/p>
&lt;p>Additional information regarding the study&lt;a href="#ref7">[7]&lt;/a> that led to the creation of this musical piece can be found &lt;a href="https://github.com/tampueduard/artisticActiveAcousticEnhancement" target="_blank" rel="noopener">here&lt;/a>.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h1 id="tools">Tools&lt;/h1>
&lt;p>Open-source resources to facilitate the study and use of reverberation enhancement systems.&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;a href="https://github.com/GianMarcoDeBortoli/TVFDN-plugin" target="_blank" rel="noopener">&lt;strong>Time-Varying Feedback Delay Network plugin&lt;/strong>&lt;/a>&lt;/p>
&lt;p>Real-time audio plugin implementing a Time-Varying Feedback Delay Network.&lt;/p>
&lt;p>This plugin is a multi-input multi-output reverberator based on a standard feedback delay network with a time-varying feedback matrix. This architecture was originally designed and proposed for reverberation enhancement systems&lt;a href="#ref2">[2]&lt;/a>. It is built in C++ using the &lt;a href="https://github.com/juce-framework/JUCE" target="_blank" rel="noopener">JUCE&lt;/a> framework.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://github.com/GianMarcoDeBortoli/DIY-RES" target="_blank" rel="noopener">&lt;strong>DIY-RES&lt;/strong>&lt;/a>&lt;/p>
&lt;p>Setting up a reverberation enhancement system can be challenging. DIY-RES is a guide to installing a system using only open-source software. The proposed installation uses the &lt;em>Time-Varying Feedback Delay Network plugin&lt;/em> as the system DSP.&lt;/p>
&lt;p>DIY-RES offers:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Installation instructions&lt;/strong>&lt;/p>
&lt;p>A written guide to setting up the transducers and using the &lt;em>Time-Varying Feedback Delay Network plugin&lt;/em> in the installation.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Signal routing templates&lt;/strong>&lt;/p>
&lt;p>Max/MSP and Reaper templates for routing signals from the microphones through the plugin to the loudspeakers.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://zenodo.org/records/15737243" target="_blank" rel="noopener">&lt;strong>DataRES&lt;/strong>&lt;/a>&lt;/p>
&lt;p>Dataset for research on reverberation enhancement systems.&lt;/p>
&lt;p>Measurements from rooms with installed reverberation enhancement systems have been collected in a single open database&lt;a href="#ref8">[8]&lt;/a>. This work facilitates the study of real-world system implementations and improves result reproducibility.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://github.com/GianMarcoDeBortoli/PyRES" target="_blank" rel="noopener">&lt;strong>PyRES&lt;/strong>&lt;/a>&lt;/p>
&lt;p>Python library for reverberation enhancement system development and simulation.&lt;/p>
&lt;p>PyRES is open-source software for testing digital signal processing architectures in reverberation enhancement systems&lt;a href="#ref8">[8]&lt;/a>. It interfaces with DataRES, enabling simulation of real-world systems. With &lt;a href="https://github.com/gdalsanto/flamo" target="_blank" rel="noopener">FLAMO&lt;/a> as a back-end, custom DSP architectures can be built as chains of elementary processing blocks. Each block is differentiable, so any such architecture can be trained in a differentiable-DSP (DDSP) fashion. &lt;br>
PyRES includes additional functionalities for visualization, evaluation, and auralization.&lt;/p>
&lt;/li>
&lt;/ul>
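&lt;p>To illustrate the idea behind the time-varying feedback matrix (a minimal sketch, not the actual plugin implementation), here is a two-delay-line FDN whose orthogonal feedback matrix is a rotation with a slowly varying angle:&lt;/p>

```python
import numpy as np

def tv_fdn_impulse(n_samples=2000, delays=(149, 211), rate=0.001):
    """Tiny two-delay-line FDN with a time-varying orthogonal feedback
    matrix (a rotation whose angle drifts over time). Illustrative
    sketch only; the TVFDN plugin follows the designs in refs [2]/[3]."""
    buffers = [np.zeros(d) for d in delays]
    idx = [0, 0]
    out = np.zeros(n_samples)
    x = np.zeros(n_samples)
    x[0] = 1.0  # unit impulse input
    g = 0.97    # per-recirculation attenuation, sets the decay time
    for t in range(n_samples):
        taps = np.array([buffers[i][idx[i]] for i in range(2)])
        out[t] = taps.sum()
        theta = rate * t  # slowly varying rotation angle
        Q = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])  # orthogonal at all t
        fb = g * (Q @ taps) + x[t]
        for i in range(2):
            buffers[i][idx[i]] = fb[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return out
```

&lt;p>Because the matrix stays orthogonal at every instant, the recirculating energy is controlled solely by the gain &lt;code>g&lt;/code>, while the time variation continuously re-mixes the delay lines.&lt;/p>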
&lt;hr>
&lt;h1 id="publications">Publications&lt;/h1>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align: left">&lt;/th>
&lt;th style="text-align: left">Year&lt;/th>
&lt;th style="text-align: left">Authors&lt;/th>
&lt;th style="text-align: left">Article &amp;amp; accompanying material&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref1">[1]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2012&lt;/td>
&lt;td style="text-align: left">Sebastian J. Schlecht &amp;amp; Emanuël A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://ieeexplore.ieee.org/abstract/document/6376933" target="_blank" rel="noopener">Reverberation enhancement from a feedback delay network perspective&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref2">[2]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2015&lt;/td>
&lt;td style="text-align: left">Sebastian J. Schlecht &amp;amp; Emanuël A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://aes2.org/publications/elibrary-page/?id=17831" target="_blank" rel="noopener">Reverberation Enhancement Systems with Time-Varying Mixing Matrices&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref3">[3]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2015&lt;/td>
&lt;td style="text-align: left">Sebastian J. Schlecht &amp;amp; Emanuël A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://pubs.aip.org/asa/jasa/article/138/3/1389/680169" target="_blank" rel="noopener">Time-varying feedback matrices in feedback delay networks and their application in artificial reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref4">[4]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2016&lt;/td>
&lt;td style="text-align: left">Sebastian J. Schlecht &amp;amp; Emanuël A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://pubs.aip.org/asa/jasa/article/140/1/601/604219" target="_blank" rel="noopener">The stability of multichannel sound systems with time-varying mixing matrices&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref5">[5]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">Gian Marco De Bortoli, Karolina Prawda, &amp;amp; Sebastian J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://aes2.org/publications/elibrary-page/?id=22773" target="_blank" rel="noopener">Active Acoustics with a Phase Cancelling Modal Reverberator&lt;/a> &lt;br> &lt;a href="https://gianmarcodebortoli.github.io/AA-modalReverberator/" target="_blank" rel="noopener">Accompanying material&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref6">[6]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">Gian Marco De Bortoli, Gloria Dal Santo, &lt;br> et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://www.dafx.de/paper-archive/2024/papers/DAFx24_paper_64.pdf" target="_blank" rel="noopener">Differentiable Active Acoustics: Optimizing Stability via Gradient Descent&lt;/a> &lt;br> &lt;a href="http://research.spa.aalto.fi/publications/papers/dafx24-diff-aa/" target="_blank" rel="noopener">Accompanying material&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref7">[7]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">Eduard Tampu&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://aaltodoc.aalto.fi/items/49215eb1-8399-4cca-9db1-bf16fd699a5d" target="_blank" rel="noopener">Active Acoustics: A compositional and performative approach to regenerative systems&lt;/a> &lt;br> &lt;a href="https://github.com/tampueduard/artisticActiveAcousticEnhancement" target="_blank" rel="noopener">Accompanying material&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref8">[8]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2025&lt;/td>
&lt;td style="text-align: left">Gian Marco De Bortoli, Karolina Prawda, Philip Coleman, &amp;amp; Sebastian J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://dafx.de/paper-archive/2025/DAFx25_paper_47.pdf" target="_blank" rel="noopener">DataRES and PyRES: A Room Dataset and a Python Library for Reverberation Enhancement System Development, Evaluation, and Simulation&lt;/a> &lt;br> &lt;a href="https://gianmarcodebortoli.github.io/PyRES/" target="_blank" rel="noopener">Accompanying material&lt;/a>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr></description></item><item><title>Acoustic Illusions for Extended Realities</title><link>https://artificial-audio.github.io/portfolio/acousticillusion/</link><pubDate>Sat, 06 Sep 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/portfolio/acousticillusion/</guid><description>&lt;p>Blend real and virtual sounds seamlessly by creating binaural illusions of acoustic sources.&lt;/p>
&lt;hr>
&lt;h1 id="concept">Concept&lt;/h1>
&lt;p>Augmented and mixed reality systems overlay virtual sound onto the physical world. For the illusion to hold, virtual sources must be acoustically indistinguishable from real ones — a challenge that demands accurate binaural rendering, precise spatial reproduction, and an understanding of human perception.&lt;/p>
&lt;p>Our research investigates when and why listeners accept virtual sounds as real, developing evaluation paradigms and rendering techniques that push the boundaries of auditory plausibility.&lt;/p>
&lt;hr>
&lt;h1 id="transfer-plausibility">Transfer-Plausibility&lt;/h1>
&lt;p>We developed the concept of &lt;em>transfer-plausibility&lt;/em>&lt;a href="#ref7">[7]&lt;/a>&lt;a href="#ref13">[13]&lt;/a>: a rigorous framework for evaluating whether virtual sources are accepted as real when both real and virtual sounds coexist. This goes beyond traditional authenticity testing and captures the perceptual demands unique to AR/MR scenarios. Our 3AFC transfer-plausibility test proved more sensitive than alternative evaluation methods, establishing it as a standard for AR audio research.&lt;/p>
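&lt;p>The chance level of a 3AFC (three-alternative forced choice) test is one in three, which makes listener scores straightforward to test against guessing. A minimal sketch (illustrative, not the analysis code used in the studies):&lt;/p>

```python
from math import comb

def binom_sf(k, n, p):
    """Binomial survival function: P(X at least k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With three alternatives, pure guessing yields 1/3 correct on average.
# E.g. 20 correct out of 30 trials: how unlikely is this under guessing?
p_value = binom_sf(20, 30, 1/3)
```

&lt;p>A small p-value here indicates the listener reliably told the virtual source apart from the real references, i.e. the rendering was not transfer-plausible.&lt;/p>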
&lt;hr>
&lt;h1 id="binaural-rendering-for-6-degrees-of-freedom">Binaural Rendering for 6 Degrees of Freedom&lt;/h1>
&lt;p>Rendering spatial audio for listeners who can freely move and rotate in a space requires processing recorded Ambisonics sound fields with distance and position information&lt;a href="#ref1">[1]&lt;/a>&lt;a href="#ref2">[2]&lt;/a>. Our work addresses source distance modeling and listener navigation through measured sound fields, enabling experiences such as &lt;a href="https://www.sebastianjiroschlecht.com/project/insidethequartet/" target="_blank" rel="noopener">&lt;strong>Inside the Quartet&lt;/strong>&lt;/a>, which places the listener inside a string quartet&lt;a href="#ref10">[10]&lt;/a>.&lt;/p>
&lt;p>Code: &lt;a href="https://leomccormack.github.io/sparta-site/docs/plugins/sparta-suite/#6dofconv" target="_blank" rel="noopener">&lt;strong>SPARTA 6DoFconv&lt;/strong>&lt;/a> — plugin for six-degrees-of-freedom convolution with spatial room impulse responses.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/thomas-mckenzie/srir_interpolation" target="_blank" rel="noopener">&lt;strong>SRIR Interpolation Toolkit&lt;/strong>&lt;/a> — perceptually informed interpolation of spatial room impulse responses between measurement positions.&lt;/p>
&lt;hr>
&lt;h1 id="latency--perceptual-thresholds">Latency &amp;amp; Perceptual Thresholds&lt;/h1>
&lt;p>Low-latency processing is critical for maintaining the auditory illusion in real-time AR. We characterized the latency limits of head-tracked binaural rendering systems and their impact on plausibility&lt;a href="#ref9">[9]&lt;/a>, providing practical guidelines for system design.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/ahihi/latency-analyzer" target="_blank" rel="noopener">&lt;strong>Latency Analyzer&lt;/strong>&lt;/a> — tools for measuring binaural rendering latency.&lt;/p>
&lt;hr>
&lt;h1 id="head-worn-device-transparency">Head-Worn Device Transparency&lt;/h1>
&lt;p>Wearing headphones or AR glasses disrupts the perception of real sounds. We developed methods for predicting perceptual transparency of head-worn devices&lt;a href="#ref8">[8]&lt;/a>, informing the design of passthrough processing that preserves natural listening.&lt;/p>
&lt;hr>
&lt;h1 id="audiovisual-congruence">Audiovisual Congruence&lt;/h1>
&lt;p>How do visual cues interact with spatial audio? We studied whether loudspeaker models or human avatars in VR affect localization performance&lt;a href="#ref11">[11]&lt;/a>, revealing the interplay between visual representation and spatial hearing accuracy.&lt;/p>
&lt;hr>
&lt;h1 id="room-acoustic-memory">Room Acoustic Memory&lt;/h1>
&lt;p>Can listeners remember and compare the acoustic character of spaces? Our experiments&lt;a href="#ref7">[7]&lt;/a> investigate how accurately listeners retain room acoustic impressions, informing how quickly AR systems must adapt when transitioning between environments.&lt;/p>
&lt;hr>
&lt;h1 id="experiences">Experiences&lt;/h1>
&lt;ul>
&lt;li>
&lt;p>&lt;a href="https://www.sebastianjiroschlecht.com/project/insidethequartet/" target="_blank" rel="noopener">&lt;strong>Inside the Quartet&lt;/strong>&lt;/a> — immersive spatial audio placing the listener inside a string quartet, demonstrating high-quality binaural rendering for musical performance&lt;a href="#ref10">[10]&lt;/a>.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://www.sebastianjiroschlecht.com/publication/SpaceWalkSound/" target="_blank" rel="noopener">&lt;strong>Space Walk&lt;/strong>&lt;/a> — a navigable virtual planetarium for the Oculus Quest with spatialized music&lt;a href="#ref4">[4]&lt;/a>, combining stereophonic and immersive sound spatialization.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h1 id="references">References&lt;/h1>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align: left">&lt;/th>
&lt;th style="text-align: left">Year&lt;/th>
&lt;th style="text-align: left">Authors&lt;/th>
&lt;th style="text-align: left">Article&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref1">[1]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2018&lt;/td>
&lt;td style="text-align: left">A. Plinge, S. J. Schlecht et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.22032/dbt.39955" target="_blank" rel="noopener">Six-degrees-of-freedom binaural audio reproduction of first-order Ambisonics&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref2">[2]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2019&lt;/td>
&lt;td style="text-align: left">O. S. Rummukainen, S. J. Schlecht &amp;amp; E. A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/vr.2019.8798177" target="_blank" rel="noopener">Perceptual study of near-field binaural audio rendering in 6DoF VR&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref3">[3]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2020&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen, S. J. Schlecht &amp;amp; T. Lokki&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/dafx20-fadefdn/" target="_blank" rel="noopener">Fade-in control for feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref4">[4]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2021&lt;/td>
&lt;td style="text-align: left">A. Mancianti, S. J. Schlecht et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.5281/zenodo.5717860" target="_blank" rel="noopener">Space Walk — visiting the solar system through an immersive sonic journey in VR&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref5">[5]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2021&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen, S. J. Schlecht &amp;amp; T. Lokki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0007048" target="_blank" rel="noopener">Perceptual roughness of spatially assigned sparse noise for rendering reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref6">[6]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen, S. J. Schlecht &amp;amp; T. Lokki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0013364" target="_blank" rel="noopener">Clearly audible room acoustical differences may not reveal where you are in a room&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref7">[7]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen, S. J. Schlecht et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/i3da21-motus/" target="_blank" rel="noopener">Transfer-plausibility of binaural rendering with different real-world references&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref8">[8]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">P. Lladó, T. McKenzie, N. Meyer-Kahlen &amp;amp; S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.17743/jaes.2022.0024" target="_blank" rel="noopener">Predicting perceptual transparency of head-worn devices&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref9">[9]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen, S. J. Schlecht &amp;amp; T. Lokki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.17743/jaes.2022.0089" target="_blank" rel="noopener">Latency analysis of binaural rendering systems&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref10">[10]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://www.sebastianjiroschlecht.com/project/insidethequartet/" target="_blank" rel="noopener">Inside the Quartet — spatial audio experience&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref11">[11]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">A. Hofmann, N. Meyer-Kahlen, S. J. Schlecht &amp;amp; T. Lokki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.17743/jaes.2022.0162" target="_blank" rel="noopener">Audiovisual congruence and localization in VR&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref12">[12]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen &amp;amp; S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0024960" target="_blank" rel="noopener">Directional distribution of the pseudo intensity vector in anisotropic late reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref13">[13]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen, S. J. Schlecht et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.17743/jaes.2022.0178" target="_blank" rel="noopener">Testing auditory illusions in AR: Plausibility, transfer-plausibility, and authenticity&lt;/a>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr></description></item><item><title>Feedback Delay Networks for Artificial Reverberation</title><link>https://artificial-audio.github.io/portfolio/fdn/</link><pubDate>Sat, 06 Sep 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/portfolio/fdn/</guid><description>&lt;p>The fastest and most versatile way to add reverberation to sound.&lt;/p>
&lt;hr>
&lt;h1 id="concept">Concept&lt;/h1>
&lt;p>Feedback delay networks (FDNs) are recursive filters that simulate the complex sound reflections in an acoustic space. A set of delay lines is connected through a feedback matrix, producing dense, natural-sounding reverberation from a compact set of parameters. Since their introduction, FDNs have become the standard building block for real-time artificial reverberators in games, music production, and spatial audio.&lt;/p>
&lt;p>Our research pushes FDN theory and practice forward — from the mathematical foundations of lossless and allpass designs to practical tools for colorless, high-quality reverberation.&lt;/p>
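&lt;p>The core structure can be sketched in a few lines (an illustrative toy, not code from any of the toolboxes referenced here): delay lines whose outputs are mixed by a scaled orthogonal feedback matrix and fed back to the inputs.&lt;/p>

```python
import numpy as np

def fdn_impulse_response(n=8000, delays=(241, 307, 397, 463), g=0.98):
    """Minimal four-line FDN sketch: delay lines recirculate through a
    scaled orthogonal (Hadamard) feedback matrix. Illustrative only."""
    H = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]]) / 2.0  # orthogonal: H @ H.T is the identity
    bufs = [np.zeros(d) for d in delays]
    idx = [0] * 4
    out = np.zeros(n)
    for t in range(n):
        taps = np.array([bufs[i][idx[i]] for i in range(4)])
        out[t] = taps.sum()            # output gains c = [1, 1, 1, 1]
        x = 1.0 if t == 0 else 0.0     # unit impulse input
        fb = g * (H @ taps) + x        # feedback matrix plus input gains
        for i in range(4):
            bufs[i][idx[i]] = fb[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return out
```

&lt;p>The orthogonal matrix keeps the loop lossless up to the scalar &lt;code>g&lt;/code>, which therefore sets the reverberation time, while the mutually incommensurate delay lengths build up echo density over time.&lt;/p>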
&lt;hr>
&lt;h1 id="lossless--allpass-design">Lossless &amp;amp; Allpass Design&lt;/h1>
&lt;p>We established the necessary and sufficient conditions for lossless FDNs&lt;a href="#ref4">[4]&lt;/a> and extended these results to allpass feedback delay networks&lt;a href="#ref11">[11]&lt;/a>, enabling precise spectral shaping of the reverb tail. Frequency-dependent Schroeder allpass filters&lt;a href="#ref7">[7]&lt;/a> add further control over the spectral envelope of the reverberation.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/SebastianJiroSchlecht/fdnToolbox" target="_blank" rel="noopener">&lt;strong>fdnToolbox&lt;/strong>&lt;/a> — comprehensive MATLAB toolbox for FDN design and analysis, released under GNU-GPL 3.0.&lt;/p>
&lt;hr>
&lt;h1 id="echo-density--mixing-time">Echo Density &amp;amp; Mixing Time&lt;/h1>
&lt;p>How quickly does an FDN build up a diffuse sound field? We developed an analytical characterization of echo density and mixing time in FDNs&lt;a href="#ref5">[5]&lt;/a>, critical perceptual factors for natural-sounding reverberation. This work enables designers to predict and control the perceptual onset of diffuseness.&lt;/p>
&lt;hr>
&lt;h1 id="scattering--delay-feedback-matrices">Scattering &amp;amp; Delay Feedback Matrices&lt;/h1>
&lt;p>Extending FDNs with scattering junctions&lt;a href="#ref9">[9]&lt;/a> and delay-embedded feedback matrices&lt;a href="#ref6">[6]&lt;/a> yields denser and more physically motivated reflections. These architectures bridge the gap between abstract delay networks and physical models of wave propagation.&lt;/p>
&lt;p>Demo: &lt;a href="https://www.audiolabs-erlangen.de/resources/2019-WASPAA-DFM-FDN/" target="_blank" rel="noopener">&lt;strong>Dense reverberation with delay feedback matrices&lt;/strong>&lt;/a> — interactive listening examples from the delay feedback matrix framework.&lt;/p>
&lt;hr>
&lt;h1 id="decorrelation">Decorrelation&lt;/h1>
&lt;p>Maximizing output signal decorrelation&lt;a href="#ref13">[13]&lt;/a> is essential for spatial audio and multichannel reverb rendering. Our analysis provides design guidelines for feedback matrices that produce uncorrelated output channels, directly applicable to surround and immersive audio production.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/Ion3rik/fdnDecorrelation" target="_blank" rel="noopener">&lt;strong>fdnDecorrelation&lt;/strong>&lt;/a> — implementation of decorrelation analysis for FDNs.&lt;/p>
&lt;hr>
&lt;h1 id="modal-decomposition">Modal Decomposition&lt;/h1>
&lt;p>We introduced a framework for decomposing FDNs into their constituent resonant modes&lt;a href="#ref8">[8]&lt;/a>, bridging the gap between delay-network and modal descriptions of room acoustics. This decomposition enables new analysis methods and connects FDN design to the physics of room resonances. Further work on modal excitation&lt;a href="#ref15">[15]&lt;/a> reveals how input signals interact with the FDN&amp;rsquo;s resonant structure.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/SebastianJiroSchlecht/FDNModalShapes" target="_blank" rel="noopener">&lt;strong>FDNModalShapes&lt;/strong>&lt;/a> — visualizing and sonifying modal excitation patterns in FDNs.&lt;/p>
&lt;hr>
&lt;h1 id="grouped-fdns">Grouped FDNs&lt;/h1>
&lt;p>Grouped FDNs with frequency-dependent coupling&lt;a href="#ref12">[12]&lt;/a> allow richer spectral and spatial control by connecting groups of delay lines through structured feedback. This architecture enables independent tuning of different frequency bands while maintaining the computational efficiency of FDNs.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/orchidas/Frequency-dependent-GFDN" target="_blank" rel="noopener">&lt;strong>Frequency-dependent GFDN&lt;/strong>&lt;/a> — implementation by Orchisama Das.&lt;/p>
&lt;hr>
&lt;h1 id="colorless-reverberation">Colorless Reverberation&lt;/h1>
&lt;p>Achieving a spectrally flat (&amp;ldquo;colorless&amp;rdquo;) reverb tail is a key design goal. Using differentiable signal processing, we optimize FDN parameters via gradient descent to minimize spectral coloration&lt;a href="#ref14">[14]&lt;/a>&lt;a href="#ref18">[18]&lt;/a>. Even tiny FDN configurations can produce high-quality colorless reverberation when properly optimized.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/gdalsanto/diff-fdn-colorless" target="_blank" rel="noopener">&lt;strong>diff-fdn-colorless&lt;/strong>&lt;/a> — differentiable optimization of FDN coloration.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/eurasip-colorless-fdn/" target="_blank" rel="noopener">&lt;strong>Colorless FDN listening examples&lt;/strong>&lt;/a> — audio comparisons of optimized configurations.&lt;/p>
&lt;hr>
&lt;h1 id="velvet-noise--non-exponential-decay">Velvet Noise &amp;amp; Non-Exponential Decay&lt;/h1>
&lt;p>Dark velvet noise sequences&lt;a href="#ref16">[16]&lt;/a> extend FDNs to model non-exponential reverberation decay, capturing the complex energy envelopes found in real rooms. The binaural dark velvet noise reverberator&lt;a href="#ref16">[16]&lt;/a> supports spatial rendering.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/Ion3rik/dark-velvet-noise-reverb" target="_blank" rel="noopener">&lt;strong>dark-velvet-noise-reverb&lt;/strong>&lt;/a> — an FDN-inspired reverberator using dark velvet noise sequences, with binaural support.&lt;/p>
&lt;p>Demo: &lt;a href="https://research.spa.aalto.fi/publications/papers/dafx24-bdvn/" target="_blank" rel="noopener">&lt;strong>Binaural DVN listening examples&lt;/strong>&lt;/a> — spatial reverb rendering demos.&lt;/p>
&lt;hr>
&lt;h1 id="tools">Tools&lt;/h1>
&lt;ul>
&lt;li>
&lt;p>&lt;a href="https://github.com/SebastianJiroSchlecht/fdnToolbox" target="_blank" rel="noopener">&lt;strong>FDNTB — Feedback Delay Network Toolbox&lt;/strong>&lt;/a> — MATLAB toolbox for FDN design and analysis. Includes feedback matrices, topologies, attenuation filters, modal decomposition, and time-varying matrices. Project page: &lt;a href="https://www.sebastianjiroschlecht.com/project/fdntb/" target="_blank" rel="noopener">FDNTB at DAFx 2020&lt;/a>.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;a href="https://github.com/gdalsanto/flamo" target="_blank" rel="noopener">&lt;strong>FLAMO&lt;/strong>&lt;/a> — PyTorch library for building and optimizing differentiable audio systems. Chain differentiable gains, filters, delays, and transforms into FDN architectures and train them end-to-end. &lt;a href="https://gdalsanto.github.io/flamo" target="_blank" rel="noopener">Documentation&lt;/a> · &lt;a href="https://pypi.org/project/flamo/" target="_blank" rel="noopener">PyPI&lt;/a>.&lt;/p>
&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h1 id="references">References&lt;/h1>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align: left">&lt;/th>
&lt;th style="text-align: left">Year&lt;/th>
&lt;th style="text-align: left">Authors&lt;/th>
&lt;th style="text-align: left">Article&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref1">[1]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2012&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht &amp;amp; E. A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/eeei.2012.6376933" target="_blank" rel="noopener">Connections between parallel and serial combinations of comb filters and feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref2">[2]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2015&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht &amp;amp; E. A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/1.4928394" target="_blank" rel="noopener">Time-varying feedback matrices in feedback delay networks and their application in artificial reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref3">[3]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2017&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht &amp;amp; E. A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/taslp.2016.2635027" target="_blank" rel="noopener">Accurate reverberation time control in feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref4">[4]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2017&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht &amp;amp; E. A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/tsp.2016.2637323" target="_blank" rel="noopener">On lossless feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref5">[5]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2017&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht &amp;amp; E. A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/taslp.2016.2635027" target="_blank" rel="noopener">Feedback delay networks: echo density and mixing time&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref6">[6]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2019&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht &amp;amp; E. A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/waspaa.2019.8937284" target="_blank" rel="noopener">Dense reverberation with delay feedback matrices&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref7">[7]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2019&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.3390/app10010187" target="_blank" rel="noopener">Frequency-dependent Schroeder allpass filters&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref8">[8]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2019&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht &amp;amp; E. A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/tsp.2019.2937286" target="_blank" rel="noopener">Modal decomposition of feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref9">[9]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2020&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht &amp;amp; E. A. P. Habets&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/taslp.2020.3001395" target="_blank" rel="noopener">Scattering in feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref10">[10]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2020&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://www.sebastianjiroschlecht.com/project/fdntb/" target="_blank" rel="noopener">FDNTB: The Feedback Delay Network Toolbox&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref11">[11]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2021&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/tsp.2021.3053507" target="_blank" rel="noopener">Allpass feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref12">[12]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">O. Das, S. J. Schlecht &amp;amp; E. De Sena&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/taslp.2023.3277368" target="_blank" rel="noopener">Grouped feedback delay networks with frequency-dependent coupling&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref13">[13]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht, J. Fagerström &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/taslp.2023.3313440" target="_blank" rel="noopener">Decorrelation in feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref14">[14]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">G. Dal Santo et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/dafx23-colorless-fdn/" target="_blank" rel="noopener">Differentiable feedback delay network for colorless reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref15">[15]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">S. J. Schlecht et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/lsp.2024.3466790" target="_blank" rel="noopener">Modal excitation in feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref16">[16]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">J. Fagerström, S. J. Schlecht &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.17743/jaes.2022.0138" target="_blank" rel="noopener">Non-exponential reverberation modeling using dark velvet noise&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref17">[17]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">G. Dal Santo et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/dafx24-rir2fdn/" target="_blank" rel="noopener">RIR2FDN: Improved room impulse response analysis and synthesis&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref18">[18]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2025&lt;/td>
&lt;td style="text-align: left">G. Dal Santo et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1186/s13636-025-00401-w" target="_blank" rel="noopener">Optimizing tiny colorless feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref19">[19]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2025&lt;/td>
&lt;td style="text-align: left">G. Dal Santo et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/icassp49660.2025.10888532" target="_blank" rel="noopener">FLAMO: Frequency-sampling library for audio-module optimization&lt;/a>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr></description></item><item><title>Robust Measurement of Room Acoustics</title><link>https://artificial-audio.github.io/portfolio/measurement/</link><pubDate>Sat, 06 Sep 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/portfolio/measurement/</guid><description>&lt;p>Collect and clean room impulse responses in noisy and uncontrolled environments.&lt;/p>
&lt;hr>
&lt;h1 id="concept">Concept&lt;/h1>
&lt;p>Measuring how a room sounds — its reverberation, reflections, and spatial characteristics — is fundamental to acoustics research, architectural design, and audio production. But real-world measurements are messy: background noise, non-stationary disturbances, and equipment limitations corrupt the signals we rely on.&lt;/p>
&lt;p>Our research develops robust measurement techniques and analysis tools that extract reliable acoustic information even under difficult conditions, and makes the resulting data accessible for further processing and simulation.&lt;/p>
&lt;hr>
&lt;h1 id="swept-sine-measurement-in-noise">Swept-Sine Measurement in Noise&lt;/h1>
&lt;p>Real acoustic measurements are often contaminated by non-stationary noise — footsteps, door slams, HVAC clicks. We developed the &lt;em>Rule of Two&lt;/em>&lt;a href="#ref5">[5]&lt;/a> and its short-term extension&lt;a href="#ref7">[7]&lt;/a>: practical methods for identifying clean measurements among repeated sweeps in the presence of transient disturbances. Further work on noise removal&lt;a href="#ref8">[8]&lt;/a> enables reliable data collection even in occupied, active spaces.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/KPrawda/mosaic_noise_removal" target="_blank" rel="noopener">&lt;strong>mosaic_noise_removal&lt;/strong>&lt;/a> — Python implementation of non-stationary noise removal from repeated swept-sine measurements.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/KPrawda/short-time-coherence-model" target="_blank" rel="noopener">&lt;strong>short-time-coherence-model&lt;/strong>&lt;/a> — short-time coherence model for localizing noise events in sweep measurements&lt;a href="#ref9">[9]&lt;/a>.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/jasa-el-ro2/" target="_blank" rel="noopener">&lt;strong>Rule of Two listening examples&lt;/strong>&lt;/a> — interactive demonstration of clean vs. corrupted sweep selection.&lt;/p>
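&lt;p>The core idea can be sketched in a few lines: repeated sweeps of the same system should correlate near unity, so a recording whose correlation with every other repetition falls below a threshold is flagged as contaminated. This is a simplified illustration of the principle, not the published Rule of Two algorithm; the 0.9 threshold and the toy test signal are assumptions for the example.&lt;/p>

```python
import numpy as np

def pairwise_correlation(sweeps):
    """Pearson correlation between every pair of repeated sweep recordings."""
    n = len(sweeps)
    r = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r[i, j] = r[j, i] = np.corrcoef(sweeps[i], sweeps[j])[0, 1]
    return r

def select_clean(sweeps, threshold=0.9):
    """Keep recordings that correlate strongly with at least one other repetition."""
    r = pairwise_correlation(sweeps)
    np.fill_diagonal(r, 0.0)
    return [i for i in range(len(sweeps)) if r[i].max() >= threshold]

# Three repetitions of the same "sweep"; one is hit by a transient burst
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
clean = np.sin(2 * np.pi * (20 + 400 * t) * t)
rec = [clean + 1e-3 * rng.standard_normal(t.size) for _ in range(3)]
rec[1][1000:1200] += 5 * rng.standard_normal(200)   # simulated door slam
kept = select_clean(rec)                            # the corrupted sweep is rejected
```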
&lt;hr>
&lt;h1 id="calibrating-reverberation-models">Calibrating Reverberation Models&lt;/h1>
&lt;p>The classic Sabine and Eyring reverberation time formulas underpin room acoustics design, yet their empirical accuracy is rarely scrutinized. We revisited these formulas using over 5,000 measurements in the variable-acoustics lab Arni&lt;a href="#ref2">[2]&lt;/a>, providing updated calibrations&lt;a href="#ref4">[4]&lt;/a> that improve prediction accuracy for practical room design.&lt;/p>
&lt;p>Data: &lt;a href="https://zenodo.org/records/6985104" target="_blank" rel="noopener">&lt;strong>Arni dataset&lt;/strong>&lt;/a> — room acoustic parameter measurements across thousands of absorption configurations.&lt;/p>
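&lt;p>For reference, the uncalibrated textbook formulas are straightforward to compute; the room dimensions and absorption coefficient below are illustrative, and the updated calibration constants from the study are not reproduced here.&lt;/p>

```python
import numpy as np

def sabine_rt(volume, surface, alpha):
    """Sabine reverberation time: T60 = 0.161 V / (S * alpha)."""
    return 0.161 * volume / (surface * alpha)

def eyring_rt(volume, surface, alpha):
    """Eyring reverberation time: T60 = 0.161 V / (-S * ln(1 - alpha))."""
    return 0.161 * volume / (-surface * np.log(1.0 - alpha))

# A shoebox room, 10 x 7 x 3 m, with average absorption coefficient 0.3
V = 10 * 7 * 3
S = 2 * (10 * 7 + 10 * 3 + 7 * 3)
t_sab = sabine_rt(V, S, 0.3)   # Sabine predicts a longer T60...
t_eyr = eyring_rt(V, S, 0.3)   # ...than Eyring, for the same absorption
```

&lt;p>Eyring always predicts a shorter reverberation time than Sabine for the same absorption, and the gap grows with the absorption coefficient, which is one reason empirical calibration against measured rooms matters.&lt;/p>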
&lt;hr>
&lt;h1 id="energy-decay-analysis">Energy Decay Analysis&lt;/h1>
&lt;p>Extracting reverberation parameters from room impulse responses traditionally relies on iterative curve fitting that is fragile and requires manual tuning. &lt;a href="https://github.com/georg-goetz/DecayFitNet" target="_blank" rel="noopener">&lt;strong>DecayFitNet&lt;/strong>&lt;/a> replaces this with a lightweight neural network&lt;a href="#ref3">[3]&lt;/a> that estimates multi-exponential energy decay parameters in a single forward pass — deterministic, fast, and validated on 20,000+ real acoustic measurements.&lt;/p>
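&lt;p>The quantity DecayFitNet parameterizes is the energy decay curve obtained by Schroeder backward integration, which is simple to compute directly. This sketch applies it to a synthetic single-slope impulse response rather than invoking the network itself.&lt;/p>

```python
import numpy as np

def energy_decay_curve(rir):
    """Schroeder backward integration: remaining energy after time t, in dB."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]   # tail energy at each sample
    return 10.0 * np.log10(energy / energy[0])

# Synthetic single-slope RIR: decaying noise with T60 = 0.5 s at fs = 8 kHz
fs, t60 = 8000, 0.5
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
rir = rng.standard_normal(fs) * 10 ** (-3 * t / t60)
edc = energy_decay_curve(rir)   # starts at 0 dB, reaches about -60 dB at t60
```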
&lt;hr>
&lt;h1 id="common-slope-late-reverberation-model">Common-Slope Late Reverberation Model&lt;/h1>
&lt;p>Real rooms rarely exhibit simple single-exponential decay. The common-slope model&lt;a href="#ref6">[6]&lt;/a> captures multi-exponential and directional decay behavior by separating shared decay slopes from direction-dependent amplitudes. This parametric model is particularly effective for coupled rooms and complex geometries, and enables efficient real-time rendering.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/georg-goetz/CommonSlopeAnalysis" target="_blank" rel="noopener">&lt;strong>CommonSlopeAnalysis&lt;/strong>&lt;/a> — MATLAB toolkit for common-slope analysis of late reverberation.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/audiolabs/blind-multi-room-model" target="_blank" rel="noopener">&lt;strong>blind-multi-room-model&lt;/strong>&lt;/a> — blind estimation of multi-room acoustic models from measurements. &lt;a href="https://zenodo.org/records/13341566" target="_blank" rel="noopener">Dataset&lt;/a>.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/ieeetaslp-common-slope/" target="_blank" rel="noopener">&lt;strong>Common-slope listening examples&lt;/strong>&lt;/a> — interactive demonstrations of common-slope analysis results.&lt;/p>
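&lt;p>A minimal numerical sketch of the model: the decay rates are shared across positions, and only the per-slope amplitudes change as the listener moves, for instance between a dry room and a coupled hall. The T60 and amplitude values below are invented for illustration.&lt;/p>

```python
import numpy as np

def common_slope_envelope(t, t60s, amplitudes):
    """Energy envelope as a weighted sum of shared exponential slopes."""
    env = np.zeros_like(t)
    for t60, a in zip(t60s, amplitudes):
        # 10^(-6 t / T60): energy falls by 60 dB over each slope's T60
        env += a * np.exp(-13.816 * t / t60)
    return env

t = np.linspace(0.0, 1.5, 1000)
t60s = [0.4, 2.0]                       # shared slopes: dry room, coupled hall
near_door = common_slope_envelope(t, t60s, [1.0, 0.5])
far_from_door = common_slope_envelope(t, t60s, [1.0, 0.05])
```

&lt;p>Because only the amplitude vector changes between positions, interpolating along a listener path amounts to interpolating a few scalars per slope, which is what makes real-time rendering cheap.&lt;/p>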
&lt;hr>
&lt;h1 id="measurement-signal-design">Measurement Signal Design&lt;/h1>
&lt;p>Designing optimal excitation signals improves measurement quality at the source. We developed a two-stage filter design&lt;a href="#ref10">[10]&lt;/a> for measurement processing that extracts cleaner impulse responses from noisy recordings.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/KPrawda/Two_stage_filter" target="_blank" rel="noopener">&lt;strong>Two_stage_filter&lt;/strong>&lt;/a> — implementation of two-stage filter design for measurement signal processing.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/ieee-spl-two-stage/" target="_blank" rel="noopener">&lt;strong>Two-stage filter listening examples&lt;/strong>&lt;/a> — audio comparisons.&lt;/p>
&lt;hr>
&lt;h1 id="references">References&lt;/h1>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align: left">&lt;/th>
&lt;th style="text-align: left">Year&lt;/th>
&lt;th style="text-align: left">Authors&lt;/th>
&lt;th style="text-align: left">Article&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref1">[1]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2020&lt;/td>
&lt;td style="text-align: left">K. Prawda, S. J. Schlecht &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/smc20-RTmodels/" target="_blank" rel="noopener">Evaluation of reverberation time models with variable acoustics&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref2">[2]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2021&lt;/td>
&lt;td style="text-align: left">K. Prawda, S. J. Schlecht &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://www.akustinenseura.fi/wp-content/uploads/2021/11/akustiikkapaivat_2021_s150.pdf" target="_blank" rel="noopener">Room acoustic parameters measurements in variable acoustic laboratory Arni&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref3">[3]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">G. Götz, S. J. Schlecht &amp;amp; V. Pulkki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0013416" target="_blank" rel="noopener">DecayFitNet: neural network for energy decay analysis&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref4">[4]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">K. Prawda, S. J. Schlecht &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0013575" target="_blank" rel="noopener">Calibrating the Sabine and Eyring formulas&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref5">[5]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">K. Prawda, S. J. Schlecht &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0009915" target="_blank" rel="noopener">Robust selection of clean swept-sine measurements in non-stationary noise&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref6">[6]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">G. Götz, S. J. Schlecht &amp;amp; V. Pulkki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/taslp.2023.3317572" target="_blank" rel="noopener">Common-slope modeling of late reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref7">[7]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">K. Prawda, S. J. Schlecht &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/jasa-el-ro2/" target="_blank" rel="noopener">Short-term Rule of Two: localizing non-stationary noise events in swept-sine measurements&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref8">[8]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">K. Prawda, S. J. Schlecht &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0028203" target="_blank" rel="noopener">Non-stationary noise removal from repeated sweep measurements&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref9">[9]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">K. Prawda, S. J. Schlecht &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0028172" target="_blank" rel="noopener">Short-time coherence model for swept-sine measurements&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref10">[10]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">V. Välimäki, K. Prawda &amp;amp; S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/lsp.2024.3352510" target="_blank" rel="noopener">Two-stage filter design for measurement processing&lt;/a>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr></description></item><item><title>Similarity of Sound</title><link>https://artificial-audio.github.io/portfolio/similarity/</link><pubDate>Sat, 06 Sep 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/portfolio/similarity/</guid><description>&lt;p>Computational measures of perceptual similarity of sound.&lt;/p>
&lt;hr>
&lt;h1 id="concept">Concept&lt;/h1>
&lt;p>How do we quantify whether two sounds are &amp;ldquo;similar&amp;rdquo;? This question arises everywhere in audio research — from evaluating whether a synthetic reverb matches a measured room, to interpolating between spatial impulse responses, to assessing whether rendering artifacts are perceptible.&lt;/p>
&lt;p>Our research develops metrics and methods for comparing sounds and acoustic environments in perceptually meaningful ways, bridging the gap between signal-level differences and what listeners actually hear.&lt;/p>
&lt;hr>
&lt;h1 id="similarity-metrics-for-late-reverberation">Similarity Metrics for Late Reverberation&lt;/h1>
&lt;p>We developed and compared computational metrics that capture perceptual similarity between reverberant sound fields&lt;a href="#ref9">[9]&lt;/a>, going beyond simple energy-based measures to account for spectral, temporal, and spatial structure. These metrics enable objective evaluation of reverberation rendering quality.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/gdalsanto/similarity-metrics-for-rirs" target="_blank" rel="noopener">&lt;strong>similarity-metrics-for-rirs&lt;/strong>&lt;/a> — Python toolkit for computing and comparing similarity metrics for room impulse responses.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/asilomar24-reverb-similarity/" target="_blank" rel="noopener">&lt;strong>Reverb similarity listening examples&lt;/strong>&lt;/a> — audio comparisons with different metrics.&lt;/p>
&lt;hr>
&lt;h1 id="optimal-transport-for-audio">Optimal Transport for Audio&lt;/h1>
&lt;p>Optimal transport theory provides a principled mathematical framework for comparing distributions. We apply it to quantify distances between time-frequency representations of audio signals&lt;a href="#ref10">[10]&lt;/a> and to interpolate between spatial room impulse responses&lt;a href="#ref6">[6]&lt;/a>, yielding smooth, perceptually meaningful transitions between acoustic environments.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/thomas-mckenzie/srir_interpolation" target="_blank" rel="noopener">&lt;strong>SRIR Interpolation via Optimal Transport&lt;/strong>&lt;/a> — perceptually informed interpolation of spatial room impulse responses using partial optimal transport.&lt;/p>
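&lt;p>In one dimension, optimal transport has a closed form, the integrated absolute difference of the cumulative distributions, which makes the idea easy to demonstrate on two normalized magnitude spectra. This toy example illustrates the distance itself, not the partial-transport interpolation method of the toolkit above.&lt;/p>

```python
import numpy as np

def wasserstein_1d(p, q, dx=1.0):
    """Wasserstein-1 distance between two 1-D distributions on a uniform grid.

    In 1-D the optimal transport plan is monotone, so the distance reduces
    to the L1 difference of the cumulative distribution functions.
    """
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))) * dx)

# Two Gaussian-shaped magnitude spectra whose peaks sit 100 Hz apart
f = np.arange(1001.0)                          # 1 Hz bins
spec_a = np.exp(-((f - 300.0) / 50.0) ** 2)
spec_b = np.exp(-((f - 400.0) / 50.0) ** 2)
d = wasserstein_1d(spec_a, spec_b)             # recovers the 100 Hz peak offset
```

&lt;p>Unlike a bin-wise spectral distance, the transport distance grows smoothly with the frequency offset between the peaks, which is exactly the property that makes it useful for interpolating between acoustic environments.&lt;/p>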
&lt;hr>
&lt;h1 id="source-signal-similarity--room-perception">Source Signal Similarity &amp;amp; Room Perception&lt;/h1>
&lt;p>How does the source signal itself affect our ability to hear differences between rooms? We investigated how the similarity of source material — speech, music, noise — influences listeners&amp;rsquo; ability to distinguish between different positions in a room&lt;a href="#ref8">[8]&lt;/a>, revealing that source characteristics interact strongly with spatial perception.&lt;/p>
&lt;hr>
&lt;h1 id="perceptual-roughness">Perceptual Roughness&lt;/h1>
&lt;p>Sparse noise signals used in efficient spatial audio rendering can introduce roughness artifacts. We quantified the perceptual roughness of spatially assigned sparse noise&lt;a href="#ref2">[2]&lt;/a>&lt;a href="#ref7">[7]&lt;/a>, establishing the thresholds where rendering artifacts become audible and guiding the design of velvet noise reverberators.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/dafx23-vn-roughness" target="_blank" rel="noopener">&lt;strong>Velvet noise roughness analysis&lt;/strong>&lt;/a> — frequency-dependent temporal roughness of velvet noise, with listening examples.&lt;/p>
&lt;hr>
&lt;h1 id="room-acoustic-memory--room-transitions">Room Acoustic Memory &amp;amp; Room Transitions&lt;/h1>
&lt;p>How accurately do listeners remember and compare room acoustics? Our experiments reveal the limits of acoustic memory&lt;a href="#ref3">[3]&lt;/a>&lt;a href="#ref6">[6]&lt;/a>, directly informing the design of systems that transition between coupled rooms. This includes work on the perceived aperture position during room transitions&lt;a href="#ref4">[4]&lt;/a> and the perceptual analysis of directional late reverberation&lt;a href="#ref1">[1]&lt;/a>.&lt;/p>
&lt;p>Data: &lt;a href="https://zenodo.org/doi/10.5281/zenodo.4095493" target="_blank" rel="noopener">&lt;strong>Room transition datasets&lt;/strong>&lt;/a> — measured acoustic data of transitions between coupled rooms.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/jasa_whereyouare/" target="_blank" rel="noopener">&lt;strong>Where you are in a room&lt;/strong>&lt;/a> — interactive exploration of room position perception.&lt;/p>
&lt;hr>
&lt;h1 id="references">References&lt;/h1>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align: left">&lt;/th>
&lt;th style="text-align: left">Year&lt;/th>
&lt;th style="text-align: left">Authors&lt;/th>
&lt;th style="text-align: left">Article&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref1">[1]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2021&lt;/td>
&lt;td style="text-align: left">B. Alary, P. Massé, S. J. Schlecht et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0004770" target="_blank" rel="noopener">Perceptual analysis of directional late reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref2">[2]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2021&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen, S. J. Schlecht &amp;amp; T. Lokki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0007048" target="_blank" rel="noopener">Perceptual roughness of spatially assigned sparse noise for rendering reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref3">[3]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen, S. J. Schlecht &amp;amp; T. Lokki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0013364" target="_blank" rel="noopener">Clearly audible room acoustical differences may not reveal where you are in a room&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref4">[4]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">T. McKenzie et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0014178" target="_blank" rel="noopener">The auditory perceived aperture position of the transition between rooms&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref5">[5]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">T. McKenzie et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.17743/jaes.2022.0084" target="_blank" rel="noopener">Auralization of measured room transitions in VR&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref6">[6]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/icassp49357.2023.10095452" target="_blank" rel="noopener">Interpolation of spatial room impulse responses using partial optimal transport&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref7">[7]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/dafx23-vn-roughness" target="_blank" rel="noopener">How smooth do you think I am: frequency-dependent temporal roughness of velvet noise&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref8">[8]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">T. McKenzie, S. J. Schlecht et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://sebastianjiroschlecht.com/publication/Source-Signal-Similarity/" target="_blank" rel="noopener">The role of source signal similarity in distinguishing between different positions in a room&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref9">[9]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">G. Dal Santo, N. Meyer-Kahlen et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/ieeeconf60004.2024.10943013" target="_blank" rel="noopener">Similarity metrics for late reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref10">[10]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">V. Välimäki, K. Prawda &amp;amp; S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/ieeeconf60004.2024.10943074" target="_blank" rel="noopener">Time-frequency audio similarity using optimal transport&lt;/a>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr></description></item><item><title>Spatial Audio &amp; Room Transitions</title><link>https://artificial-audio.github.io/portfolio/spatialaudio/</link><pubDate>Fri, 05 Sep 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/portfolio/spatialaudio/</guid><description>&lt;p>Navigate through acoustic environments with six degrees of freedom.&lt;/p>
&lt;hr>
&lt;h1 id="concept">Concept&lt;/h1>
&lt;p>Real spaces are not isolated boxes — rooms connect through doorways, corridors, and open plans, and listeners move freely through them. Reproducing this experience in virtual and augmented reality requires rendering spatial audio that evolves naturally as the listener walks, turns, and transitions between coupled acoustic environments.&lt;/p>
&lt;p>Our research tackles the full pipeline: from measuring and modeling the acoustics of connected spaces, to rendering smooth room transitions, to evaluating whether the result sounds convincing to listeners.&lt;/p>
&lt;hr>
&lt;h1 id="room-transitions-in-coupled-spaces">Room Transitions in Coupled Spaces&lt;/h1>
&lt;p>When sound travels between connected rooms, energy initially increases before decaying — creating a convex energy decay curve rather than the typical concave shape. We characterized this fade-in phenomenon&lt;a href="#ref2">[2]&lt;/a>&lt;a href="#ref3">[3]&lt;/a> and developed auralization methods&lt;a href="#ref5">[5]&lt;/a>&lt;a href="#ref9">[9]&lt;/a> for smooth transitions between coupled acoustic environments, including real-time rendering&lt;a href="#ref11">[11]&lt;/a>.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/kyungyunlee/fade-in-reverb" target="_blank" rel="noopener">&lt;strong>fade-in-reverb&lt;/strong>&lt;/a> — implementation of fade-in reverberation for multi-room environments using the common-slope model.&lt;/p>
&lt;p>Demo: &lt;a href="https://kyungyunlee.github.io/fade-in-reverb-demo/" target="_blank" rel="noopener">&lt;strong>Fade-in reverb examples&lt;/strong>&lt;/a> — audio demonstrations of room transitions.&lt;/p>
&lt;p>Data: &lt;a href="https://zenodo.org/doi/10.5281/zenodo.4095493" target="_blank" rel="noopener">&lt;strong>Room transition datasets&lt;/strong>&lt;/a> — measured acoustic data of transitions between coupled rooms.&lt;/p>
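&lt;p>The convex shape is easy to reproduce with a difference of two exponentials: subtracting the fast listener-room slope from the slow source-room slope models energy that has not yet arrived through the aperture, so the envelope rises before it decays. The time constants below are illustrative, not fitted values from the dataset.&lt;/p>

```python
import numpy as np

def fade_in_envelope(t, t60_source=2.0, t60_listener=0.3):
    """Convex (fade-in) energy envelope heard through an aperture.

    The slow term is the reverberant source room; the subtracted fast term
    delays its onset, producing the initial energy increase.
    """
    rate_slow = 13.816 / t60_source      # 60 dB energy decay per T60
    rate_fast = 13.816 / t60_listener
    return np.exp(-rate_slow * t) - np.exp(-rate_fast * t)

t = np.linspace(0.0, 1.0, 1000)
env = fade_in_envelope(t)
peak = int(np.argmax(env))               # energy first rises, then decays
```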
&lt;hr>
&lt;h1 id="common-slope-late-reverberation-model">Common-Slope Late Reverberation Model&lt;/h1>
&lt;p>The common-slope model&lt;a href="#ref8">[8]&lt;/a> decomposes late reverberation into shared decay slopes with direction-dependent amplitudes, enabling efficient, physically motivated rendering of complex reverb fields. Its extension to coupled rooms&lt;a href="#ref7">[7]&lt;/a> and acoustic radiance transfer&lt;a href="#ref12">[12]&lt;/a> provides a complete framework for spatial rendering in multi-room environments.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/georg-goetz/CommonSlopeAnalysis" target="_blank" rel="noopener">&lt;strong>CommonSlopeAnalysis&lt;/strong>&lt;/a> — MATLAB toolkit for common-slope analysis of late reverberation.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/ieeetaslp-common-slope/" target="_blank" rel="noopener">&lt;strong>Common-slope listening examples&lt;/strong>&lt;/a> — interactive demonstrations of the model.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/aes-games-common-slope-rendering/" target="_blank" rel="noopener">&lt;strong>Dynamic rendering&lt;/strong>&lt;/a> — real-time common-slope rendering for games and VR&lt;a href="#ref10">[10]&lt;/a>.&lt;/p>
&lt;hr>
&lt;h1 id="spatial-room-impulse-response-rendering">Spatial Room Impulse Response Rendering&lt;/h1>
&lt;p>Reproducing measured spatial room impulse responses (SRIRs) with full 6DoF listener movement requires interpolation between sparse measurement positions and anisotropic multi-slope resynthesis&lt;a href="#ref4">[4]&lt;/a>. We developed methods for resynthesizing SRIR tails that preserve directional decay characteristics lost in conventional processing.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/chris-hld/Directional-Multi-Slope-Room-Impulse-Response-Denoising" target="_blank" rel="noopener">&lt;strong>Directional Multi-Slope RIR Resynthesis&lt;/strong>&lt;/a> — anisotropic multi-slope decay envelope resynthesis.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/jaes-anisotropic-multislope-SRIR-resynthesis/" target="_blank" rel="noopener">&lt;strong>Resynthesis listening examples&lt;/strong>&lt;/a> — audio comparisons.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/thomas-mckenzie/srir_interpolation" target="_blank" rel="noopener">&lt;strong>SRIR Interpolation Toolkit&lt;/strong>&lt;/a> — perceptually informed interpolation of SRIRs between measurement positions.&lt;/p>
&lt;hr>
&lt;h1 id="autonomous-room-measurement">Autonomous Room Measurement&lt;/h1>
&lt;p>The &lt;a href="https://github.com/georg-goetz/ARTSRAM" target="_blank" rel="noopener">&lt;strong>ARTSRAM&lt;/strong>&lt;/a> robot twin system&lt;a href="#ref4a">[4a]&lt;/a> uses autonomous Roomba-based platforms with loudspeakers and microphone arrays to collect hundreds of room impulse response measurements through random walk procedures — no prior grid planning required.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/artsram/" target="_blank" rel="noopener">&lt;strong>ARTSRAM project page&lt;/strong>&lt;/a> — measurement setup and results.&lt;/p>
&lt;hr>
&lt;h1 id="velvet-noise-for-spatial-reverberation">Velvet Noise for Spatial Reverberation&lt;/h1>
&lt;p>Sparse pulse sequences (velvet noise)&lt;a href="#ref6">[6]&lt;/a> provide an efficient basis for multichannel reverberation rendering. Dark velvet noise extends this to non-exponential decay modeling, and our perceptual studies establish quality thresholds for when rendering artifacts become audible.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/dafx22-multichannel-ivn/" target="_blank" rel="noopener">&lt;strong>Multichannel velvet noise rendering&lt;/strong>&lt;/a> — multichannel interleaved velvet noise.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/dafx20-vfdn/" target="_blank" rel="noopener">&lt;strong>Velvet-noise FDN&lt;/strong>&lt;/a> — spatial reverberation with velvet noise.&lt;/p>
&lt;hr>
&lt;h1 id="references">References&lt;/h1>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align: left">&lt;/th>
&lt;th style="text-align: left">Year&lt;/th>
&lt;th style="text-align: left">Authors&lt;/th>
&lt;th style="text-align: left">Article&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref1">[1]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2020&lt;/td>
&lt;td style="text-align: left">J. Fagerström et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/dafx20-vfdn/" target="_blank" rel="noopener">Velvet-noise FDN for spatial reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref2">[2]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2021&lt;/td>
&lt;td style="text-align: left">T. McKenzie et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/icassp39728.2021.9415122" target="_blank" rel="noopener">Acoustic analysis and dataset of transitions between coupled rooms&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref3">[3]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2021&lt;/td>
&lt;td style="text-align: left">T. McKenzie et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/i3da48870.2021.9610955" target="_blank" rel="noopener">Auralization of the transition between coupled rooms&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref4">[4]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">C. Hold, T. McKenzie, G. Götz et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.17743/jaes.2022.0017" target="_blank" rel="noopener">Resynthesis of spatial RIR tails with anisotropic multi-slope decays&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref4a">[4a]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2021&lt;/td>
&lt;td style="text-align: left">G. Götz, S. J. Schlecht &amp;amp; V. Pulkki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.17743/jaes.2021.0002" target="_blank" rel="noopener">ARTSRAM: Adaptive real-time spatial reverberation analysis&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref5">[5]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">T. McKenzie et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.17743/jaes.2022.0084" target="_blank" rel="noopener">Auralization of measured room transitions in VR&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref6">[6]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">N. Meyer-Kahlen, S. J. Schlecht &amp;amp; V. Välimäki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1049/ell2.12501" target="_blank" rel="noopener">Colours of velvet noise&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref7">[7]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">G. Götz et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/ieeetaslp-common-slope/" target="_blank" rel="noopener">Common-slope modeling of late reverberation in coupled rooms&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref8">[8]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">G. Götz, S. J. Schlecht &amp;amp; V. Pulkki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/taslp.2023.3317572" target="_blank" rel="noopener">Common-slope modeling of late reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref9">[9]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">K. Prawda, N. Meyer-Kahlen &amp;amp; S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/aes-games-common-slope-rendering/" target="_blank" rel="noopener">Dynamic late reverberation rendering using the common-slope model&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref10">[10]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">K. Y. Lee, V. Huhtala et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://kyungyunlee.github.io/fade-in-reverb-demo/" target="_blank" rel="noopener">Fade-in reverberation in multi-room environments using the common-slope model&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref11">[11]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">P. Götz, G. Götz et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/iwaenc61483.2024.10694356" target="_blank" rel="noopener">A common-slopes late reverberation model based on acoustic radiance transfer&lt;/a>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr></description></item><item><title>Differentiable Audio Processing &amp; Deep Learning</title><link>https://artificial-audio.github.io/portfolio/ddsp/</link><pubDate>Thu, 04 Sep 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/portfolio/ddsp/</guid><description>&lt;p>Bridging classical signal processing with modern machine learning for audio.&lt;/p>
&lt;hr>
&lt;h1 id="concept">Concept&lt;/h1>
&lt;p>Classical audio signal processing offers transparent, interpretable algorithms — but tuning their parameters to match complex acoustic targets remains an open challenge. Deep learning brings powerful optimization, but often at the cost of interpretability and efficiency.&lt;/p>
&lt;p>Our research bridges these worlds through differentiable digital signal processing (DDSP): embedding classical audio structures (filters, delays, feedback networks) into differentiable computation graphs that can be optimized end-to-end with gradient descent. Alongside this, we develop neural network approaches for tasks where traditional methods fall short.&lt;/p>
&lt;hr>
&lt;h1 id="differentiable-feedback-delay-networks">Differentiable Feedback Delay Networks&lt;/h1>
&lt;p>Making FDN parameters differentiable allows reverberation to be optimized toward target decay, coloration, or perceptual objectives using gradient-based training. We showed that even tiny FDN configurations produce high-quality colorless reverberation when optimized this way&lt;a href="#ref3">[3]&lt;/a>&lt;a href="#ref10">[10]&lt;/a>, and developed RIR2FDN&lt;a href="#ref6">[6]&lt;/a> for automatically synthesizing FDN configurations that match measured room impulse responses.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/gdalsanto/diff-fdn-colorless" target="_blank" rel="noopener">&lt;strong>diff-fdn-colorless&lt;/strong>&lt;/a> — optimize FDN parameters for spectrally flat reverberation via gradient descent.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/dafx23-colorless-fdn/" target="_blank" rel="noopener">&lt;strong>Colorless FDN examples&lt;/strong>&lt;/a> — audio comparisons.&lt;/p>
&lt;p>Code: &lt;a href="https://github.com/gdalsanto/rir2fdn" target="_blank" rel="noopener">&lt;strong>rir2fdn&lt;/strong>&lt;/a> — analyze measured RIRs and synthesize matching FDN configurations.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/dafx24-rir2fdn/" target="_blank" rel="noopener">&lt;strong>RIR2FDN project page&lt;/strong>&lt;/a> — listening examples of RIR-to-FDN conversion.&lt;/p>
&lt;hr>
&lt;h1 id="flamo-differentiable-audio-systems-library">FLAMO: Differentiable Audio Systems Library&lt;/h1>
&lt;p>&lt;a href="https://github.com/gdalsanto/flamo" target="_blank" rel="noopener">&lt;strong>FLAMO&lt;/strong>&lt;/a> (Frequency-sampling Library for Audio-Module Optimization)&lt;a href="#ref9">[9]&lt;/a> is a PyTorch library for building and optimizing differentiable linear time-invariant audio systems. It provides differentiable gains, filters (biquads, state variable filters, graphic EQs), delays, and transforms that can be chained into complex architectures and trained end-to-end.&lt;/p>
&lt;p>&lt;a href="https://gdalsanto.github.io/flamo" target="_blank" rel="noopener">Documentation&lt;/a> · &lt;a href="https://pypi.org/project/flamo/" target="_blank" rel="noopener">PyPI&lt;/a>&lt;/p>
&lt;hr>
&lt;h1 id="differentiable-active-acoustics">Differentiable Active Acoustics&lt;/h1>
&lt;p>Reverberation enhancement systems form an electro-acoustic feedback loop whose stability is critical. We treat this loop as a differentiable system and optimize stability and performance via gradient descent&lt;a href="#ref5">[5]&lt;/a>, opening new possibilities for automated active acoustics design.&lt;/p>
&lt;p>Demo: &lt;a href="http://research.spa.aalto.fi/publications/papers/dafx24-diff-aa/" target="_blank" rel="noopener">&lt;strong>Differentiable active acoustics project page&lt;/strong>&lt;/a> — demonstrations of stability optimization.&lt;/p>
&lt;hr>
&lt;h1 id="room-impulse-response-completion">Room Impulse Response Completion&lt;/h1>
&lt;p>Rendering immersive audio in VR and games requires fast RIR generation. &lt;a href="https://github.com/linjac/rir-completion/" target="_blank" rel="noopener">&lt;strong>DECOR&lt;/strong>&lt;/a> (Deep Exponential Completion Of Room impulse responses)&lt;a href="#ref8">[8]&lt;/a> predicts late reverberation from only the first 50 ms of a measured response: an encoder-decoder network synthesizes multi-exponential decay envelopes that shape filtered noise.&lt;/p>
&lt;p>Demo: &lt;a href="https://linjac.github.io/rir-completion/" target="_blank" rel="noopener">&lt;strong>RIR completion project page&lt;/strong>&lt;/a> — interactive examples.&lt;/p>
&lt;hr>
&lt;h1 id="neural-decay-analysis">Neural Decay Analysis&lt;/h1>
&lt;p>&lt;a href="https://github.com/georg-goetz/DecayFitNet" target="_blank" rel="noopener">&lt;strong>DecayFitNet&lt;/strong>&lt;/a>&lt;a href="#ref1">[1]&lt;/a> is a lightweight neural network that replaces brittle iterative fitting for multi-exponential energy decay estimation. Trained on synthetic data, it provides deterministic inference without manual tuning, validated on over 20,000 real acoustic measurements.&lt;/p>
&lt;hr>
&lt;h1 id="physical-modeling-with-neural-operators">Physical Modeling with Neural Operators&lt;/h1>
&lt;p>Fourier neural operators&lt;a href="#ref2">[2]&lt;/a> learn to approximate PDE solutions for physical models of musical instruments, enabling real-time sound synthesis that captures the physics of vibrating strings and resonant bodies.&lt;/p>
&lt;p>Demo: &lt;a href="https://julian-parker.github.io/DAFX22_FNO/" target="_blank" rel="noopener">&lt;strong>FNO for physical modeling&lt;/strong>&lt;/a> — Fourier neural operator examples.&lt;/p>
&lt;hr>
&lt;h1 id="klann-knowledge-leveraging-audio-networks">KLANN: Knowledge-Leveraging Audio Networks&lt;/h1>
&lt;p>&lt;a href="https://github.com/ville14/KLANN" target="_blank" rel="noopener">&lt;strong>KLANN&lt;/strong>&lt;/a>&lt;a href="#ref7">[7]&lt;/a> integrates domain knowledge into neural network architectures for audio processing, combining the efficiency of classical signal processing structures with the flexibility of learned parameters.&lt;/p>
&lt;p>Demo: &lt;a href="https://ville14.github.io/KLANN-examples/" target="_blank" rel="noopener">&lt;strong>KLANN examples&lt;/strong>&lt;/a> — audio processing results.&lt;/p>
&lt;hr>
&lt;h1 id="references">References&lt;/h1>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th style="text-align: left">&lt;/th>
&lt;th style="text-align: left">Year&lt;/th>
&lt;th style="text-align: left">Authors&lt;/th>
&lt;th style="text-align: left">Article&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref1">[1]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">G. Götz, S. J. Schlecht &amp;amp; V. Pulkki&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1121/10.0013416" target="_blank" rel="noopener">DecayFitNet: neural network for energy decay analysis&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref2">[2]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2022&lt;/td>
&lt;td style="text-align: left">J. D. Parker, S. J. Schlecht et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://julian-parker.github.io/DAFX22_FNO/" target="_blank" rel="noopener">Physical modeling with Fourier neural operators&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref3">[3]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">G. Dal Santo, K. Prawda et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/dafx23-colorless-fdn/" target="_blank" rel="noopener">Differentiable feedback delay network for colorless reverberation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref4">[4]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2023&lt;/td>
&lt;td style="text-align: left">L. Luoma, P. Fricker &amp;amp; S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.14627/537740052" target="_blank" rel="noopener">Deep learning for loudspeaker digital twin creation&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref5">[5]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">G. M. De Bortoli, G. Dal Santo et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/dafx24-diff-aa/" target="_blank" rel="noopener">Differentiable active acoustics: optimizing stability via gradient descent&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref6">[6]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">G. Dal Santo et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="http://research.spa.aalto.fi/publications/papers/dafx24-rir2fdn/" target="_blank" rel="noopener">RIR2FDN: Improved room impulse response analysis and synthesis&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref7">[7]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2024&lt;/td>
&lt;td style="text-align: left">V. Huhtala, L. Juvela &amp;amp; S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/lsp.2024.3389465" target="_blank" rel="noopener">KLANN: Knowledge-leveraging artificial neural network&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref8">[8]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2025&lt;/td>
&lt;td style="text-align: left">J. Lin, G. Götz &amp;amp; S. J. Schlecht&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1186/s13636-024-00383-1" target="_blank" rel="noopener">Deep room impulse response completion&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref9">[9]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2025&lt;/td>
&lt;td style="text-align: left">G. Dal Santo et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/icassp49660.2025.10888532" target="_blank" rel="noopener">FLAMO: Frequency-sampling library for audio-module optimization&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref10">[10]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2025&lt;/td>
&lt;td style="text-align: left">G. Dal Santo, K. Prawda et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1186/s13636-025-00401-w" target="_blank" rel="noopener">Optimizing tiny colorless feedback delay networks&lt;/a>&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="text-align: left">&lt;span id="ref11">[11]&lt;/span>&lt;/td>
&lt;td style="text-align: left">2025&lt;/td>
&lt;td style="text-align: left">M. Scerbo, S. J. Schlecht et al.&lt;/td>
&lt;td style="text-align: left">&lt;a href="https://doi.org/10.1109/taslpro.2025.3592322" target="_blank" rel="noopener">Modeling feedback delay network output equivalences&lt;/a>&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr></description></item><item><title>Coding Virtual Worlds</title><link>https://artificial-audio.github.io/courses/codingvirtualworlds/</link><pubDate>Fri, 31 Jan 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/courses/codingvirtualworlds/</guid><description>&lt;p>Program interactive virtual environments with spatial audio, 3D graphics, and real-time signal processing for immersive experiences.&lt;/p>
&lt;p>Build virtual worlds from the ground up, integrating audiovisual content with game engines and audio programming frameworks.&lt;/p></description></item><item><title>Music Processing &amp; Synthesis</title><link>https://artificial-audio.github.io/courses/musicprocessingsynthesis/</link><pubDate>Fri, 31 Jan 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/courses/musicprocessingsynthesis/</guid><description>&lt;p>Digital signal processing techniques for music analysis, synthesis, and transformation — from classic synthesis methods to modern neural audio.&lt;/p>
&lt;p>Learn how to create, manipulate, and understand sound through digital processing, with hands-on projects in music technology.&lt;/p></description></item><item><title>Statistical Signal Processing</title><link>https://artificial-audio.github.io/courses/statisticalsignalprocessing/</link><pubDate>Fri, 31 Jan 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/courses/statisticalsignalprocessing/</guid><description>&lt;p>Fundamental techniques for analyzing and processing signals with statistical methods, including estimation theory, detection, and adaptive filtering.&lt;/p>
&lt;p>This course covers the mathematical foundations and practical applications of statistical signal processing, essential for audio and acoustics research.&lt;/p></description></item><item><title>Stochastic Processes</title><link>https://artificial-audio.github.io/courses/stochasticprocesses/</link><pubDate>Fri, 31 Jan 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/courses/stochasticprocesses/</guid><description>&lt;p>Mathematical framework for modeling random phenomena in time, with applications across signal processing, including sound, image, music, and language.&lt;/p>
&lt;p>Explore probability theory, random walks, Markov processes, and their relevance to computational audio and acoustics.&lt;/p></description></item><item><title>Virtual Acoustics Lab</title><link>https://artificial-audio.github.io/courses/virtualacousticslab/</link><pubDate>Fri, 31 Jan 2025 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/courses/virtualacousticslab/</guid><description>&lt;p>Simulate and measure acoustic environments virtually — room acoustics, auralization, and immersive audio for extended realities.&lt;/p>
&lt;p>Practical laboratory experience with virtual acoustics tools, room simulation, and binaural rendering for VR and AR applications.&lt;/p></description></item><item><title>Contact</title><link>https://artificial-audio.github.io/contact/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/contact/</guid><description/></item><item><title>People</title><link>https://artificial-audio.github.io/people/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/people/</guid><description/></item><item><title>Tour</title><link>https://artificial-audio.github.io/tour/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/tour/</guid><description/></item><item><title>Jian Yang and Monica Hall Win the Best Paper Award at Wowchemy 2020</title><link>https://artificial-audio.github.io/post/20-12-02-icml-best-paper/</link><pubDate>Wed, 02 Dec 2020 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/post/20-12-02-icml-best-paper/</guid><description>&lt;p>Congratulations to Jian Yang and Monica Hall for winning the Best Paper Award at the 2020 Conference on Wowchemy for their paper “Learning Wowchemy”.&lt;/p>
&lt;p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer tempus augue non tempor egestas. Proin nisl nunc, dignissim in accumsan dapibus, auctor ullamcorper neque. Quisque at elit felis. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Aenean eget elementum odio. Cras interdum eget risus sit amet aliquet. In volutpat, nisl ut fringilla dignissim, arcu nisl suscipit ante, at accumsan sapien nisl eu eros.&lt;/p>
&lt;p>Sed eu dui nec ligula bibendum dapibus. Nullam imperdiet auctor tortor, vel cursus mauris malesuada non. Quisque ultrices euismod dapibus. Aenean sed gravida risus. Sed nisi tortor, vulputate nec quam non, placerat porta nisl. Nunc varius lobortis urna, condimentum facilisis ipsum molestie eu. Ut molestie eleifend ligula sed dignissim. Duis ut tellus turpis. Praesent tincidunt, nunc sed congue malesuada, mauris enim maximus massa, eget interdum turpis urna et ante. Morbi sem nisl, cursus quis mollis et, interdum luctus augue. Aliquam laoreet, leo et accumsan tincidunt, libero neque aliquet lectus, a ultricies lorem mi a orci.&lt;/p>
&lt;p>Mauris dapibus sem vel magna convallis laoreet. Donec in venenatis urna, vitae sodales odio. Praesent tortor diam, varius non luctus nec, bibendum vel est. Quisque id sem enim. Maecenas at est leo. Vestibulum tristique pellentesque ex, blandit placerat nunc eleifend sit amet. Fusce eget lectus bibendum, accumsan mi quis, luctus sem. Etiam vitae nulla scelerisque, eleifend odio in, euismod quam. Etiam porta ullamcorper massa, vitae gravida turpis euismod quis. Mauris sodales sem ac ultrices viverra. In placerat ultrices sapien. Suspendisse eu arcu hendrerit, luctus tortor cursus, maximus dolor. Proin et velit et quam gravida dapibus. Donec blandit justo ut consequat tristique.&lt;/p></description></item><item><title>Richard Hendricks Wins First Place in the Wowchemy Prize</title><link>https://artificial-audio.github.io/post/20-12-01-wowchemy-prize/</link><pubDate>Tue, 01 Dec 2020 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/post/20-12-01-wowchemy-prize/</guid><description>&lt;p>Congratulations to Richard Hendricks for winning first place in the Wowchemy Prize.&lt;/p>
&lt;p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer tempus augue non tempor egestas. Proin nisl nunc, dignissim in accumsan dapibus, auctor ullamcorper neque. Quisque at elit felis. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia curae; Aenean eget elementum odio. Cras interdum eget risus sit amet aliquet. In volutpat, nisl ut fringilla dignissim, arcu nisl suscipit ante, at accumsan sapien nisl eu eros.&lt;/p>
&lt;p>Sed eu dui nec ligula bibendum dapibus. Nullam imperdiet auctor tortor, vel cursus mauris malesuada non. Quisque ultrices euismod dapibus. Aenean sed gravida risus. Sed nisi tortor, vulputate nec quam non, placerat porta nisl. Nunc varius lobortis urna, condimentum facilisis ipsum molestie eu. Ut molestie eleifend ligula sed dignissim. Duis ut tellus turpis. Praesent tincidunt, nunc sed congue malesuada, mauris enim maximus massa, eget interdum turpis urna et ante. Morbi sem nisl, cursus quis mollis et, interdum luctus augue. Aliquam laoreet, leo et accumsan tincidunt, libero neque aliquet lectus, a ultricies lorem mi a orci.&lt;/p>
&lt;p>Mauris dapibus sem vel magna convallis laoreet. Donec in venenatis urna, vitae sodales odio. Praesent tortor diam, varius non luctus nec, bibendum vel est. Quisque id sem enim. Maecenas at est leo. Vestibulum tristique pellentesque ex, blandit placerat nunc eleifend sit amet. Fusce eget lectus bibendum, accumsan mi quis, luctus sem. Etiam vitae nulla scelerisque, eleifend odio in, euismod quam. Etiam porta ullamcorper massa, vitae gravida turpis euismod quis. Mauris sodales sem ac ultrices viverra. In placerat ultrices sapien. Suspendisse eu arcu hendrerit, luctus tortor cursus, maximus dolor. Proin et velit et quam gravida dapibus. Donec blandit justo ut consequat tristique.&lt;/p></description></item><item><title>An example preprint / working paper</title><link>https://artificial-audio.github.io/publication/preprint/</link><pubDate>Sun, 07 Apr 2019 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/publication/preprint/</guid><description>&lt;div class="alert alert-note">
&lt;div>
Create your slides in Markdown - click the &lt;em>Slides&lt;/em> button to check out the example.
&lt;/div>
&lt;/div>
&lt;p>Add the publication&amp;rsquo;s &lt;strong>full text&lt;/strong> or &lt;strong>supplementary notes&lt;/strong> here. You can use rich formatting such as including &lt;a href="https://docs.hugoblox.com/content/writing-markdown-latex/" target="_blank" rel="noopener">code, math, and images&lt;/a>.&lt;/p></description></item><item><title>An example journal article</title><link>https://artificial-audio.github.io/publication/journal-article/</link><pubDate>Tue, 01 Sep 2015 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/publication/journal-article/</guid><description>&lt;div class="alert alert-note">
&lt;div>
Click the &lt;em>Cite&lt;/em> button above to demo the feature to enable visitors to import publication metadata into their reference management software.
&lt;/div>
&lt;/div>
&lt;div class="alert alert-note">
&lt;div>
Create your slides in Markdown - click the &lt;em>Slides&lt;/em> button to check out the example.
&lt;/div>
&lt;/div>
&lt;p>Add the publication&amp;rsquo;s &lt;strong>full text&lt;/strong> or &lt;strong>supplementary notes&lt;/strong> here. You can use rich formatting such as including &lt;a href="https://docs.hugoblox.com/content/writing-markdown-latex/" target="_blank" rel="noopener">code, math, and images&lt;/a>.&lt;/p></description></item><item><title>An example conference paper</title><link>https://artificial-audio.github.io/publication/conference-paper/</link><pubDate>Mon, 01 Jul 2013 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/publication/conference-paper/</guid><description>&lt;div class="alert alert-note">
&lt;div>
Click the &lt;em>Cite&lt;/em> button above to demo the feature to enable visitors to import publication metadata into their reference management software.
&lt;/div>
&lt;/div>
&lt;div class="alert alert-note">
&lt;div>
Create your slides in Markdown - click the &lt;em>Slides&lt;/em> button to check out the example.
&lt;/div>
&lt;/div>
&lt;p>Add the publication&amp;rsquo;s &lt;strong>full text&lt;/strong> or &lt;strong>supplementary notes&lt;/strong> here. You can use rich formatting such as including &lt;a href="https://docs.hugoblox.com/content/writing-markdown-latex/" target="_blank" rel="noopener">code, math, and images&lt;/a>.&lt;/p></description></item><item><title/><link>https://artificial-audio.github.io/admin/config.yml</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://artificial-audio.github.io/admin/config.yml</guid><description/></item></channel></rss>