Program

The Signal Processing Summit will run over three days, October 14–16, 2025 (Tuesday through Thursday), from 8:30 a.m. to 5:00 p.m. each day.

Alexander Talashov • Talk
Accelerated Audio Computing: Leveraging GPUs for Modern DSP Workflows
Tue, Oct 14 • 8:30 AM – 9:30 AM

Historically, GPU-based audio processing has been viewed as a technical outlier—long theorized but rarely implemented in real-world scenarios. While the parallel architecture of GPUs offers immense computational potential, traditional digital signal processing workflows have relied on sequential, CPU- or DSP-based models, making GPU integration inherently complex. The architectural mismatch between SIMD-based GPU design and MIMD-oriented DSP algorithms has posed significant challenges—until now.

Advances in software and systems architecture now make real-time GPU audio processing not only viable but highly performant. By leveraging the parallelism of modern GPUs, developers can build audio applications that dramatically outperform CPU-only solutions, unlocking a unique feature set and new creative possibilities. This evolution is especially timely, as modern audio pipelines increasingly converge with AI, machine learning, acoustic simulation, and immersive spatial audio.

Previously, issues such as latency, memory handling, and deterministic processing led many to believe GPUs were unsuitable for audio DSP. However, a new low-level framework, purpose-built for real-time GPU audio, has changed that narrative. GPU AUDIO INC has developed technology that overcomes these long-standing barriers, enabling scalable, low-latency DSP directly on the GPU.

This presentation offers a hands-on introduction to the capabilities of GPU-based audio processing—and how it can transform your development workflow. You'll explore the architecture behind the GPU Audio SDK, gain a clear understanding of the technical challenges it solves, and get practical experience implementing real-time DSP on GPU hardware.

Participants will:

  • Explore the core technology powering GPU Audio’s real-time processing framework (latencies as low as 30-50 microseconds for consumer-grade GPUs and desktop-grade OS!).
  • Implement a basic IIR audio processor using GPU-accelerated techniques.
  • Analyze performance metrics and gain insight into the GPU Audio Scheduler and its role in real-time processing.
  • Access the SDK and learn how to get started with your own GPU-based audio projects.

Whether you're developing audio software and web services, building spatial audio engines for games & VR, or exploring machine learning in audio tech, automotive, or consumer applications, this session offers a forward-looking foundation in GPU-accelerated DSP.

Paul Beckmann • Talk
Are Digital Signal Processors Going Away?
Tue, Oct 14 • 9:30 AM – 10:30 AM

There is an ongoing narrative that specialized DSP processors are being replaced by general purpose processors. This presentation examines the core architectural principles that set DSPs apart and provide speed and efficiency advantages over general purpose processors. We compare benchmarks across commercially available processors and discuss emerging trends and innovations that could shape the next generation of DSP technology.

Jatin Chowdhury • Talk
Performance Engineering for Modern DSP
Tue, Oct 14 • 10:30 AM – 11:30 AM

There have been significant changes in computer hardware over the past two decades; however, most publicly available advice on performance engineering and optimization for DSP software is unchanged since the late 1990s and early 2000s.

This talk will discuss some of the features available in modern CPUs and DSPs including SIMD vectorization, execution pipelines, and hardware caching, as well as advice for how to optimize DSP code with those features in mind.
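
As a toy illustration of the gap between the two styles (a sketch of mine, not material from the talk), here is the same two-tap smoother written sample-by-sample and as whole-array operations that NumPy, or an auto-vectorizing compiler in C, can map onto SIMD lanes:

```python
import numpy as np

def smooth_scalar(x):
    """Two-tap averager, one sample at a time (the style of '90s-era advice)."""
    y = np.empty_like(x)
    y[0] = 0.5 * x[0]
    for n in range(1, len(x)):
        y[n] = 0.5 * (x[n] + x[n - 1])
    return y

def smooth_vectorized(x):
    """The same filter on whole arrays, exposing the data parallelism."""
    y = 0.5 * x
    y[1:] += 0.5 * x[:-1]
    return y

x = np.random.default_rng(0).standard_normal(1024)
assert np.allclose(smooth_scalar(x), smooth_vectorized(x))
```

The two functions are numerically identical; only the second expresses the computation in a form that keeps vector units and cache lines busy.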

Samuel Fischmann • Talk
Measure for Measure: The Loudness Wars
Tue, Oct 14 • 11:30 AM – 12:30 PM

In one of the most public battles between measure and experience, the loudness wars in music drag on through a new era of streaming, transcoding, reproduction, and international standards bodies.

While the arguments shift, how is it that years of analysis, research, and proposed standards have not changed the prevailing attitudes of working mixing and mastering engineers that, quite simply, "Loud is good?"

Could it be that they understand something missed in the lab? Is there a set of measurements that would settle the war once and for all? Might there be a conceptual framework that works for us all?

Ayan Shafqat • Talk
Vectorizing IIR Filters: Making Recursive Filters Parallel
Tue, Oct 14 • 1:30 PM – 2:30 PM

IIR filters dominate real-time audio, yet their feedback paths make straightforward SIMD implementation difficult. This talk distills practical ways to vectorize IIR filters on mainstream CPUs like ARM and x86 without sacrificing numerical stability.

We explore:

  • batching independent channels or filter instances lane-wise;
  • restructuring multi-band crossovers into breadth-parallel, tree-like structures so SIMD lanes execute multiple second-order sections (SOS) at once;
  • applying algebraic splits (e.g., partial fractions) to evaluate sections concurrently.

We detail coefficient/state packing, transposition-free layouts, and scratch-buffer scheduling that keep vector registers and cache lines full. The result is a repeatable recipe for turning scalar IIR code into high-throughput, energy-efficient SIMD pipelines.
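
The first of those ideas, batching independent channels lane-wise, can be sketched in a few lines of NumPy (an illustration of mine, not the speaker's code): the time recursion stays sequential, but each step operates on a whole vector of channels.

```python
import numpy as np
from scipy.signal import lfilter

def biquad_batch(x, b, a):
    """Run one biquad over C independent channels at once (x: (C, N)).

    Direct Form II transposed with per-channel state; each loop step is
    a vector operation across channels, mapping naturally to SIMD lanes.
    """
    C, N = x.shape
    y = np.empty_like(x)
    z1 = np.zeros(C)
    z2 = np.zeros(C)
    b0, b1, b2 = b
    a1, a2 = a[1], a[2]          # a[0] is assumed to be 1
    for n in range(N):
        xn = x[:, n]
        yn = b0 * xn + z1
        z1 = b1 * xn - a1 * yn + z2
        z2 = b2 * xn - a2 * yn
        y[:, n] = yn
    return y

# Cross-check against SciPy's per-channel scalar filter
rng = np.random.default_rng(1)
x = rng.standard_normal((4, 64))
b = np.array([0.2, 0.4, 0.2])
a = np.array([1.0, -0.5, 0.25])
assert np.allclose(biquad_batch(x, b, a), lfilter(b, a, x, axis=-1))
```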

Remington Furman • Talk
Sonifying the Tides
Tue, Oct 14 • 2:30 PM – 3:30 PM

The fascinating phenomenon of tides can be modeled and predicted relatively simply as a sum of sinusoids, with a different set of amplitudes and phases for each location. In this presentation I discuss tide prediction using mechanical and digital computers.

Tide predictions are typically plotted graphically, but they can also be sped up and converted to audible sound (a process called “sonification”) to provide a different perspective. I will discuss implementation details and play audio samples using harmonic constituent data from various NOAA tide stations.
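
The sum-of-sinusoids model and the speed-up step can be sketched as follows; the constituent speeds are the standard M2/S2/K1 values, but the amplitudes and phases are hypothetical stand-ins for a real NOAA station table:

```python
import numpy as np

# Hypothetical constituent table: (amplitude in m, speed in deg/hour,
# phase in deg). The speeds are the standard M2/S2/K1 values; the
# amplitudes and phases here are made up, not a real NOAA station.
constituents = [
    (1.00, 28.984, 0.0),    # M2, principal lunar semidiurnal
    (0.46, 30.000, 40.0),   # S2, principal solar semidiurnal
    (0.20, 15.041, 100.0),  # K1, lunisolar diurnal
]

def tide_height(t_hours):
    """Predicted tide as a sum of sinusoids, one per constituent."""
    h = np.zeros_like(t_hours)
    for amp, speed, phase in constituents:
        h += amp * np.cos(np.deg2rad(speed * t_hours - phase))
    return h

# Sonify: compress 30 days of tide into 2 s of audio by rescaling time,
# then normalize to [-1, 1] for playback (e.g., with a wave writer).
fs = 44100
t_audio = np.arange(int(2.0 * fs)) / fs     # 2 s of audio time
t_hours = t_audio * (30 * 24) / 2.0         # map 2 s onto 30 days
audio = tide_height(t_hours)
audio /= np.max(np.abs(audio))
```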

Mark Thoren • Talk
Exploring Digital Filters with Low-Cost Hardware and Open-Source Tools
Tue, Oct 14 • 3:30 PM – 4:30 PM

This presentation and live demonstration explores a cross section of affordable hardware and open-source tools for hands-on exploration of digital filters.

Hardware & Capture

  • 40 Msps, 20-bit ADC with an internal data-capture buffer
  • Built-in digital filters: SINC1, SINC5, and brick-wall
  • No FPGA required — the internal buffer lets a basic microcontroller transfer data to a host

Data Path & Software

  • Industry-standard IIO (Industrial Input/Output) framework for streaming
  • Compatible with C, C++, C#, MATLAB, and Python

Demos & Analysis

  • Short Python scripts and Jupyter notebooks using NumPy and SciPy
  • Low-cost USB instruments generate test signals: sinewaves, steps, wavelets, and noise bands
  • Analysis verifies filter properties and behavior
  • USB sound cards explored as budget sources/sinks vs. benchtop instruments

Dan Boschen • Talk
RadioSonic™: A Preview of a Teaching Platform Using Audio-Frequency Waveforms
Tue, Oct 14 • 4:30 PM – 5:30 PM

RadioSonic is a new educational platform under development that uses real-time audio propagation to emulate wireless channel behavior for hands-on SDR instruction. By shifting RF waveform experimentation into the acoustic domain, the RadioSonic platform enables exploration of realistic multipath, Doppler, digital modulation, and other physical-layer effects using affordable purpose-built hardware, at a fraction of the cost of equivalent RF hardware.

This talk provides a first look at the platform’s architecture and draft curriculum materials. The underlying concept of scaling the speed of light to the speed of sound on low-cost devices for real-time RF emulation is patent pending and being developed to support SDR and DSP education in environments where cost, licensing, and RF complexity present barriers to learning.
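
The scaling concept amounts to a one-line calculation; the numbers below are illustrative choices of mine, not the platform's actual design parameters:

```python
# Keeping the wavelength constant while swapping propagation speed:
# an RF carrier maps to an audio-band carrier scaled by v_sound / c.
c = 3.0e8          # speed of light, m/s
v_sound = 343.0    # speed of sound in air at 20 °C, m/s

f_rf = 2.4e9                    # example RF carrier, Hz
wavelength = c / f_rf           # 0.125 m
f_audio = v_sound / wavelength  # acoustic carrier with the same wavelength
assert abs(f_audio - 2744.0) < 1e-6   # comfortably in the audio band
```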

The presentation will illustrate how the platform can be used for hands-on implementation of digital filtering, timing and carrier recovery, and modern modulations such as QAM and OFDM, as well as spatial techniques including diversity, beam steering and MIMO, using wavelength-consistent waveforms that reflect real-world channel effects at audio scale in real-time hardware.

Targeted for release in Spring 2026, the RadioSonic platform will include extensible software released under a permissive license and designed around common interfaces to simplify integration and modification. These design choices are intended to encourage collaboration and future community-developed functionality as the platform matures. Attendees will preview the platform’s direction and have an opportunity to provide feedback to help shape its relevance to the signal processing community.

Dan Boschen • Workshop
The Space Between Samples: Subsample Precision Variable Delay
Wed, Oct 15 • 8:30 AM – 10:30 AM

This workshop offers a practical, application-driven introduction to implementing programmable fractional delay lines with exceptional precision. We'll begin by reviewing fundamental interpolation approaches, then move to a structured design methodology for developing efficient, high-performance fractional delays, with an emphasis on polyphase and Farrow filter architectures.

Real-world use cases, including beamforming, timing recovery, and high-precision EVM measurement, will illustrate how these techniques can be applied in practice. The session will also feature a live Python coding demonstration to show you how to implement practical fractional delay filters.

Participants with their own laptops will have the option to follow along, while others can simply observe the walkthrough. All attendees will receive installation instructions and example scripts to support live experimentation and future use.
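
As a taste of the topic, the crudest fractional delay (linear interpolation) takes only a few lines; the polyphase and Farrow designs covered in the workshop achieve far higher accuracy. This is a warm-up sketch of mine, not workshop material:

```python
import numpy as np

def frac_delay_linear(x, mu):
    """Delay x by mu samples (0 <= mu < 1) via linear interpolation:
    y[n] = (1 - mu) * x[n] + mu * x[n-1]."""
    y = (1 - mu) * x
    y[1:] += mu * x[:-1]
    y[0] += mu * x[0]      # hold the first sample at the edge
    return y

# Linear interpolation is exact for a ramp, which checks the arithmetic:
ramp = np.arange(10, dtype=float)
y = frac_delay_linear(ramp, 0.25)
assert np.allclose(y[1:], ramp[1:] - 0.25)   # interior samples: n - 0.25
```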

Jim Shima • Talk
Uncommon Multirate Signal Processing for Real-Time Systems
Wed, Oct 15 • 11:00 AM – 12:00 PM

This talk covers a not-so-well-known multirate Fourier property and its time-domain dual. Though obscure, judicious use of this property can open up large, unexpected computational efficiencies for real-time signal processing algorithms. Efficient solutions suitable for high-speed implementations are also presented, including decimation, channelization, and sparse signal processing.

Christopher Hansen • Talk
How to Design Nonlinear Approximations for DSP
Wed, Oct 15 • 1:00 PM – 2:00 PM

Nonlinear functions, such as arctangent, logarithm, and square root are commonly used in Digital Signal Processing. In practice, a textbook approximation algorithm is often used to compute these functions. These approximations are typically of mysterious origin and optimized for a certain application or implementation. Consequently, they may not be ideal for the application at hand. This talk describes a method for designing approximations using Chebfun (www.chebfun.org), an open-source software system for numerical computation with functions. With Chebfun, it is possible to quickly determine polynomial and rational approximations for any function with as many interpolation points as needed. This talk will cover a few basic topics in approximation theory and then work through several practical examples that can be directly employed in fixed point and floating point DSP applications.
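
The core idea can be tried in a few lines of NumPy, standing in here for the MATLAB-based Chebfun workflow the talk uses (a sketch of mine, with an arbitrarily chosen degree):

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Interpolate arctan at Chebyshev points to get a near-minimax
# polynomial approximation on [-1, 1].
deg = 9
p = Chebyshev.interpolate(np.arctan, deg, domain=[-1.0, 1.0])

grid = np.linspace(-1.0, 1.0, 1001)
max_err = np.max(np.abs(p(grid) - np.arctan(grid)))
# For an analytic function the error falls roughly geometrically with
# degree; degree 9 is already well under 1e-3 on this interval.
assert max_err < 1e-3
```

Converting `p` to an ordinary power-series polynomial then gives coefficients that can be evaluated by Horner's rule in fixed- or floating-point code.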

Hilmar Lehnert • Talk
Frequency and Damping: a handy map of the Z-plane
Wed, Oct 15 • 2:00 PM – 3:00 PM

Almost all discrete LTI systems can be represented as a rational function in the Z-domain. This rational function can be fully characterized by a gain and the roots of its polynomials, which are, of course, the poles and the zeros of the transfer function. Any real transfer function can be broken down into cascaded second-order sections, each with a pair of zeros and a pair of poles. The direct interpretation of these root pairs (in terms of real and imaginary parts, magnitude and phase, or two real roots) isn't straightforward.

In this presentation we show an alternative interpretation where each root pair can be represented as the resonance frequency and the damping of a second-order resonator. We’ll show how these parameters map to the Z-plane and that each point in the Z-plane can be uniquely associated with a specific frequency and damping. In other words, we can answer “What is the Q of a pole?”, or at least of a pole pair.
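
Under one common convention, the impulse-invariance map z = exp(s/fs) (an assumption of mine; the talk may define the mapping differently), the pole-to-frequency-and-Q map takes only a few lines:

```python
import numpy as np

def pole_to_freq_q(p, fs):
    """Map a z-plane pole to the resonance frequency (Hz) and Q of the
    matching continuous-time second-order resonator, via z = exp(s/fs)."""
    s = np.log(p) * fs                 # back to the s-plane
    sigma, omega_d = -s.real, s.imag   # damping rate, damped frequency
    omega0 = np.hypot(sigma, omega_d)  # undamped resonance, rad/s
    q = omega0 / (2.0 * sigma)         # quality factor
    return omega0 / (2.0 * np.pi), q

# Example: a pole at radius 0.99 with a 1 kHz angle at fs = 48 kHz
fs = 48000.0
p = 0.99 * np.exp(2j * np.pi * 1000.0 / fs)
f0, q = pole_to_freq_q(p, fs)          # resonance near 1 kHz, moderate Q
```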

Finally, we’ll demonstrate how some popular filter types can be intuitively designed using this representation.

Bradford Watson • Talk
Non-Powers-of-2 FFTs and Why You Should Use Them
Wed, Oct 15 • 3:00 PM – 4:00 PM

The Fast Fourier Transform (FFT) is a fundamental algorithm in digital signal processing, and it comes in various forms. However, the most commonly used FFTs are those with sizes that are powers of 2 (e.g., 256, 512, or 1024). This is due to their widespread availability, ease of implementation, and O(N log N) efficiency, making them a popular choice among designers.

While powers-of-2 FFTs are convenient and efficient, they are not always the best choice from a system standpoint: forcing one into a design can lead to inefficient use of resources and increased complexity in other parts of the system.

Fortunately, there are alternative FFT algorithms that can handle any size of FFT, not just powers of 2.

This talk will describe traditional FFT implementations as well as more exotic FFT algorithms, such as:

  • Rader FFT
  • Winograd FFT
  • Prime Factor FFT

It will also cover how to extend Cooley-Tukey factoring to combine powers-of-2 and non-powers-of-2 FFTs, and the Four Step FFT.

Several other algorithms will be touched upon, along with their application, advantages, and disadvantages.
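
In practice, modern FFT libraries already handle arbitrary lengths, so trying a non-power-of-2 size costs nothing; here a length-1000 transform is checked against the DFT definition (my own quick sanity check, not the talk's material):

```python
import numpy as np

# NumPy's FFT computes transforms of any length: mixed-radix
# Cooley-Tukey for composite sizes and Bluestein's algorithm for large
# primes, so a non-power-of-2 size needs no zero-padding.
rng = np.random.default_rng(0)
N = 1000                     # 2^3 * 5^3, not a power of 2
x = rng.standard_normal(N)

X = np.fft.fft(x)

# Check against the DFT definition, X[k] = sum_n x[n] e^{-2*pi*i*k*n/N}
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)
assert np.allclose(X, W @ x)
```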

Kendall Castor-Perry • Talk
The Filter Wizard – Unfiltered
Wed, Oct 15 • 4:00 PM – 5:00 PM

The Filtering Onion: Five Questions from the Filter Wizard

In a career spent chasing information hidden in all forms of signals, I've boiled down the art of filtering to five essential questions:

  1. Why are you filtering?
  2. What are you filtering?
  3. Where are you filtering?
  4. When are you filtering?
  5. How are you filtering?

These questions lead to another layer of the onion, prompting you to evaluate your approach:

  1. Is it too complex or too simple?
  2. Is it too fancy or too naive?
  3. Is it too obscure or too obvious?
  4. Is it too much or too little?
  5. Is it too early or too late?

And deep inside this onion, we might uncover some "mythconceptions" that need to be battled, in the true spirit of The Filter Wizard.

fred harris • Talk
Let’s Assume the System is Synchronized
Thu, Oct 16 • 8:30 AM – 10:30 AM

It is amazing how many papers on radio systems contain a version of the sentence “Let’s assume the system is synchronized.” Alright, let’s assume the system is synchronized. But I have a few questions: Who did it? How did they do it? Who will do it in the coming decades as many of us retire from the field? And, importantly, where are they acquiring the skills required to negotiate and navigate future physical layers?

This brings us to the question “What do we mean by synchronize?” Its etymology starts with Chronos, the ancient Greek Immortal Being personified in the modern age as Father Time. We thus form synchronize from the Greek prefix syn, meaning “together with,” and chronos, which we interpret as “time.”

With the industrial revolution and the rise of high-speed railroads came the need to synchronize clocks in adjacent towns to maintain timetables and prevent accidents. Today, the higher-speed transport of communication signals places an even greater premium on time measurement and alignment of remote clocks and oscillators.

When discussing the importance of synchronization in my modem design class, I remind students that Momma’s middle name is synchronizer! If the radio is not synchronized, no other system can operate:

  • Not the matched filters
  • Not the equalizer
  • Not the detectors
  • Not the error correcting codes
  • Not the decryption
  • Not the source decoding

At the waveform level, synchronization entails the frequency and phase alignment of remote oscillators for carrier acquisition and tracking, symbol timing, and chip alignment in spread spectrum systems.

What have we missed by assuming the system is synchronized? We’ve skipped a challenging and fascinating part of the process: estimating unknown parameters of a known signal in noise. We’ve replaced the task of processing a noisy waveform with processing a binary stream, and we’ve skipped making Momma happy!

Digital Signal Processing is now standard in modulators and demodulators of modern systems. These require acquiring carrier and clock from signals where neither is explicitly present. Synchronization info must be extracted from:

  • Implicit side information in the modulated signal
  • Explicit side information in pilot signals

Many ad-hoc analog synchronization methods predate optimal techniques. As DSP emerged, these analog schemes were often digitized and reused. But returning to first principles in DSP can offer performance gains.

In this presentation, we’ll review:

  • Receiver structures and parameters to estimate
  • Eye and constellation diagrams
  • Phase-locked loops and their digital counterparts
  • Timing and carrier recovery methods (with and without data decisions)
  • Real-world synchronization in modern systems

MATLAB demos will illustrate key ideas. We’ll also highlight historical and modern synchronization techniques from the Digital Communications: Fundamentals and Applications text by Bernie Sklar and fred harris. The tutorial’s goal is to equip DSP radio designers with modern synchronization methods made possible by DSP maturity—enhancing performance, reducing cost, and improving design satisfaction.
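
To ground the loop discussion, here is a bare-bones second-order digital PLL in NumPy (rather than MATLAB, with ad hoc gains of my choosing, not values from the text) that pulls in a 50 Hz carrier offset:

```python
import numpy as np

# A minimal second-order digital PLL tracking a complex carrier with a
# small, unknown frequency offset.
fs = 48000.0
f_offset = 50.0                       # Hz, unknown to the loop
n = np.arange(20000)
rx = np.exp(2j * np.pi * f_offset * n / fs)

kp, ki = 0.05, 0.002                  # proportional and integrator gains
phase, freq = 0.0, 0.0
err = np.empty(len(n))
for i in range(len(n)):
    e = np.angle(rx[i] * np.exp(-1j * phase))   # phase detector
    err[i] = e
    freq += ki * e                    # loop-filter integrator
    phase += freq + kp * e            # NCO update

# After lock, the frequency register holds the offset in rad/sample
# and the residual phase error averages to ~0.
assert abs(freq - 2 * np.pi * f_offset / fs) < 1e-4
assert abs(np.mean(err[-1000:])) < 1e-3
```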

Ric Losada • Talk
Channelizers in Digital Receivers
Thu, Oct 16 • 11:00 AM – 12:00 PM

As A/D converters move closer to the antenna of a digital receiver, the need to handle broadband signals efficiently in the digital domain is increasing. Efficient Polyphase/FFT Filter Banks (aka Channelizers) are a natural extension to traditional Polyphase Decimators that can handle broadband multi-channel signals efficiently.

This talk will cover:

  • How channelizers are derived and implemented
  • Considerations in designing a prototype filter
  • Oversampled channelizers
  • Simulations demonstrating the use of channelizers with broadband signals
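
To make the derivation concrete, here is a toy critically sampled analysis channelizer in NumPy (parameters are mine, not the talk's): the polyphase branches run at the low rate and an FFT across branches separates the channels.

```python
import numpy as np
from scipy.signal import firwin

M, K = 8, 16                     # channels, taps per polyphase branch
h = firwin(M * K, 1.0 / M)       # prototype lowpass, cutoff at fs/(2M)

def channelize(x):
    """Maximally decimated polyphase/FFT analysis bank: returns (M, T)."""
    T = len(x) // M
    # Polyphase input streams: xp[p, t] = x[t*M - p]
    xp = np.zeros((M, T), dtype=complex)
    xp[0] = x[0::M][:T]
    for p in range(1, M):
        tail = x[M - p::M][:T - 1]
        xp[p, 1:1 + len(tail)] = tail
    # Each branch is filtered at the low rate by its own subfilter
    v = np.stack([np.convolve(h[p::M], xp[p])[:T] for p in range(M)])
    # An FFT across branches recombines the polyphase outputs
    return M * np.fft.ifft(v, axis=0)

# A tone at the center of channel 3 should land in channel 3 at ~unit gain
m0, N = 3, 4096
n = np.arange(N)
x = np.exp(2j * np.pi * m0 * n / M)
Y = channelize(x)
steady = Y[:, 2 * K : N // M - 1]        # skip the filter transient
assert np.argmax(np.mean(np.abs(steady) ** 2, axis=1)) == m0
assert np.max(np.abs(steady[m0] - 1.0)) < 1e-6
```
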

Spencer Markowitz • Workshop
When Perfect Isn’t Optimal: Rethinking Matched Filtering in Radar
Thu, Oct 16 • 1:00 PM – 3:00 PM

Matched filtering is often introduced as the gold standard for radar detection, the optimal solution when the signal is known and the noise is white and Gaussian. However, in many real-world scenarios, matched filtering can become more of a constraint than a solution, such as when strong targets hide the returns of weaker targets. Enter mismatched filtering.

In this workshop, participants will gain hands-on experience with designing and evaluating their own mismatched filters. Through prepared MATLAB coding examples and in-person explanations, participants will learn how filter design parameters impact detection performance in challenging scenarios. We’ll focus on quantifying the benefits and limitations of certain designs and utilize powerful visualization tools to help in decision making.

Attendees will learn many different strategies in this workshop, from choosing the right window or taper to finding optimal solutions with custom constraints. Throughout, we will pay close attention to the cost of each design decision so that our solutions remain robust to the challenges of the field.
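
The window/taper end of that spectrum fits in a few lines (a toy of mine, not workshop code): tapering the reference chirp lowers the range sidelobes at the cost of some SNR and mainlobe width.

```python
import numpy as np

# Matched vs. a Hamming-tapered (mismatched) filter for an LFM chirp.
N = 128
n = np.arange(N)
chirp = np.exp(1j * np.pi * (n - N / 2) ** 2 / N)   # full-band LFM pulse

matched = np.conj(chirp[::-1])
mismatched = matched * np.hamming(N)                # taper the filter only

def peak_sidelobe_db(h):
    """Peak sidelobe level of the compressed pulse, in dB below the peak."""
    r = np.abs(np.convolve(chirp, h))
    r /= r.max()
    peak = np.argmax(r)
    guard = 3                                       # exclude the mainlobe
    sidelobes = np.concatenate([r[:peak - guard], r[peak + guard + 1:]])
    return 20.0 * np.log10(sidelobes.max())

psl_matched = peak_sidelobe_db(matched)
psl_mismatched = peak_sidelobe_db(mismatched)
assert psl_mismatched < psl_matched   # lower sidelobes, by design
```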

Nir Regev • Talk
AI in Radar Signal Processing
Thu, Oct 16 • 3:00 PM – 5:00 PM

This lecture offers a comprehensive overview of the evolution, principles, and cutting-edge applications of radar technology. We trace radar's development from its early days to modern advancements, emphasizing the integration of digital and statistical signal processing with artificial intelligence (AI). Key topics include the history of radar, modern techniques such as FMCW (Frequency Modulated Continuous Wave) and Pulse Doppler radars, and AI's transformative role in detection, tracking, classification, and decision-making.

We delve into the technical foundations of radar signal processing, explaining concepts like frequency modulation, signal mixing, and range-Doppler processing. The lecture also covers the significant AI applications in radar, such as clutter suppression, target classification, and adaptive waveform optimization. Challenges like the need for large training datasets, model interpretability, and robust AI systems are discussed alongside solutions like data augmentation and generative models.

Through detailed explanations and high-level block diagrams, the lecture aims to provide a solid understanding of radar systems' operational principles and AI's role in enhancing their capabilities. This foundational knowledge prepares participants for deeper exploration of these technologies in subsequent modules.