Purdue’s AGILE3D stabilizes real-time lidar detection under resource contention
System adapts 3D detection strategy in real time to match GPU load, scene complexity
Purdue University researcher Somali Chaterji leads the team that developed AGILE3D, the first adaptive, contention- and content-aware 3D object detection system tailored for embedded GPUs. (Purdue University photo)
WEST LAFAYETTE, Ind. — Companies that manufacture autonomous vehicles, industrial robotics, delivery robots and drones may benefit from a patent-pending Purdue University innovation that outperforms 3D lidar perception pipelines during resource contention.
Somali Chaterji leads a team that has developed AGILE3D, a cutting-edge 3D object detection system. She is an associate professor of agricultural and biological engineering in Purdue’s College of Agriculture and College of Engineering and holds a courtesy appointment in the Elmore Family School of Electrical and Computer Engineering.
“AGILE3D is the first adaptive, contention- and content-aware 3D object detection system tailored for embedded GPUs, or graphics processing units,” she said. “The system can dynamically adjust detection strategies based on real-time hardware constraints and varying input data.”
In evaluations against baseline methods introduced at the Conference on Neural Information Processing Systems (NeurIPS), the European Conference on Computer Systems (EuroSys) and the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), AGILE3D met stringent latency objectives across datasets and platforms while delivering up to 3% higher accuracy than adaptive controllers and up to 7% higher accuracy than widely used static 3D detectors.
Chaterji said AGILE3D is broadly applicable wherever a robot or vehicle needs fast 3D perception on a tight onboard computer budget. She said the strongest fit is autonomous driving, where lidar frames must be processed in real time and perception is critical to safety.
“Beyond cars, AGILE3D can benefit delivery robots and drones, industrial/mobile robotics, augmented reality/virtual reality perception, and outdoor autonomy in digital agriculture and forestry, especially when the platform relies on an embedded GPU and must keep latency predictable for smoother, safer operation,” she said. “That matters most when multiple onboard workloads run at once, such as perception, tracking and planning, alongside in-cabin infotainment or driver-monitoring features that can also draw on GPU resources.”
A video demonstration of AGILE3D is available online. Research about AGILE3D was published at ACM MobiSys 2025, the International Conference on Mobile Systems, Applications and Services.
Chaterji disclosed AGILE3D to the Purdue Innovates Office of Technology Commercialization, which has applied for a patent from the U.S. Patent and Trademark Office to protect the intellectual property.
Industry partners interested in developing or commercializing the system should contact Abhijit Karve, director of business development and licensing, at aakarve@prf.org about track code 71001.
The problem of resource contention
Chaterji said resource contention occurs when multiple workloads share the same embedded GPU and memory system at the same time. An example is a ride-hailing robotaxi where camera perception, lidar processing, tracking, mapping and planning run together on the same embedded GPU.
“They contend for both compute (GPU processing time) and memory bandwidth (how fast data can move between memory and the GPU), and when either becomes saturated, perception can slow down or become jittery,” she said. “Another example is an advanced driver-assistance vehicle, where autonomy workloads may share the same GPU with driver-monitoring and in-cabin visualization or infotainment tasks.”
Chaterji said one of 3D lidar’s key constraints is its update rate, or how often the sensor delivers a new point cloud frame, which is a fresh 3D snapshot of surroundings. Each frame is a set of distance points that outlines nearby objects and surfaces. She said many automotive lidars run at 10 to 20 hertz, meaning the system receives 10 to 20 frames each second.
“Under contention, 3D lidar pipelines are hit hard because stages like voxelization, spatial encoding and sparse 3D computation can become jittery,” she said. “Some frames finish quickly but others take much longer depending on point density and memory pressure.”
When a frame takes longer than the sensor period, or the time between lidar frames, the perception stack cannot keep pace with incoming frames.
“So it either delivers stale detections, sometimes drops frames or accumulates delay,” Chaterji said. “The practical consequence is less stable real-time perception in complex scenes, especially when the GPU is already busy.”
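The arithmetic behind this is simple: a 10-hertz lidar delivers a frame every 100 milliseconds, so any frame whose processing exceeds that period pushes delay onto the frames behind it. The sketch below, which is illustrative and not drawn from the AGILE3D codebase, shows how occasional contention spikes turn into accumulated backlog.

```python
# Illustrative sketch: why per-frame latency must stay under the lidar
# sensor period to keep perception from falling behind.

def sensor_period_ms(update_rate_hz: float) -> float:
    """Time between successive lidar frames, in milliseconds."""
    return 1000.0 / update_rate_hz

def simulate_backlog(frame_latencies_ms, period_ms):
    """Accumulate delay whenever a frame takes longer than one period.

    Backlog never goes negative: a fast frame can only absorb delay
    that has already built up, not bank time for the future.
    """
    backlog = 0.0
    for latency in frame_latencies_ms:
        backlog = max(0.0, backlog + latency - period_ms)
    return backlog

period = sensor_period_ms(10)  # 10 Hz lidar -> 100 ms per frame

# All frames under budget: the stack keeps pace, no backlog.
steady = simulate_backlog([80.0, 90.0, 95.0], period)

# Two contention spikes: delay accumulates and detections go stale.
jittery = simulate_backlog([80.0, 150.0, 140.0], period)
```

The hypothetical latency numbers are chosen only to show the two regimes; real per-frame times vary with scene density and GPU load, as described above.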
How AGILE3D responds during resource contention
Chaterji said AGILE3D maintains performance under resource contention through two coordinated layers: its multibranch execution framework (MEF) and its contention- and content-aware reinforcement learning (CARL) controller.
“The MEF builds a portfolio of pretrained 3D detectors by varying five control knobs, so there is always a feasible option across a wide latency range,” she said.
The portfolio of detectors includes:
- Encoding format, or how lidar points are turned into a grid-like representation
- Spatial resolution, or how fine the grid is — smaller voxels and pillars capture more detail
- Spatial encoding method, or how points are assigned to cells, such as hard voxelization with fixed caps or dynamic voxelization that adapts to scene density
- 3D feature extractor, which is the backbone of the network, such as a sparse 3D convolutional neural network (CNN) or a transformer
- Detection head, or how the model outputs boxes
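A branch in the portfolio is one fixed setting of these five knobs, profiled ahead of time for latency and accuracy. The sketch below shows one plausible way to represent such a portfolio; the field names, configurations and numbers are hypothetical illustrations, not AGILE3D's actual branches.

```python
# Hypothetical representation of a multibranch portfolio: each branch
# fixes the five control knobs described above and carries profiled
# latency/accuracy for the target GPU. All values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Branch:
    encoding: str          # e.g., "voxel" or "pillar" representation
    resolution_m: float    # voxel/pillar size in meters (smaller = finer)
    spatial_encoding: str  # "hard" (fixed caps) or "dynamic" voxelization
    backbone: str          # 3D feature extractor, e.g. sparse CNN or transformer
    head: str              # detection head style for outputting boxes
    latency_ms: float      # profiled per-frame latency without contention
    accuracy: float        # profiled detection accuracy (e.g., mAP)

PORTFOLIO = [
    Branch("voxel",  0.05, "dynamic", "transformer", "center", 420.0, 0.71),
    Branch("voxel",  0.10, "hard",    "sparse_cnn",  "center", 180.0, 0.66),
    Branch("pillar", 0.16, "hard",    "sparse_cnn",  "anchor",  90.0, 0.61),
]
```

Spanning the knobs this way is what gives the controller a feasible option across a wide latency range: cheap, coarse branches for heavy contention and accurate, fine-grained branches for light load.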
“The CARL controller selects the best branch per frame using both content signals and measured hardware contention,” she said. “When contention rises or the scene becomes more complex, AGILE3D switches to a branch that best matches the latency budget. The system stays within the target while sacrificing as little accuracy as possible, which reduces deadline misses and reduces latency jitter.”
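AGILE3D's actual CARL controller is a reinforcement learner, but the objective it optimizes can be illustrated with a much simpler greedy stand-in: pick the most accurate branch whose contention-scaled latency still fits the budget. The branch names and profiled numbers below are hypothetical.

```python
# Minimal sketch (not AGILE3D's actual controller) of contention-aware
# branch selection. Branches are (name, profiled_latency_ms, accuracy)
# tuples with illustrative values.

BRANCHES = [
    ("high_res_transformer", 420.0, 0.71),
    ("mid_res_sparse_cnn",   180.0, 0.66),
    ("low_res_pillar",        90.0, 0.61),
]

def select_branch(branches, budget_ms, contention_factor):
    """Most accurate branch whose scaled latency fits the budget.

    contention_factor >= 1.0 models measured GPU contention inflating
    per-frame latency (1.5 means frames run about 50% slower).
    """
    feasible = [b for b in branches
                if b[1] * contention_factor <= budget_ms]
    if not feasible:  # nothing fits: fall back to the fastest branch
        return min(branches, key=lambda b: b[1])
    return max(feasible, key=lambda b: b[2])

# Light load, generous budget: the accurate branch fits.
best = select_branch(BRANCHES, budget_ms=500.0, contention_factor=1.0)

# Heavy contention, tight budget: drop to a cheaper branch to hold the deadline.
cheap = select_branch(BRANCHES, budget_ms=200.0, contention_factor=1.5)
```

The real controller also conditions on content signals such as scene complexity and learns its policy rather than scanning the portfolio greedily, but the trade it makes per frame is the one shown here.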
Validation, support and next development steps
During comprehensive evaluations, AGILE3D achieved state-of-the-art performance, maintaining high accuracy across varying hardware contention levels and latency budgets of 100 to 500 milliseconds.
“On NVIDIA Orin and Xavier GPUs, AGILE3D consistently leads the Pareto frontier, outperforming existing methods for robust, efficient 3D object detection,” she said.
Chaterji continues to develop the technology by smoothing the latency spikes around voxelization and dynamic voxelization. The same improvements can also help enable dense scene understanding on onboard computers, where 3D semantic segmentation must run reliably under tight compute and memory budgets.
“Even incremental improvements that make voxelization more predictable, such as more stable kernel time and fewer synchronization stalls, would directly translate into fewer latency violations,” she said. “That would also enable a tighter controller that can safely push closer to the service level objective.”
Chaterji and her team received funding to develop AGILE3D through her National Science Foundation CAREER grant and a National Science Foundation grant for their CHORUS center.
About Purdue Innovates Office of Technology Commercialization
The Purdue Innovates Office of Technology Commercialization operates one of the most comprehensive technology transfer programs among leading research universities in the U.S. Services provided by this office support the economic development initiatives of Purdue University and benefit the university’s academic activities through commercializing, licensing and protecting Purdue intellectual property. In fiscal year 2025, the office reported 161 deals executed with 269 technologies licensed, 479 invention disclosures received, and 267 U.S. and international patents received. The office is managed by the Purdue Research Foundation, a private, nonprofit foundation created to advance the mission of Purdue University. Contact otcip@prf.org for more information.
About Purdue University
Purdue University is a public research university leading with excellence at scale. Ranked among the top 10 public universities in the United States, Purdue discovers, disseminates and deploys knowledge with a quality and at a scale second to none. More than 106,000 students study at Purdue across multiple campuses, locations and modalities, including more than 57,000 at our main campus locations in West Lafayette and Indianapolis. Committed to affordability and accessibility, Purdue’s main campus has frozen tuition 14 years in a row. See how Purdue never stops in the persistent pursuit of the next giant leap — including its integrated, comprehensive Indianapolis urban expansion; the Mitch Daniels School of Business; Purdue Computes; and the One Health initiative — at https://www.purdue.edu/president/strategic-initiatives.
Media contact: Steve Martin, sgmartin@prf.org