W375 Westgate Building
2:00pm
A next frontier in intelligent systems lies in enabling machines to perceive and interact with the physical world—powering applications from robotics and self-driving cars to AR/VR and content creation. These systems must perceive and represent complex 3D environments, render photorealistic content, and generate interactive outputs—all under tight constraints on latency, memory, and scalability. In this talk, I will explore the systems challenges that arise in emerging visual computing pipelines, and how they push the limits of today's abstractions for memory, compute, and programmability. I will then discuss some of our recent research on building across-the-stack frameworks that offer better primitives for 3D vision, differentiable rendering, and generative pipelines—spanning hardware architecture support, compiler and runtime design, and memory and storage hierarchies.
Additional Information:
Nandita Vijaykumar is an Assistant Professor in the Department of Computer Science at the University of Toronto, where she leads the embARC research group. She is also a faculty member at the Vector Institute for Artificial Intelligence and a Research Scholar at Amazon. She received her Ph.D. from Carnegie Mellon University and has previously worked at AMD, Intel, Microsoft, and Nvidia. Her research explores the intersection of computer systems/architecture with visual computing, including computer vision, robotics, and machine learning. She is particularly interested in building efficient, scalable, and programmable systems that enable machines to perceive, interpret, and interact with the physical world.