
MOSAIC: Spatially-Multiplexed Edge AI Optimization over Multiple Concurrent Video Sensing Streams

Speaker(s):

GOKARN Ila Nitin
PhD Candidate
School of Computing and Information Systems
Singapore Management University

Date: 26 May 2023, Friday

Time: 11:30am - 12:30pm

Venue: Meeting room 5.1, Level 5, School of Computing and Information Systems 1, Singapore Management University, 80 Stamford Road, Singapore 178902

Please register by 25 May 2023.

About the Talk

Sustaining high fidelity and high throughput of perception tasks over vision sensor streams on edge devices remains a formidable challenge, especially given the continuing increase in image sizes (e.g., generated by 4K cameras) and the complexity of DNN models. One promising approach is criticality-aware processing, where computation is directed selectively to "critical" portions of individual image frames. We introduce MOSAIC, a novel system for such criticality-aware concurrent processing of multiple vision sensing streams that provides a multiplicative increase in achievable throughput with negligible loss in perception fidelity. MOSAIC determines critical regions from images received from multiple vision sensors and spatially bin-packs these regions, using a novel multi-scale Mosaic Across Scales (MoS) tiling strategy, into a single "canvas frame" sized such that the edge device can retain sufficiently high processing throughput. Experimental studies using benchmark datasets for two tasks, Automatic License Plate Recognition and Drone-based Pedestrian Detection, show that MOSAIC, executing on a Jetson TX2 edge device, provides dramatic gains in the throughput vs. fidelity tradeoff. For instance, for drone-based pedestrian detection with a batch size of 4, MOSAIC can pack input frames from 6 cameras to achieve 4.75× (475%) higher throughput (23 FPS per camera, cumulatively 138 FPS) with ≤ 1% accuracy loss, compared to a First Come First Serve (FCFS) processing paradigm.
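To make the canvas-packing idea concrete, here is a minimal sketch (not the authors' implementation) of criticality-aware packing: critical regions cropped from several camera frames are placed onto a single fixed-size canvas using a simple single-scale shelf bin-packing heuristic, so the detector runs once per canvas instead of once per frame. The names (Region, pack_canvas) and the shelf strategy are illustrative assumptions; MOSAIC's actual MoS tiling is multi-scale and more sophisticated.

```python
# Illustrative sketch only: crop "critical" regions from multiple camera
# frames and shelf-pack them into one canvas for a single inference pass.
# Names and the single-scale shelf heuristic are assumptions, not MOSAIC's MoS tiling.
import numpy as np

class Region:
    def __init__(self, cam_id, x, y, w, h):
        self.cam_id, self.x, self.y, self.w, self.h = cam_id, x, y, w, h

def pack_canvas(frames, regions, canvas_hw=(640, 640)):
    """Place cropped critical regions onto one canvas via shelf packing."""
    H, W = canvas_hw
    canvas = np.zeros((H, W, 3), dtype=np.uint8)
    placements = []                      # (cam_id, canvas box, source box)
    cur_x = cur_y = shelf_h = 0
    # Taller regions first so each shelf is filled efficiently.
    for r in sorted(regions, key=lambda r: r.h, reverse=True):
        crop = frames[r.cam_id][r.y:r.y + r.h, r.x:r.x + r.w]
        if cur_x + r.w > W:              # row full: start a new shelf
            cur_x, cur_y = 0, cur_y + shelf_h
            shelf_h = 0
        if cur_y + r.h > H:              # canvas full: remaining regions wait
            break
        canvas[cur_y:cur_y + r.h, cur_x:cur_x + r.w] = crop
        placements.append((r.cam_id, (cur_x, cur_y, r.w, r.h), (r.x, r.y, r.w, r.h)))
        cur_x += r.w
        shelf_h = max(shelf_h, r.h)
    # Run the detector once on `canvas`, then map detections back via `placements`.
    return canvas, placements

# Example: two 1080p camera frames, three critical regions.
frames = {0: np.zeros((1080, 1920, 3), np.uint8),
          1: np.zeros((1080, 1920, 3), np.uint8)}
regions = [Region(0, 100, 200, 300, 250),
           Region(1, 50, 60, 200, 400),
           Region(1, 800, 500, 150, 150)]
canvas, placements = pack_canvas(frames, regions)
```

The key design point the sketch captures is that inference cost scales with the canvas size rather than with the number of cameras, which is what yields the multiplicative throughput gain when only small portions of each frame are critical.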

This is a Pre-Conference talk for the 14th ACM Multimedia Systems Conference (MMSys 2023).

About the Speaker

GOKARN Ila Nitin is a fourth-year PhD candidate in Computer Science at Singapore Management University's School of Computing and Information Systems. Advised by Prof. Archan Misra, she works primarily in the field of pervasive systems and sensing with a focus on cognitive edge computing paradigms and platforms.