School of Technology and Computer Science Seminars

Large-scale Situation Awareness with Camera Networks and Multimodal Sensing

by Prof. Kishore Ramachandran (Georgia Tech)

Monday, February 20, 2012 (Asia/Kolkata)
at Colaba Campus (AG-80)
Description
Sensors of various modalities and capabilities, especially cameras, have become ubiquitous in our environment. Their intended use is wide ranging and encompasses surveillance, transportation, entertainment, education, healthcare, emergency response, disaster recovery, and the like. Technological advances and the low cost of such sensors enable the deployment of large-scale camera networks in large metropolises such as London and New York. Multimedia algorithms for analyzing and drawing inferences from video and audio have also matured tremendously in recent times. Despite all these advances, large-scale reliable systems for media-rich sensor-based applications, often classified as situation awareness applications, are yet to become commonplace. Why is that? There are several forces at work here. First of all, the system abstractions are just not at the right level for quickly prototyping such applications in the large. Second, while Moore’s law has held true for predicting the growth of processing power, the volume of data that applications are called upon to handle is growing similarly, if not faster. An enormous amount of sensing data is continually generated for real-time analysis in such applications. Further, due to the very nature of the application domain, there are dynamic and demanding resource requirements for such analyses.

The lack of the right set of abstractions for programming such applications, coupled with their data-intensive nature, has hitherto made realizing reliable large-scale situation awareness applications difficult. Incidentally, situation awareness is a very popular but ill-defined research area that has attracted researchers from many different fields. In this talk, I will adopt a systems perspective and consider the components that are essential in realizing a fully functional situation awareness system.

Bio: Kishore Ramachandran received his Ph.D. in Computer Science from the University of Wisconsin, Madison in 1986, and has been on the faculty of Georgia Tech since then. Currently, he is the Director of the Samsung Tech Advanced Research (STAR) Center, Director of Korea Programs, and Professor in the College of Computing at the Georgia Institute of Technology. For two years (July 2003 to August 2005) he served as the Chair of the Core Computing Division within the College of Computing. His fields of interest include parallel and distributed systems, computer architecture, and operating systems. He has authored over 100 technical papers and is best known for his work on Distributed Shared Memory (DSM) in the context of the Clouds operating system, and more recently for his work on stream-based distributed programming in the context of the Stampede system. Currently, he is leading two inter-related projects: ASAP/TC, which deals with camera sensor networks for situation awareness, and Web on Demand, which deals with a system software stack for enabling transient social networking on mobile platforms. He has so far graduated 22 Ph.D. students, who are well placed in academia and industry, and is currently advising 8 Ph.D. students. He is the recipient of an NSF PYI Award in 1990, the Georgia Tech doctoral thesis advisor award in 1993, the College of Computing Outstanding Senior Research Faculty award in 1996, the College of Computing Dean's Award in 2003, the College of Computing William "Gus" Baird Teaching Award in 2004, and the 2009 Freeman Award for entrepreneurship in the College of Computing.