Seminar Series

Recent Advances

October 4, 2022

Creating the Internet of Biological and Bio-inspired Things

Shyam Gollakota
Department of Computer Science
University of Washington

Abstract

Living organisms can perform incredible feats. Plants like dandelions can disperse their seeds over a kilometer in the wind, and small insects like bumblebees can see, smell, communicate, and fly around the world, despite their tiny size. Enabling some of these capabilities for the Internet of things (IoT) and cyber-physical systems would be transformative for applications ranging from large-scale sensor deployments to micro-drones, biological tracking, and robotic implants. In this talk, I will explain how by taking an interdisciplinary approach spanning wireless communication, sensing, and biology, we can create programmable devices for the internet of biological and bio-inspired things. I will present the first battery-free wireless sensors, inspired by dandelion seeds, that can be dispersed by the wind to automate deployment of large-scale sensor networks. I will then discuss how integrating programmable wireless sensors with live animals like bumblebees can enable mobility for IoT devices, and how this technique has been used for real-world applications like tracking invasive “murder” hornets. Finally, I will present an energy-efficient insect-scale steerable vision system inspired by animal head motion that can ride on the back of a live beetle and enable tiny terrestrial robots to see.

Biography

Shyam Gollakota is a Washington Research Foundation Endowed Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. His work has been licensed and acquired by multiple companies and is in use by millions of users. His lab also worked closely with the Washington Department of Agriculture to wirelessly track invasive “murder” hornets, which resulted in the destruction of the first nest found in the United States. He received the ACM Grace Murray Hopper Award in 2020 and was named a Moore Inventor Fellow in 2021. He has also been named to MIT Technology Review’s 35 Innovators Under 35, Popular Science’s “Brilliant 10,” and twice to the Forbes 30 Under 30 list. His group’s research has earned Best Paper awards at MOBICOM, SIGCOMM, UbiComp, SenSys, NSDI, and CHI; has appeared in interdisciplinary journals including Nature, Nature Communications, Nature Biomedical Engineering, Science Translational Medicine, Science Robotics, and Nature Digital Medicine; and was named an MIT Technology Review Breakthrough Technology of 2016 and one of Popular Science’s top innovations of 2015. He is an alumnus of MIT (Ph.D., 2013; winner of the ACM Doctoral Dissertation Award) and IIT Madras.

October 20, 2022

Reinforcement Learning with Robustness and Safety Guarantees

Dileep Kalathil
Department of Electrical and Computer Engineering
Texas A&M University

Abstract

Reinforcement Learning (RL) is the class of machine learning methods that addresses the problem of learning to control unknown dynamical systems. RL has recently achieved remarkable success in applications such as game playing and robotics. However, most of these successes are limited to highly structured or simulated environments. When applied to real-world systems, RL algorithms face two fundamental sources of fragility. First, the real-world system parameters can differ significantly from the nominal values used for training RL algorithms. Second, the control policy for any real-world system must maintain necessary safety criteria to avoid undesirable outcomes. Most deep RL algorithms overlook these fundamental challenges, which often results in learned policies that perform poorly in real-world settings. In this talk, I will present two approaches to overcome these challenges. First, I will present an RL algorithm that is robust against parameter mismatches between the simulation system and the real-world system. Second, I will discuss a safe RL algorithm that learns policies such that the frequency of visiting undesirable states and taking expensive actions satisfies the safety constraints. I will also briefly discuss some practical challenges due to sparse reward feedback and the need for rapid real-time adaptation in real-world systems, along with approaches to overcome them.
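To make the robustness idea concrete, the following is a minimal, hypothetical sketch (not the speaker's algorithm) of tabular Q-learning with a pessimistic Bellman target: the update hedges against model mismatch by mixing the observed next-state value with the worst-case value over all states, controlled by an assumed uncertainty level `rho`.

```python
import random

def robust_q_learning(env_step, n_states, n_actions, episodes=200,
                      rho=0.1, alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning with a worst-case (robust) Bellman target.

    With probability rho the next state is treated as adversarial, so the
    target mixes the observed next-state value with the most pessimistic
    value over all states. Illustrative formulation only.
    """
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = (random.randrange(n_actions) if random.random() < eps
                 else max(range(n_actions), key=lambda x: Q[s][x]))
            s2, r, done = env_step(s, a)
            v_next = max(Q[s2])                        # nominal next value
            v_worst = min(max(row) for row in Q)       # pessimistic value
            target = r + gamma * ((1 - rho) * v_next + rho * v_worst) * (not done)
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```

Setting `rho = 0` recovers standard Q-learning; larger `rho` yields more conservative policies that trade nominal performance for robustness.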

Biography

Dileep Kalathil is an Assistant Professor in the Department of Electrical and Computer Engineering at Texas A&M University (TAMU). His main research area is reinforcement learning theory and algorithms, and their applications in communication networks and power systems. Before joining TAMU, he was a post-doctoral researcher in the EECS department at UC Berkeley. He received his Ph.D. from the University of Southern California (USC) in 2014, where he won the Best Ph.D. Dissertation Prize in the Department of Electrical Engineering. He received his M.Tech. from IIT Madras, where he won the award for the best academic performance in the Electrical Engineering Department. He received the NSF CRII Award in 2019 and the NSF CAREER Award in 2021. He is a Senior Member of the IEEE.

Past Talks and Presentations

September 13, 2022

Decoding Hidden Worlds: Wireless & Sensor Technologies for Oceans,
Health, and Robotics

Fadel Adib
Department of Electrical Engineering and Computer Science
MIT

Abstract

As humans, we crave to explore hidden worlds. Yet, today’s technologies remain far from allowing us to perceive most of the world we live in. Despite centuries of seaborne voyaging, more than 95% of our ocean has never been observed or explored. And, at any moment in time, each of us has very little insight into the biological world inside our own bodies. The challenge in perceiving hidden worlds extends beyond ourselves: even the robots we build are limited in their visual perception of the world. In this talk, I will describe new technologies that allow us to decode areas of the physical world that have so far been too remote or difficult to perceive. First, I will describe a new generation of underwater sensor networks that can sense, compute, and communicate without requiring any batteries; our devices enable real-time and ultra-long-term monitoring of ocean conditions (temperature, pressure, coral reefs) with important applications to scientific exploration, climate monitoring, and aquaculture (seafood production). Next, I will talk about new wireless technologies for sensing the human body, both from inside the body (via batteryless micro-implants) as well as from a distance (for contactless cardiovascular and stress monitoring), paving the way for novel diagnostic and treatment methods. Finally, I will highlight our work on extending robotic perception beyond line-of-sight, and how we designed new RF-visual primitives for robotics - including sensing, servoing, navigation, and grasping - to enable new manipulation tasks that were not possible before. The talk will cover how we have designed and built these technologies, and how we work with medical doctors, climatologists, oceanographers, and industry practitioners to deploy them in the real world. 
I will also highlight the open problems and opportunities for these technologies, and how researchers and engineers can build on our open-source tools to help drive them to their full potential in addressing global challenges in climate, health, and automation.

Biography

Fadel Adib is an Associate Professor in the MIT Media Lab and the Department of Electrical Engineering and Computer Science. He is the founding director of the Signal Kinetics group, which invents wireless and sensor technologies for networking, health monitoring, robotics, and ocean IoT. He is also the founder & CEO of Cartesian Systems, a spinoff from his lab that focuses on mapping indoor environments using wireless signals. Adib was named by Technology Review as one of the world’s top 35 innovators under 35 and by Forbes as 30 under 30. His research on wireless sensing (X-Ray Vision) was recognized as one of the 50 ways MIT has transformed Computer Science, and his work on robotic perception (Finder of Lost Things) was named as one of the 103 Ways MIT is Making a Better World. Adib’s commercialized technologies have been used to monitor thousands of patients with Alzheimer’s, Parkinson’s, and COVID-19, and he has had the honor to demo his work to President Obama at the White House. Adib is also the recipient of various awards including the NSF CAREER Award (2019), the ONR Young Investigator Award (2019), the ONR Early Career Grant (2020), the Google Faculty Research Award (2017), the Sloan Research Fellowship (2021), and the ACM SIGMOBILE Rockstar Award (2022), and his research has received Best Paper/Demo Awards at SIGCOMM, MobiCom, and CHI. Adib received his Bachelor’s degree from the American University of Beirut (2011) and his PhD from MIT (2016), where his thesis won the Sprowls award for Best Doctoral Dissertation at MIT and the ACM SIGMOBILE Doctoral Dissertation Award.

May 12, 2022

Tackling Computational Heterogeneity in Federated Learning

Gauri Joshi
Department of Electrical and Computer Engineering
Carnegie Mellon University

Abstract

The future of machine learning lies in moving both data collection as well as model training to the edge. The emerging area of federated learning seeks to achieve this goal by orchestrating distributed model training using a large number of resource-constrained mobile devices that collect data from their environment. Due to limited communication capabilities as well as privacy concerns, the data collected by these devices cannot be sent to the cloud for centralized processing. Instead, the nodes perform local training updates and only send the resulting model to the cloud. A key aspect that sets federated learning apart from data-center-based distributed training is the inherent heterogeneity in data and local computation at the edge clients. In this talk, I will present our recent work on tackling computational heterogeneity in federated optimization, firstly in terms of heterogeneous local updates made by the edge clients, and secondly in terms of intermittent client availability.
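The effect of heterogeneous local updates can be sketched in a few lines. The toy example below (an illustrative sketch, not the speaker's algorithm) runs federated averaging on scalar quadratic losses where each client performs a different, assumed number of local gradient steps per round; plain averaging then drifts toward the clients that do more local work, which is precisely the kind of objective inconsistency the talk addresses.

```python
def fed_avg(client_targets, local_steps, rounds=200, lr=0.1):
    """Minimal federated-averaging sketch on scalar quadratic losses.

    Client i holds loss (w - c_i)^2 and runs its own number of local
    gradient steps per round (modeling computational heterogeneity);
    the server then averages the returned models.
    """
    w = 0.0  # global model
    for _ in range(rounds):
        updates = []
        for c, steps in zip(client_targets, local_steps):
            w_local = w
            for _ in range(steps):
                grad = 2 * (w_local - c)   # gradient of (w - c)^2
                w_local -= lr * grad
            updates.append(w_local)
        w = sum(updates) / len(updates)    # server-side averaging
    return w
```

With equal local steps the global model converges to the true minimizer of the average loss; with unequal steps it converges to a biased fixed point, motivating update-normalization schemes for heterogeneous clients.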

Biography

Gauri Joshi has been an assistant professor in the ECE department at Carnegie Mellon University since September 2017. Previously, she worked as a Research Staff Member at IBM T. J. Watson Research Center. Gauri completed her Ph.D. from MIT EECS in June 2016, advised by Prof. Gregory Wornell. She received her B.Tech and M.Tech in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay in 2010. Her awards and honors include the NSF CAREER Award (2021), ACM Sigmetrics Best Paper Award (2020), NSF CRII Award (2018), IBM Faculty Research Award (2017), Best Thesis Prize in Computer Science at MIT (2012), and the Institute Gold Medal of IIT Bombay (2010).

April 13, 2022

Trustworthy Machine Learning for Systems Security

Lorenzo Cavallaro
Department of Computer Science
University College London

Abstract

No day goes by without reading machine learning (ML) success stories across different application domains. Systems security is no exception, where ML’s tantalizing results leave one to wonder whether there are any unsolved problems left. However, machine learning has no clairvoyant abilities and once the magic wears off, we’re left in uncharted territory.

We, as a community, need to understand and improve the effectiveness of machine learning methods for systems security in the presence of adversaries. One of the core challenges is related to the representation of problem space objects (e.g., program binaries) in a numerical feature space, as the semantic gap makes it harder to reason about attacks and defences and often leaves room for adversarial manipulation. Inevitably, the effectiveness of machine learning methods for systems security is intertwined with the underlying abstractions, e.g., program analyses, used to represent the objects. In this context, is trustworthy machine learning possible?

In this talk, I will first illustrate the challenges in the context of adversarial ML evasion attacks against malware classifiers. The classic formulation of evasion attacks is ill-suited for reasoning about how to generate realizable evasive malware in the problem space. I’ll provide a deep dive into recent work that provides a theoretical reformulation of the problem and enables more principled attack designs. Implications are interesting, as the framework facilitates reasoning around end-to-end attacks that can generate real-world adversarial malware, at scale, that evades both vanilla and hardened classifiers, thus calling for novel defenses.

Next, I’ll broaden our conversation to include not just robustness against specialized attacks, but also drifting scenarios, in which threats evolve and change over time. Prior work suggests adversarial ML evasion attacks are intrinsically linked with concept drift and we will discuss how drift affects the performance of malware classifiers, and what role the underlying feature space abstraction has in the whole process.

Ultimately, these threats would not exist if the abstraction could capture the ’Platonic ideal’ of interesting behavior (e.g., maliciousness), however, such a solution is still out of reach. I’ll conclude by outlining current research efforts to make this goal a reality, including robust feature development, assessing vulnerability to universal perturbations, and forecasting of future drift, which illustrate what trustworthy machine learning for systems security may eventually look like.

Biography

Lorenzo grew up on pizza, spaghetti, and Phrack, first. Underground and academic research interests followed shortly thereafter. He is a Full Professor of Computer Science at UCL Computer Science, where he leads the Systems Security Research Lab (https://s2lab.cs.ucl.ac.uk) within the Information Security Research Group. He speaks, publishes at, and sits on the technical program committees of top-tier and well-known international conferences including IEEE S&P, USENIX Security, ACM CCS, ACSAC, and DIMVA, as well as emerging thematic workshops (e.g., Deep Learning for Security at IEEE S&P, and AISec at ACM CCS), and received the USENIX WOOT Best Paper Award in 2017. Lorenzo is Program Co-Chair of Deep Learning and Security 2021-22, DIMVA 2021-22, and he was Program Co-Chair of ACM EuroSec 2019-20 and General Co-Chair of ACM CCS 2019. He holds a PhD in Computer Science from the University of Milan (2008), held Post-Doctoral and Visiting Scholar positions at Vrije Universiteit Amsterdam (2010-2011), UC Santa Barbara (2008-2009), and Stony Brook University (2006-2008), worked in the Department of Informatics at King’s College London (2018-2021), where he held the Chair in Cybersecurity (Systems Security), and the Information Security Group at Royal Holloway, University of London (Assistant Professor, 2012; Associate Professor, 2016; Full Professor, 2018). He’s definitely never stopped wondering and having fun throughout.

March 25, 2022

Deep Convolutional Neural Networks: Enabling and Experimentally Validating Secure and High-Bandwidth Links

Kaushik Chowdhury
Electrical and Computer Engineering Department
Northeastern University

Abstract

The future NextG wireless standard must be able to sustain ultra-dense networks with trillions of untrusted devices, many of which will be mobile and require assured high-bandwidth links. This talk explores how deep learning, specifically deep convolutional neural networks (CNNs), will play a critical role in enabling secure, high-bandwidth links while minimizing complex, upper-layer processing and exhaustive search of the state space. First, we describe how device identification can be performed at the physical layer by learning subtle but discriminative distortions present in the transmitted signal, also known as RF fingerprints. We present accuracy results for large radio populations as well as for datasets collected from community-scale NSF PAWR platforms. Second, we show how beam selection for millimeter-wave links in a vehicular scenario can be expedited using out-of-band multi-modal data collected from an actual autonomous vehicle equipped with sensors such as LiDAR, cameras, and GPS. We propose individual-modality and distributed fusion-based CNN architectures that can execute locally as well as at a mobile edge computing center, with a study of the associated tradeoffs.

Biography

Prof. Chowdhury is a Professor in the Electrical and Computer Engineering Department and Associate Director of the Institute for the Wireless IoT at Northeastern University, Boston. He is the winner of the U.S. Presidential Early Career Award for Scientists and Engineers (PECASE) in 2017, the Defense Advanced Research Projects Agency Young Faculty Award in 2017, the Office of Naval Research Director of Research Early Career Award in 2016, and the National Science Foundation (NSF) CAREER Award in 2015. He is the recipient of best paper awards at IEEE GLOBECOM’19, DySPAN’19, INFOCOM’17, ICC’13, ’12, ’09, and ICNC’13. He serves as area editor for IEEE Trans. on Mobile Computing, Elsevier Computer Networks Journal, IEEE Trans. on Networking, and IEEE Trans. on Wireless Communications. He co-directs the operations of the Colosseum RF/network emulator, as well as the Platforms for Advanced Wireless Research project office. Prof. Chowdhury has served in several leadership roles, including Chair of the IEEE Technical Committee on Simulation, and as Technical Program Chair for IEEE INFOCOM 2021, IEEE CCNC 2021, IEEE DySPAN 2021, and ACM MobiHoc 2022. His research interests are in large-scale experimentation, applied machine learning for wireless communications and networks, networked robotics, and self-powered Internet of Things.

March 9, 2022

Transparent Computing in the AI Era

Shiqing Ma
Department of Computer Science
Rutgers University

Abstract

Recent advances in artificial intelligence (AI) have shifted how modern computing systems work, raising new challenges and opportunities for transparent computing. On the one hand, many AI systems are black boxes and have dense connections among their computing units, which makes existing techniques like dependency analysis fail. Such a new computing system calls for new methods to improve its transparency to defend against attacks on AI-powered systems, such as Trojan attacks. On the other hand, it provides a brand-new computation abstraction, which features data-driven, computation-heavy applications. It potentially enables new applications in transparent computing, which typically involves large-scale data processing. In this talk, I will present my work in these two directions. Specifically, I will discuss the challenges in analyzing deep neural networks for security inspection and introduce our novel approach to examining Trojan behaviors. Later, I will talk about how AI can help increase the information entropy of large security audit logs to enable efficient lossless compressed storage.

Biography

Shiqing Ma is an Assistant Professor in the Department of Computer Science at Rutgers University, the State University of New Jersey. He received his Ph.D. in Computer Science from Purdue University in 2019. His research focuses on program analysis, software and system security, adversarial machine learning, and software engineering. He is the recipient of Distinguished Paper Awards from NDSS 2016 and USENIX Security 2017.


March 3, 2022

Resource Allocation through Learning in Emerging Wireless Networks

Sanjay Shakkottai
Department of Electrical and Computer Engineering
The University of Texas at Austin

Abstract

In this talk, we discuss learning-inspired algorithms for resource allocation in emerging wireless networks (5G and beyond to 6G). We begin with an overview of opportunities for wireless and ML at various time-scales in network resource allocation. We then present two specific instances to make the case that learning-assisted resource allocation algorithms can significantly improve performance in real wireless deployments. First, we study co-scheduling of ultra-low-latency traffic (URLLC) and broadband traffic (eMBB) in a 5G system, where we need to meet the dual objectives of maximizing utility for eMBB traffic while immediately satisfying URLLC demands. We study iterative online algorithms based on stochastic approximation to achieve these objectives. Next, we study online learning (through a bandit framework) of wireless capacity regions to assist in downlink scheduling, where these capacity regions are “maps” from each channel state to the corresponding set of feasible transmission rates. In practice, these maps are hand-tuned by operators based on experiments, and these static maps are chosen such that they are good across several base-station deployment scenarios. Instead, we propose an epoch-greedy bandit algorithm for learning scenario-specific maps. We derive regret guarantees, and also empirically validate our approach on a high-fidelity 5G New Radio (NR) wireless simulator developed within AT&T Labs. This is based on joint work with Gustavo de Veciana, Arjun Anand, Isfar Tariq, Rajat Sen, Thomas Novlan, Salam Akoum, and Milap Majmundar.
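As background on the epoch-greedy family of bandit algorithms the abstract mentions, the sketch below is a simplified illustration (not the talk's scenario-specific algorithm): each epoch spends one step exploring a uniformly random arm to refine reward estimates, then an increasing number of steps exploiting the empirically best arm. Rewards are assumed to lie in [0, 1].

```python
import random

def epoch_greedy(pull, n_arms, epochs=100):
    """Epoch-greedy bandit sketch.

    Epoch k: one uniform exploration pull (used to update empirical means),
    followed by k exploitation pulls of the current empirical best arm.
    """
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    best = 0
    for epoch in range(1, epochs + 1):
        a = random.randrange(n_arms)           # explore one random arm
        r = pull(a)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # running-mean update
        total += r
        best = max(range(n_arms), key=lambda i: means[i])
        for _ in range(epoch):                 # exploit for `epoch` steps
            total += pull(best)
    return best, total
```

Because the exploitation phase lengthens over time, the fraction of exploratory pulls shrinks as estimates improve, which is the mechanism behind epoch-greedy's regret guarantees.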

Biography

Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. He is with The University of Texas at Austin, where he is a Professor in the Department of Electrical and Computer Engineering, and holds the Cockrell Family Chair in Engineering #15. He received the NSF CAREER award in 2004 and was elected as an IEEE Fellow in 2014. He was a co-recipient of the IEEE Communications Society William R. Bennett Prize in 2021. His research interests lie at the intersection of algorithms for resource allocation, statistical learning and networks, with applications to wireless communication networks and online platforms.