
Seminar Series

Upcoming Talks and Presentations

April 25, 2024@2:30pm EST

CT Scan for Your Network: Topology Inference from End-to-end Measurements

Ting He
School of Electrical Engineering and Computer Science
Pennsylvania State University

Zoom: osu.zoom.us/j/96171324710?pwd=UDVuOXd4cno5cFlYb0w0OEVlbEF3QT09

Abstract

Network topology information is needed for a number of management functions at the network layer and above. However, the traditional ways of obtaining this information require either access to the control plane (e.g., SNMP, OpenFlow) or cooperation from internal nodes (e.g., traceroute). In this talk, I will give an overview of an inference-based technique for obtaining topology information from end-to-end measurements in the data plane, known as network topology inference. As a branch of a broader family of inverse problems known as network tomography, topology inference tries to jointly infer the routing topology and the link metrics of a black-box communication network from end-to-end performance measurements (e.g., delays, losses). The talk will cover the basic idea behind topology inference, the state of the art and its limitations, and an example of how the inferred information can facilitate upper-layer applications in the context of overlay networks.
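For readers new to the area, the flavor of inference at work can be illustrated with a toy example (entirely hypothetical data and a classical covariance-based grouping heuristic, not the specific methods of the talk): receivers that share a longer path from the source exhibit more strongly correlated end-to-end delays, so repeatedly merging the most-correlated pair recovers a tree.

```python
import numpy as np

# Toy sketch: infer a routing tree over four receivers from end-to-end delay
# samples, using the idea that two receivers sharing a longer path from the
# source show higher delay covariance. All data here is synthetic.
rng = np.random.default_rng(0)
n_probes = 5000

# Ground-truth tree: source -> A -> {r1, r2}, source -> B -> {r3, r4};
# each link adds an independent random delay per probe.
link = {name: rng.exponential(1.0, n_probes) for name in ["A", "B", "l1", "l2", "l3", "l4"]}
delays = np.stack([
    link["A"] + link["l1"],   # r1
    link["A"] + link["l2"],   # r2
    link["B"] + link["l3"],   # r3
    link["B"] + link["l4"],   # r4
])

# Agglomerative grouping: repeatedly merge the pair of clusters whose delays
# have the largest covariance (i.e., the longest shared path from the source).
clusters = {i: (f"r{i+1}", delays[i]) for i in range(4)}
while len(clusters) > 1:
    keys = list(clusters)
    a, b = max(
        ((x, y) for i, x in enumerate(keys) for y in keys[i + 1:]),
        key=lambda ab: np.cov(clusters[ab[0]][1], clusters[ab[1]][1])[0, 1],
    )
    name = f"({clusters[a][0]},{clusters[b][0]})"
    merged = (clusters[a][1] + clusters[b][1]) / 2  # proxy signal for the new internal node
    del clusters[a], clusters[b]
    clusters[max(keys) + 1] = (name, merged)

print("inferred grouping:", next(iter(clusters.values()))[0])
# Groups r1 with r2 and r3 with r4, matching the hidden tree.
```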

Biography

Ting He is an Associate Professor in the School of Electrical Engineering and Computer Science at the Pennsylvania State University. She received the Ph.D. degree in Electrical and Computer Engineering from Cornell University. Her research interests lie at the intersection of computer networking, performance evaluation, and machine learning. Dr. He has served as Associate Editor for IEEE Transactions on Communications and IEEE/ACM Transactions on Networking, General Co-Chair of IEEE RTCSA, TPC Co-Chair of IEEE ICCCN, and Area TPC Chair of IEEE INFOCOM. She has received multiple top contributor awards from IBM and ITA, and multiple paper awards from the IEEE Communications Society, ICDCS, SIGMETRICS, and ICASSP.

April 26, 2024@2:30pm EST

Towards Private Computation and Private Learning

Alex Sprintson
Department of Electrical and Computer Engineering
Texas A&M University

Zoom: osu.zoom.us/j/96171324710?pwd=UDVuOXd4cno5cFlYb0w0OEVlbEF3QT09

Abstract

In many practical applications, computational workloads are outsourced to remote servers to minimize delays and leverage the hardware capabilities available at the edge cloud. In addition, many applications involve computation on data stored remotely. Many of these applications leverage Machine Learning (ML) algorithms for data analysis and decision-making. In these settings, protecting the content and identity of the data items used in the computation is critical. This talk presents a general framework for private computation and covers several important use cases. First, we will discuss the Private Linear Computation (PLC) problem and the Private Linear Transformation (PLT) problem, which aim to compute a single linear combination and multiple linear combinations, respectively, of a subset of data items. Several ML applications motivate these problems, including linear transformations for dimensionality reduction and the training of linear models for regression or classification. Next, we will discuss the problems of private information retrieval, group identification, and group testing with differential privacy constraints. Finally, we will discuss private learning and related problems. We conclude by presenting directions for future research.
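As background, one classical building block behind private information retrieval is the two-server XOR scheme, in which neither (non-colluding) server learns which item the user requested. The sketch below is a minimal toy version with hypothetical names; the PLC/PLT formulations discussed in the talk generalize this goal to hiding which linear combinations of items are computed.

```python
import secrets

# Classical 2-server XOR-based PIR over a replicated database of n items.
# Each server sees only a uniformly random subset of indices, so neither
# learns which item the user wants (assuming the servers do not collude).

def xor_items(db, indices):
    acc = 0
    for i in indices:
        acc ^= db[i]
    return acc

def pir_retrieve(db_server1, db_server2, wanted, n):
    # User: pick a random subset S; query server 1 with S and server 2 with S toggled at 'wanted'.
    S = {i for i in range(n) if secrets.randbits(1)}
    T = S ^ {wanted}               # symmetric difference flips membership of the wanted index
    a1 = xor_items(db_server1, S)  # answer from server 1
    a2 = xor_items(db_server2, T)  # answer from server 2
    return a1 ^ a2                 # everything except db[wanted] cancels out

db = [17, 42, 7, 99, 3, 58]        # identical copy held by both servers
assert pir_retrieve(db, db, wanted=3, n=len(db)) == db[3]
```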

Biography

Alex Sprintson is a faculty member in the Department of Electrical and Computer Engineering at Texas A&M University, College Station, where he conducts research on security and privacy, network coding, and wireless networks. Dr. Sprintson received the Texas A&M College of Engineering Outstanding Contribution Award and the NSF Director’s Superior Accomplishment Award. From 2013 to 2019, he served as an Associate Editor of the IEEE Transactions on Wireless Communications. He is currently serving as a co-chair of the Technical Program Committee for IEEE INFOCOM 2024. From Sep. 2018 to Sep. 2022, Dr. Sprintson was a rotating program director at the US National Science Foundation (NSF).

Past Talks and Presentations

April 12, 2024@1:00pm EST

Using Modern ML Tools to Enhance Internet Measurement and Data Processing: Three Case Studies 

Mingyan Liu
University of Michigan

Abstract

Internet measurement via IP address and port scans is the backbone of cybersecurity research, including the detection of vulnerabilities and large-scale cyber risk analysis. In this talk, I will present three case studies on applying modern machine learning methods to enhance the acquisition and processing of Internet measurement data. The first is a framework that constructs low-dimensional numerical fingerprints (embeddings) of discoverable hosts by applying variational autoencoders (VAEs) to scan data. These embeddings can be used for visualizing the distribution of hosts in a particular collection/network, and for various downstream (supervised) learning tasks such as the detection and prediction of malicious hosts. The second study shows how large language models (LLMs) can be used to generate text-based host fingerprints directly from the raw text contained in scan data. Compared to the existing approach of manually curated fingerprints, we show that our approach can identify new IoT devices and server products that were not previously captured. The last study demonstrates that the scanning methodology itself can be made substantially more efficient (covering 99% of the active hosts at a probing rate of 14.2% of a standard, exhaustive scan) by learning cross-protocol correlation and using it to determine in real time which ports to scan and in what sequence.

Biography

Mingyan Liu is the Alice L. Hunt Collegiate Professor of Engineering, a Professor of Electrical Engineering & Computer Science, and the Associate Dean for Academic Affairs of the College of Engineering at the University of Michigan, Ann Arbor. She received her Ph.D. degree in electrical engineering from the University of Maryland, College Park, in 2000 and has been with UM ever since. From September 2018 to May 2023, she was the Peter and Evelyn Fuss Chair of Electrical and Computer Engineering. Her research interests are in optimal resource allocation, performance modeling, sequential decision and learning theory, game theory, and incentive mechanisms, with applications to large-scale networked systems, cybersecurity, and cyber risk quantification. She is a Fellow of the IEEE and a member of the ACM.

April 11, 2024@2:30pm EST

Old and New Optimization Techniques for Foundation Model Fine-Tuning and (Continual) Pre-Training

Mingyi Hong
Department of Electrical and Computer Engineering
University of Minnesota

Abstract

With ChatGPT having taken the world by storm in 2023, it is clear that AI systems will soon become ubiquitous. However, these systems, particularly the foundation models they are based on, face significant challenges, such as potential privacy breaches and difficulties in reliably learning and aligning with human preferences. This talk will explore recent advancements in pretraining and fine-tuning foundation models, including vision and language models, to enhance their performance across various dimensions. We will introduce an approach that employs inverse reinforcement learning and bi-level optimization to align large language models (LLMs) with human feedback more effectively. Additionally, we will examine the application of zeroth-order optimization techniques for efficient fine-tuning of LLMs. The discussion will extend to recent advances in privacy-preserving pretraining of vision foundation models. The overarching goal of this talk is to demonstrate the critical role of sophisticated optimization modeling and algorithm design in advancing the capabilities of AI systems.
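For context on the zeroth-order fine-tuning theme, the core primitive is estimating gradients from loss evaluations alone, which avoids backpropagation through a large model. Below is a minimal, generic two-point estimator sketch (illustrative only, not the specific algorithm presented in the talk).

```python
import numpy as np

def zo_gradient(loss_fn, theta, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate: only forward (loss) evaluations
    are needed, which is attractive when backpropagation is too costly."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(theta.shape)
    g = (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu)
    return g * u  # estimates the gradient of a smoothed version of the loss

# Toy usage: minimize a quadratic with zeroth-order SGD.
target = np.array([1.0, -2.0, 0.5])
loss = lambda w: float(np.sum((w - target) ** 2))
w = np.zeros(3)
rng = np.random.default_rng(1)
for step in range(2000):
    w -= 0.05 * zo_gradient(loss, w, rng=rng)
print(w)  # approaches `target`
```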

Biography

Mingyi Hong received his Ph.D. degree from the University of Virginia, Charlottesville, in 2011. He is currently an Associate Professor in the Department of Electrical and Computer Engineering at the University of Minnesota, Minneapolis. His research focuses on developing optimization theory and algorithms for applications in signal processing and machine learning. He is an Associate Editor for IEEE Transactions on Signal Processing. His work has received two IEEE SPS Best Paper Awards (2021, 2022), an International Consortium of Chinese Mathematicians Best Paper Award (2020), and several Best Student Paper Awards at signal processing and machine learning conferences. He is an Amazon Scholar, and he is the recipient of an IBM Faculty Award, a Meta Research Award, and the 2022 Pierre-Simon Laplace Early Career Technical Achievement Award from the IEEE SPS.

March 29, 2024@1:00pm EST

Robust and Private Learning using Hardware-Software Co-Design in Centralized and Federated Settings

Farinaz Koushanfar
University of California San Diego

Abstract

Models with deep architectures are enabling a significant paradigm shift in a diverse range of fields, including natural language processing and computer vision, as well as the design and automation of complex integrated circuits. While deep learning (DL) models, and optimizations based on them such as Deep Reinforcement Learning (RL), demonstrate superior performance and a great capability for automated representation learning, earlier works have revealed the vulnerability of DL to various attacks. This susceptibility to potential attacks might thwart trustworthy technology transfer as well as reliable deployment. In this talk, we discuss end-to-end defense schemes for robust deep learning in both centralized and federated learning settings. We also propose novel solutions that address both robustness and privacy criteria. Our comprehensive analyses reveal that defense strategies developed using DL/software/hardware co-design outperform their DL/software-only counterparts, and we show how they can achieve very efficient and latency-optimized defenses for real-world applications.

Biography

Farinaz Koushanfar is the Henry Booker Professor of Electrical and Computer Engineering (ECE) at the University of California San Diego (UCSD), where she is the founding co-director of the UCSD Center for Machine-Intelligence, Computing & Security (MICS). She is also a research scientist at Chainlink Labs. Her research addresses several aspects of secure and efficient computing, with a focus on robust machine learning under resource constraints, AI-based optimization, hardware and system security, intellectual property (IP) protection, and privacy-preserving computing. Dr. Koushanfar is a fellow of the Kavli Frontiers of the National Academy of Sciences and a fellow of the IEEE and the ACM. She has received a number of awards and honors, including the Presidential Early Career Award for Scientists and Engineers (PECASE) from President Obama, the ACM SIGDA Outstanding New Faculty Award, the Cisco IoT Security Grand Challenge Award, the MIT Technology Review TR-35, Qualcomm Innovation Awards, Intel Collaborative Awards, Young Faculty/CAREER Awards from NSF, DARPA, ONR, and ARO, as well as several best paper awards.

March 28, 2024@2:30pm EST

Information-theoretic Characterizations of Generalization Error for the Gibbs Algorithm and the Over-Parameterized Regime

Yuheng Bu
Department of Electrical and Computer Engineering
University of Florida

Abstract

Information theory has guided practical communication system design by characterizing the fundamental limits of data communication and compression. This talk will discuss how methodologies originating from information theory can provide similar benefits in learning problems. We show that information-theoretic tools can be used to understand the generalization behavior of learning algorithms, i.e., how a trained machine learning model behaves on unseen data. We provide exact characterizations of the generalization error for the Gibbs algorithm, which can be viewed as a randomized empirical risk minimization algorithm. Such an information-theoretic approach is versatile, as we can extend our analysis to some transfer learning algorithms and characterize the marginal likelihood of the Gibbs algorithm for model selection in an over-parameterized regime. We believe this analysis can provide new insights to guide the design of learning systems.
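For reference, the Gibbs algorithm referred to above is the randomized learner that samples a hypothesis from the Gibbs posterior; a standard way to write it (notation mine) is:

```latex
% Gibbs posterior with inverse temperature \gamma and prior \pi, built on the
% empirical risk L_E(w, s) of training set s, and the expected generalization
% error through which the algorithm is analyzed:
P_{W \mid S}(w \mid s) = \frac{\pi(w)\, e^{-\gamma L_E(w, s)}}{\int \pi(w')\, e^{-\gamma L_E(w', s)}\, \mathrm{d}w'},
\qquad
\overline{\mathrm{gen}} = \mathbb{E}\big[ L_P(W) - L_E(W, S) \big],
```

where L_E is the empirical risk on the training set S and L_P is the population risk; the exact characterizations discussed in the talk express this expected generalization error through an information measure between the output hypothesis W and the data S.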

Biography

Dr. Yuheng Bu is an Assistant Professor in the Department of Electrical & Computer Engineering (ECE) at the University of Florida. Before joining the University of Florida, he was a postdoctoral research associate at the Research Laboratory of Electronics and the Institute for Data, Systems, and Society (IDSS), Massachusetts Institute of Technology (MIT). He received a B.S. degree (Hons.) in EE from Tsinghua University in 2014 and a Ph.D. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign in 2019. His research interests lie at the intersection of machine learning, signal processing, and information theory.

March 22, 2024@11:00am EST

Towards AIoT: Building Intelligence into Sensing, Networking, and Data Analytics of IoT

Mo Li
Hong Kong University of Science and Technology

Abstract

AIoT provides opportunities to transcend the state of the art in both AI and IoT. On one hand, the unprecedented scale and prevalence of IoT data magnify the power of AI; on the other hand, machine intelligence from AI helps advance every aspect of sensing, computing, and communication in today's IoT. This talk will introduce our recent research efforts toward devising viable AIoT solutions that seek to address fundamental challenges due to (i) the massiveness of devices, where hundreds of billions of networked sensors in the physical world may exhaust limited computing and communication resources, (ii) the intrusiveness of sensing on people and their things, where improper IoT instrumentation may impair the harmonious co-existence of people and machine intelligence, and (iii) the plethora of data, where the unprecedented scale and prevalence of IoT data not only contribute to training powerful AI but also create obstacles to distilling, verifying, adapting, and transferring machine learning processes across people and different cyber or physically engineered systems.

Biography

Dr. Mo Li is a Professor at the Hong Kong University of Science and Technology. His research focuses on system aspects of wireless sensing and networking, IoT for smart cities, and urban informatics. Dr. Li has been on the editorial boards of IEEE/ACM Transactions on Networking, IEEE Transactions on Mobile Computing, ACM Transactions on Internet of Things, and IEEE Transactions on Wireless Communications, all leading journals in the field. Dr. Li has served as a technical program committee member for top conferences in computer systems and networking, including ACM MobiCom, ACM MobiSys, ACM SenSys, and many others. Dr. Li has been a Distinguished Member of the ACM since 2019 and a Fellow of the IEEE since 2020.

February 23, 2024@1:00pm EDT

Multimodal Machine Intelligence and its Human-centered Possibilities

Shrikanth (Shri) Narayanan
University of Southern California

Abstract

Converging developments across the machine intelligence ecosystem, from multimodal sensing and signal processing to computing, are enabling new human-centered possibilities both in advancing science and in creating technologies for societal applications, including in human health and wellbeing. This includes approaches for quantitatively and objectively understanding human communicative, affective, and social behavior, with applications in diagnostics and treatment across varied domains ranging from distressed relationships, depression, suicide, autism spectrum disorder, and addiction to workplace health and wellbeing. The talk will also discuss the challenges and opportunities in creating trustworthy machine intelligence approaches that are inclusive, equitable, robust, safe, and secure, e.g., with respect to protected variables such as gender, race, age, and ability.

Biography

Shrikanth (Shri) Narayanan is University Professor and Niki & C. L. Max Nikias Chair in Engineering and VP for Presidential Initiatives at the University of Southern California (USC), where he is Professor of Electrical & Computer Engineering, Computer Science, Linguistics, Psychology, Neuroscience, Pediatrics, and Otolaryngology—Head & Neck Surgery, Director of the Ming Hsieh Institute, and Research Director of the Information Sciences Institute. Prior to USC, he was with AT&T Bell Labs and AT&T Research. He is a Visiting Faculty Researcher with Google Research. His interdisciplinary research focuses on human-centered sensing/imaging, signal processing, and machine intelligence centered on human communication, interaction, emotions, and behavior. He is a Fellow of the Acoustical Society of America, IEEE, ACM, the International Speech Communication Association (ISCA), the American Association for the Advancement of Science, the Association for Psychological Science, the Association for the Advancement of Affective Computing, the American Institute for Medical and Biological Engineering, and the National Academy of Inventors. He is a Guggenheim Fellow and a member of the European Academy of Sciences and Arts, and a recipient of many awards for research and education, including the 2024 Edward J. McCluskey Technical Achievement Award from the IEEE Computer Society, the 2023 Claude Shannon-Harry Nyquist Technical Achievement Award from the IEEE Signal Processing Society, the 2023 ISCA Medal for Scientific Achievement, and the 2023 Richard Deswarte Prize in Digital History. He has published widely, and his inventions have led to technology commercialization, including through startups he co-founded: Behavioral Signals Technologies, focused on AI-based conversational assistance, and Lyssn, focused on mental health care and quality assurance.

February 22, 2024@2:30pm EDT

In-context Convergence of Transformers

Yu Huang
Wharton Statistics and Data Science Department
University of Pennsylvania

Abstract

Transformers have recently revolutionized many machine learning domains, and one salient discovery is their remarkable in-context learning capability, where models can capture an unseen task by utilizing task-specific prompts without further parameter fine-tuning. In this talk, I will present our recent work that aims at understanding the in-context learning mechanism of transformers. Our focus is on the learning dynamics of a one-layer transformer with softmax attention trained via gradient descent to in-context learn linear function classes. I will first present our characterization of the training convergence of in-context learning for data with balanced and imbalanced features, respectively. I will then discuss the insights that we obtain about attention models and training processes. I will also talk about the analysis techniques that we develop, which may be useful for a broader set of problems. I will finally conclude my talk with comments on a few future directions.
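For concreteness, one common way to write a one-layer softmax-attention predictor for in-context learning of linear functions is shown below; this particular parameterization is an illustrative assumption rather than necessarily the exact model analyzed in the talk.

```latex
% Given a prompt (x_1, y_1, ..., x_N, y_N, x_query) with labels y_i = <beta, x_i>,
% a one-layer softmax-attention model predicts the query label as an
% attention-weighted combination of the in-context labels:
\hat{y}_{\mathrm{query}} = \sum_{i=1}^{N}
\frac{\exp\!\left( x_{\mathrm{query}}^{\top} W x_i \right)}
     {\sum_{j=1}^{N} \exp\!\left( x_{\mathrm{query}}^{\top} W x_j \right)}\; y_i ,
```

where the attention weight matrix W (together with any value/output weights) is trained by gradient descent over prompts drawn from the linear function class.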

Biography

Yu Huang is currently pursuing her PhD at the Wharton Statistics and Data Science Department of the University of Pennsylvania. Previously, she received her BS in Mathematics and MS in Computer Science from Tsinghua University in 2020 and 2023, respectively. She is broadly interested in the theoretical aspects of modern machine learning, with a recent focus on foundational models and transformers.

February 16, 2024@11:00am EDT

Using Graph-Based Machine Learning Algorithms for Software Analysis

Michael D. Brown
Trail of Bits

Abstract

Software analysis is a well-established research area focused on automatically determining facts about a program’s properties and behaviors and using them to improve the program. Software analysis techniques are used in many domains, most notably to achieve performance and security enhancements (compilers), identify bugs and security vulnerabilities (code scanners), simplify programming through abstraction (DSL interpreters), and reverse engineer software. The limitations of software analysis for these purposes are well understood: in general, it is impossible to collect a complete set of program facts about a particular piece of software, especially for complex software used in the DoD. As a result, many software analysis tools employ heuristics or rely on humans in the loop to make meaningful advances in this space.

The recent technological leaps forward in Machine Learning (ML) have created a unique opportunity to make advances in software analysis that were previously not possible, because ML-based solutions are not bound by the same computational constraints as traditional software analysis techniques. Further, these techniques excel at approximating and replicating human problem solving. As a result, there has been a wealth of new research on ML-based software analysis; however, recently proposed techniques have fallen flat because they failed to exploit the natural shape and form of software: directed graphs.

Over the last three years, my team and I have researched and developed techniques to address several key challenges that researchers face when creating effective, graph-based ML software analysis tools. Specifically, we have developed techniques to aid researchers in generating realistic training data sets, converting software to a representation that graph-based ML algorithms can consume, and formulating real-world software analysis problems as graph recognition problems. Using these techniques, we have created two tools that outperform state-of-the-art traditional software analysis tools: VulChecker and CORBIN. VulChecker is a static application security testing (SAST) tool that excels at identifying fuzzy security vulnerabilities in source code. CORBIN is a system for lifting advanced mathematical constructs (formulas, lookup tables, PID controllers, etc.) from the legacy binary software that powers cyber-physical systems such as power generation and onboard vehicular control systems.

In this talk, I will first discuss the inherent challenges in using ML to create software analysis tools and how exploiting the graph-based nature of software can bring about success. Second, I will present two successful graph-based ML software analysis tools created under the DARPA AIMEE and ReMath programs: VulChecker and CORBIN. Finally, I will present a set of guiding principles and guardrails for applying ML to software based on the lessons learned from building these tools.

Biography

Michael D. Brown is a principal security engineer at Trail of Bits and a Ph.D. student at the Georgia Institute of Technology. He works on a variety of applied and fundamental research projects focused on security-oriented software analysis and transformation. Michael's primary research interest is the development of software transformation techniques to improve the security of computing systems. Prior to his work in software security, Michael was on active duty for eight years in the U.S. Army where he served as a UH-60M pilot and Aviation Mission Survivability Officer. Michael earned his M.S. in Computer Science at Georgia Tech and his B.S. in Computer Science at the University of Cincinnati.

February 9, 2024@1:00pm EDT

Intelligent Edge Services and Foundation Models for Internet of Things Applications

Tarek Abdelzaher
Department of Computer Science
University of Illinois at Urbana-Champaign

Abstract

Advances in neural networks revolutionized modern machine intelligence, but important challenges remain when applying these solutions in IoT contexts; specifically, on lower-end embedded devices with multimodal sensors and distributed heterogeneous hardware. The talk discusses challenges in offering machine intelligence services to support applications in resource constrained distributed IoT environments. The intersection of IoT applications, real-time requirements, distribution challenges, and AI capabilities motivates several important research directions. For example, how to support efficient execution of machine learning components on embedded edge devices while retaining inference quality? How to reduce the need for expensive manual labeling of IoT application data? How to improve the responsiveness of AI components to critical real-time stimuli in their physical environment? How to prioritize and schedule the execution of intelligent data processing workflows on edge-device GPUs? How to exploit data transformations that lead to sparser representations of external physical phenomena to attain more efficient learning and inference? How to develop foundation models for IoT that offer extended inference capabilities from time-series data analogous to ChatGPT inference capabilities from text? The talk discusses recent advances in edge AI and foundation models and presents evaluation results in the context of different real-time IoT applications.

Biography

Tarek Abdelzaher received his Ph.D. in Computer Science from the University of Michigan in 1999. He is currently the Sohaib and Sara Abbasi Professor and Willett Faculty Scholar in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He has authored/coauthored more than 300 refereed publications in real-time computing, distributed systems, sensor networks, and control. He served as Editor-in-Chief of the Journal of Real-Time Systems, and has served as Associate Editor of IEEE Transactions on Mobile Computing, IEEE Transactions on Parallel and Distributed Systems, IEEE Embedded Systems Letters, ACM Transactions on Sensor Networks, and the Ad Hoc Networks Journal, among others. Abdelzaher’s research interests lie broadly in understanding and influencing the performance and temporal properties of networked embedded, social, and software systems in the face of increasing complexity, distribution, and degree of interaction with an external physical and social environment. Tarek Abdelzaher is a recipient of the IEEE Outstanding Technical Achievement and Leadership Award in Real-Time Systems (2012), the Xerox Award for Faculty Research (2011), as well as several best paper awards. He is a fellow of IEEE and ACM.

January 25, 2024@2:30pm EDT

Federated Multi-Objective Learning

Jia (Kevin) Liu
Department of Electrical and Computer Engineering
The Ohio State University

Abstract

In recent years, multi-objective optimization (MOO) has emerged as a foundational problem underpinning many multi-agent multi-task learning applications. However, existing algorithms in the MOO literature remain limited to centralized learning settings, which do not satisfy the distributed nature and data privacy needs of such multi-agent multi-task learning applications. This motivates us to propose a new federated multi-objective learning (FMOL) framework, in which multiple clients distributively and collaboratively solve an MOO problem while keeping their training data private. Notably, our FMOL framework allows a different set of objective functions across different clients to support a wide range of applications, which advances and generalizes the MOO formulation to the federated learning paradigm for the first time.

For this FMOL framework, we propose two new federated multi-objective optimization (FMOO) algorithms called federated multi-gradient descent averaging (FMGDA) and federated stochastic multi-gradient descent averaging (FSMGDA). Both algorithms allow local updates to significantly reduce communication costs, while achieving the same convergence rates as their algorithmic counterparts in single-objective federated learning. Our extensive experiments also corroborate the efficacy of the proposed FMOO algorithms.
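A rough sketch of how such a scheme can fit together is given below; it is my own simplified illustration with hypothetical names, restricted to two objectives so that the MGDA-style common-descent weights have a closed form, and it should not be read as the authors' actual FMGDA/FSMGDA implementation.

```python
import numpy as np

def mgda_two_objectives(g1, g2):
    """Closed-form min-norm combination of two gradients (MGDA-style):
    find lambda in [0, 1] minimizing ||lambda*g1 + (1-lambda)*g2||."""
    diff = g1 - g2
    lam = float(np.clip((g2 - g1) @ g2 / (float(diff @ diff) + 1e-12), 0.0, 1.0))
    return lam * g1 + (1.0 - lam) * g2

def federated_moo_round(w, client_data, local_steps=5, local_lr=0.05, server_lr=0.5):
    """One hypothetical round: each client runs local gradient steps per objective,
    the server averages the accumulated updates per objective, then combines them
    into a single common-descent direction."""
    per_objective_updates = []
    for obj_grad in (grad_obj1, grad_obj2):          # one pass per objective
        deltas = []
        for data in client_data:                     # in practice these run on separate clients
            w_local = w.copy()
            for _ in range(local_steps):
                w_local -= local_lr * obj_grad(w_local, data)
            deltas.append((w - w_local) / (local_steps * local_lr))  # averaged local-gradient proxy
        per_objective_updates.append(np.mean(deltas, axis=0))
    direction = mgda_two_objectives(*per_objective_updates)
    return w - server_lr * direction

# Toy setup: two clients, two quadratic objectives with different minimizers.
grad_obj1 = lambda w, data: 2 * (w - data["a"])
grad_obj2 = lambda w, data: 2 * (w - data["b"])
clients = [{"a": np.array([1.0, 0.0]), "b": np.array([0.0, 1.0])},
           {"a": np.array([1.2, 0.1]), "b": np.array([-0.1, 0.9])}]
w = np.zeros(2)
for _ in range(100):
    w = federated_moo_round(w, clients)
print(w)  # settles on a Pareto-compromise between the two objectives' minimizers
```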

Biography

Jia (Kevin) Liu is an Assistant Professor in the Dept. of Electrical and Computer Engineering at The Ohio State University (OSU) and an Amazon Visiting Academic (AVA). From Aug. 2017 to Aug. 2020, he was an Assistant Professor in the Dept. of Computer Science at Iowa State University (ISU). He is currently the Managing Director of the NSF AI Institute for Future Edge Networks and Distributed Intelligence (AI-EDGE) at OSU. He is also a faculty investigator of the NSF TRIPODS D4 (Dependable Data-Driven Discovery) Institute at ISU, the NSF ARA Wireless Living Lab PAWR Platform between ISU and OSU, and the Institute of Cybersecurity and Digital Trust (ICDT) at OSU. He received his Ph.D. degree from the Dept. of Electrical and Computer Engineering at Virginia Tech in 2010. His research areas include theoretical machine learning, stochastic network optimization and control, and performance analysis for data analytics infrastructure and cyber-physical systems. Dr. Liu is a senior member of IEEE and a member of ACM. He has received numerous awards at top venues, including IEEE INFOCOM'19 Best Paper Award, IEEE INFOCOM'16 Best Paper Award, IEEE INFOCOM'13 Best Paper Runner-up Award, IEEE INFOCOM'11 Best Paper Runner-up Award, and IEEE ICC'08 Best Paper Award. He has also received multiple honors of long/spotlight presentations at top machine learning conferences, including ICML, NeurIPS, and ICLR. He is an NSF CAREER Award recipient in 2020 and a winner of the Google Faculty Research Award in 2020. He received the LAS Award for Early Achievement in Research at Iowa State University in 2020, and the Bell Labs President Gold Award. Dr. Liu is an Associate Editor for IEEE Transactions on Cognitive Communications and Networking. He has served as a TPC member for numerous top conferences, including ICML, NeurIPS, ICLR, ACM SIGMETRICS, IEEE INFOCOM, and ACM MobiHoc. His research is supported by NSF, AFOSR, AFRL, ONR, Google, and Cisco.

November 28, 2023@4:00pm EDT

Approximately Equivariant Graph Networks

Teresa Huang
Johns Hopkins University
Department of Applied Mathematics and Statistics

Abstract

Graph neural networks (GNNs) are commonly described as being permutation equivariant with respect to node relabeling in the graph. This symmetry of GNNs is often compared to the translation equivariance of Euclidean convolutional neural networks (CNNs). However, these two symmetries are fundamentally different: the translation equivariance of CNNs corresponds to active symmetries, whereas the permutation equivariance of GNNs corresponds to passive symmetries. In this talk, we focus on the active symmetries of GNNs by considering a learning setting where signals are supported on a fixed graph. In this case, the natural symmetries of GNNs are the automorphisms of the graph. Since real-world graphs tend to be asymmetric, we relax the notion of symmetry by formalizing approximate symmetries via graph coarsening. We propose approximately equivariant graph networks to implement these symmetries and investigate the symmetry model selection problem. We theoretically and empirically show a bias-variance tradeoff between the loss in expressivity and the gain in regularity of the learned estimator, depending on the chosen symmetry group.
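For concreteness, the symmetries contrasted above can be written as follows (standard definitions, notation mine):

```latex
% Permutation equivariance (passive symmetry) of a GNN f acting on node
% features X and adjacency A, for every permutation matrix P:
f(P X,\; P A P^{\top}) = P\, f(X, A).
% With the graph fixed, the relevant active symmetries are the automorphisms
% of A, i.e., the permutations P satisfying  P A P^{\top} = A.
```

Approximate symmetries, as used in the talk, relax the exact automorphism condition to permutations that preserve a coarsened version of the graph.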

Biography

Ningyuan (Teresa) Huang is currently a Ph.D. candidate in the Department of Applied Mathematics and Statistics at Johns Hopkins University. She received her M.S. degree in Data Science from New York University in 2020, and her B.S. degree in Statistics and Economics from the University of Hong Kong in 2016. Her research interests are in the areas of representation learning and deep learning. Her current work focuses on expressivity and generalization properties of graph neural networks. She is a recipient of the CPAL Rising Stars Award 2023.

November 17, 2023@1:00pm EST

Federated Learning: New Approaches in Learning and Applications to Lifelong Learning

Dimitris Dimitriadis
Amazon

Abstract

New demands in data management are emerging nowadays. Some of these constraints are driven by the need for privacy compliance for personal data, and some by the need to train bigger, better, and faster models. As such, increasingly more data is stored behind inaccessible firewalls or on users' devices without the option of sharing it for centralized training. To this end, the Federated Learning (FL) paradigm has been proposed, addressing the privacy concerns while still processing such inaccessible data in a continual manner. However, FL does not come as a free lunch, and new technical challenges have emerged. This talk will present some new ways of addressing such challenges while federating heterogeneous models and dealing with the dynamic nature of learning.

Biography

Dr. D. Dimitriadis is a Principal Applied Scientist at Amazon, where he is currently the technical lead for Federated Learning and Analytics across the company, as well as for Continual and Semi-supervised Learning. He has previously worked at Microsoft Research, IBM Research, and AT&T Labs (2009-14), and was a lecturer (P.D. 407/80) at the School of ECE, NTUA, Greece. He is a Senior Member of IEEE and has served as a chair of several conferences and workshops. He has published more than 100 papers in peer-reviewed scientific journals and conferences, with 3000 citations. He received his PhD degree from NTUA in February 2005; his PhD thesis was titled "Non-Linear Speech Processing, Modulation Models and Applications to Speech Recognition".

October 27, 2023@1:00pm EST

The COSMOS Testbed – A Platform for Advanced Wireless, Edge Cloud, Optical, Smart Streetscapes, and International Experimentation

Gil Zussman
Columbia University
Department of Electrical Engineering and Computer Science

Abstract

This talk will provide an overview of the COSMOS testbed, which is being deployed as part of the NSF Platforms for Advanced Wireless Research (PAWR) program, and briefly review various ongoing experiments in the areas of wireless, optical, edge cloud, and smart cities. COSMOS (Cloud-Enhanced Open Software-Defined Mobile-Wireless Testbed for City-Scale Deployment) is being deployed in West Harlem (New York City). It targets the technology "sweet spot" of ultra-high bandwidth and ultra-low latency, a capability that will enable a broad new class of applications including augmented/virtual reality and cloud-based autonomous vehicles. Realization of such high-bandwidth/low-latency wireless applications involves research not only on radio links, but also on the system as a whole, including algorithmic aspects related to spectrum use, networking, and edge computing. We will present an overview of COSMOS' key enabling technologies, which include mmWave radios, software-defined radios, an optical/SDN x-haul network, and edge cloud. We will then discuss the deployment and outreach efforts as well as the international component (COSMOS Interconnecting Continents - COSM-IC). Finally, we will describe various experiments that have been conducted in the testbed, including in the areas of edge cloud, mmWave wireless, full-duplex wireless, smart streetscapes, and optical communications/sensing. The COSMOS testbed design and deployment is joint work with the COSMOS team.

Biography

Gil Zussman received the Ph.D. degree in Electrical Engineering from the Technion in 2004. Between 2004 and 2007 he was a Postdoctoral Associate at MIT. Since 2007 he has been with Columbia University, where he is a Professor of Electrical Engineering and Computer Science (affiliated faculty) and a member of the Data Science Institute. His research interests are in the area of networking, and in particular in the areas of wireless, mobile, and resilient networks. He has been an associate editor of IEEE/ACM Transactions on Networking, IEEE Transactions on Control of Network Systems, and IEEE Transactions on Wireless Communications, and the TPC Chair of IEEE INFOCOM’23 and ACM MobiHoc’15. Gil is an IEEE Fellow and received two Marie Curie fellowships, the Fulbright Fellowship, the DTRA Young Investigator Award, and the NSF CAREER Award. He is a co-recipient of seven best paper awards, including the ACM SIGMETRICS’06 Best Paper Award, the 2011 IEEE Communications Society Award for Advances in Communication, and the ACM CoNEXT’16 Best Paper Award.

October 23, 2023@1:30pm EST

Privacy in Machine Learning and Statistical Inference

Adam Smith
Boston University
Department of Computer Science

Abstract

The results of learning and statistical inference reveal information about the data they use. This talk discusses the possibilities and limitations of fitting machine learning and statistical models while protecting the privacy of individual records.

Dr. Smith will begin by explaining what makes this problem difficult, using recent examples of memorization of training examples and other breaches. He will present differential privacy, a rigorous definition of privacy in statistical databases that is now widely studied and increasingly used to analyze and design deployed systems.

Finally, Dr. Smith will present recent algorithmic results on a fundamental problem: differentially private mean estimation. We give an efficient and (nearly) sample-optimal algorithm for estimating the mean of "nicely" distributed data sets. When the data come from a Gaussian or sub-Gaussian distribution, the new algorithm matches the sample complexity of the best nonprivate algorithm.
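For background, the textbook baseline for this problem clips each sample and adds Gaussian noise calibrated to the resulting sensitivity; the talk's (nearly) sample-optimal estimator is more refined, but the hedged sketch below, with illustrative parameters, shows the basic mechanics.

```python
import numpy as np

def dp_mean_gaussian(x, clip_bound, epsilon, delta, rng=None):
    """Baseline (epsilon, delta)-DP mean via the Gaussian mechanism: clip each
    sample to [-clip_bound, clip_bound], so changing one record moves the mean
    by at most 2*clip_bound/n, then add noise calibrated to that sensitivity."""
    rng = rng or np.random.default_rng()
    n = len(x)
    clipped = np.clip(x, -clip_bound, clip_bound)
    sensitivity = 2.0 * clip_bound / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped.mean() + rng.normal(0.0, sigma)

# Toy usage: Gaussian data with true mean 3.0 (clip_bound chosen loosely here;
# picking it well is exactly where the more refined algorithms help).
data = np.random.default_rng(1).normal(3.0, 1.0, size=10_000)
print(dp_mean_gaussian(data, clip_bound=10.0, epsilon=1.0, delta=1e-5))
```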

The last part of the talk is based on Dr. Smith’s joint work with Gavin Brown and Sam Hopkins that shared the Best Student Paper award at COLT 2023.

Biography

Adam Smith is a professor of computer science at Boston University. From 2007 to 2017, he served on the faculty of the Computer Science and Engineering Department at Penn State. His research interests lie in data privacy and cryptography, and their connections to machine learning, statistics, information theory, and quantum computing. He obtained his Ph.D. from MIT in 2004 and has held postdoc and visiting positions at the Weizmann Institute of Science, UCLA, Boston University and Harvard. His work received a Presidential Early Career Award for Scientists and Engineers (PECASE) in 2009; a Theory of Cryptography Test of Time award in 2016; the Eurocrypt 2019 Test of Time award; the 2017 Gödel Prize; and the 2021 Kanellakis Theory and Practice Award. He is a Fellow of the ACM.

October 20, 2023@1:00pm EST

Vehicle Computing: Vision and Challenges

Weisong Shi
University of Delaware
Department of Computer and Information Sciences

Abstract

Vehicles have been used primarily for transportation in the last century. With the proliferation of onboard computing and communication capabilities, we envision that future connected vehicles (CVs) will serve as a mobile computing platform in addition to their conventional transportation role for the next century. In this talk, we present the vision of Vehicle Computing, i.e., that CVs are ideal computation platforms, and that connected devices/things with limited computation capacities can rely on surrounding CVs to perform complex computational tasks. We also discuss Vehicle Computing from several aspects, including key and enabling technologies, case studies, open challenges, and the potential business model.

Biography

Weisong Shi is a Professor and Chair of the Department of Computer and Information Sciences at the University of Delaware (UD), where he leads the Connected and Autonomous Research (CAR) Laboratory. Dr. Shi is the Center Director of a recently funded NSF eCAT Industry-University Cooperative Research Center (IUCRC) (2023-2028), focusing on Electric, Connected, and Autonomous Technology for Mobility. He is an internationally renowned expert in edge computing, autonomous driving, and connected health. His pioneering paper, "Edge Computing: Vision and Challenges," has been cited more than 6000 times. Before joining UD, he was a professor at Wayne State University (2002-2022), where he served in multiple administrative roles, including Associate Dean for Research and Graduate Studies at the College of Engineering and Interim Chair of the Computer Science Department. Dr. Shi also served as a National Science Foundation (NSF) program director (2013-2015). He was the chair of the IEEE Computer Society Special Technology Community on Autonomous Driving Technologies (ADT) and a Strategic Planning Committee member of the Autoware Foundation. He is a Fellow of IEEE and a member of the NSF CISE Advisory Committee. More information can be found at weisongshi.org.

October 6, 2023@10:00am EST

Introducing Project Aria: A New Tool for Egocentric Multi-Modal AI Research

Kiran Somasundaram
Meta Reality Labs

Abstract

Egocentric, multi-modal data, as will be available on future augmented reality (AR) devices, provides unique challenges and opportunities for machine perception. These future devices will need to be all-day wearable in a socially acceptable form factor to support always-available, context-aware, and personalized AI applications. Our team at Meta Reality Labs Research built the Aria device, an egocentric, multi-modal data recording and streaming device, with the goal of fostering and accelerating research in this area. In this talk, we will introduce the Aria device hardware, including its sensor configuration, and the corresponding software tools that enable recording and processing of such data. We will show live demos of research applications that this device platform enables.

Biography

Kiran Somasundaram is a Systems Architect at Meta Reality Labs, developing machine perception technologies to enable all-day wearable AR smart glasses. He received his Ph.D. in Electrical and Computer Engineering from the University of Maryland, College Park, in 2010. Prior to joining Meta, Kiran worked at Qualcomm Research on projects across robotics and mobile AR technologies.

September 22, 2023@1:00pm EST

Training ML Models with Private Data

Murali Annavaram
Department of Electrical and Computer Engineering
University of Southern California

Abstract

Privacy and security-related concerns are growing as machine learning reaches diverse application domains. Data holders want to train or infer with private data while exploiting accelerators, such as GPUs, that are hosted in the cloud. Cloud systems are vulnerable to attackers that compromise the privacy of data and the integrity of computations. Tackling such a challenge efficiently requires exploiting hardware security capabilities to reduce the cost of theoretical privacy algorithms. This talk will describe my group’s experiences in building privacy-preserving machine learning systems. I will present DarKnight, a framework for large DNN training while protecting input privacy and computation integrity. DarKnight relies on cooperative execution between trusted execution environments (TEEs) and accelerators, where the TEE provides privacy and integrity verification, while accelerators perform the bulk of the linear algebraic computation to optimize performance. The second part of the talk will focus on an orthogonal approach to privacy using multi-party computation (MPC). We present a detailed characterization of MPC overheads when executing large language models in a distributed manner. We then present MPCpipe, a pipelined MPC execution model that overlaps computation and communication in MPC.

Biography

Murali Annavaram is the Lloyd Hunt Chair Professor in the Ming Hsieh Department of Electrical and Computer Engineering and in the Thomas Lord Department of Computer Science (joint appointment) at the University of Southern California. He was the Rukmini Gopalakrishnachar Chair Professor at the Indian Institute of Science. He is the founding director of the REAL@USC-Meta center, which is focused on research and education in AI and learning. His research group tackles a wide range of computer system design challenges relating to energy efficiency, security, and privacy. He has been inducted into the halls of fame of three prestigious computer architecture conferences: ISCA, MICRO, and HPCA. He served as the Technical Program Chair for HPCA 2021 and as General Co-Chair for ISCA 2018. Prior to his appointment at USC, he worked at Intel Microprocessor Research Labs from 2001 to 2007. His work at Intel led to the first 3D microarchitecture paper and also influenced Intel’s TurboBoost technology. In 2007 he was a visiting researcher at the Nokia Research Center, working on mobile phone-based wireless traffic sensing using virtual trip lines, which later became the Nokia Traffic Works product. In 2020 he was a visiting faculty scientist at Facebook, where he designed the checkpoint systems for distributed training. Murali co-authored Parallel Computer Organization and Design, a widely used textbook that teaches both the basic and advanced principles of computer architecture. Murali received the Ph.D. degree in Computer Engineering from the University of Michigan, Ann Arbor, in 2001. He is a Fellow of IEEE and a Senior Member of ACM.

May 1, 2023@2:00pm EST

Foundation Models and Transfer Learning for Sequential Decision-Making

Alvaro Velasquez
Information Innovation Office (I2O)
Defense Advanced Research Projects Agency (DARPA)

Abstract

Foundation models, including ChatGPT and its many variants, have come into prominence in the natural language processing (NLP) community thanks to the ubiquity of text data readily available on the internet and the design of modern transformer architectures that can effectively learn from such data. However, the development of a foundation model for sequential decision-making (e.g., reinforcement learning, planning) faces additional challenges not present in NLP. In this talk, we discuss some of these challenges with the hope of informing future investments that funding agencies and the academic community should engage in. The problem of transfer learning in the context of sequential decision-making is also discussed and constitutes one of the challenges that foundation models must address.

Biography

Alvaro Velasquez is a program manager in the Information Innovation Office (I2O) of the Defense Advanced Research Projects Agency (DARPA), where he currently leads programs on neuro-symbolic AI. Before that, Alvaro oversaw the machine intelligence portfolio of investments for the Information Directorate of the Air Force Research Laboratory (AFRL). Alvaro received his PhD in Computer Science from the University of Central Florida in 2018 and is a recipient of a distinguished paper award from AAAI, best paper and patent awards from AFRL, the National Science Foundation Graduate Research Fellowship Program (NSF GRFP) award, and the University of Central Florida 30 Under 30 award. He has authored over 60 papers and two patents and serves as Associate Editor of the IEEE Transactions on Artificial Intelligence.

April 28, 2023@11:00am EST

Direct Air-Water Communication and Sensing with Light

Xia Zhou
Department of Computer Science
Columbia University

Abstract

The ability to communicate and sense across the air-water boundary is essential for efficient exploration and monitoring of the underwater world. Existing wireless solutions for communication and sensing typically focus on a single physical medium and fall short in achieving high-bandwidth communication and accurate sensing across the air-water interface without any relays on the water surface. 

In this talk, I will describe our effort to exploit laser light for effective communication and sensing in the air-water context. I will present our AmphiLight framework, which allows bidirectional, Mbps-level communication across the air-water interface. I will focus on our design elements that achieve full-hemisphere laser steering with portable hardware, handle strong ambient light outdoors, and adapt to the dynamics of water waves. I will then introduce our recent work Sunflower, which allows an aerial drone to directly locate underwater robots with centimeter-level accuracy, without any relays on the water surface. I will conclude with future work and open challenges.

Biography

Xia Zhou is an Associate Professor in the Department of Computer Science at Columbia University, where she directs the Mobile X Laboratory. Before joining Columbia in 2022, she was a faculty member in the Department of Computer Science at Dartmouth College. Her research interests lie broadly in mobile computing, with recent focuses on light-based communication and sensing, mobile sensing, and human-computer interaction. She is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) in 2019 and the SIGMOBILE RockStar Award in 2019, received the Karen E. Wetterhahn Memorial Award for Distinguished Creative and Scholarly Achievement in 2018, and was named one of the N2Women: Rising Stars in Networking and Communications in 2017. She has also won the Sloan Research Fellowship in 2017, the NSF CAREER Award in 2016, and a Google Faculty Research Award in 2014. She received her PhD from UC Santa Barbara in 2013 and her MS from Peking University in 2007.

April 20, 2023@11:00am EST

Practical Federated Learning: From Research to Production

Mosharaf Chowdhury
Department of Electrical Engineering and Computer Science
University of Michigan

Abstract

Although cloud computing has successfully accommodated the three "V"s of Big Data, collecting everything into the cloud is becoming increasingly infeasible. Today, we face a new set of challenges. A growing awareness of privacy among individual users and governing bodies is forcing platform providers to restrict the variety of data we can collect. Often, we cannot transfer data to the cloud at the velocity of its generation. Many cloud users suffer from sticker shock, buyer's remorse, or both as they try to keep up with the volume of data they must process. Making sense of data in the device where it is generated is more appealing than ever.

Although theoretical federated learning research has grown exponentially over the past few years, the community has been far from putting those theories into practice. In this talk, I will introduce FedScale, a scalable and extensible open-source federated learning and analytics platform that we started from a systems perspective to make federated learning practical. It provides high-level APIs to implement algorithms, a modular design to customize implementations for diverse hardware and software backends, and the ease of deploying the same code at many scales. FedScale also includes a comprehensive benchmark that allows machine learning users to evaluate their ideas in realistic, large-scale settings. I will also highlight how LinkedIn is integrating FedScale into its machine learning workflows that affect 800+ million users, and share some lessons we learned when migrating academic research to production.

Biography

Mosharaf Chowdhury is an Associate Professor of CSE at the University of Michigan, Ann Arbor, where he leads the SymbioticLab. His research improves application performance and system efficiency of machine learning and big data workloads with a recent focus on optimizing energy consumption and data privacy. His group developed Infiniswap, the first scalable memory disaggregation solution; Salus, the first software-only GPU sharing system for deep learning; FedScale, a scalable federated learning and analytics platform; and Zeus, the first GPU energy-vs-training performance optimizer for DNN training. In the past, Mosharaf invented coflows and was one of the original creators of Apache Spark. He has received many individual awards, fellowships, and seven paper awards from top venues, including NSDI, OSDI, and ATC. Mosharaf received his Ph.D. from the AMPLab at UC Berkeley in 2015.

April 14, 2023@2:00pm EST

FarmVibes: Democratizing Digital Tools for Sustainable Agriculture

Ranveer Chandra
Microsoft

Abstract

Agriculture is one of the biggest contributors to climate change. Agriculture and land use degradation, including deforestation, account for about a quarter of global GHG emissions. Agriculture consumes about 70% of the world’s freshwater resources. Agriculture is also among the sectors most impacted by climate change. Farmers depend on predictable weather for their farm management practices, and unexpected weather events, e.g., high heat and floods, leave them unprepared. Agriculture could also be a potential solution to the climate problem: if farmers use the right agricultural practices, they can help remove carbon from the atmosphere. However, making progress on any of the above challenges is difficult due to the lack of data from farms. Existing approaches for estimating emissions or sequestered carbon are very expensive. Through this project, our goal is to build affordable digital technologies to help farmers (1) estimate the amount of emissions on their farm, (2) adapt to climate change by predicting weather variations, and (3) determine the right management practices that can be profitable and also help sequester carbon.

Biography

Ranveer Chandra is the Managing Director for Research for Industry, and the CTO of Agri-Food at Microsoft. He also leads the Networking Research Group at Microsoft Research, Redmond. Previously, Ranveer was the Chief Scientist of Microsoft Azure Global. His research has shipped as part of multiple Microsoft products, including VirtualWiFi in Windows 7 onwards, low power Wi-Fi in Windows 8, Energy Profiler in Visual Studio, Software Defined Batteries in Windows 10, and the Wireless Controller Protocol in XBOX One. His research also led to a new product. Ranveer is active in the networking and systems research community, and has served as the Program Committee Chair of IEEE DySPAN 2012, ACM MobiCom 2013, and ACM HotNets 2022. Ranveer started Project FarmBeats at Microsoft in 2015. He also led the battery research project and the white space networking project at Microsoft Research. He was invited to the USDA to present his research to the US Secretary of Agriculture, and this work was featured by Bill Gates in GatesNotes and was selected by Satya Nadella as one of 10 projects that inspired him in 2017. Ranveer has also been invited to the FCC to present his work on TV white spaces, and spectrum regulators from India, China, Brazil, Singapore, and the US (including the FCC chairman) have visited the Microsoft campus to see his deployment of the world’s first urban white space network. As part of his doctoral dissertation, Ranveer developed VirtualWiFi. The software has over a million downloads, was among the top 5 most downloaded software releases from Microsoft Research, and has shipped as a feature of Windows since 2009. Ranveer has published more than 100 papers and holds over 150 patents granted by the USPTO. His research has been cited by the popular press, such as the Economist, MIT Technology Review, BBC, Scientific American, New York Times, and WSJ, among others. He is a Fellow of the ACM and the IEEE, and has won several awards, including award papers at ACM CoNEXT 2008, ACM SIGCOMM 2009, IEEE RTSS 2014, USENIX ATC 2015, Runtime Verification 2016 (RV’16), ACM COMPASS 2019, and ACM MobiCom 2019, the Microsoft Research Graduate Fellowship, the Microsoft Gold Star Award, the MIT Technology Review’s Top Innovators Under 35, TR35 (2010), and Fellow in Communications, World Technology Network (2012). He was recently recognized by Newsweek magazine as one of America’s 50 Most Disruptive Innovators (2021). Ranveer has an undergraduate degree from IIT Kharagpur, India, and a PhD from Cornell University.

April 13, 2023@2:00pm EDT

Principled Out-of-Distribution Detection in Machine Learning

Venu Veeravalli
Department of Electrical and Computer Engineering and Coordinated Science Lab
University of Illinois at Urbana-Champaign

Abstract

Out-of-Distribution (OOD) detection in machine learning refers to the problem of detecting whether the machine learning model’s output can be trusted at inference time. This problem has been described qualitatively in the literature, and a number of ad hoc tests for OOD detection have been proposed. In this talk we outline a principled approach to the OOD detection problem, by first defining the OOD detection problem through a hypothesis test that includes both the input distribution and the learning algorithm. Our definition provides insights for the construction of powerful tests for OOD detection. We also propose a multiple testing inspired procedure to systematically combine any number of different statistics from the learning algorithm using conformal p-values. We further provide strong guarantees on the probability of incorrectly classifying an in-distribution sample as OOD. In our experiments, we find that the tests proposed in prior work perform well in specific settings, but not uniformly well across different types of OOD instances. In contrast, our proposed method that combines multiple statistics performs uniformly well across different datasets and neural networks.

(This is joint work with Akshayaa Magesh, Susmit Jha and Anirban Roy.)
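To make the combination step concrete, here is a minimal sketch (with assumed score functions and a simple Bonferroni-style combiner, not the exact procedure from the talk): each candidate statistic is converted into a conformal p-value against a held-in calibration set, and the per-statistic p-values are then merged by a multiple-testing rule.

```python
# A hedged sketch: conformal p-values for several OOD statistics, combined with
# a Bonferroni (min-p) rule. Score names and thresholds are illustrative only.
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """p-value: fraction of calibration scores at least as extreme (larger = more OOD-like)."""
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)

def ood_test(cal_stats, test_stats, alpha=0.05):
    """cal_stats, test_stats: dicts mapping statistic name -> calibration values / test value."""
    pvals = [conformal_pvalue(cal_stats[name], test_stats[name]) for name in cal_stats]
    combined = min(1.0, len(pvals) * min(pvals))   # Bonferroni-corrected min p-value
    return combined < alpha, combined              # True => flag the sample as OOD

# Toy usage with synthetic in-distribution calibration statistics
rng = np.random.default_rng(0)
cal = {"neg_max_softmax": rng.normal(0, 1, 1000), "energy": rng.normal(0, 1, 1000)}
test = {"neg_max_softmax": 3.5, "energy": 4.0}    # unusually large values -> likely OOD
print(ood_test(cal, test))
```

A min-p rule is only one possible combiner; the point of the framework is that any number of statistics can be folded in while retaining control of the in-distribution false-alarm probability.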

Biography

Prof. Veeravalli received the Ph.D. degree in Electrical Engineering from the University of Illinois at Urbana-Champaign in 1992, the M.S. degree from Carnegie-Mellon University in 1987, and the B.Tech degree from Indian Institute of Technology, Bombay (Silver Medal Honors) in 1985. He is currently the Henry Magnuski Professor in the Department of Electrical and Computer Engineering (ECE) at the University of Illinois at Urbana-Champaign, where he also holds appointments with the Coordinated Science Laboratory (CSL) and the Department of Statistics. He was on the faculty of the School of ECE at Cornell University before he joined Illinois in 2000. He served as a program director for communications research at the U.S. National Science Foundation in Arlington, VA during 2003-2005. His research interests span the theoretical areas of statistical inference, machine learning, and information theory, with applications to data science, wireless communications, and sensor networks. He is a Fellow of the IEEE. Among the awards he has received for research and teaching are the IEEE Browder J. Thompson Best Paper Award, the U.S. Presidential Early Career Award for Scientists and Engineers (PECASE), and the Abraham Wald Prize in Sequential Analysis (twice). He received the 2023 Fulbright-Nokia Distinguished Chair in Information and Communication Technologies.

April 7, 2023@2:00pm EDT

Online Learning for Network Resource Allocation

Tareq Si-Salem
Department of Electrical and Computer Engineering
Northeastern University

Abstract

The connectivity and ubiquity of computing devices have enabled a wide spectrum of network applications such as content delivery, interpersonal communication, and inter-vehicular communication. New use cases (e.g., autonomous driving, augmented reality, and the tactile internet) require satisfying user-generated and machine-generated demand with stringent low-latency and high-bandwidth guarantees. Network resource allocation is used to serve demands faster and to reduce the computation or communication load on a networked system. This is achieved through optimization and appropriate placement of resources at different locations in a network. However, this remains a challenging task in many practical scenarios where different network parameters, such as latency and operating costs, may vary over time and demands arrive at the system in an unpredictable and sequential fashion.

In this talk, I will discuss several instances of the network resource allocation problem and how the online learning framework can be leveraged to obtain strong performance guarantees even when the demand does not exhibit any statistical regularity. I will demonstrate the versatility of gradient algorithms, which normally operate on continuous spaces, on inherently combinatorial problems (e.g., the NP-hard problem of similarity caching) when paired with opportune randomized rounding schemes. We show in all these instances that the performance guarantees extend to integral (combinatorial) settings, despite the need to account for update costs. I will conclude my talk by presenting our long-term online fairness framework for settings where the agents’ utilities are subject to unknown, time-varying, and potentially adversarial perturbations. We characterize the necessary conditions that a policy needs to satisfy to achieve vanishing fairness regret and prove that our proposal, the online horizon fair policy, attains this desirable objective for any α-fairness criterion. This, in turn, also makes our framework suitable for enforcing other important metrics, such as max-min fairness and (weighted) proportional fairness, and for tackling cooperative game problems under the symmetric and asymmetric Nash bargaining solutions.
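The fractional-allocation-plus-rounding pattern mentioned above can be sketched in a few lines. The following toy single-cache example (with an invented hit-reward model and a naive independent rounding step, not the schemes analyzed in the talk) runs online projected gradient ascent on a fractional cache state and rounds it to an integral cache each round.

```python
# A hedged sketch: online projected gradient ascent on a fractional cache of
# capacity k, plus a simple randomized rounding step. Reward model is invented.
import numpy as np

def project_capped_simplex(y, k, iters=50):
    """Project y onto {x : 0 <= x <= 1, sum(x) = k} via bisection on a threshold."""
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(iters):
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def online_caching(requests, n_items, k, eta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    y = np.full(n_items, k / n_items)           # fractional cache state
    hits = 0
    for r in requests:                          # requests arrive one at a time
        cache = rng.random(n_items) < y         # independent rounding (illustrative only)
        hits += int(cache[r])
        g = np.zeros(n_items); g[r] = 1.0       # (sub)gradient of the hit reward
        y = project_capped_simplex(y + eta * g, k)
    return hits

print(online_caching(requests=[0, 1, 0, 2, 0, 0, 3], n_items=10, k=2))
```

Exact-size rounding schemes, which the guarantees in the talk rely on, couple the item indicators so the rounded cache always holds exactly k items; the independent rounding above is only for illustration.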

Biography

Tareq Si Salem joined Northeastern University as a postdoctoral research associate under the supervision of Stratis Ioannidis. He received his Ph.D. (2022) in Computer Science from Inria and Université Côte d’Azur, France. He held a long-term visiting appointment (2022) at Delft University of Technology, Netherlands. He received a Best Paper Award at ITC 33. His research interests include computer networks, modeling, and learning under networked systems’ constraints (e.g., privacy, safety, fairness, memory, communication). Before his academic pursuits, he worked as an embedded systems engineer and as a wireline (O&G industry) engineer.

March 24, 2023@11:00am EDT

Machine Learning and the Data Center: A Dangerous Dead End

Nicholas D. Lane
Department of Computer Science and Technology
University of Cambridge

Abstract

The vast majority of machine learning (ML) occurs today in a data center. But there is a very real possibility that in the (near?) future, we will view this situation similarly to how we now view lead paint, fossil fuels, and asbestos: a technological means to an end that was used for a time because, at that stage, we did not have viable alternatives – and we did not fully appreciate the negative externalities that were being caused. Awareness of the unwanted side effects of the current data-center-centric ML paradigm is building. It couples ML to an alarming carbon footprint, a reliance on biased closed-world datasets, and serious risks to user privacy – and it promotes centralized control by large organizations because of the extreme compute resources it assumes. In this talk, I will offer a sketch of preliminary thoughts regarding how a data-center-free future for ML might come about, and also describe how some of our recent research results and system solutions (including the Flower framework) might offer a foundation along this path.

Biography

Nic Lane is a full Professor in the Department of Computer Science and Technology, and a Fellow of St. John’s College, at the University of Cambridge. He also leads the Cambridge Machine Learning Systems Lab (CaMLSys). Alongside his academic appointments, Nic is the co-founder and Chief Science Officer of Flower Labs, a venture-backed AI company (YC W23) behind the Flower framework. Nic has received multiple best paper awards, including at ACM/IEEE IPSN 2017 and two from ACM UbiComp (2012 and 2015). In 2018 and 2019, he (and his co-authors) received the ACM SenSys Test-of-Time Award and the ACM SIGMOBILE Test-of-Time Award for pioneering research, performed during his PhD thesis, that devised machine learning algorithms used today on devices like smartphones. Nic was the 2020 ACM SIGMOBILE Rockstar award winner for his contributions to “the understanding of how resource-constrained mobile devices can robustly understand, reason and react to complex user behaviors and environments through new paradigms in learning algorithms and system design.”

March 2, 2023@1:00pm EST

On Differentially Private Federated Linear Contextual Bandits

Xingyu Zhou
Department of Electrical and Computer Engineering
Wayne State University

Abstract

We consider the cross-silo federated linear contextual bandit (LCB) problem under differential privacy. In this setting, multiple silos or agents interact with local users and communicate via a central server to realize collaboration without sacrificing each user’s privacy. We identify two issues in the state-of-the-art algorithm of Dubey & Pentland (2020): (i) failure of the claimed privacy protection and (ii) noise miscalculation in the regret bound. To resolve these issues, we take a two-step principled approach. First, we design an algorithmic framework consisting of a generic federated LCB algorithm and a flexible privacy protocol. Then, leveraging the proposed framework, we study federated LCBs under two different privacy constraints. We first establish privacy and regret guarantees under silo-level local differential privacy, which fix the issues present in the state-of-the-art algorithm. To further improve the regret performance, we next consider the shuffle model of differential privacy, under which we show that our algorithm can achieve nearly “optimal” regret without a trusted server. We achieve this by proving a new amplification lemma via shuffling for DP mechanisms, which might be of independent interest.
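One way to picture the silo-level privacy setting is the following toy sketch (noise calibration, ridge parameter, and silo data are illustrative assumptions, not the mechanism analyzed in the talk): each silo perturbs the sufficient statistics of its local linear-bandit data before sharing, and the server builds the shared estimate only from the noisy aggregates.

```python
# A hedged sketch of silo-level perturbation of linear-bandit sufficient statistics.
import numpy as np

def privatize_silo_stats(X, r, sigma, rng):
    """X: (n, d) contexts, r: (n,) rewards observed at one silo; add Gaussian noise."""
    d = X.shape[1]
    A = X.T @ X + sigma * rng.standard_normal((d, d))
    A = (A + A.T) / 2                      # keep the noisy Gram matrix symmetric
    b = X.T @ r + sigma * rng.standard_normal(d)
    return A, b

def server_aggregate(silo_stats, lam=1.0):
    d = silo_stats[0][0].shape[0]
    A = lam * np.eye(d) + sum(a for a, _ in silo_stats)
    b = sum(bb for _, bb in silo_stats)
    return np.linalg.solve(A, b)           # shared ridge-regression estimate of theta

rng = np.random.default_rng(0)
silos = [privatize_silo_stats(rng.standard_normal((50, 5)),
                              rng.standard_normal(50), sigma=0.5, rng=rng)
         for _ in range(3)]
print(server_aggregate(silos))
```

The shuffle-model result in the talk improves on this picture by randomly permuting silo messages so that less noise per silo suffices for the same end-to-end privacy level.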

Biography

Xingyu Zhou is currently an Assistant Professor in the ECE Department at Wayne State University. He received his Ph.D. from Ohio State University (advised by Ness Shroff), and his master’s and bachelor’s degrees from Tsinghua University and BUPT, respectively (all with the highest honors). His research interests include machine learning (e.g., bandits, reinforcement learning), stochastic systems, and applied probability (e.g., load balancing). Currently, he is particularly interested in online decision-making with formal privacy guarantees. His research has not only led to several invited talks at Caltech, CMU, and UCLA, but has also won the Best Student Paper Award and Runner-up at WiOpt 2022. He is also the recipient of various awards, including the NSF CRII award, the Presidential Fellowship at OSU, the Outstanding Graduate Award of Beijing city, the National Scholarship of China, the Academic Rising Star Award at Tsinghua University, and the Dec. 9th Scholarship of Tsinghua University. He has served as a TPC member for ITC 2021 and WiOpt 2021.

February 23, 2023@1:00pm EST

Model-free Constrained RL: Foundations and Applications

Arnob Ghosh
Department of Electrical and Computer Engineering
The Ohio State University

Abstract

Many constrained sequential decision-making processes, such as safe AV navigation, wireless network control, caching, and cloud computing, can be cast as Constrained Markov Decision Processes (CMDPs). Reinforcement Learning (RL) algorithms have been used to learn optimal policies for unknown unconstrained MDPs. Extending these RL algorithms to unknown CMDPs brings the additional challenge of not only maximizing the reward but also satisfying the constraints. In contrast to existing model-based approaches or model-free methods accompanied by a ‘simulator’, we aim to develop the first model-free, simulator-free RL algorithm that achieves a sublinear regret and a sublinear constraint violation with respect to the number of episodes. To this end, we consider the episodic linear CMDP where the transition probabilities and rewards are linear in the feature space. We show that our approach can achieve Õ(√K) regret and Õ(√K) constraint violation, where K is the number of episodes. Our result does not depend on the dimension of the state space, and hence is applicable to large state spaces. This is the first result showing that Õ(√K) regret and Õ(√K) constraint violation are achievable using model-free RL even for the small state-space (a.k.a. tabular) case. We also show that we can in fact achieve zero constraint violation for large K. We propose a primal-dual adaptation of the LSVI-UCB algorithm and demonstrate how we overcome the analytical challenges of infinite state spaces using a soft-max policy instead of a greedy policy. We also show the first (model-free or model-based) results for constrained MDPs in the infinite-horizon setting with large state spaces.
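As a rough sketch of the primal-dual viewpoint behind such algorithms (generic notation, not the exact update rule from the talk): writing the reward value as V_r^π, the constraint value as V_g^π, and the constraint as V_g^π(s_1) ≥ b, one works with the Lagrangian and alternates a policy (primal) step with a projected dual step on the multiplier,

```latex
\[
  \max_{\pi}\ \min_{\lambda \ge 0}\;
  L(\pi,\lambda) \;=\; V_r^{\pi}(s_1) \;+\; \lambda\bigl(V_g^{\pi}(s_1) - b\bigr),
  \qquad
  \lambda_{k+1} \;=\; \bigl[\lambda_k - \eta\bigl(\widehat{V}_g^{\pi_k}(s_1) - b\bigr)\bigr]_{+},
\]
```

where [·]_+ denotes projection onto the nonnegative reals; the primal step then (approximately) maximizes the reward-plus-λ-weighted constraint value, e.g., via an optimistic LSVI-style update.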

In the second part of the talk, I will demonstrate how the theoretical understanding of constrained MDPs can help us develop algorithms for practical applications. As a first application, we show how to learn optimal beam directions under time-varying, interference-constrained channels. Optimal beam selection in mmWave is challenging because of its time-varying nature. In contrast to existing approaches, we model the received signal strength for a given beam direction using functions in a Reproducing Kernel Hilbert Space (RKHS), a rich class of functions that lets us capture the correlation across beam directions. We propose a primal-dual Gaussian process bandit with adaptive reinitialization in order to handle non-stationarity and interference constraints. We show, theoretically and empirically, that the dynamic regret and constraint violation bounds achieved by our proposed algorithm are sublinear. Further, we demonstrate that our proposed algorithm indeed learns to select optimal beam directions on a practical dataset collected at Northeastern University. As a second application, we develop a constrained deep RL approach for Adaptive Bit-Rate (ABR) selection for video streaming to handle the high throughput variability of 5G. Studies have shown that, on 5G datasets, unconstrained RL-based approaches such as Pensieve have a high stall time whereas control-based approaches have a low bit rate. Our preliminary results show that our approach achieves a better bit rate than control-based approaches and a lower stall time than traditional RL-based approaches such as Pensieve.

Biography

Arnob Ghosh is currently a Research Scientist in the Dept. of Electrical and Computer Engineering at The Ohio State University. He is also associated with the NSF AI-EDGE Institute, and has been hosted by Ness Shroff since June 2021. Arnob Ghosh obtained his Ph.D. degree in Electrical and Systems Engineering from the University of Pennsylvania, USA, in August 2016. Prior to joining OSU, he was an Assistant Professor in the Industrial Engineering Department (part of Mechanical Engineering) at IIT-Delhi from August 2019, where he was also associated with the School of Artificial Intelligence. In 2020, he took an extended leave for one year to spend time in the Dept. of Electrical and Electronic Engineering at Imperial College London, working with Thomas Parisini. Arnob Ghosh was also a postdoctoral research associate at Purdue University from August 2016 to July 2019. He also spent the summer of 2017 and the summer of 2014 at Northwestern University and Inria, respectively.

Arnob Ghosh has worked in diverse areas with a common theme of efficient decision making in interconnected systems. His current research interests include reinforcement learning, online learning theory, control, and decision theory, and applying those tools in engineering applications such as cyber-physical systems, intelligent transportation, wireless communication, and computer networks. He has served as a TPC member for IEEE WiOpt’21-22. He has served as a reviewer for top-tier conferences such as AAAI, AISTATS, NeurIPS, ICML, ICLR, and CDC, and for top-tier journals such as Transactions on Networking, Transactions on Smart Grid, Transactions on Vehicular Technology, and Transactions on Network Science and Engineering. He has published over 25 top-tier conference papers and 20 top-tier journal papers. His work on beam selection under time-varying interference-constrained channels received the Best Student Paper Award runner-up at IEEE WiOpt 2022.

Google Scholar link: scholar.google.com/citations?user=aw2d6pQAAAAJ&hl=en

February 16, 2023@1:00pm EST

Accelerating Model Development for Diverse Tasks

Ameet Talwalkar
Machine Learning Department
Carnegie Mellon University

Abstract

While the field of machine learning has made significant strides in automating application development for classical modalities like computer vision and natural language processing, creating applications outside of these areas remains a painstaking, expert-driven process. In this talk, we will discuss our recent efforts tackling the problem of how to quickly and effectively attain expert-level performance on diverse learning tasks, such as solving partial differential equations, classifying electrical activity for prosthetics control, and predicting DNA function. We first define the problem and demonstrate the gaps in existing automated tools by introducing our novel benchmarking suite (NAS-Bench-360) and an associated NeurIPS competition. We next present novel automated methods targeting diverse tasks, including neural architecture search methods that yield state-of-the-art results on NAS-Bench-360. Finally, we describe a novel cross-modal transfer learning paradigm that allows us to leverage popular pretrained language and vision models to tackle downstream tasks in completely different modalities.

Biography

Ameet Talwalkar is an associate professor in the Machine Learning Department at CMU. His work is motivated by the goal of democratizing machine learning, focusing on core challenges related to automation, efficiency, and human-in-the-loop learning. He co-founded Determined AI (acquired by HPE), helped create the MLlib project in Apache Spark, co-authored the textbook ’Foundations of Machine Learning,’ and created an award-winning edX MOOC on distributed machine learning. He also helped to start the MLSys conference and is currently President of the MLSys Board.

February 8, 2023@2:00pm EST

Efficiently Enabling Rich and Trustworthy Inferences at the Extreme Edge

Mani Srivastava
Department of Electrical and Computer Engineering
University of California, Los Angeles

Abstract

Computing systems intelligently performing perception-cognition-action (PCA) loops are essential to interfacing our digitized society with the analog world it is embedded in. They employ distributed edge-cloud computing hierarchies and deep learning methods to make sophisticated inferences and decisions from high-dimensional unstructured sensory data in our personal, social, and physical spaces. While the adoption of deep learning has resulted in considerable advances in accuracy and richness, it has also raised challenges such as generalizing to novel situations, assuring robustness in the face of uncertainty, reasoning about complex spatiotemporal events, implementing models on ultra-resource-constrained edge devices, and engendering trust in opaque models. This talk presents ideas for addressing these challenges with neuro-symbolic and physics-aware models, automatic platform-aware architecture search, and sharing of edge resources, and describes our experience in applying them in varied application domains such as mobile health, agricultural robotics, etc.

Biography

Dr. Mani Srivastava is on the faculty at UCLA where he is a Distinguished Professor in the ECE Department with a joint appointment in the CS Department and is the Vice Chair for Computer Engineering. His research is broadly in the area of human-cyber-physical and IoT systems that are learning-enabled, resource-constrained, and trustworthy. It spans problems across the entire spectrum of applications, architectures, algorithms, and technologies in the context of systems and applications for mHealth, sustainable buildings, and smart built environments. He is a Fellow of both the ACM and the IEEE.

 

January 12, 2023@1:00pm

Mitigating Data and System Heterogeneity and Taming Fat-Tailed Noise in Federated Learning

Jia (Kevin) Liu
Department of Electrical and Computer Engineering
The Ohio State University

Abstract

In this talk, I will present our recent work on mitigating data and system heterogeneity and fat-tailed noise to achieve a linear convergence speedup in federated learning. Federated learning (FL) is a distributed machine learning architecture that leverages a large number of workers to jointly learn a model with decentralized data. FL has received increasing attention in recent years thanks to its data privacy protection, communication efficiency, and linear speedup for convergence in training (i.e., convergence performance increases linearly with respect to the number of workers). However, existing studies on the linear speedup for convergence are limited to the assumptions of i.i.d. datasets across workers and/or full worker participation, both of which rarely hold in practice. In the first part of the talk, we propose a new federated learning paradigm called “anarchic federated learning” (AFL), which features a loose coupling between the server and the workers so that workers can participate in the learning anytime, in any way they want, thus addressing system heterogeneity in federated learning while retaining the highly desirable linear convergence speedup. In the second part of the talk, I will introduce a clipping-based method to mitigate the impact of fat-tailed noise in FL stochastic gradients, which can also be induced by data heterogeneity in FL.
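A minimal sketch of the clipping idea (illustrative threshold, learning rate, and noise model; not the algorithm analyzed in the talk): each worker clips its local stochastic gradient to a norm budget before the server averages, which blunts the effect of heavy-tailed gradient noise.

```python
# A hedged sketch of per-worker gradient clipping followed by server averaging.
import numpy as np

def clip(g, tau):
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

def federated_step(w, worker_grads, tau=1.0, lr=0.1):
    clipped = [clip(g, tau) for g in worker_grads]    # per-worker clipping
    return w - lr * np.mean(clipped, axis=0)          # server averages and updates

rng = np.random.default_rng(0)
w = np.zeros(3)
# Fat-tailed (Student-t, 2 degrees of freedom) noise around a common gradient direction
grads = [np.ones(3) + rng.standard_t(df=2, size=3) for _ in range(10)]
print(federated_step(w, grads))
```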

Biography

Jia (Kevin) Liu is an Assistant Professor in the Dept. of Electrical and Computer Engineering at The Ohio State University and an Amazon Visiting Academic (AVA). He received his Ph.D. degree from the Dept. of Electrical and Computer Engineering at Virginia Tech in 2010. From Aug. 2017 to Aug. 2020, he was an Assistant Professor in the Dept. of Computer Science at Iowa State University. His research areas include theoretical machine learning, stochastic network optimization and control, and performance analysis for data analytics infrastructure and cyber-physical systems. Dr. Liu is a senior member of IEEE and a member of ACM. He has received numerous awards at top venues, including the IEEE INFOCOM’19 Best Paper Award, IEEE INFOCOM’16 Best Paper Award, IEEE INFOCOM’13 Best Paper Runner-up Award, IEEE INFOCOM’11 Best Paper Runner-up Award, and IEEE ICC’08 Best Paper Award, and multiple honors of long/spotlight presentations at ICML, NeurIPS, and ICLR. He received an NSF CAREER Award in 2020 and a Google Faculty Research Award in 2020. He received the LAS Award for Early Achievement in Research at Iowa State University in 2020, and the Bell Labs President Gold Award. His research is supported by NSF, AFOSR, AFRL, and ONR.

December 2, 2022 @noon

How Does Data Freshness Affect Real-time Supervised Learning?

Yin Sun
Department of Electrical and Computer Engineering
Auburn University

Abstract

The evolution of Artificial Intelligence and Internet technologies has engendered many networked intelligent systems, such as autonomous driving, remote surgery, real-time surveillance, video analytics, and factory automation. Real-time supervised learning is a crucial technique in these applications, where a neural network is trained to infer a time-varying target (e.g., the position of the vehicle in front) based on features (e.g., video frames) observed at a sensing node (e.g., a camera or lidar). Due to the communication delay among network nodes, the features delivered to the neural network may not be fresh, impacting both the accuracy of real-time inference and the performance of networked intelligent systems. In this talk, we will discuss (i) how data freshness affects the performance of real-time supervised learning and (ii) how to design scheduling strategies to minimize inference error. The first problem is addressed using an information-theoretic analysis, with illustrations from several supervised learning experiments. To solve the second problem, we exploit a connection we discovered between Gittins index theory and Age of Information (AoI) minimization. These results lay out a potential path toward contextual and goal-oriented status updating.
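For readers unfamiliar with the freshness metric involved, here is a toy illustration (an assumed delivery trace, not data from the talk): the Age of Information at time t is t minus the generation time of the freshest sample delivered so far, and the sketch computes its time average.

```python
# A hedged toy computation of time-average Age of Information (AoI).
def average_aoi(deliveries, horizon, dt=0.01):
    """deliveries: list of (delivery_time, generation_time), sorted by delivery_time."""
    total, freshest, i = 0.0, None, 0
    n_steps = int(horizon / dt)
    for step in range(n_steps):
        t = step * dt
        while i < len(deliveries) and deliveries[i][0] <= t:
            freshest = deliveries[i][1]          # newest generation time received so far
            i += 1
        if freshest is not None:
            total += (t - freshest) * dt         # accumulate the instantaneous age
    return total / horizon

# Samples generated at t = 0.0, 1.0, 2.0 each arrive after a 0.3 s delay
print(average_aoi([(0.3, 0.0), (1.3, 1.0), (2.3, 2.0)], horizon=3.0))
```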

Biography

Yin Sun is an Assistant Professor in the Department of Electrical and Computer Engineering at Auburn University, Alabama. He received his B.Eng. and Ph.D. degrees in Electronic Engineering from Tsinghua University, in 2006 and 2011, respectively. He was a Postdoctoral Scholar and Research Associate at the Ohio State University from 2011-2017. His research interests include wireless networks, machine learning, robotic control, age of information, and information theory. He has been an Associate Editor of the IEEE Transactions on Network Science and Engineering, an Editor of the Journal of Communications and Networks, a Guest Editor of the IEEE Journal on Selected Areas in Communications for the special issue on “Age of Information in Real-time Systems and Networks,” a Guest Editor of Entropy for the special issue on “Age of Information: Concept, Metric and Tool for Network Control,” and a Guest Editor of Frontiers in Communications and Networks for the special issue on “Age of Information.” He has served in the organizing committees of ACM MobiHoc 2019, 2021-2023, IEEE INFOCOM 2020-2021, IEEE/IFIP WiOpt 2020, IEEE WCNC 2021, and International Teletraffic Congress 2022 (ITC 34). He co-founded the Annual Age of Information Workshop in 2018, served as the General Chair and TPC Chair of the workshop in 2018-2019, and has been a steering committee member of the workshop since 2020. His articles received the Best Student Paper Award of the IEEE/IFIP WiOpt 2013, Best Paper Award of the IEEE/IFIP WiOpt 2019, runner-up for the Best Paper Award of ACM MobiHoc 2020, and 2021 Journal of Communications and Networks (JCN) Best Paper Award. He co-authored a monograph Age of Information: A New Metric for Information Freshness, published by Morgan & Claypool Publishers in 2019. He received the Auburn Author Award of 2020. His research group has maintained an online Paper Repository on Age of Information since 2016. He is a Senior Member of the IEEE and a Member of the ACM.

December 2, 2022 @10:00am

Acoustic-Based Active & Passive Sensing and Applications

Lili Qiu
Assistant Managing Director
Microsoft Research Asia

Abstract

Video games, Virtual Reality (VR), Augmented Reality (AR), smart appliances (e.g., smart TVs and drones), and online meetings all call for new ways for users to interact with and control them. Motivated by this observation, we have developed a series of novel acoustic sensing technologies by transmitting specifically designed signals and/or using signals naturally arising from the environment. We further develop a few interesting applications on top of our motion tracking technology.

Biography

Dr. Lili Qiu is Assistant Managing Director of Microsoft Research Asia and is mainly responsible for overseeing the research, as well as the collaboration with industries, universities, and research institutes, at Microsoft Research Asia (Shanghai). Before joining Microsoft Research Asia, Dr. Qiu was a professor of computer science at the University of Texas at Austin. Dr. Qiu is a world-leading expert in the field of Internet and mobile wireless networks. She obtained her MS and PhD degrees in computer science from Cornell University, and had worked at Microsoft Research Redmond as a researcher in the System & Networking Group from 2001-2004. Dr. Qiu is an ACM Fellow, IEEE Fellow and serves as the ACM SIGMOBILE chair. She was also the recipient of the NSF CAREER award, Google Faculty Research Award, and best paper awards at ACM MobiSys’18 and IEEE ICNP’17.

December 1, 2022 @1:30pm

Rethinking the Security and Privacy of Bluetooth Low Energy

Zhiqiang Lin
Department of Computer Science and Engineering
The Ohio State University

Abstract

As a short-range wireless communication technology, Bluetooth Low Energy (BLE) has been widely used in numerous Internet-of-Things (IoT) devices, from healthcare, fitness, and wearables to smart homes, because of its extremely low energy consumption. Unfortunately, the past several years have also witnessed numerous security flaws that have rendered billions of Bluetooth devices vulnerable to attacks. While these flaws have fortunately been discovered, there is no reason to believe that current BLE protocols and implementations are free from attacks, since BLE consists of multiple layers with various sub-protocols and components.

In this talk, Dr. Lin will talk about a line of recent efforts for BLE security and privacy from his research group. In particular, he will first discuss the protocol-level downgrade attack, an attack that can force the secure BLE channels into insecure ones to break the data integrity and confidentiality of BLE traffic. Then, he will introduce Bluetooth Address Tracking (BAT) attack, a new protocol-level attack, which can track randomized Bluetooth MAC addresses by using a novel allowlist-based side channel. Next, he will discuss the lessons learned, root causes of the attack, and its countermeasures. Finally, he will conclude his talk by discussing future directions in Bluetooth security and privacy.

Biography

Dr. Zhiqiang Lin is a Distinguished Professor of Engineering at The Ohio State University. His research interests center around systems and software security, with a key focus on (1) developing automated binary analysis techniques for vulnerability discovery and malware analysis, (2) hardening the systems and software from binary code rewriting, virtualization, and trusted execution environment, and (3) the applications of these techniques in Mobile, IoT, Bluetooth, and Connected and Autonomous Vehicles. He has published over 100 papers, many of which appeared in the top venues in cybersecurity. He is a recipient of Harrison Faculty Award for Excellence in Engineering Education, NSF CAREER award, AFOSR Young Investigator award, and Outstanding Faculty Teaching Award. He received his Ph.D. in Computer Science from Purdue University.

November 15, 2022 @2:00pm

Edge Video Services on 5G Infrastructure

Ganesh Ananthanarayanan
Principal Researcher
Microsoft

Abstract

Creating a programmable software infrastructure for telecommunication operations promises to reduce both the capital expenditure and the operational expenses of 5G telecommunications operators. The convergence of telecommunications, cloud, and edge infrastructures will open up opportunities for new innovations and revenue for both the telecommunications industry and the cloud ecosystem. This talk will focus on video, the dominant traffic type on the Internet since the introduction of 4G networks. With 5G, not only will the volume of video traffic increase, but there will also be many new solutions for industries, from retail to manufacturing to healthcare and forest monitoring, infusing deep learning and AI for video analytics scenarios. The talk will touch upon various advances in edge video analytics systems, including real-time inference over edge hierarchies, continuous learning of models, and privacy-preserving video analytics.

Biography

Ganesh Ananthanarayanan is a Principal Researcher at Microsoft. His research interests are broadly in systems & networking, with recent focus on live video analytics, cloud computing & large scale data analytics systems, and Internet performance. He has published over 30 papers in systems & networking conferences such as USENIX OSDI, ACM SIGCOMM and USENIX NSDI, which have been recognized with the Best Paper Award at ACM Symposium on Edge Computing (SEC) 2020, CSAW 2020 Applied Research Competition Award (runner-up), ACM MobiSys 2019 Best Demo Award (runner-up), and highest-rated paper at ACM Symposium on Edge Computing (SEC) 2018. His work on “Video Analytics for Vision Zero” on analyzing traffic camera feeds won the Institute of Transportation Engineers 2017 Achievement Award as well as the “Safer Cities, Safer People” US Department of Transportation Award. He has collaborated with and shipped technology to Microsoft’s cloud and online products like the Azure Cloud, Cosmos (Microsoft’s big data system), Azure Live Video Analytics, and Skype. He was a founding member of the ACM Future of Computing Academy. Prior to joining Microsoft Research, he completed his Ph.D. at UC Berkeley in Dec 2013, where he was also a recipient of the UC Berkeley Regents Fellowship, and prior to his Ph.D., he was a Research Fellow at Microsoft Research India.

November 10, 2022 @2:30pm

Learning with Feature Geometry

Lizhong Zheng
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology

Abstract

In this talk, we view statistical learning as choosing feature functions that carry the useful information, and we develop the corresponding metrics and operations for designing the process that finds such features. In particular, we show that deep neural networks are one particular method of achieving this goal. Based on that observation, we propose more flexible ways to connect neural networks for more complex learning tasks, for example with multi-modal data and coordination with remote terminals. This talk offers an overview of the method of information geometry used to describe feature function spaces, and proposes some new research directions for incorporating external knowledge into the use of deep neural networks.

Biography

Lizhong Zheng received the B.S. and M.S. degrees, in 1994 and 1997 respectively, from the Department of Electronic Engineering, Tsinghua University, China, and the Ph.D. degree, in 2002, from the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley. Since 2002, he has been at MIT, where he is currently a professor of Electrical Engineering. His research interests include information theory, statistical inference, communications, and network theory. He received the Eli Jury Award from UC Berkeley in 2002, the IEEE Information Theory Society Paper Award in 2003, the NSF CAREER Award in 2004, and the AFOSR Young Investigator Award in 2007. He served as an associate editor for the IEEE Transactions on Information Theory and as the general co-chair of the IEEE International Symposium on Information Theory in 2012. He is an IEEE Fellow.

November 10, 2022 @1:00pm

The Neural Balance Theorem and its Consequences

Robert D. Nowak
Department of Electrical and Computer Engineering
University of Wisconsin-Madison

Abstract

Rectified Linear Units (ReLUs) are the most common activation function in deep neural networks, and weight decay is the most prevalent form of regularization used in deep learning. Together, ReLUs and weight decay lead to an interesting effect known as “Neural Balance”: the norms of the input and output weights of each ReLU are automatically equalized at a global minimum of the training objective. Neural Balance has a number of important consequences, ranging from characterizations of the function spaces naturally associated with neural networks and their immunity to the curse of dimensionality, to new and more effective architectures and training strategies. I will also discuss applications to out-of-distribution detection and active learning.
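A one-unit calculation (a sketch of where the balance comes from, not the general theorem presented in the talk) makes the effect plausible: ReLU is positively homogeneous, so rescaling the input weights by c > 0 and the output weight by 1/c leaves the function unchanged, while the weight-decay penalty is smallest exactly when the two norms match,

```latex
\[
  v\,\mathrm{ReLU}(w^{\top}x)
  \;=\; \frac{v}{c}\,\mathrm{ReLU}\!\bigl((c\,w)^{\top}x\bigr)\quad\text{for all } c>0,
  \qquad
  \min_{c>0}\Bigl\{ c^{2}\lVert w\rVert^{2} + \frac{v^{2}}{c^{2}} \Bigr\}
  \;=\; 2\,\lvert v\rvert\,\lVert w\rVert,
\]
```

with the minimum attained at c² = |v| / ‖w‖, i.e., precisely when the rescaled input and output weights have equal norm; at a global minimum of the regularized objective no rescaling can lower the penalty further, so every ReLU unit must be balanced.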

Biography

Robert Nowak holds the Nosbusch Professorship in Electrical and Computer Engineering at the University of Wisconsin-Madison, where he directs the AFOSR/AFRL University Center of Excellence on Data Efficient Machine Learning. His research focuses on signal processing, machine learning, optimization, and statistics. He serves on the editorial boards of the SIAM Journal on the Mathematics of Data Science and the IEEE Journal on Selected Areas in Information Theory.

November 1, 2022

Smart Surfaces for NextG and Satellite mmWave and Ku-Band Wireless Networks

Kyle Jamieson
Electrical and Computer Engineering
Princeton University

Abstract

To support faster and more efficient networks, mobile operators and service providers are bringing 5G millimeter-wave (mmWave) networks indoors. However, due to their high directionality, mmWave links are extremely vulnerable to blockage by walls and obstacles. Meanwhile, the first low-earth-orbit satellite networks for internet service have recently been deployed and are growing in size, yet they will face deployment challenges in many practical circumstances of interest. To address both challenges, the Princeton Advanced Wireless Systems lab is exploiting advances in artificially engineered metamaterials to enable steerable wireless mmWave and Ku-band beam reflection and refraction. Our approaches fall under the category of Huygens metamaterials, but our advances enable, for the first time, practical electronic control and simultaneous use in multiple frequency bands in such materials. We have specified our designs in RF simulators and prototyped them in hardware, and our experimental evaluation demonstrates up to 20 dB SNR gains over environmental paths in an indoor office environment.

Biography

Kyle Jamieson is Professor of Computer Science and Associated Faculty in Electrical and Computer Engineering at Princeton University. His research focuses on mobile and wireless systems for sensing, localization, and communication, and on massively-parallel classical, quantum, and quantum-inspired computational structures for NextG wireless communications systems. He received the B.S. (Mathematics, Computer Science), M.Eng. (Computer Science and Engineering), and Ph.D. (Computer Science, 2008) degrees from the Massachusetts Institute of Technology. He then received a Starting Investigator fellowship from the European Research Council, a Google Faculty Research Award, and the ACM SIGMOBILE Early Career Award. He served as an Associate Editor of IEEE Transactions on Networking from 2018 to 2020. He is a Senior Member of the ACM and the IEEE.

October 20, 2022

Reinforcement Learning with Robustness and Safety Guarantees

Dileep Kalathil
Department of Electrical and Computer Engineering
Texas A&M University

Abstract

Reinforcement Learning (RL) is the class of machine learning that addresses the problem of learning to control unknown dynamical systems. RL has recently achieved remarkable success in applications like game playing and robotics. However, most of these successes are limited to very structured or simulated environments. When applied to real-world systems, RL algorithms face two fundamental sources of fragility. First, the real-world system parameters can be very different from the nominal values used for training RL algorithms. Second, the control policy for any real-world system is required to maintain some necessary safety criteria to avoid undesirable outcomes. Most deep RL algorithms overlook these fundamental challenges, which often results in learned policies that perform poorly in real-world settings. In this talk, I will present two approaches to overcome these challenges. First, I will present an RL algorithm that is robust against parameter mismatches between the simulation system and the real-world system. Second, I will discuss a safe RL algorithm for learning policies such that the frequency of visiting undesirable states and expensive actions satisfies the safety constraints. I will also briefly discuss some practical challenges due to sparse reward feedback and the need for rapid real-time adaptation in real-world systems, and approaches to overcome them.

Biography

Dileep Kalathil is an Assistant Professor in the Department of Electrical and Computer Engineering at Texas A&M University (TAMU). His main research area is reinforcement learning theory and algorithms, and their applications in communication networks and power systems. Before joining TAMU, he was a post-doctoral researcher in the EECS department at UC Berkeley. He received his Ph.D. from University of Southern California (USC) in 2014, where he won the best Ph.D. Dissertation Prize in the Department of Electrical Engineering. He received his M. Tech. from IIT Madras, where he won the award for the best academic performance in the Electrical Engineering Department. He received the NSF CRII Award in 2019 and the NSF CAREER award in 2021. He is a senior member of IEEE.

October 4, 2022

Creating the Internet of Biological and Bio-inspired Things

Shyam Gollakota
Department of Computer Science
University of Washington

Abstract

Living organisms can perform incredible feats. Plants like dandelions can disperse their seeds over a kilometer in the wind, and small insects like bumblebees can see, smell, communicate, and fly around the world, despite their tiny size. Enabling some of these capabilities for the Internet of things (IoT) and cyber-physical systems would be transformative for applications ranging from large-scale sensor deployments to micro-drones, biological tracking, and robotic implants. In this talk, I will explain how by taking an interdisciplinary approach spanning wireless communication, sensing, and biology, we can create programmable devices for the internet of biological and bio-inspired things. I will present the first battery-free wireless sensors, inspired by dandelion seeds, that can be dispersed by the wind to automate deployment of large-scale sensor networks. I will then discuss how integrating programmable wireless sensors with live animals like bumblebees can enable mobility for IoT devices, and how this technique has been used for real-world applications like tracking invasive “murder” hornets. Finally, I will present an energy-efficient insect-scale steerable vision system inspired by animal head motion that can ride on the back of a live beetle and enable tiny terrestrial robots to see.

Biography

Shyam Gollakota is a Washington Research Foundation Endowed Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. His work has been licensed and acquired by multiple companies and is in use by millions of users. His lab also worked closely with the Washington Department of Agriculture to wirelessly track the invasive “murder” hornets, which resulted in the destruction of the first nest in the United States. He is the recipient of the ACM Grace Murray Hopper Award in 2020 and was named a Moore Inventor Fellow in 2021. He was also named to MIT Technology Review’s 35 Innovators Under 35 and Popular Science’s ‘Brilliant 10’, and twice to Forbes’ 30 Under 30 list. His group’s research has earned Best Paper awards at MOBICOM, SIGCOMM, UbiComp, SenSys, NSDI, and CHI, has appeared in interdisciplinary journals such as Nature, Nature Communications, Nature Biomedical Engineering, Science Translational Medicine, Science Robotics, and Nature Digital Medicine, and was named an MIT Technology Review Breakthrough Technology of 2016 and one of Popular Science’s top innovations of 2015. He is an alumnus of MIT (Ph.D., 2013, winner of the ACM Doctoral Dissertation Award) and IIT Madras.

September 13, 2022

Decoding Hidden Worlds: Wireless & Sensor Technologies for Oceans, Health, and Robotics

Fadel Adib
Department of Electrical Engineering and Computer Science
MIT

Abstract

As humans, we crave to explore hidden worlds. Yet, today’s technologies remain far from allowing us to perceive most of the world we live in. Despite centuries of seaborne voyaging, more than 95% of our ocean has never been observed or explored. And, at any moment in time, each of us has very little insight into the biological world inside our own bodies. The challenge in perceiving hidden worlds extends beyond ourselves: even the robots we build are limited in their visual perception of the world. In this talk, I will describe new technologies that allow us to decode areas of the physical world that have so far been too remote or difficult to perceive. First, I will describe a new generation of underwater sensor networks that can sense, compute, and communicate without requiring any batteries; our devices enable real-time and ultra-long-term monitoring of ocean conditions (temperature, pressure, coral reefs) with important applications to scientific exploration, climate monitoring, and aquaculture (seafood production). Next, I will talk about new wireless technologies for sensing the human body, both from inside the body (via batteryless micro-implants) as well as from a distance (for contactless cardiovascular and stress monitoring), paving the way for novel diagnostic and treatment methods. Finally, I will highlight our work on extending robotic perception beyond line-of-sight, and how we designed new RF-visual primitives for robotics - including sensing, servoing, navigation, and grasping - to enable new manipulation tasks that were not possible before. The talk will cover how we have designed and built these technologies, and how we work with medical doctors, climatologists, oceanographers, and industry practitioners to deploy them in the real world. I will also highlight the open problems and opportunities for these technologies, and how researchers and engineers can build on our open-source tools to help drive them to their full potential in addressing global challenges in climate, health, and automation.

Biography

Fadel Adib is an Associate Professor in the MIT Media Lab and the Department of Electrical Engineering and Computer Science. He is the founding director of the Signal Kinetics group, which invents wireless and sensor technologies for networking, health monitoring, robotics, and ocean IoT. He is also the founder & CEO of Cartesian Systems, a spinoff from his lab that focuses on mapping indoor environments using wireless signals. Adib was named by Technology Review as one of the world’s top 35 innovators under 35 and by Forbes as one of its 30 under 30. His research on wireless sensing (X-Ray Vision) was recognized as one of the 50 ways MIT has transformed Computer Science, and his work on robotic perception (Finder of Lost Things) was named one of the 103 Ways MIT is Making a Better World. Adib’s commercialized technologies have been used to monitor thousands of patients with Alzheimer’s, Parkinson’s, and COVID-19, and he has had the honor of demoing his work to President Obama at the White House. Adib is also the recipient of various awards including the NSF CAREER Award (2019), the ONR Young Investigator Award (2019), the ONR Early Career Grant (2020), the Google Faculty Research Award (2017), the Sloan Research Fellowship (2021), and the ACM SIGMOBILE Rockstar Award (2022), and his research has received Best Paper/Demo Awards at SIGCOMM, MobiCom, and CHI. Adib received his Bachelors from the American University of Beirut (2011) and his PhD from MIT (2016), where his thesis won the Sprowls Award for Best Doctoral Dissertation at MIT and the ACM SIGMOBILE Doctoral Dissertation Award.

May 12, 2022

Tackling Computational Heterogeneity in Federated Learning

Gauri Joshi
Department of Electrical and Computer Engineering
Carnegie Mellon University

Abstract

The future of machine learning lies in moving both data collection as well as model training to the edge. The emerging area of federated learning seeks to achieve this goal by orchestrating distributed model training using a large number of resource-constrained mobile devices that collect data from their environment. Due to limited communication capabilities as well as privacy concerns, the data collected by these devices cannot be sent to the cloud for centralized processing. Instead, the nodes perform local training updates and only send the resulting model to the cloud. A key aspect that sets federated learning apart from data-center-based distributed training is the inherent heterogeneity in data and local computation at the edge clients. In this talk, I will present our recent work on tackling computational heterogeneity in federated optimization, firstly in terms of heterogeneous local updates made by the edge clients, and secondly in terms of intermittent client availability.
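The kind of computational heterogeneity described above can be pictured with a toy federated round (a hedged sketch with an invented least-squares task and unweighted averaging, not the algorithms from the talk): each client runs its own number of local SGD steps before the server averages the resulting models.

```python
# A hedged FedAvg-style sketch with heterogeneous numbers of local steps per client.
import numpy as np

def local_sgd(w, X, y, steps, lr=0.05):
    for _ in range(steps):                       # each client does its own number of steps
        grad = 2 * X.T @ (X @ w - y) / len(y)    # least-squares gradient on local data
        w = w - lr * grad
    return w

def federated_round(w_global, clients):
    local_models = [local_sgd(w_global.copy(), X, y, steps) for X, y, steps in clients]
    return np.mean(local_models, axis=0)         # simple (unweighted) server average

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for steps in [1, 3, 10]:                         # heterogeneous local computation budgets
    X = rng.standard_normal((40, 2))
    clients.append((X, X @ w_true + 0.1 * rng.standard_normal(40), steps))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)                                         # should approach w_true
```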

Biography

Gauri Joshi has been an assistant professor in the ECE department at Carnegie Mellon University since September 2017. Previously, she worked as a Research Staff Member at the IBM T. J. Watson Research Center. Gauri completed her Ph.D. at MIT EECS in June 2016, advised by Prof. Gregory Wornell. She received her B.Tech and M.Tech in Electrical Engineering from the Indian Institute of Technology (IIT) Bombay in 2010. Her awards and honors include the NSF CAREER Award (2021), ACM Sigmetrics Best Paper Award (2020), NSF CRII Award (2018), IBM Faculty Research Award (2017), Best Thesis Prize in Computer Science at MIT (2012), and the Institute Gold Medal of IIT Bombay (2010).

April 13, 2022

Trustworthy Machine Learning for Systems Security

Lorenzo Cavallaro
Department of Computer Science
University College London

Abstract

No day goes by without reading machine learning (ML) success stories across different application domains. Systems security is no exception, where ML’s tantalizing results leave one to wonder whether there are any unsolved problems left. However, machine learning has no clairvoyant abilities and once the magic wears off, we’re left in uncharted territory.

We, as a community, need to understand and improve the effectiveness of machine learning methods for systems security in the presence of adversaries. One of the core challenges is related to the representation of problem-space objects (e.g., program binaries) in a numerical feature space, as the semantic gap makes it harder to reason about attacks and defences and often leaves room for adversarial manipulation. Inevitably, the effectiveness of machine learning methods for systems security is intertwined with the underlying abstractions, e.g., program analyses, used to represent the objects. In this context, is trustworthy machine learning possible?

In this talk, I will first illustrate the challenges in the context of adversarial ML evasion attacks against malware classifiers. The classic formulation of evasion attacks is ill-suited for reasoning about how to generate realizable evasive malware in the problem space. I’ll provide a deep dive into recent work that provides a theoretical reformulation of the problem and enables more principled attack designs. Implications are interesting, as the framework facilitates reasoning around end-to-end attacks that can generate real-world adversarial malware, at scale, that evades both vanilla and hardened classifiers, thus calling for novel defenses.

Next, I’ll broaden our conversation to include not just robustness against specialized attacks, but also drifting scenarios, in which threats evolve and change over time. Prior work suggests adversarial ML evasion attacks are intrinsically linked with concept drift and we will discuss how drift affects the performance of malware classifiers, and what role the underlying feature space abstraction has in the whole process.

Ultimately, these threats would not exist if the abstraction could capture the ’Platonic ideal’ of interesting behavior (e.g., maliciousness), however, such a solution is still out of reach. I’ll conclude by outlining current research efforts to make this goal a reality, including robust feature development, assessing vulnerability to universal perturbations, and forecasting of future drift, which illustrate what trustworthy machine learning for systems security may eventually look like.

Biography

Lorenzo grew up on pizza, spaghetti, and Phrack, first. Underground and academic research interests followed shortly thereafter. He is a Full Professor of Computer Science at UCL Computer Science, where he leads the Systems Security Research Lab within the Information Security Research Group. He speaks, publishes at, and sits on the technical program committees of top-tier and well-known international conferences including IEEE S&P, USENIX Security, ACM CCS, ACSAC, and DIMVA, as well as emerging thematic workshops (e.g., Deep Learning for Security at IEEE S&P, and AISec at ACM CCS), and received the USENIX WOOT Best Paper Award in 2017. Lorenzo is Program Co-Chair of Deep Learning and Security 2021-22, DIMVA 2021-22, and he was Program Co-Chair of ACM EuroSec 2019-20 and General Co-Chair of ACM CCS 2019. He holds a PhD in Computer Science from the University of Milan (2008), held Post-Doctoral and Visiting Scholar positions at Vrije Universiteit Amsterdam (2010-2011), UC Santa Barbara (2008-2009), and Stony Brook University (2006-2008), worked in the Department of Informatics at King’s College London (2018-2021), where he held the Chair in Cybersecurity (Systems Security), and the Information Security Group at Royal Holloway, University of London (Assistant Professor, 2012; Associate Professor, 2016; Full Professor, 2018). He’s definitely never stopped wondering and having fun throughout.

March 25, 2022

Deep Convolutional Neural Networks: Enabling and Experimentally Validating Secure and High-Bandwidth Links

Kaushik Chowdhury
Electrical and Computer Engineering Department
Northeastern University

Abstract

The future NextG wireless standard must be able to sustain ultra-dense networks with trillions of untrusted devices, many of which will be mobile and require assured high-bandwidth links. This talk explores how deep learning, specifically deep convolutional neural networks (CNNs), will play a critical role in enabling secure, high-bandwidth links while minimizing complex upper-layer processing and exhaustive search of the state space. First, we describe how device identification can be performed at the physical layer by learning subtle but discriminative distortions present in the transmitted signal, also known as RF fingerprints. We present accuracy results for large radio populations as well as for datasets collected from community-scale NSF PAWR platforms. Second, we show how beam selection for millimeter-wave links in a vehicular scenario can be expedited using out-of-band multi-modal data collected from an actual autonomous vehicle equipped with sensors such as LiDAR, cameras, and GPS. We propose individual-modality and distributed fusion-based CNN architectures that can execute locally as well as at a mobile edge computing center, with a study of the associated tradeoffs.
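For intuition about the RF-fingerprinting setup, here is a minimal sketch of the kind of 1-D CNN that could classify transmitters from raw IQ samples; the layer sizes, window length, and input format are illustrative assumptions, not the speaker's models.

```python
# A hedged sketch of a small 1-D CNN over IQ windows for transmitter identification.
import torch
import torch.nn as nn

class RFFingerprintCNN(nn.Module):
    def __init__(self, n_devices, window=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * (window // 4), 128), nn.ReLU(),
            nn.Linear(128, n_devices),
        )

    def forward(self, iq):                    # iq: (batch, 2, window) -- I and Q channels
        return self.classifier(self.features(iq))

model = RFFingerprintCNN(n_devices=10)
logits = model(torch.randn(4, 2, 256))        # 4 random IQ windows
print(logits.shape)                           # torch.Size([4, 10])
```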

Biography

Prof. Chowdhury is a Professor in the Electrical and Computer Engineering Department and Associate Director of the Institute for the Wireless IoT at Northeastern University, Boston. He is the winner of the U.S. Presidential Early Career Award for Scientists and Engineers (PECASE) in 2017, the Defense Advanced Research Projects Agency Young Faculty Award in 2017, the Office of Naval Research Director of Research Early Career Award in 2016, and the National Science Foundation (NSF) CAREER award in 2015. He is the recipient of best paper awards at IEEE GLOBECOM’19, DySPAN’19, INFOCOM’17, ICC’13, ’12, ’09, and ICNC’13. He serves as area editor for IEEE Trans. on Mobile Computing, Elsevier Computer Networks Journal, IEEE Trans. on Networking, and IEEE Trans. on Wireless Communications. He co-directs the operations of the Colosseum RF/network emulator, as well as the Platforms for Advanced Wireless Research project office. Prof. Chowdhury has served in several leadership roles, including Chair of the IEEE Technical Committee on Simulation, and as Technical Program Chair for IEEE INFOCOM 2021, IEEE CCNC 2021, IEEE DySPAN 2021, and ACM MobiHoc 2022. His research interests are in large-scale experimentation, applied machine learning for wireless communications and networks, networked robotics, and self-powered Internet of Things.

March 9, 2022

Transparent Computing in the AI Era

Shiqing Ma
Department of Computer Science
Rutgers University

Abstract

Recent advances in artificial intelligence (AI) have shifted how modern computing systems work, raising new challenges and opportunities for transparent computing. On the one hand, many AI systems are black boxes and have dense connections among their computing units, which makes existing techniques like dependency analysis fail. Such a new computing system calls for new methods to improve its transparency and to defend AI-powered systems against attacks such as Trojan attacks. On the other hand, it provides a brand-new computation abstraction, which features data-driven, computation-heavy applications. It potentially enables new applications in transparent computing, which typically involves large-scale data processing. In this talk, I will present my work in these two directions. Specifically, I will discuss the challenges in analyzing deep neural networks for security inspection and introduce our novel approach to examining Trojan behaviors. Later, I will talk about how AI can help increase the information entropy of large security audit logs to enable efficient lossless compressed storage.

Biography

Shiqing Ma is an Assistant Professor in the Department of Computer Science at Rutgers University, the State University of New Jersey. He received his Ph.D. in Computer Science from Purdue University in 2019. His research focuses on program analysis, software and system security, adversarial machine learning, and software engineering. He is the recipient of Distinguished Paper Awards from NDSS 2016 and USENIX Security 2017.

 

March 3, 2022

Resource Allocation through Learning in Emerging Wireless Networks

Sanjay Shakkottai
Department of Electrical and Computer Engineering
The University of Texas at Austin

Abstract

In this talk, we discuss learning-inspired algorithms for resource allocation in emerging wireless networks (5G and beyond to 6G). We begin with an overview of opportunities for wireless and ML at various time-scales in network resource allocation. We then present two specific instances to make the case that learning-assisted resource allocation algorithms can significantly improve performance in real wireless deployments. First, we study co-scheduling of ultra-low-latency traffic (URLLC) and broadband traffic (eMBB) in a 5G system, where we need to meet the dual objectives of maximizing utility for eMBB traffic while immediately satisfying URLLC demands. We study iterative online algorithms based on stochastic approximation to achieve these objectives. Next, we study online learning (through a bandit framework) of wireless capacity regions to assist in downlink scheduling, where these capacity regions are “maps” from each channel state to the corresponding set of feasible transmission rates. In practice, these maps are hand-tuned by operators based on experiments, and these static maps are chosen such that they are good across several base-station deployment scenarios. Instead, we propose an epoch-greedy bandit algorithm for learning scenario-specific maps. We derive regret guarantees and also empirically validate our approach on a high-fidelity 5G New Radio (NR) wireless simulator developed within AT&T Labs. This is based on joint work with Gustavo de Veciana, Arjun Anand, Isfar Tariq, Rajat Sen, Thomas Novlan, Salam Akoum, and Milap Majmundar.
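The epoch-greedy idea can be illustrated with a heavily simplified, single-channel-state toy (invented success probabilities and rates; the real setting learns a whole map from channel states to feasible rate sets): in each epoch, a few rounds are spent exploring rate choices uniformly, and the rest exploit the empirically best choice.

```python
# A hedged toy sketch of epoch-greedy rate selection for a single channel state.
import numpy as np

def epoch_greedy(success_prob, rates, n_epochs=50, explore_per_epoch=5, epoch_len=100, seed=0):
    rng = np.random.default_rng(seed)
    counts = np.zeros(len(rates)); succ = np.zeros(len(rates))
    throughput = 0.0
    for _ in range(n_epochs):
        for t in range(epoch_len):
            if t < explore_per_epoch:
                a = rng.integers(len(rates))          # explore uniformly at random
            else:
                est = succ / np.maximum(counts, 1)
                a = int(np.argmax(est * rates))       # exploit estimated goodput
            ok = rng.random() < success_prob[a]       # did the transmission succeed?
            counts[a] += 1; succ[a] += ok
            throughput += rates[a] * ok
    return throughput / (n_epochs * epoch_len)

rates = np.array([1.0, 2.0, 4.0])
print(epoch_greedy(success_prob=[0.95, 0.7, 0.2], rates=rates))
```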

Biography

Sanjay Shakkottai received his Ph.D. from the ECE Department at the University of Illinois at Urbana-Champaign in 2002. He is with The University of Texas at Austin, where he is a Professor in the Department of Electrical and Computer Engineering, and holds the Cockrell Family Chair in Engineering #15. He received the NSF CAREER award in 2004 and was elected as an IEEE Fellow in 2014. He was a co-recipient of the IEEE Communications Society William R. Bennett Prize in 2021. His research interests lie at the intersection of algorithms for resource allocation, statistical learning and networks, with applications to wireless communication networks and online platforms.

SUBSCRIPTION

To subscribe to the AI-EDGE Seminar Series, please email SubscribetoAI-EDGESeminars@osu.edu.

Upcoming Events

  • April 25, 2024: Ting He
  • April 26, 2024: Alex Sprintson

Past Events

  • April 12, 2024: Mingyan Liu
  • April 11, 2024: Mingyi Hong
  • March 29, 2024: Farinaz Koushanfar
  • March 28, 2024: Yuheng Bu
  • March 22, 2024: Mo Li
  • February 23, 2024: Shrikanth (Shri) Narayanan
  • February 22, 2024: Yu Huang
  • February 16, 2024: Michael D. Brown
  • February 9, 2024: Tarek Abdelzaher
  • January 25, 2024: Jia (Kevin) Liu
  • November 28, 2023: Teresa Huang
  • November 17, 2023: Dimitris Dimitriadis
  • October 27, 2023: Gil Zussman
  • October 23, 2023: Adam Smith
  • October 20, 2023: Weisong Shi
  • October 6, 2023: Kiran Somasundaram
  • September 22, 2023: Murali Annavaram
  • May 1, 2023: Alvaro Velasquez
  • April 28, 2023: Xia Zhou
  • April 20, 2023: Mosharaf Chowdhury
  • April 14, 2023: Ranveer Chandra
  • April 13, 2023: Venu Veeravalli
  • April 7, 2023: Tareq Si-Salem
  • March 24, 2023: Nicholas D. Lane
  • March 2, 2023: Xingyu Zhou
  • February 23, 2023: Arnob Ghosh
  • February 16, 2023: Ameet Talwalkar
  • February 8, 2023: Mani Srivastava
  • January 12, 2023: Jia (Kevin) Liu
  • December 2, 2022: Lili Qiu
  • December 2, 2022: Yin Sun
  • December 1, 2022: Zhiqiang Lin
  • November 15, 2022: Ganesh Ananthanarayanan
  • November 10, 2022: Lizhong Zheng
  • November 10, 2022: Robert D. Nowak
  • November 1, 2022: Kyle Jamieson
  • October 20, 2022: Dileep Kalathil
  • October 4, 2022: Shyam Gollakota
  • September 13, 2022: Fadel Adib
  • May 12, 2022: Gauri Joshi
  • April 13, 2022: Lorenzo Cavallaro
  • March 25, 2022: Kaushik Chowdhury
  • March 9, 2022: Shiqing Ma
  • March 3, 2022: Sanjay Shakkottai