Research

Overview

Networking and AI are two of the most transformative IT technologies: they improve people's lives and contribute to national economic competitiveness, national security, and national defense. The Institute will exploit the synergies between networking and AI to design the next generation of edge networks (6G and beyond) that are highly efficient, reliable, robust, and secure. A new distributed intelligence plane will be developed to ensure that these networks are self-healing, adaptive, and self-optimized. The future of AI is distributed, because AI will increasingly be implemented across a diverse set of edge devices. These intelligent and adaptive networks will in turn unleash the power of collaboration to solve long-standing distributed AI challenges, making AI more efficient, interactive, and privacy-preserving. The Institute will develop the key underlying technologies for distributed and networked intelligence to enable a host of transformative future applications such as intelligent transportation, remote healthcare, distributed robotics, and smart aerospace.

Thrust 1: Re-engineering the Physics/Constraints

In Thrust 1, our focus is to develop new AI tools and techniques that help us understand the physical environment better while still leveraging long-standing domain knowledge, in order to build significantly better networking solutions. We then explore how to engineer the physical environment itself to further improve performance. For example, a major effort will be to use physics-guided AI models that leverage domain knowledge and control the environment to expand the capacity of the system. This is a fundamental shift in how a networking problem is defined: traditional approaches treat the physical environment as a given, whereas we propose to treat it as a controllable entity.
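
As a concrete (and intentionally simplified) illustration of the physics-guided modeling idea, the sketch below fits a small neural predictor of received power while penalizing deviations from a log-distance path-loss prior. The model, the `PowerPredictor` name, the synthetic measurements, and the loss weights are all illustrative assumptions, not the Institute's actual designs.

```python
# A minimal sketch of a physics-guided learning objective, assuming a
# hypothetical received-power predictor regularized toward the log-distance
# path-loss model P_rx = P_tx - 10*n*log10(d). Names and data are illustrative.
import torch
import torch.nn as nn

class PowerPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, log_d):
        return self.net(log_d)

def physics_guided_loss(model, log_d, p_measured, p_tx=30.0, n_exp=2.0, lam=0.1):
    p_pred = model(log_d)
    data_loss = ((p_pred - p_measured) ** 2).mean()
    # Physics prior: predictions should stay close to the path-loss model.
    p_physics = p_tx - 10.0 * n_exp * log_d
    physics_loss = ((p_pred - p_physics) ** 2).mean()
    return data_loss + lam * physics_loss

# Toy usage with synthetic measurements.
torch.manual_seed(0)
log_d = torch.rand(64, 1) * 2 + 1                   # log10(distance)
p_meas = 30.0 - 20.0 * log_d + torch.randn(64, 1)   # noisy "measurements"
model = PowerPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = physics_guided_loss(model, log_d, p_meas)
    loss.backward()
    opt.step()
```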

Thrust 2: AI-Based Network Resource Allocation

Control and allocation of network resources lie at the core of every network. Leveraging recent breakthroughs in online learning and reinforcement learning, our goal is to develop efficient, fair, and safe data-driven, AI-empowered mechanisms that tackle non-convex and high-dimensional network control and allocation problems. The challenges are manifold and include non-stationary dynamics at multiple timescales (ranging from microseconds at the physical layer to months at the deployment level) and non-convex, combinatorial constraints on resource availability. Our focus will be on combining the strengths of classic adaptive network design with the strengths of AI solutions in a single-agent setting (i.e., agents do not explicitly talk to each other, although they may still interact through implicit network feedback that is affected by all agents).
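
A minimal sketch of the data-driven flavor of this thrust, under strong simplifying assumptions: a single agent uses the classic UCB1 online-learning rule to choose among a handful of hypothetical resource configurations based only on noisy throughput feedback. The configuration names and reward model are invented for illustration.

```python
# UCB1 over a few candidate channel/power configurations, adapting to
# observed (noisy) throughput feedback. Purely illustrative values.
import math
import random

CONFIGS = ["narrow/low-power", "narrow/high-power", "wide/low-power", "wide/high-power"]
TRUE_MEAN_THROUGHPUT = [0.45, 0.55, 0.60, 0.50]    # unknown to the agent

def observe_throughput(arm):
    # Stand-in for real network feedback (normalized throughput in [0, 1]).
    return min(1.0, max(0.0, random.gauss(TRUE_MEAN_THROUGHPUT[arm], 0.1)))

counts = [0] * len(CONFIGS)
totals = [0.0] * len(CONFIGS)
for t in range(1, 2001):
    if 0 in counts:
        arm = counts.index(0)                      # try each config once
    else:
        ucb = [totals[a] / counts[a] + math.sqrt(2 * math.log(t) / counts[a])
               for a in range(len(CONFIGS))]
        arm = max(range(len(CONFIGS)), key=lambda a: ucb[a])
    r = observe_throughput(arm)
    counts[arm] += 1
    totals[arm] += r

best = max(range(len(CONFIGS)), key=lambda a: totals[a] / counts[a])
print("Learned best configuration:", CONFIGS[best])
```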

Thrust 3: Multi-Agent Network Control

Sharing resources is central in networks that comprise a large and dynamic population of often competing, self-interested users. Compounding the issue, different parts of the global network may be subject to different local conditions, different organizational control and regulation, and potential adversaries with bounded resources. It is therefore crucial to develop AI-aided, non-cooperative, and distributed networks where resources are shared efficiently and fairly among self-interested users with dynamic demands, and with privacy protection. Thrust 3 goes beyond the challenges addressed in Thrust 2 to a multi-agent, distributed setting. Our work will model the global network, at an abstract level, as a multi-agent reinforcement learning (RL) system, and then leverage and further advance powerful distributed AI and ML techniques to address network resource allocation and control problems.
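
The sketch below illustrates the multi-agent, implicit-feedback setting in deliberately toy form: several independent learners adjust simple action-value estimates for sharing one channel, interacting only through collisions. The reward shaping and parameters are illustrative assumptions, not a proposed Institute algorithm.

```python
# Self-interested agents independently learn whether to transmit on a shared
# channel, coupled only through implicit feedback (success vs. collision).
import random

N_AGENTS, ACTIONS = 4, ("wait", "transmit")
q = [{a: 0.0 for a in ACTIONS} for _ in range(N_AGENTS)]
alpha, epsilon = 0.05, 0.1

for step in range(20000):
    acts = []
    for i in range(N_AGENTS):
        if random.random() < epsilon:
            acts.append(random.choice(ACTIONS))       # explore
        else:
            acts.append(max(ACTIONS, key=lambda a: q[i][a]))
    n_tx = acts.count("transmit")
    for i, a in enumerate(acts):
        if a == "wait":
            reward = 0.0
        elif n_tx == 1:
            reward = 1.0           # sole transmitter succeeds
        else:
            reward = -0.5          # collision penalty
        q[i][a] += alpha * (reward - q[i][a])

print([max(ACTIONS, key=lambda a: q[i][a]) for i in range(N_AGENTS)])
```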

Thrust 4: AI-Powered Network Security

Resourceful adversaries (e.g., nation-states, terrorists, cyber criminals) can wreak havoc in network ecosystems and carry out malicious activities by exploiting network vulnerabilities. New wireless networks, such as 5G, Wi-Fi 6, and the envisioned 6G, bring certain security advantages. However, securing next-generation networks such as 6G is a formidable challenge because of inherent security requirements, increased network complexity, huge numbers of connected devices, and low-latency requirements. Thrust 4 has three interrelated goals: (a) to design AI-enhanced techniques for comprehensive network security; (b) to design security-aware network controllers for the distributed and secure intelligent network plane; and (c) to design techniques to protect against data poisoning attacks, which are critical for the AI techniques developed in the other thrusts (e.g., Thrusts 1, 2, and 3).
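
As one hedged example related to goal (c), the sketch below uses coordinate-wise trimmed-mean aggregation, a standard robust-aggregation technique, to limit the influence of a few poisoned model updates. The numbers and the `trimmed_mean` helper are illustrative, not the thrust's specific defense.

```python
# Trimmed-mean aggregation of model updates: discard the extreme values per
# coordinate so a small number of malicious contributors cannot dominate.
import numpy as np

def trimmed_mean(updates, trim_ratio=0.2):
    """Aggregate client updates, dropping the largest/smallest values per coordinate."""
    updates = np.asarray(updates)                  # shape: (n_clients, n_params)
    k = int(trim_ratio * updates.shape[0])
    sorted_updates = np.sort(updates, axis=0)
    kept = sorted_updates[k:updates.shape[0] - k] if k > 0 else sorted_updates
    return kept.mean(axis=0)

honest = [np.random.normal(0.1, 0.05, size=10) for _ in range(8)]
poisoned = [np.full(10, 50.0) for _ in range(2)]   # adversarial, large updates
print(trimmed_mean(honest + poisoned))             # stays close to ~0.1 per coordinate
```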

Thrust 5: Network-Aware AI Operation

Distributed AI algorithms presently run on centrally partitioned datasets in high-performance, reliable data centers connected by fast communication links. In future edge networks, we envision that edge nodes will have limited memory and computation, stragglers (slow worker jobs) will arise when parallel or distributed AI jobs are executed, communication links will be slow and unreliable, and the data collected will be imbalanced, highly heterogeneous, and subject to privacy constraints. This thrust will provide scalable, network-aware distributed AI/ML algorithms that account for computation, communication, and data constraints at the edge. Our contributions will be both fundamental, providing novel algorithms with provable guarantees, and experimental, covering various use cases.
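
To make the straggler and data-heterogeneity constraints concrete, here is a minimal, simulated sketch of a federated-averaging round that drops updates missing a latency deadline and weights the rest by local data size. Client latencies, data sizes, and the `local_update` stand-in are hypothetical.

```python
# One federated-averaging round with a straggler deadline and data-size
# weighting. Client behavior is simulated and purely illustrative.
import random
import numpy as np

def local_update(global_model, data_size):
    # Stand-in for local SGD; returns a perturbed copy of the global model.
    return global_model + np.random.normal(0.0, 0.01, size=global_model.shape)

def fedavg_round(global_model, clients, deadline_s=5.0):
    updates, weights = [], []
    for data_size, latency_s in clients:
        if latency_s > deadline_s:        # straggler: skip this round
            continue
        updates.append(local_update(global_model, data_size))
        weights.append(data_size)
    if not updates:
        return global_model
    w = np.asarray(weights, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=w)

model = np.zeros(10)
clients = [(random.randint(100, 1000), random.uniform(0.5, 10.0)) for _ in range(20)]
for _ in range(50):
    model = fedavg_round(model, clients)
```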

Thrust 6: Network Operations for Distributed AI

Networks have traditionally been designed for communication, with the main functionality being to provide reliable “bit pipelines.” Yet future networks, as seen in our use-case scenarios, are evolving beyond this traditional functionality to support increasing AI capabilities. Accordingly, future networks need to be re-engineered to better serve the new needs of distributed AI. This motivates us to ask, in this thrust, how the network should be operated to perform AI efficiently, which includes intelligent sensing, data processing, and learning/inference. In particular, the central question of this thrust is how networks should adaptively allocate communication, computing, and storage resources to optimize information freshness, diversity, fidelity, and related measures for distributed and diverse AI applications. Our key objective is therefore to develop theories and algorithms for AI-aware networks that deliver the right information at the right time and place (the AI application interface) to support distributed AI in dynamic, heterogeneous, and non-stationary wireless edge networks.
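
As a simplified illustration of freshness-aware operation, the sketch below implements a max-weighted-age scheduling rule over an unreliable link: each slot, the source with the largest weighted Age of Information is served. The weights and delivery probability are illustrative assumptions.

```python
# Max-weighted-age scheduling: each slot, grant the shared channel to the
# source whose weighted Age of Information is largest. Illustrative values.
import random

N_SOURCES = 3
weights = [1.0, 2.0, 0.5]        # relative importance to the AI application
age = [0] * N_SOURCES
success_prob = 0.8               # unreliable wireless link

for t in range(10000):
    pick = max(range(N_SOURCES), key=lambda i: weights[i] * age[i])
    delivered = random.random() < success_prob
    for i in range(N_SOURCES):
        if i == pick and delivered:
            age[i] = 1           # fresh sample delivered this slot
        else:
            age[i] += 1

print("final ages:", age)
```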

Thrust 7: Humans, AI and Network Research at the Interface

A core function of the Internet is to connect people and the data they produce and observe. Increasingly, AI mediates these interactions in ways ranging from content/product-recommendation engines to semi-automatic AI systems that help operators manage networks. Current AI systems have two serious limitations. First, AI services are usually one-size-fits-all solutions that do not dynamically scale their capabilities or QoS in response to client/user network resources, raising additional concerns about fairness and equity. Second, AI systems are black boxes to human operators, and human operators are black boxes to AI systems; existing solutions typically retrain models only periodically, at slow timescales (e.g., days or weeks), in part because the network infrastructure does not support fast model updating and interaction. This thrust tackles these shortcomings at the interface between humans, AI, and networked systems.
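
One hedged sketch of the "dynamically scale capabilities" remedy: choose the most accurate model variant that can be delivered within a client's bandwidth and deadline budget. The variant names, sizes, accuracies, and thresholds below are hypothetical.

```python
# Pick a model variant whose delivery cost fits the measured client budget,
# rather than serving one size to everyone. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelVariant:
    name: str
    download_mb: float
    accuracy: float

VARIANTS = [
    ModelVariant("tiny", 5, 0.81),
    ModelVariant("base", 50, 0.88),
    ModelVariant("large", 400, 0.92),
]

def pick_variant(bandwidth_mbps: float, deadline_s: float) -> ModelVariant:
    """Return the most accurate variant that can be delivered within the deadline."""
    feasible = [v for v in VARIANTS
                if (v.download_mb * 8) / max(bandwidth_mbps, 1e-6) <= deadline_s]
    return max(feasible, key=lambda v: v.accuracy) if feasible else VARIANTS[0]

print(pick_variant(bandwidth_mbps=20.0, deadline_s=30.0).name)   # "base" on this link
```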

Thrust 8: Security and Privacy of Network Users

In distributed learning over edge networks, information about the models being learned flows across the network. This opens the door to a host of attacks that aim to infer users' private and sensitive data from the observed (intermediate) models. In this thrust, we focus on privacy threats that arise from such scenarios. The last decade has witnessed the rise of a rich theory for dealing with privacy threats, centered around a sound and rigorous definition of privacy known as differential privacy (DP). A powerful algorithmic framework for DP has been developed over the years and has led to numerous practical algorithms with strong, provable privacy guarantees. However, several major challenges face the design and implementation of differentially private AI algorithms in edge networks. Some of these challenges require developing new concepts, some require new algorithmic and networking techniques, and others require building secure and trusted execution environments (TEEs) to perform differentially private computations.
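
As a small illustration of the DP machinery this thrust builds on, the sketch below applies the standard clip-and-add-Gaussian-noise recipe (the Gaussian mechanism) to a local model update before it leaves an edge device. The clip norm and noise multiplier are illustrative, not a calibrated privacy budget.

```python
# Gaussian mechanism applied to a clipped local update: bound the update's
# L2 norm, then add noise scaled to that bound. Parameters are illustrative.
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the update to a bounded L2 norm, then add Gaussian noise scaled to that bound."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

raw_update = np.random.normal(0.0, 0.5, size=100)     # stand-in local gradient
private_update = privatize_update(raw_update)
```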

Synergy and Virtuous Cycles

The figure below illustrates the synergistic relationships among the “AI for Networks” and “Networks for AI” research thrusts, identifying specific tasks (red dotted-decimal numbers) whose challenges are inter-dependent. These inter-dependencies will bring together researchers from the different tasks, with progress in one thrust enabling or informing the others. It is precisely this web of inter-dependent research relationships and activities that makes our Institute so much more than the sum of its parts.

As can be seen in the figure, a “virtuous cycle” (explicitly called out in NSF 20-503) will be a core component of our efforts, ensuring a tight coupling between foundational and use-inspired research. New algorithms, approaches, and system designs/functions emerging from the eight foundational research thrusts will be instantiated in three use-case testbeds; the experience and insights gained in these testbeds will inform further innovations in foundational research. An important second “virtuous cycle” exists between “AI for Networks” and “Networks for AI”: foundational advances in AI, motivated and designed for, as well as constrained by, real-world system needs, will result in an enhanced underlying network and computational infrastructure for AI algorithms.

In the leftmost virtuous cycle, use-inspired research informs the set of challenges (and constraints) undertaken by fundamental research. For example, AI-driven network control algorithms must operate in a network setting where organizational boundaries (e.g., backbone service providers, edge wireless networks, various sensor owners/operators, and fleet owners/operators) are important: data and state information may only be minimally shared across such boundaries, or shared only when there is economic gain (e.g., as in BGP policy-driven routing). This suggests that a new architecture, beyond the traditional Internet IP “hourglass,” will be needed for AI-driven networks. Driven by our use cases, the eight research thrusts will help define and evolve this architecture and will operate in its context. They will identify new “thin waist(s)” where higher-level notions of data and information, and lower-level notions of link abstraction, rather than ubiquitous end-to-end network connectivity alone, are likely to be critical. Using open-source software implementations, we will maintain a shareable reference architecture and design incorporating these waists, and will support experiments with the use cases as well as collaborative knowledge transfer to our partners.