Machine Learning & Reinforcement Learning Algorithms for Adversarial Agents in a Collaborative Network
Distributed Nash equilibrium problems arise in many multi-agent engineering systems, including autonomous vehicle networks and smart power grids. Equilibrium-seeking algorithms can achieve network-wide stability even though each agent makes only local, self-interested decisions. In practical networks, agents do not have perfect information and must rely on noisy data relayed by other agents. When information is sufficiently sparse, the network becomes vulnerable to adversarial attacks in which a single agent undermines the integrity of the system for its own gain.
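To make the equilibrium-seeking idea concrete, the sketch below runs a simple distributed pseudo-gradient play on a ring network, where each agent updates its action using only its own cost gradient and its neighbors' actions. The quadratic game, the coupling constant `c`, and the ring topology are illustrative assumptions for this sketch, not the project's actual model.

```python
# Illustrative sketch: distributed gradient play on a quadratic game.
# Assumed model (not from the project): agent i minimizes
#   J_i(x) = (x_i - a_i)^2 + c * x_i * sum_{j in N_i} x_j,
# so its partial gradient needs only its neighbors' actions.

def ring_neighbors(n):
    """Adjacency lists for a ring (cycle) communication graph."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def gradient_play(a, c=0.5, eta=0.1, iters=500):
    """Simultaneous local gradient steps; returns final actions and gradients."""
    n = len(a)
    nbrs = ring_neighbors(n)
    x = [0.0] * n  # initial actions
    for _ in range(iters):
        # Each agent computes its local gradient from neighbor actions only.
        grads = [2 * (x[i] - a[i]) + c * sum(x[j] for j in nbrs[i])
                 for i in range(n)]
        # Simultaneous local updates (no central coordinator).
        x = [x[i] - eta * grads[i] for i in range(n)]
    return x, grads

actions, grads = gradient_play(a=[1.0, 2.0, 3.0, 4.0])
print(max(abs(g) for g in grads))  # near zero at an equilibrium
```

For small enough coupling `c` and step size `eta`, the update map is a contraction, so every agent's local gradient vanishes and the joint action settles at the game's unique Nash equilibrium.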
In this project, we will develop new algorithms that seek convergence to a Nash equilibrium in networks with partial information and potential adversaries. The first step is to abstract an arbitrary environment into a system of two graphs representing communication channels and observation pathways. Our algorithms will build on an original research algorithm and on two recent algorithms developed by a previous design project team. The project's goals and specifications will be defined in terms of convergence speed, robustness to uncertainty, and scalability to large networks. We aim to leverage statistics and machine learning to accelerate convergence and thereby improve upon the existing algorithms.
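The two-graph abstraction can be prototyped along the following lines; the class name, edge sets, and topology are hypothetical placeholders for whatever environment the project targets, and serve only to show that an agent's communication neighbors and observation neighbors need not coincide.

```python
# Hypothetical sketch of the two-graph abstraction: one graph for who can
# exchange messages (communication) and one for who can directly observe
# whom (observation). All names and edges here are illustrative assumptions.

class TwoGraphNetwork:
    def __init__(self, agents, comm_edges, obs_edges):
        self.agents = set(agents)
        # Undirected communication links and directed observation links.
        self.comm = {a: set() for a in agents}
        self.obs = {a: set() for a in agents}
        for u, v in comm_edges:
            self.comm[u].add(v)
            self.comm[v].add(u)
        for watcher, watched in obs_edges:
            self.obs[watcher].add(watched)

    def comm_neighbors(self, agent):
        """Agents this agent can exchange messages with."""
        return self.comm[agent]

    def observed(self, agent):
        """Agents whose actions this agent can directly observe."""
        return self.obs[agent]

    def hidden_from(self, agent):
        """Agents whose actions must be estimated from relayed information."""
        return self.agents - self.obs[agent] - {agent}

# Tiny example: agent 0 talks to agent 1 but directly observes only agent 2.
net = TwoGraphNetwork(
    agents=[0, 1, 2],
    comm_edges=[(0, 1), (1, 2)],
    obs_edges=[(0, 2), (1, 0), (2, 1)],
)
print(net.hidden_from(0))  # agents whose actions 0 must infer via the network
```

Separating the two graphs makes the partial-information setting explicit: `hidden_from` identifies exactly which actions an agent must reconstruct from (possibly noisy, possibly adversarial) relayed data, which is where the project's robustness concerns enter.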