Machine learning by open dynamical systems, for open dynamical systems


Recurrent neural networks (RNNs) are a class of powerful open-system models in machine learning, specifically designed to learn from sequential data. They have also enjoyed synergies with disciplines such as signal processing, optimization, systems and control theory, neuroscience, and network science. The explosion of real-time data (physical or not) and the promising potential of using dynamical systems (physically realizable or not) to compute on and learn from such data are opening up a wide range of foundational and practical problems. However, the dearth of rigorous analysis limits the usefulness of RNNs in addressing scientific questions. A deep understanding of the working mechanisms of RNNs and related models is therefore pivotal for shedding light on the properties of large, adaptive architectures and for enabling systematic design of the next generation of networks.
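The "open dynamical system" view of an RNN can be made concrete with a minimal sketch: the hidden state evolves under a recurrent map while being continually driven by an external input, which is what makes the system open. The weights, dimensions, and update rule below (a plain tanh recurrence) are illustrative assumptions, not a specific architecture from our work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: hidden state in R^4, inputs in R^3
d_h, d_x = 4, 3
W = rng.normal(scale=0.5, size=(d_h, d_h))  # recurrent (internal) weights
U = rng.normal(scale=0.5, size=(d_h, d_x))  # input (driving) weights
b = np.zeros(d_h)

def step(h, x):
    """One step of the driven dynamics: h' = tanh(W h + U x + b).

    The input x is the external drive; with x = 0 the map reduces
    to an autonomous (closed) dynamical system on the hidden state.
    """
    return np.tanh(W @ h + U @ x + b)

# Drive the system with a sequence of external inputs
h = np.zeros(d_h)
for x in rng.normal(size=(10, d_x)):
    h = step(h, x)

print(h.shape)  # final hidden state, shape (4,)
```

Training an RNN amounts to fitting `W`, `U`, and `b` (plus a readout) so that the driven trajectory of `h` tracks the structure of the input sequence.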

On one hand, we explore various mathematical and statistical aspects of RNNs and related models within machine learning theory, including implicit bias/regularization and theories inspired by neuroscience and statistical mechanics. On the other hand, we seek principled network-based solutions to problems concerning complex dynamical systems arising in science and engineering. Theory and applications go hand in hand for us. For theoretical analysis, we draw on tools and techniques from stochastic analysis and statistical learning, among other fields. For applications, we are primarily guided by ideas and insights from statistical mechanics and nonlinear science. Check out the relevant research outputs here.