An update on some recent papers on machine-learning applications in the sciences and on algorithm development.
- OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle: In this paper, we propose a method for learning reduced-order representations of dynamical systems from observed trajectory data. The key idea is to build the parameterization of the reduced dynamics on a generalized Onsager principle, which originates from one of the first systematic studies of non-equilibrium physics, and then learn the unknown parameters from data. We demonstrate that this is an effective approach for learning stable dynamics of dissipative systems, e.g. the Rayleigh–Bénard convection problem, which formed the basis of Lorenz's investigation into chaotic dynamics (a minimal sketch of the parameterization is given after this list). This work is published in Physical Review Fluids.
- An invertible crystallographic representation for general inverse design of inorganic crystals with targeted properties: In this paper, we study invertible representations of crystals for machine-learning-based materials property prediction and for the inverse design of inorganic crystals with targeted properties (a toy illustration of invertibility is given after this list). This paper is published in Matter.
- Distributed optimization for degenerate loss functions arising from over-parameterization: In this paper, we study how distributed optimization changes in the degenerate setting where the empirical minimizers of the loss form a connected manifold, as is often the case in deep learning with highly over-parameterized models. We focus on the linear case, where we show that such structure in the loss can alter how distributed optimization performs as a function of the number of local updates (a minimal sketch is given after this list). This gives rise to efficient distributed optimization algorithms that balance the costs of local updates against those of global communication. This paper is published in Artificial Intelligence.
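On the OnsagerNet paper: below is a minimal sketch of an Onsager-principle-style parameterization, assuming a PyTorch implementation. It models dh/dt = -(M(h) + W(h)) ∇V(h), where M(h) is symmetric positive semidefinite (the dissipative part) and W(h) is antisymmetric (the conservative part). The class name, network sizes, and the omission of forcing terms are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class OnsagerDynamics(nn.Module):
    """Sketch of dh/dt = -(M(h) + W(h)) grad V(h), with M symmetric PSD
    and W antisymmetric. Architecture details are illustrative."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.dim = dim
        # Scalar potential V(h); its gradient drives the dynamics.
        self.potential = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        # Raw matrix fields, split into L (for M = L L^T) and A (for W = A - A^T).
        self.matrices = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * dim * dim))

    def forward(self, h):
        # Detach so the sketch accepts plain input batches; grad V via autograd.
        h = h.detach().requires_grad_(True)
        V = self.potential(h).sum()
        gradV = torch.autograd.grad(V, h, create_graph=True)[0]
        raw = self.matrices(h).view(-1, 2, self.dim, self.dim)
        L, A = raw[:, 0], raw[:, 1]
        M = L @ L.transpose(-1, -2)    # symmetric PSD: dissipative part
        W = A - A.transpose(-1, -2)    # antisymmetric: conservative part
        # Without forcing, dV/dt = -gradV^T M gradV <= 0 along trajectories,
        # so the learned dynamics dissipate V and are stable by construction.
        return -torch.einsum('bij,bj->bi', M + W, gradV)

model = OnsagerDynamics(dim=3)
h = torch.randn(16, 3)
print(model(h).shape)  # torch.Size([16, 3])
```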
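On the crystal-representation paper: the snippet below is only a toy illustration of what it means for a representation to be invertible, namely a fixed-length vector encoding of a structure (lattice parameters plus a padded site list) that decodes back to the original exactly. The field layout, the MAX_SITES cap, and the function names are hypothetical; this is not the representation proposed in the paper.

```python
import numpy as np

MAX_SITES = 8  # illustrative cap on sites per unit cell

def encode(lattice, sites):
    """lattice: 6 params (a, b, c, alpha, beta, gamma);
    sites: list of (atomic_number, frac_x, frac_y, frac_z)."""
    vec = np.zeros(6 + 4 * MAX_SITES)
    vec[:6] = lattice
    for i, (z, x, y, w) in enumerate(sites):
        vec[6 + 4 * i: 6 + 4 * (i + 1)] = (z, x, y, w)
    return vec

def decode(vec):
    lattice = vec[:6]
    sites = []
    for i in range(MAX_SITES):
        z, x, y, w = vec[6 + 4 * i: 6 + 4 * (i + 1)]
        if z > 0:  # atomic number 0 marks padding
            sites.append((int(z), x, y, w))
    return lattice, sites

# Round trip: decode(encode(...)) recovers the structure, so a generative
# model operating on such vectors can propose new, decodable crystals.
```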
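On the distributed-optimization paper: below is a minimal NumPy sketch of a local-update scheme (K local gradient steps per worker followed by parameter averaging) on an over-parameterized linear least-squares problem, where the dimension exceeds the total sample count and the minimizers form an affine subspace. The shard sizes, learning rate, and function names are illustrative assumptions, not the paper's algorithm or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_per, workers = 50, 5, 4  # dimension >> total samples: degenerate minima
w_true = rng.standard_normal(d)
shards = []
for _ in range(workers):
    X = rng.standard_normal((n_per, d))
    shards.append((X, X @ w_true))  # consistent targets across workers

def local_update_descent(K, rounds=200, lr=0.01):
    """K local gradient steps per worker between global averaging rounds."""
    w = np.zeros(d)
    for _ in range(rounds):
        local_ws = []
        for X, y in shards:
            wk = w.copy()
            for _ in range(K):
                wk -= lr * X.T @ (X @ wk - y) / n_per  # local least-squares step
            local_ws.append(wk)
        w = np.mean(local_ws, axis=0)  # one communication: average parameters
    return w

for K in (1, 5, 25):
    w = local_update_descent(K)
    loss = np.mean([np.mean((X @ w - y) ** 2) for X, y in shards])
    print(f"K={K:2d}  average train loss = {loss:.2e}")
```

Varying K here trades local computation against communication rounds, which is the balance between local updates and global communication discussed in the summary above.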