Recent Papers

Here are some recently published papers.

  • On stability and regularisation for data-driven solution of parabolic inverse source problems: In this paper, we investigate the stability properties of using neural networks for inverse problems involving parabolic equations with source terms, together with appropriate regularisation strategies, such as gradient penalisation, that improve the solution of these inverse problems (a schematic of such a penalised objective is sketched after this list). This work is published in the Journal of Computational Physics.
  • Computing high-dimensional invariant distributions from noisy data: In this paper, we further develop methods to compute invariant distributions of SDEs from noisy sampled data. The key idea is to use a quasi-potential-type decomposition of the force field, with neural networks serving as surrogates for the components of the decomposition (see the sketch after this list). This allows us to compute invariant distributions of high-dimensional systems, especially when the noise is small (so that the distribution becomes highly singular). This work is published in the Journal of Computational Physics.
  • Accelerating the Discovery of Rare Materials with Bounded Optimization Techniques: In this paper, we propose a modified Bayesian optimisation method that is shown to be effective for a class of optimisation problems arising in inverse design for materials discovery (a generic Bayesian optimisation loop is sketched after this list for background). This work was presented as a spotlight at the NeurIPS 2022 Workshop on AI for Accelerated Materials Design.
  • Dynamic Modeling of Intrinsic Self-Healing Polymers Using Deep Learning: In this paper, we develop a methodology to learn interpretable dynamics from experimental data on self-healing polymers. We show that even with a small dataset and in the presence of noise, an appropriate physics-based model ansatz combined with machine learning enables prediction of the healing dynamics and the study of toughness evolution as a function of time (an illustrative fitting sketch follows this list). We envision this as a first step towards a principled combination of self-healing polymer analysis and design using deep learning. This work is published in ACS Applied Materials & Interfaces.
  • From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality: In this work, we investigate the connection between optimisation and generalisation in machine learning. The main idea is that the length of an optimisation path can be related to generalisation estimates; moreover, a Łojasiewicz-type gradient inequality can be used to control the length of the optimisation trajectory, thereby yielding generalisation bounds (a one-line derivation is given after this list). We show that this approach establishes a number of generalisation estimates, both known and new, for machine learning models. This work is published in Transactions on Machine Learning Research (TMLR).
  • Adaptive sampling methods for learning dynamical systems: In this work, we develop numerical methods for sampling the trajectories used to train dynamical models with machine learning. Unlike static supervised learning problems, there is a distribution shift caused by the disparity between the sampling and target probability measures. The sampling method proposed in this work exploits this disparity to design efficient sampling strategies (one standard reweighting device is sketched after this list). This work is published in the proceedings of Mathematical and Scientific Machine Learning (MSML).
  • Personalized Algorithm Generation: A Case Study in Learning ODE Integrators: In this work, we develop a machine learning method for accelerating Runge-Kutta (RK)-type integrators using ideas from multi-task learning. In particular, we show that when one repeatedly solves problems of a similar type, it is advantageous to adapt the integrator structure (here, the RK coefficients) to obtain speedups (see the sketch at the end of this list). This work is published in the SIAM Journal on Scientific Computing.
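
For the inverse source paper, here is a minimal schematic of the kind of gradient-penalised objective discussed above; the notation (network source f_θ, PDE solution u_θ, noisy data u^δ, weight λ) is ours and not the paper's exact formulation:

```latex
% Schematic gradient-penalised objective (illustrative notation):
% u_\theta solves the parabolic equation driven by the network source f_\theta,
% u^\delta is the noisy observation, and \lambda > 0 weights the penalty.
\min_{\theta} \;
  \big\| u_\theta - u^{\delta} \big\|_{L^2}^{2}
  \;+\; \lambda \, \big\| \nabla f_\theta \big\|_{L^2}^{2}
```

The penalty term is what "gradient penalisation" refers to: it damps high-frequency components of the recovered source, which is typically where the instability of the unregularised problem lives.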
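For the invariant-distribution paper, a common form of quasi-potential-type decomposition, stated here under illustrative assumptions for dynamics dX_t = f(X_t) dt + √ε dW_t, is:

```latex
% Split the force field into a gradient part and a transverse part:
%   f = -\nabla V + g, \qquad g \cdot \nabla V = 0, \qquad \nabla \cdot g = 0,
% which is sufficient for the invariant density to take the Gibbs-like form
\rho_{\varepsilon}(x) \;\propto\; \exp\!\big( -2\,V(x) / \varepsilon \big).
```

Representing V and g by neural-network surrogates gives direct access to the exponent V, which stays well behaved even as ρ_ε itself becomes increasingly singular for small ε.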
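For the materials-discovery paper, the following is a minimal generic Bayesian optimisation loop with a Gaussian-process surrogate and expected improvement; it shows the baseline being modified, not the paper's bounded variant, and all names and settings here are ours:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(Xc, gp, y_best, xi=0.01):
    """Expected-improvement acquisition (minimisation convention)."""
    mu, sd = gp.predict(Xc, return_std=True)
    sd = np.maximum(sd, 1e-9)
    z = (y_best - mu - xi) / sd
    return (y_best - mu - xi) * norm.cdf(z) + sd * norm.pdf(z)

def bayes_opt(f, bounds, n_init=5, n_iter=20, seed=0):
    """Minimise f over the box `bounds` (shape (d, 2)) by generic BO."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_init, d))
    y = np.array([f(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        # Optimise the acquisition crudely over random candidates.
        cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(512, d))
        x_next = cand[np.argmax(expected_improvement(cand, gp, y.min()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()

# Example: minimise a quadratic over [-2, 2]^3.
x_star, y_star = bayes_opt(lambda x: float(np.sum(x ** 2)),
                           np.array([[-2.0, 2.0]] * 3))
```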
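For the self-healing polymer paper, the spirit of combining a model ansatz with small, noisy data can be illustrated with an ordinary curve fit; the exponential-recovery ansatz below is purely hypothetical and far simpler than the physics-based model in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical first-order recovery ansatz (illustration only):
#   toughness(t) = T_inf * (1 - exp(-t / tau))
def healing_ansatz(t, T_inf, tau):
    return T_inf * (1.0 - np.exp(-t / tau))

# Small synthetic dataset with noise, mimicking the data regime described.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 48.0, 20)
y = healing_ansatz(t, 3.0, 12.0) + 0.1 * rng.standard_normal(t.size)

(T_inf, tau), _ = curve_fit(healing_ansatz, t, y, p0=(1.0, 1.0))
print(f"fitted T_inf = {T_inf:.2f}, tau = {tau:.2f}")
```

The fitted parameters remain physically interpretable (a plateau toughness and a healing time scale), which is the kind of interpretability the work aims for.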
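For the generalisation-bounds paper, the mechanism can be sketched in continuous time (the paper's statements for actual training dynamics are more general). If the loss f satisfies a Łojasiewicz gradient inequality near a minimiser, the length of the gradient-flow trajectory is finite and explicitly controlled:

```latex
% Łojasiewicz gradient inequality near a minimiser, with f^* the minimal value:
%   |\nabla f(x)| \;\ge\; c\,\big(f(x) - f^*\big)^{\theta}, \qquad \theta \in [1/2, 1).
% Along the gradient flow \dot{x} = -\nabla f(x),
\frac{d}{dt}\big(f(x) - f^*\big)^{1-\theta}
  = -(1-\theta)\,\big(f(x) - f^*\big)^{-\theta}\,|\nabla f(x)|^{2}
  \;\le\; -(1-\theta)\,c\,|\dot{x}|,
% so integrating in time bounds the trajectory length:
\int_{0}^{\infty} |\dot{x}(t)|\,dt
  \;\le\; \frac{\big(f(x_0) - f^*\big)^{1-\theta}}{(1-\theta)\,c}.
```

A bound on the trajectory length then feeds into stability-type arguments relating the trained parameters to generalisation estimates.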
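For the adaptive sampling paper, one standard way to express the disparity between a sampling measure μ and a target measure ν (not necessarily the paper's exact construction) is via a change of measure in the risk:

```latex
% If data are drawn from \mu but the error is measured under \nu,
% the target risk can be rewritten over the sampling measure:
\mathbb{E}_{x \sim \nu}\big[\ell(x)\big]
  \;=\; \mathbb{E}_{x \sim \mu}\big[w(x)\,\ell(x)\big],
  \qquad w \;=\; \frac{d\nu}{d\mu}.
```

Regions where w is large are under-sampled relative to the target, and this is precisely the kind of disparity that an adaptive sampling strategy can exploit.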
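Finally, for the learned-integrator paper, the object being adapted is the Butcher tableau (A, b, c) of an explicit RK method. The sketch below is our own code, not the paper's implementation; the paper's contribution lies in how these coefficients are trained across a family of similar problems:

```python
import numpy as np

def rk_step(f, t, y, h, A, b, c):
    """One explicit Runge-Kutta step y(t) -> y(t + h) for y' = f(t, y).

    (A, b, c) is the Butcher tableau: A is (s, s) strictly lower triangular,
    b and c have length s. In the learned-integrator setting, these entries
    are the trainable parameters.
    """
    s = len(b)
    k = []
    for i in range(s):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(b[i] * k[i] for i in range(s))

# Classical RK4 tableau as a starting point; a learned integrator would
# adjust these entries by training on a family of similar ODEs.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1/6, 1/3, 1/3, 1/6])
c = np.array([0.0, 0.5, 0.5, 1.0])

y = np.array([1.0])
for n in range(10):                 # integrate y' = -y from t = 0 to t = 1
    y = rk_step(lambda t, y: -y, 0.1 * n, y, 0.1, A, b, c)
print(y, np.exp(-1.0))              # close to exp(-1) for classical RK4
```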