Enabling global interpolation, derivative estimation and model identification from sparse multi-experiment time series data via neural ODEs

Engineering Applications of Artificial Intelligence 2024

Estimating the rate of change of a system's states from state measurements is a key step in many system analysis and model-building workflows. While numerous interpolating models exist for inferring derivatives of time series disturbed by noise or stochasticity, general-purpose methods for estimating derivatives from sparse time series datasets are largely lacking. A notable weakness of current, largely local methods is their inability to globally fit data arising from non-identical initial conditions (i.e., multiple experiments or trajectories). In this contribution, neural ODEs (NODEs) are demonstrated to close this gap. Through a series of benchmarks, we show that, owing to their differential formulation, NODE data smoothers can infer system dynamics from sparse data even when accurate interpolation by algebraic methods is unlikely or fundamentally impossible. Through case studies in derivative estimation and model identification, we discuss the advantages and limitations of the proposed workflow and identify cases where NODEs yield statistically significant improvements. In summary, the proposed method is advantageous when inferring derivatives from sparse data stratified across multiple experiments and serves as a foundation for further model development and analysis methods (e.g., parameter estimation, model identification, sensitivity analysis).
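The core idea of the abstract can be illustrated with a minimal toy sketch (not the paper's code): because the model is formulated as an ODE, a single right-hand side can be fitted jointly to sparse observations from several experiments with different initial conditions, and derivative estimates then come directly from the fitted right-hand side. Here a one-parameter linear rule `du/dt = a*u` stands in for a neural network, forward-Euler integration stands in for an ODE solver, and a crude grid search stands in for gradient-based training; all names and values below are illustrative assumptions.

```python
import math

# Two "experiments": sparse, noise-free samples of u(t) = u0 * exp(A_TRUE * t)
# with non-identical initial conditions (the setting the abstract highlights).
A_TRUE = -1.3
T_OBS = [0.0, 0.5, 1.0, 1.5]   # sparse observation times
U0S = [1.0, 2.5]               # initial conditions of the two experiments
data = [[u0 * math.exp(A_TRUE * t) for t in T_OBS] for u0 in U0S]

def integrate(a, u0, t_end, dt=0.005):
    """Forward-Euler solve of du/dt = a*u from t=0 to t_end."""
    u, t = u0, 0.0
    while t < t_end - 1e-12:
        u += dt * a * u
        t += dt
    return u

def loss(a):
    """Squared error summed over all experiments and observation times."""
    err = 0.0
    for u0, obs in zip(U0S, data):
        for t, u_obs in zip(T_OBS, obs):
            err += (integrate(a, u0, t) - u_obs) ** 2
    return err

# Grid search over the rate parameter stands in for training the NODE.
a_best = min((a / 100 for a in range(-300, 1)), key=loss)

# The fitted right-hand side gives a global derivative estimate at any state u.
print(f"fitted a = {a_best:.2f}; estimated du/dt at u = 2.0 is {a_best * 2.0:.2f}")
```

Note that both trajectories are fitted by one shared parameter, so information from all experiments is pooled; a purely local interpolant (e.g., a spline per trajectory) could not share structure across the two initial conditions this way.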
