Machine-Learning Regression for EIS Parameters
Introduction
Parameter fitting is a common technique in the analysis of EIS spectra, and it is therefore a useful tool when examining the output of machine-learning systems that operate on these spectra, whether in a loss function, for network evaluation, or for filtering at inference time.
Current implementations of parameter fitting, built around global optimization algorithms, are more than adequate for the use cases researchers encounter when analyzing a handful to a few hundred spectra. Using fitting inside a loss function, however, raises the problem that the optimization is computationally expensive, which greatly increases the time required to train machine-learning systems equipped with such loss functions. Additionally, while implementing the traditional workflow on GPUs is by no means impossible, current solutions are CPU based, forcing expensive device-to-host and host-to-device copies.
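For illustration, a minimal sketch of the kind of CPU-bound workflow described above, fitting equivalent-circuit parameters to a spectrum with a global optimizer. The Randles circuit, parameter bounds, and synthetic spectrum are placeholders, not the actual model or data used here.

```python
# Sketch of the classical CPU-based workflow: global optimization of
# equivalent-circuit parameters against a measured spectrum.
import numpy as np
from scipy.optimize import differential_evolution

def randles_impedance(params, omega):
    """Series resistance R_s plus R_ct in parallel with a double-layer capacitance."""
    r_s, r_ct, c_dl = params
    return r_s + r_ct / (1.0 + 1j * omega * r_ct * c_dl)

# Synthetic "measured" spectrum standing in for real data.
omega = np.logspace(-1, 5, 60)
true_params = (10.0, 100.0, 1e-5)
z_meas = randles_impedance(true_params, omega)
z_meas = z_meas + np.random.normal(scale=0.5, size=omega.size) * (1 + 1j)

def loss(params):
    z_fit = randles_impedance(params, omega)
    return np.sum(np.abs(z_fit - z_meas) ** 2)

# The global optimization step that dominates the runtime of this workflow.
result = differential_evolution(loss, bounds=[(1, 1e3), (1, 1e4), (1e-7, 1e-3)])
print(result.x)
```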
Implementation
Machine-learning regression via a neural network and a Gaussian process regression offers an, in principle, straightforward solution to the above problem: the optimization is replaced by an approximation that is fast at inference time, the extensive work on AI-specific GPU kernels can be leveraged to accelerate execution, and all data can remain resident in VRAM.
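A minimal sketch of the neural-network half of this idea, assuming a PyTorch MLP that maps a spectrum directly to circuit parameters; the layer sizes, the number of frequencies (60), and the number of parameters (3) are placeholders, and the Gaussian process branch is omitted.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

class ParameterRegressor(nn.Module):
    def __init__(self, n_freq=60, n_params=3):
        super().__init__()
        # Real and imaginary parts of the spectrum are concatenated as input.
        self.net = nn.Sequential(
            nn.Linear(2 * n_freq, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_params),
        )

    def forward(self, spectra):
        return self.net(spectra)

model = ParameterRegressor().to(device)

# Spectra live on the GPU, so using the predicted parameters inside a larger
# training loop requires no host<->device copies.
spectra = torch.randn(32, 120, device=device)  # batch of 32 spectra
params_hat = model(spectra)                    # fast single forward pass
```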
Although the current networks are of sufficient quality to greatly reduce the time spent in classical optimization, we still require some gradient descent steps to converge on the final parameters. The current architecture of this system is shown in the figure below:
Through advances in architecture, we hope to eliminate this final step completely.
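As an illustration of what that final step currently involves, the following sketch refines a network prediction with a short gradient descent run on the circuit-model residual. The Randles model, the starting values, the log-parameterization, the optimizer, and the step count are all illustrative assumptions, not the exact procedure used here.

```python
import torch

def randles_impedance(params, omega):
    # params: (3,) positive tensor [R_s, R_ct, C_dl]; omega: (n,) angular frequencies.
    r_s, r_ct, c_dl = params[0], params[1], params[2]
    jw = torch.complex(torch.zeros_like(omega), omega)
    return r_s + r_ct / (1.0 + jw * r_ct * c_dl)

omega = torch.logspace(-1, 5, 60)
z_meas = randles_impedance(torch.tensor([10.0, 100.0, 1e-5]), omega)

# Stand-in for the regressor's output: close to, but not exactly, the answer.
# Optimizing log-parameters keeps them positive across orders of magnitude.
log_params = torch.log(torch.tensor([12.0, 90.0, 2e-5])).requires_grad_(True)

opt = torch.optim.Adam([log_params], lr=5e-2)
for _ in range(100):  # a short refinement, not a full fit from scratch
    opt.zero_grad()
    z_fit = randles_impedance(torch.exp(log_params), omega)
    loss = (z_fit - z_meas).abs().pow(2).sum()
    loss.backward()
    opt.step()

print(torch.exp(log_params).detach())
```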