Resource-Efficient Model for Deep Kernel Learning

keywords: Parallel machine learning, parallel and distributed deep learning, GPU parallelism, domain decomposition, problem and model reduction
As described by the Hughes phenomenon, the major challenges encountered in computations with learning models stem from the scale of complexity, e.g. the so-called curse of dimensionality. Approaches to accelerating learning computations range from the model level to the implementation level. The model-level type is rarely used in its basic form, perhaps because of the mathematical insight its theoretical treatment requires. We describe a model-level decomposition approach that combines decomposition of both the objective function and the data. We analyze the feasibility of the resulting algorithm in terms of both accuracy and scalability.
mathematics subject classification 2020: 68T07, 65K05
reference: Vol. 44, 2025, No. 1, pp. 1–25