Infusing invariances in neural representations

Abstract

It has been observed that the inner representations learned by different neural networks conceal structural similarities when the networks are trained under similar inductive biases. Exploring the geometric structure of the latent spaces within these networks offers insights into the underlying similarity among different neural models and facilitates reasoning about the transformations that connect them. Identifying and estimating these transformations is challenging, but it holds significant potential for various downstream tasks, including merging and stitching different neural architectures for model reuse. In this study, drawing on the geometric structure of latent spaces, we show how to define representations that incorporate invariances to targeted classes of transformations within a single framework. We experimentally analyze how inducing different invariances in the representations affects downstream performance on classification and reconstruction tasks, suggesting that the classes of transformations that relate independent latent spaces depend on the task at hand. We analyze models in a variety of settings, including different initializations, architectural changes, and training on multiple modalities (e.g., text, images), testing our framework on eight different benchmarks.
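The abstract does not spell out the construction, but the general idea of baking an invariance into a latent representation can be illustrated with a minimal sketch. Each map below takes a batch of latent vectors and produces features that are unchanged under a specific class of transformations (translation, isotropic rescaling, or orthogonal maps). The anchor-based cosine projection, the function names, and the shapes are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def translation_invariant(z: np.ndarray) -> np.ndarray:
    """Center the batch: invariant to a shared translation z -> z + b."""
    return z - z.mean(axis=0, keepdims=True)

def scale_invariant(z: np.ndarray) -> np.ndarray:
    """L2-normalize each latent vector: invariant to isotropic rescaling z -> a * z."""
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-8)

def orthogonal_invariant(z: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """Cosine similarities to a set of anchor latents: invariant to any
    orthogonal map Q applied jointly to the latents and the anchors (z -> z @ Q)."""
    return scale_invariant(z) @ scale_invariant(anchors).T

# Toy check (hypothetical shapes): a random orthogonal transformation of the
# latent space leaves the anchor-relative representation unchanged.
rng = np.random.default_rng(0)
z = rng.normal(size=(16, 8))          # batch of latent vectors
anchors = rng.normal(size=(4, 8))     # anchor latents from the same space
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # random orthogonal matrix
assert np.allclose(orthogonal_invariant(z, anchors),
                   orthogonal_invariant(z @ Q, anchors @ Q), atol=1e-6)
```

The choice of map determines which transformations a downstream module becomes blind to, which is the kind of trade-off the experiments on classification and reconstruction tasks investigate.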

Publication
2nd Annual TAG in Machine Learning @ ICML 2023
Irene Cannistraci
Ph.D. Student in Computer Science, Sapienza University of Rome
GLADIA Research Group

Visiting Research Student, Helmholtz Munich AIDOS Lab