This work was done during one weekend by research workshop participants and does not represent the work of Apart Research.
Accepted at the Interpretability Hackathon research sprint on November 15, 2022

Alignment Jam: Gradient-Based Interpretability of Quantum-Inspired Neural Networks

In the thriving worlds of physics and quantum computing, researchers have developed a multitude of methods for simulating complex quantum systems since the 1990s. Among these techniques, tensor networks play an important role in the way they compress data so that complex quantum systems fit in the memory of classical computers; their rapid expansion is also due to the need to visualize and store physical structures. In this paper, a tensor-train decomposition of the linear layers of a simple convolutional neural network has been implemented and trained on the CIFAR-10 dataset. The observations show that the attention images inferred on the baseline neural network and on its tensor-network equivalent are radically different: the two models focus on different parts of the input. Secondly, I propose some considerations on gradient-descent methods that can be used to specifically optimise tensor networks. Fixed-rank tensor networks live on a smooth Riemannian manifold, so Riemannian optimisation (RO) techniques can be used to perform gradient descent geometrically. Indeed, the projections implied by RO keep the gradients traceable and thus make it easy to reverse-engineer the backpropagation. The code, along with PDF versions of all cited papers and the images used, can be found in my GitHub repository: https://github.com/antoine311200/Hackaton-Interpretability. Its usage is quite straightforward: simply run all cells in both notebooks after installing the requirements.
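For concreteness, here is a minimal sketch of a tensor-train (TT-SVD) decomposition of a single linear layer's weight matrix, of the kind described above. The function name, mode shapes, and rank cap are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

def tt_decompose(W, out_dims, in_dims, max_rank):
    """TT-SVD: factor a weight matrix W of shape
    (prod(out_dims), prod(in_dims)) into tensor-train cores of shape
    (rank_prev, out_k, in_k, rank_next) via sequential truncated SVDs."""
    d = len(in_dims)
    # Reshape into a 2d-way tensor and interleave (out_k, in_k) mode pairs.
    T = W.reshape(*out_dims, *in_dims)
    perm = [ax for pair in zip(range(d), range(d, 2 * d)) for ax in pair]
    T = T.transpose(perm)
    cores, rank = [], 1
    for k in range(d - 1):
        # Split off one (out_k, in_k) pair with a truncated SVD.
        T = T.reshape(rank * out_dims[k] * in_dims[k], -1)
        U, S, Vt = np.linalg.svd(T, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(rank, out_dims[k], in_dims[k], r))
        T = S[:r, None] * Vt[:r]   # carry the remainder to the next step
        rank = r
    cores.append(T.reshape(rank, out_dims[-1], in_dims[-1], 1))
    return cores

# Example: compress a 64x256 linear layer into three small TT cores.
W = np.random.randn(64, 256)
cores = tt_decompose(W, out_dims=(4, 4, 4), in_dims=(4, 8, 8), max_rank=8)
print([c.shape for c in cores])
```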
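The attention-image comparison rests on plain input gradients. The following is a hedged PyTorch sketch, where the two models and the CIFAR-10 sample are toy stand-ins (the real trained models live in the notebooks):

```python
import torch
import torch.nn as nn

def saliency(model, x, target_class):
    """Gradient-based attention map: |d logit / d input|, reduced over
    the colour channels."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x.unsqueeze(0))           # add a batch dimension
    logits[0, target_class].backward()
    return x.grad.abs().amax(dim=0)          # (H, W) map

# Toy stand-ins for the trained CNN and its tensor-train variant.
cnn = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
tt_cnn = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image, label = torch.randn(3, 32, 32), 3     # fake CIFAR-10 sample

map_cnn = saliency(cnn, image, label)
map_tt = saliency(tt_cnn, image, label)
sim = torch.cosine_similarity(map_cnn.flatten(), map_tt.flatten(), dim=0)
print(f"saliency-map cosine similarity: {sim.item():.3f}")
```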
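On the Riemannian-optimisation remark: the key ingredients are a tangent-space projection of the Euclidean gradient and a retraction back onto the fixed-rank manifold, which is what keeps the gradient steps traceable. The sketch below shows the fixed-rank matrix case (the TT manifold generalises it core by core); it is illustrative, not the repository's implementation:

```python
import numpy as np

def riemannian_step(X, euclid_grad, lr, rank):
    """One Riemannian gradient step on the manifold of rank-`rank`
    matrices: project the Euclidean gradient onto the tangent space
    at X, take a step, then retract back onto the manifold."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    U, Vt = U[:, :rank], Vt[:rank]
    # Tangent-space projection: P(Z) = UU'Z + ZVV' - UU'ZVV'.
    Z = euclid_grad
    proj = U @ (U.T @ Z) + (Z @ Vt.T) @ Vt - U @ (U.T @ Z @ Vt.T) @ Vt
    # Retraction: a truncated SVD maps the step back to rank `rank`.
    Y = X - lr * proj
    U2, S2, V2t = np.linalg.svd(Y, full_matrices=False)
    return (U2[:, :rank] * S2[:rank]) @ V2t[:rank]

# Example: descend toward a rank-2 target while staying rank-2.
rng = np.random.default_rng(0)
target = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))
X = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))
for _ in range(200):
    X = riemannian_step(X, X - target, lr=0.1, rank=2)
print(np.linalg.norm(X - target))
```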

By Antoine Debouchage
