XConv: Low-memory stochastic backpropagation for convolutional layers

Dataemia


Authors: Anirudh Thatipelli, Jeffrey Sam, Mathias Louboutin, Ali Siahkoohi, Rongrong Wang, and Felix J. Herrmann


Abstract: Training convolutional neural networks at scale demands substantial memory, largely due to storing intermediate activations for backpropagation. Existing approaches — such as checkpointing, invertible architectures, or gradient approximation methods like randomized automatic differentiation — either incur significant computational overhead, impose architectural constraints, or require non-trivial codebase modifications. We propose XConv, a drop-in replacement for standard convolutional layers that addresses all three limitations: it preserves standard backpropagation, imposes no architectural constraints, and integrates seamlessly into existing codebases. XConv exploits the algebraic structure of convolutional layer gradients, storing highly compressed activations and approximating weight gradients via multi-channel randomized trace estimation. We establish convergence guarantees and derive error bounds for the proposed estimator, showing that the variance of the resulting gradient errors is comparable to that of stochastic gradient descent. Empirically, XConv achieves performance comparable to exact gradient methods across classification, generative modeling, super-resolution, inpainting, and segmentation — with gaps that narrow as the number of probing vectors increases — while reducing memory usage by a factor of two or more and remaining computationally competitive with optimized convolution implementations.
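To make the core idea concrete, here is a minimal NumPy sketch of gradient approximation via randomized probing vectors. It is not the authors' implementation: XConv targets general convolutions with multi-channel probing, whereas this toy reduces to the 1x1-convolution case, where the weight gradient is the matrix product X^T dY of flattened input activations and output gradients. All shapes, names, and the choice of Rademacher probes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a convolutional layer's weight gradient: for a 1x1
# convolution, dL/dW = X^T dY, where X holds the flattened input
# activations and dY the loss gradients w.r.t. the layer output.
# (Shapes and sizes here are illustrative, not from the paper.)
N, C_in, C_out = 1024, 8, 12          # N = batch size * spatial positions
X = rng.standard_normal((N, C_in))
dY = rng.standard_normal((N, C_out))
dW_exact = X.T @ dY                    # exact gradient, shape (C_in, C_out)

def probed_grad(K):
    """Randomized estimate of X^T dY using K Rademacher probing vectors.

    Since E[z z^T] = I for Rademacher z, (1/K) * (X^T Z)(Z^T dY) is an
    unbiased estimator of X^T dY. Only the K-column sketch X^T Z needs
    to be stored at forward time, which is where the memory saving
    comes from when K is much smaller than N.
    """
    Z = rng.choice([-1.0, 1.0], size=(N, K))
    return (X.T @ Z) @ (Z.T @ dY) / K

# The estimator is noisy but unbiased; its error shrinks roughly like
# sqrt(N / K), so the gap narrows as the number of probes grows.
errs = []
for K in (16, 256, 4096):
    err = np.linalg.norm(probed_grad(K) - dW_exact) / np.linalg.norm(dW_exact)
    errs.append(err)
    print(f"K={K:5d}  relative error {err:.3f}")
```

This mirrors the trade-off stated in the abstract: the gradient error behaves like additional stochastic noise on top of SGD's own, and increasing the number of probing vectors trades memory savings for accuracy.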

Submission history

From: Ali Siahkoohi
[v1] Sun, 13 Jun 2021 13:54:02 UTC (5,718 KB)
[v2] Wed, 16 Jun 2021 16:02:56 UTC (5,718 KB)
[v3] Tue, 10 Mar 2026 15:52:42 UTC (15,431 KB)


