arXiv:2603.06601v1 Announce Type: new
Abstract: Deep neural networks, and more recently large-scale generative models such as large language models (LLMs) and large vision-action models (LVAs), achieve remarkable performance across diverse domains, yet their prohibitive computational cost hinders deployment in resource-constrained environments. Existing efficiency techniques offer only partial remedies: dropout improves regularization during training but leaves inference unchanged, while pruning and low-rank factorization compress models post hoc into static forms with limited adaptability. Here we introduce SWAN (Switchable Activation Networks), a framework that equips each neural unit with a deterministic, input-dependent binary gate, enabling the network to learn when a unit should be active or inactive. This dynamic control mechanism allocates computation adaptively, reducing redundancy while preserving accuracy. Unlike traditional pruning, SWAN does not simply shrink networks after training; instead, it learns structured, context-dependent activation patterns that support both efficient dynamic inference and conversion into compact dense models for deployment. By reframing efficiency as a problem of learned activation control, SWAN unifies the strengths of sparsity, pruning, and adaptive inference within a single paradigm. Beyond computational gains, this perspective suggests a more general principle of neural computation, where activation is not fixed but context-dependent, pointing toward sustainable AI, edge intelligence, and future architectures inspired by the adaptability of biological brains.
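The core mechanism described above, a deterministic, input-dependent binary gate attached to each unit, can be sketched in a few lines. This is a minimal illustration only: the function name, the linear gating parameterization (`U`, `c`), and the threshold-at-zero rule are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def swan_unit_forward(x, W, b, U, c):
    """Hypothetical sketch of a switchable-activation layer.

    Each unit's activation is multiplied by a deterministic,
    input-dependent binary gate, so inactive units output exactly zero
    and their downstream computation can in principle be skipped.
    """
    h = np.maximum(0.0, W @ x + b)             # standard ReLU unit activations
    g = ((U @ x + c) > 0.0).astype(x.dtype)    # binary gate per unit, decided by the input
    return g * h                               # gated-off units contribute nothing

# Toy usage with random parameters (illustrative only).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((4, 8)); b = np.zeros(4)
U = rng.standard_normal((4, 8)); c = np.zeros(4)
y = swan_unit_forward(x, W, b, U, c)
```

Training such hard binary gates end-to-end typically requires a surrogate gradient (e.g., a straight-through estimator), since the thresholding step is non-differentiable; the sketch above covers only the forward pass.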
Switchable Activation Networks