[Submitted on 28 Oct 2025]
DualGLU: Enhancing Transformer Feedforward Networks Through Dynamic Activation Mixing
Abstract: We present DualGLU, a novel feedforward network architecture that dynamically combines complementary activation functions within transformer models. By processing SwiGLU and GELU-gated pathways in parallel and combining them with learned, input-dependent mixing weights, DualGLU achieves more expressive feature representations while maintaining computational efficiency. Comprehensive experiments on language modeling demonstrate consistent improvements over standard feedforward variants, including a 0.8% reduction in validation perplexity compared to SwiGLU baselines. Our analysis reveals that dynamic mixing is particularly beneficial for modeling diverse linguistic patterns, with different activation pathways specializing in distinct feature types.
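The abstract does not give implementation details; the sketch below is one plausible reading of "parallel SwiGLU and GELU-gated pathways with learned input-dependent mixing weights." All class, parameter, and module names (DualGLUSketch, d_ff, mixer, and so on) are invented for illustration and are not taken from the paper.

# Minimal sketch (not the authors' code) of a dual-pathway gated FFN:
# one SwiGLU-style pathway (SiLU gate), one GEGLU-style pathway (GELU gate),
# mixed per token by learned, input-dependent weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualGLUSketch(nn.Module):
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        # SwiGLU-style pathway: value and gate projections with a SiLU gate
        self.swiglu_value = nn.Linear(d_model, d_ff, bias=False)
        self.swiglu_gate = nn.Linear(d_model, d_ff, bias=False)
        # GEGLU-style pathway: value and gate projections with a GELU gate
        self.geglu_value = nn.Linear(d_model, d_ff, bias=False)
        self.geglu_gate = nn.Linear(d_model, d_ff, bias=False)
        # Input-dependent mixing weights over the two pathways
        self.mixer = nn.Linear(d_model, 2, bias=True)
        # Shared output projection back to the model dimension
        self.out = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gated pathway activations, both of shape [..., d_ff]
        swiglu = F.silu(self.swiglu_gate(x)) * self.swiglu_value(x)
        geglu = F.gelu(self.geglu_gate(x)) * self.geglu_value(x)
        # Per-token mixing weights, normalized with a softmax over the two pathways
        alpha = torch.softmax(self.mixer(x), dim=-1)  # shape [..., 2]
        mixed = alpha[..., 0:1] * swiglu + alpha[..., 1:2] * geglu
        return self.out(mixed)

# Example usage
ffn = DualGLUSketch(d_model=512, d_ff=2048)
h = torch.randn(2, 16, 512)   # (batch, sequence, d_model)
y = ffn(h)                    # output has the same shape as the input

One natural design choice in such a sketch is normalizing the two mixing weights with a softmax so the pathways compete for each token; the paper may use a different parameterization.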
Submission history
[v1] Tue, 28 Oct 2025 22:20 UTC