Banik, Mridul, author
ACM, publisher
Date issued: 2025-12-09
Date available: 2025-12-22
Citation: Mridul Banik. 2025. Novel Tensor Norm Optimization for Neural Network Training Acceleration. In 2025 International Conference on Artificial Intelligence and its Applications (ICARTI 2025), December 09-10, 2025, Port Louis, Mauritius. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3774791.3774805
Handle: https://hdl.handle.net/10217/242556

Abstract: This paper introduces an advanced optimization algorithm designed to enhance the training efficiency of neural networks, particularly focusing on the intricate weight matrices prevalent in large language models. Diverging from prior spectral norm-based approaches, our method leverages the nuclear norm to formulate a novel update rule, yielding a distinct optimization technique called Neon. We provide rigorous theoretical guarantees concerning its convergence properties through convex optimization and the Karush-Kuhn-Tucker conditions. Performance evaluations across multilayer perceptrons, convolutional neural networks, and generative models such as NanoGPT demonstrate computational advantages over existing optimizers, including Muon and AdamW. The Frobenius-based Neon variant achieves comparable or superior convergence while maintaining significantly lower per-iteration overhead of O(mn) FLOPs, compared to Muon's O(mn · min{m, n}) for m × n matrices. This work advances more robust and faster training methodologies for complex AI systems.

Format: born digital; article
Language: eng
Rights: This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0).
Keywords: neural network optimization; nuclear norm; low-rank updates; gradient descent; deep learning
Title: Novel tensor norm optimization for neural network training acceleration
Type: Text
DOI: https://doi.org/10.1145/3774791.3774805
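
As a rough illustration of the per-iteration cost gap cited in the abstract, the sketch below contrasts a Frobenius-norm-scaled gradient step (O(mn) work for an m × n matrix) with a Muon-style Newton-Schulz orthogonalization, whose matrix products cost O(mn · min{m, n}). The record does not spell out Neon's actual update rule, so frobenius_scaled_update is a hypothetical stand-in rather than the paper's algorithm, and the Newton-Schulz coefficients are those used in public Muon implementations; treat this as a minimal sketch under those assumptions.

import numpy as np

def frobenius_scaled_update(grad, lr=1e-2):
    # Hypothetical Frobenius-style step: scale the gradient by its
    # Frobenius norm. Cost is O(mn) per step for an m x n gradient.
    norm = np.linalg.norm(grad)  # Frobenius norm, O(mn)
    return -lr * grad / (norm + 1e-12)

def muon_style_update(grad, lr=1e-2, steps=5):
    # Muon-style step: approximately orthogonalize the gradient with a
    # Newton-Schulz iteration. Each matrix product costs O(mn * min(m, n)).
    X = grad / (np.linalg.norm(grad) + 1e-12)
    a, b, c = 3.4445, -4.7750, 2.0315  # quintic coefficients from public Muon code
    transposed = X.shape[0] > X.shape[1]
    if transposed:
        X = X.T  # work with the wide orientation so A below is small
    for _ in range(steps):
        A = X @ X.T                          # min(m, n) x min(m, n) Gram matrix
        X = a * X + (b * A + c * (A @ A)) @ X
    if transposed:
        X = X.T
    return -lr * X

# Tiny usage example on a random gradient matrix.
rng = np.random.default_rng(0)
G = rng.standard_normal((64, 32))
print(frobenius_scaled_update(G).shape, muon_style_update(G).shape)

The asymptotic gap comes entirely from the matrix-matrix products inside the Newton-Schulz loop; the Frobenius-scaled step touches each gradient entry only a constant number of times.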