Author: Banik, Mridul
Publisher: ACM
Date issued: 2025-12-09
Date available: 2025-12-22
Title: LLM tuning: neural language persistence through adaptive mixture
Citation: Mridul Banik. 2025. LLM Tuning: Neural Language Persistence through Adaptive Mixture. In 2025 International Conference on Artificial Intelligence and its Applications (ICARTI 2025), December 09-10, 2025, Port Louis, Mauritius. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3774791.3774803
URI: https://hdl.handle.net/10217/242555
DOI: https://doi.org/10.1145/3774791.3774803

Abstract: This paper presents a novel architectural paradigm addressing knowledge degradation in large language models during continual fine-tuning. The framework leverages a Mixture-of-Experts-style approach, integrating multiple low-rank adapters governed by an intelligent routing mechanism. By freezing core model parameters and dynamically allocating task-specific expertise, this method preserves inherent world knowledge while enhancing performance across diverse downstream applications. The proposed Dynamic LoRA-Experts with Prototype-Ensemble Matching (DLEPM) framework demonstrates superior performance on sequential NLP benchmarks, achieving 89.2% average accuracy with only 5.4% forgetting, outperforming existing continual learning methods. Empirical evaluations validate the framework's efficacy in maintaining large language model fidelity during continuous adaptation.

Keywords: continual learning; catastrophic forgetting; parameter-efficient finetuning; large language models; low-rank adaptation
Format: born digital; article
Type: Text
Language: eng
Rights: This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0).
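
The abstract describes the general pattern of a frozen base model augmented with several low-rank (LoRA-style) adapters whose contributions are combined by a routing mechanism. The sketch below is a minimal illustration of that pattern only, not the paper's DLEPM implementation; the class and parameter names (LoRAExpert, AdaptiveLoRAMixture, num_experts, rank) are assumptions made for this example.

# Minimal PyTorch sketch of the pattern the abstract describes: a frozen base
# projection plus a routed mixture of low-rank adapters. Illustrative only;
# DLEPM's actual routing (prototype-ensemble matching) is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """One low-rank adapter: x -> (x @ A) @ B, scaled by alpha / rank."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(d_in, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, d_out))   # zero-init so adapters start as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x @ self.A) @ self.B * self.scale


class AdaptiveLoRAMixture(nn.Module):
    """Frozen base linear layer plus a learned, softly routed mixture of LoRA experts."""

    def __init__(self, d_in: int, d_out: int, num_experts: int = 4, rank: int = 8):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)            # core model parameters stay frozen
        self.base.bias.requires_grad_(False)
        self.experts = nn.ModuleList(
            LoRAExpert(d_in, d_out, rank) for _ in range(num_experts)
        )
        self.router = nn.Linear(d_in, num_experts)        # per-token weights over experts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = F.softmax(self.router(x), dim=-1)                         # (..., num_experts)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)    # (..., d_out, num_experts)
        mixed = (expert_out * gates.unsqueeze(-2)).sum(dim=-1)            # weighted sum of adapters
        return self.base(x) + mixed


if __name__ == "__main__":
    layer = AdaptiveLoRAMixture(d_in=64, d_out=64, num_experts=4, rank=8)
    tokens = torch.randn(2, 10, 64)        # (batch, seq_len, d_model)
    print(layer(tokens).shape)             # torch.Size([2, 10, 64])

Because only the adapters and the router carry gradients, sequential fine-tuning updates task-specific capacity while the frozen base weights retain the pretrained world knowledge, which is the mechanism the abstract credits for reduced forgetting.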