Nikhil Shivakumar Nayak
Nikhil Nayak is a researcher specializing in large language models, with a focus on parameter-efficient adaptation, fine-tuning strategies, and trustworthy machine learning. He holds a Master’s in Data Science from Harvard University, and his work advances continual learning and interpretability in LLMs.
Research Engineer
Red Hat
Session
Large Language Models (LLMs) are increasingly deployed in real-world applications, but continually adapting them to new tasks without catastrophic forgetting remains a major challenge. In this talk, I introduce a novel full-parameter continual learning method that leverages adaptive Singular Value Decomposition (SVD) to dynamically identify and protect task-critical subspaces. By constraining updates to low-rank directions orthogonal to those subspaces, the method enables models to retain previous knowledge without adding extra parameters. Compared to strong baselines such as O-LoRA, it achieves up to 7% higher accuracy while maintaining general language capabilities and safety. The talk presents the methodology, its theoretical foundations, and extensive empirical results, demonstrating a scalable path toward continually evolving LLMs.
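For attendees curious about the mechanics, here is a minimal PyTorch sketch of the general idea under stated assumptions: take the top-k left singular vectors of a prior-task weight matrix as the protected subspace, then project each new-task gradient onto the orthogonal complement of that subspace before taking a step. The helper names, the fixed rank budget k, and the use of the weight matrix itself as the SVD target are illustrative choices for this sketch, not the adaptive method presented in the talk.

```python
import torch

def protected_subspace(weight: torch.Tensor, k: int) -> torch.Tensor:
    """Return the top-k left singular vectors of a weight matrix,
    treated here as the task-critical subspace to preserve.
    (Illustrative: the talk's method selects subspaces adaptively.)"""
    U, _, _ = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :k]  # shape: (d_out, k)

def project_out(grad: torch.Tensor, U_k: torch.Tensor) -> torch.Tensor:
    """Remove the gradient component that lies in the protected
    subspace, so the update is orthogonal to prior-task directions."""
    return grad - U_k @ (U_k.T @ grad)

# Toy usage: one full-parameter SGD step constrained to the orthogonal
# complement. Since U_k.T @ (projected gradient) == 0, the protected
# directions are left unchanged by the update.
torch.manual_seed(0)
W = torch.randn(64, 32)            # stand-in for a linear layer's weight
U_k = protected_subspace(W, k=8)   # hypothetical rank budget
grad = torch.randn_like(W)         # stand-in for the new-task gradient
W = W - 1e-2 * project_out(grad, U_k)
```

Because the projection removes every component of the step along the protected directions, knowledge stored in that subspace is preserved exactly, while learning proceeds freely in the remaining directions.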