Research Interests
- Formal Reasoning and Verification: Developing rigorous formal logic methodologies, leveraging SMT-LIB encodings and solver frameworks to verify, interpret, and enhance the correctness of LLM-generated reasoning.
- Fine-Tuning LLMs and Vision Models: Applying parameter-efficient techniques such as LoRA to adapt large language and vision models to specific tasks while maintaining computational efficiency.
- Redundancy Mitigation in LLMs: Investigating approaches for identifying and removing redundant parameters and computation in large language models, improving efficiency without sacrificing task performance.
- Model Optimization: Developing strategies for optimizing machine learning models, including pruning and hyperparameter tuning, to improve both accuracy and resource utilization.
- Explainable AI: Advancing interpretability in AI models, focusing on enhancing transparency and providing actionable insights for users.
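As a concrete illustration of the parameter-efficient fine-tuning mentioned above, the sketch below shows the core LoRA weight merge, W + (alpha / r) * B A, in plain Python with toy dimensions. All names and values here are illustrative, not taken from any specific codebase.

```python
# Minimal sketch of a LoRA-style low-rank weight update, using plain
# Python lists of rows as matrices. Dimensions and values are toy examples.

def matmul(a, b):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_merge(w, a, b, alpha, r):
    """Return W + (alpha / r) * B @ A, the merged LoRA weight.

    w: frozen base weight, shape (d_out, d_in)
    b: trainable down-projection, shape (d_out, r)
    a: trainable up-projection, shape (r, d_in)
    """
    delta = matmul(b, a)          # low-rank update, shape (d_out, d_in)
    scale = alpha / r             # standard LoRA scaling factor
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# Toy example: d_out = d_in = 2, rank r = 1.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]                # (2, 1)
A = [[0.5, 0.5]]                  # (1, 2)
merged = lora_merge(W, A, B, alpha=1.0, r=1)
# merged == [[1.5, 0.5], [1.0, 2.0]]
```

Because the update B A has rank r, only (d_out + d_in) * r parameters are trained instead of d_out * d_in, which is what makes the approach computationally efficient.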
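The pruning interest above can likewise be sketched as global magnitude pruning, which zeros the fraction of weights with the smallest absolute value. This is a generic textbook formulation with hypothetical names and toy values, not a specific method from any of the work listed here.

```python
# Minimal sketch of global magnitude pruning: zero out the `sparsity`
# fraction of weights with smallest absolute value. Pure Python, toy data.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-magnitude entries zeroed.

    weights: matrix as a list of rows
    sparsity: fraction of entries to zero, in [0, 1]
    """
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)            # number of entries to drop
    threshold = flat[k - 1] if k > 0 else -1.0
    # Note: ties at the threshold may zero slightly more than k entries.
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

W = [[0.1, -0.9], [0.05, 0.7]]
pruned = magnitude_prune(W, sparsity=0.5)
# pruned == [[0.0, -0.9], [0.0, 0.7]]  (the two smallest magnitudes zeroed)
```

In practice such masks are applied per layer or globally and followed by brief fine-tuning to recover accuracy; the sketch only shows the mask construction itself.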