# Paper2Summary

Scientific paper summarization via LoRA fine-tuning
## Quick Summary
- Developed a scientific paper summarization system by LoRA fine-tuning Llama-3.2-1B-Instruct on 20K arXiv papers, training only 0.07% of parameters (~850K) with 10K-token context support (~28 hours on a single RTX A6000)
- Achieved +51% ROUGE-2 and +37% ROUGE-3 improvements over the base model on a 6,440-sample test set
**Tools**: Python, PyTorch, PEFT, Hugging Face, Weights & Biases
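The "0.07% of parameters" figure comes from LoRA's low-rank update: the base weight `W` stays frozen and only a rank-`r` factorization `B @ A` is trained. A minimal dependency-free sketch of that arithmetic (the dimensions and rank below are illustrative, not the project's actual PEFT configuration):

```python
def lora_param_fraction(d_in: int, d_out: int, r: int) -> float:
    """Fraction of a linear layer's parameters that are trainable when a
    rank-r LoRA adapter is attached and the base weight is frozen.

    Base weight W: d_out x d_in (frozen)
    Adapter:       A is r x d_in, B is d_out x r (trainable)
    Effective weight during training: W + (alpha / r) * B @ A
    """
    base = d_in * d_out            # frozen parameters
    adapter = r * (d_in + d_out)   # trainable LoRA parameters
    return adapter / (base + adapter)

# Example: a 2048 x 2048 projection (Llama-3.2-1B hidden size) with a
# hypothetical rank of 8 -- the adapter is well under 1% of the layer.
frac = lora_param_fraction(2048, 2048, 8)
print(f"trainable fraction: {frac:.4%}")
```

Applying adapters only to selected projection matrices (as PEFT's `target_modules` does) pushes the whole-model trainable fraction even lower, which is how figures like 0.07% arise.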
## Video Demo
For more detailed project information, please visit the GitHub repo: https://github.com/gabe-zhang/paper2summary