2025 33rd Signal Processing and Communications Applications Conference (SIU), İstanbul, Türkiye, 25-28 June 2025, pp. 1-4 (Full Text Paper)
In this paper, Low-Rank Adaptation (LoRA) fine-tuning of two different large language models (DeepSeek R1 Distill 8B and Llama 3.1 8B) was performed using a Turkish dataset. Training was carried out on Google Colab with an A100 40 GB GPU, while the testing phase was conducted on Runpod with an L4 24 GB GPU. The 64.6-thousand-row dataset was organized into question-answer pairs drawn from the fields of agriculture, education, law, and sustainability. In the testing phase, 40 test questions were posed to each model via the Ollama web UI, and the results were supported with graphs and detailed tables. The results show that fine-tuning improved the performance of the existing language models.
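To make the described workflow concrete, the following is a minimal sketch of LoRA fine-tuning with Hugging Face `transformers` and `peft`, assuming a JSONL file of Turkish question-answer pairs. The file name `turkish_qa.jsonl`, the column names `question`/`answer`, the prompt template, and all hyperparameters (rank, alpha, learning rate, batch size) are illustrative assumptions, not the configuration reported in the paper.

```python
# Illustrative LoRA fine-tuning sketch; not the authors' exact setup.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-3.1-8B"  # or a DeepSeek R1 Distill 8B checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# LoRA adapter configuration; rank/alpha/target modules are typical values,
# not figures taken from the paper.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the low-rank adapter weights are trainable

# Hypothetical Turkish QA dataset with "question" and "answer" columns.
dataset = load_dataset("json", data_files="turkish_qa.jsonl", split="train")

def to_text(example):
    # Simple prompt template (Soru = question, Cevap = answer).
    return {"text": f"Soru: {example['question']}\nCevap: {example['answer']}"}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

with_text = dataset.map(to_text)
tokenized = with_text.map(tokenize, remove_columns=with_text.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out/adapter")  # saves only the LoRA adapter weights
```

Because only the low-rank adapter matrices are updated, the memory footprint stays within a single A100 40 GB GPU for an 8B-parameter model, and the saved adapter can later be merged into the base model or loaded alongside it for inference (e.g., when exporting to Ollama for testing).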