[IEEE] · [website] · [code]
Code plagiarism poses a significant challenge in academic settings. This study examines the potential of large language models (LLMs) to improve code plagiarism detection. We evaluate the performance of several LLMs, including GPT-4o, GPT-3.5 Turbo, LLaMA 3, and CodeLlama, against conventional tools such as JPlag across multiple levels of code plagiarism. Our findings show that state-of-the-art LLMs can outperform traditional methods, particularly in detecting sophisticated forms of plagiarism. GPT-4o achieved the highest overall accuracy (78.70%) and an F1 score of 86.97%. Notably, open-source models such as LLaMA 3 (71.53% accuracy, 82.75% F1 score) matched GPT-4o's accuracy in detecting the most complex forms of plagiarism. While these results demonstrate the promise of LLMs for code similarity analysis, higher false positive rates remain an inherent limitation, underscoring the need for human oversight. This study contributes insights into the application of AI to preserving code integrity and academic honesty, paving the way for more effective, interpretable, and fair plagiarism detection systems in software development education and practice.
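To make the evaluation setup concrete, the sketch below illustrates one way an LLM can be used as a binary plagiarism judge over code pairs, with accuracy and F1 computed from its verdicts. The prompt wording, the `gpt-4o` model call via the openai Python SDK, and the placeholder label arrays are all illustrative assumptions, not the study's exact protocol or data.

```python
"""Minimal sketch of an LLM-as-judge plagiarism check and the metrics
reported in the abstract. Prompt text, model choice, and labels are
assumptions for illustration only."""
from openai import OpenAI
from sklearn.metrics import accuracy_score, f1_score

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are a code-plagiarism judge. Given two code submissions, answer "
    "with exactly YES if the second is plagiarized from the first, "
    "otherwise NO.\n\nSubmission A:\n{a}\n\nSubmission B:\n{b}"
)

def judge_pair(code_a: str, code_b: str) -> int:
    """Ask the model for a binary plagiarism verdict on one code pair."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": PROMPT.format(a=code_a, b=code_b)}],
    )
    answer = resp.choices[0].message.content.strip().upper()
    return 1 if answer.startswith("YES") else 0

# Placeholder labels, one entry per evaluated pair (1 = plagiarized);
# in practice y_pred would be [judge_pair(a, b) for a, b in pairs].
y_true = [1, 1, 0, 1, 0, 0, 1, 1]
y_pred = [1, 1, 1, 1, 0, 0, 0, 1]

print(f"Accuracy: {accuracy_score(y_true, y_pred):.2%}")
print(f"F1 score: {f1_score(y_true, y_pred):.2%}")
```

Thresholding the model's free-text answer into a binary label is one design choice among several; a graded similarity score with a tunable cutoff would trade off the false positive rate noted above against recall.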