ISSN 2228-1932

Using AI detection software: The University of Tartu position statement

In 2024, a new plagiarism detection system, StrikePlagiarism, was introduced at the University of Tartu (UT); it initially included an AI detection module. However, on 19 March 2024, the Ministry of Education and Research removed the AI detection module from the StrikePlagiarism platform based on the recommendations of a working group convened by the ministry. The working group's position was presented in the article "Text generator use cannot be detected", published in Õpetajate Leht ('Teachers' Newspaper') on the same day by Andres Karjus and Kaspar Kruup.

The views of the working group of the UT development fund's project "The use of AI in teaching" on AI detection systems align with those of the ministry's working group. Our working group considers the use of AI detection software to check students' work unjustified, and we do not recommend it. The arguments for our position are listed below. With the permission of the authors, some of our arguments were also incorporated into the article by Andres Karjus and Kaspar Kruup.

  1. The working group believes AI should not be forbidden in teaching, including for written assignments and thesis writing (see also the use of artificial intelligence (AI) in thesis writing). It is therefore unclear when it would be justified to police the use of AI with a technical application at all. 
  2. For now, the working principles of AI detection applications are neither clear nor transparent.   
  3. It is technically impossible to determine whether an AI text generator was used to write a text: the output is essentially unique, and there is no reference database on which to base a reliable decision.
  4. An AI detection application shows the user a probability percentage or score. This score is uninterpretable because it is unclear which training dataset it is based on.  
  5. Skilled users can easily deceive an AI detection application by instructing the text generator to write in a style that significantly reduces the detector's recognition performance (because no such text appeared in its training material). 
  6. Since AI detection applications do not exclude false positives, using them significantly increases the risk of false accusations (i.e., a student may be accused of using a text generator on the basis of the application's score when the student has not used one).   
  7. Large language models can be used in various ways, and it is difficult to draw a clear line between what is allowed and what is not. An AI detection application therefore cannot determine whether the use of a text generator constitutes fraud or is a normal part of the workflow. 
  8. We find it problematic to assume by default that students use text generators in their submitted work. Such a negative attitude is not conducive to students' motivation to learn. 

Good practices and research integrity in learning and research, along with the consequences of plagiarism, must be communicated to students clearly, systematically, and continuously. This work should be done together with students, and problematic uses of AI should be explained to them.

If a lecturer suspects unauthorized or excessive use of AI in a student's written work, they should first talk to the student to see whether the student can explain and justify the points made in the text.  

If the conversation raises suspicion of academic fraud, the case will be treated in the same way as other cases of academic fraud (https://ut.ee/en/content/academic-fraud). 
