Staff member
Mar 15, 2023
Inverted benefit of ChatGPT: A real threat to scientific research

During a conversation with a student about the "falsification of sources created by ChatGPT," the student admitted to using this method at least once, but saw the matter differently. The student explained that they entered the title of the source ChatGPT had provided into Google Scholar and took the first result as the true source for the information, replacing the fake citation with a real one. In this way, the student claimed to have benefited from ChatGPT "inversely."

While some may see this as a logical workaround, in reality it constitutes academic dishonesty and falls far outside ethical research practice. The method attributes information to a researcher or author who may never have produced it; at best, the attribution carries a very high degree of uncertainty (up to 50%). It is also difficult to detect with any detection tool: only a human supervisor who knows the cited source well enough to recognize that it cannot contain such information, or that the information is not presented in that way, could flag it. Even then, certainty remains low.

The only real safeguard against this problem is personal integrity and adherence to the ethics of scientific research. While artificial intelligence can help generate quality ideas and methods and save time and effort in research planning, it must not be used for deception or fabrication.