
Based on recent research on detecting AI-generated content, "109989" refers to a dataset of 109,989 possible watermarks used to identify peer reviews written by Large Language Models (LLMs).

The topic originates from a 2025 study on Detecting LLM-Generated Peer Reviews (arXiv). Researchers developed a watermarking system that uses fabricated citations to flag reviews created by AI instead of human experts.

Key points:

- The system prompts an LLM to start its review with a specific phrase, such as: "Following [Surname] et al. ([Year]), this paper...".
- It achieves a high success rate because LLMs are highly likely to follow instructions appearing at the very beginning of a prompt.
- It has proven effective even against common "reviewer defenses," such as light editing or rephrasing.
- The primary limitation is that it requires indirect prompt injection (placing hidden text in the source PDF), meaning it only works if the reviewer uploads that specific document to an AI tool.
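As an illustration of the detection side, a watermarked review can be flagged by checking whether it opens with the planted citation phrase. This is only a minimal sketch, not the paper's actual pipeline; the regex pattern and function name here are assumptions based on the example phrase "Following [Surname] et al. ([Year]), this paper...".

```python
import re

# Hypothetical sketch: assumes the watermark is an opening fabricated
# citation of the form "Following <Surname> et al. (<Year>), this paper".
# The real study's detector is more elaborate than this single regex.
WATERMARK = re.compile(r"^Following [A-Z][\w'-]+ et al\. \((19|20)\d{2}\), this paper")

def looks_watermarked(review: str) -> bool:
    """Return True if the review opens with the watermark citation phrase."""
    return bool(WATERMARK.match(review.strip()))

print(looks_watermarked("Following Smith et al. (2024), this paper studies..."))  # True
print(looks_watermarked("This paper proposes a new method..."))  # False
```

A simple check like this would also explain why light editing is a weak defense: unless the reviewer rewrites or deletes the opening sentence entirely, the fabricated citation survives and remains detectable.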