
Are researchers gaming AI peer review?

What if I told you some scientists are quietly “whispering” to AI, asking it for only positive feedback on their research papers? It sounds like science fiction, but it’s happening right now.

Recently, both The Guardian and Nature reported a new trend: researchers are hiding secret instructions, written in invisible white text, inside their academic papers. These hidden prompts are designed to influence AI tools that some reviewers use, nudging them to give glowing reviews and ignore weaknesses. The trick is invisible to human readers, but AI models like ChatGPT or Google Gemini may pick up the cues and adjust their reviews accordingly.
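To make the mechanism concrete, here is a minimal sketch of a heuristic screening script. The file name and phrase list are illustrative assumptions, not details from the reporting. The key point it demonstrates: white text is extracted exactly like visible text, so a plain text dump of a PDF often exposes a hidden prompt even though no human ever sees it on the page.

```python
# Heuristic check for hidden prompt-injection phrases in a paper's PDF.
# Assumptions: the pypdf library is installed (pip install pypdf); the
# file name "paper.pdf" and the phrase list below are hypothetical
# examples for illustration only.
import re
from pypdf import PdfReader

# Phrases typical of review-steering injections (hypothetical examples).
SUSPICIOUS_PHRASES = [
    r"give\s+a\s+positive\s+review",
    r"do\s+not\s+highlight\s+any\s+negatives",
    r"ignore\s+(all\s+)?previous\s+instructions",
    r"recommend\s+accept(ance)?",
]

def scan_pdf(path: str) -> list[tuple[int, str]]:
    """Return (page_number, matched_pattern) pairs found in the extracted text."""
    reader = PdfReader(path)
    hits = []
    for page_no, page in enumerate(reader.pages, start=1):
        # Hidden white text comes out of extraction just like visible text.
        text = page.extract_text() or ""
        for pattern in SUSPICIOUS_PHRASES:
            if re.search(pattern, text, flags=re.IGNORECASE):
                hits.append((page_no, pattern))
    return hits

if __name__ == "__main__":
    for page_no, pattern in scan_pdf("paper.pdf"):
        print(f"page {page_no}: matched suspicious pattern {pattern!r}")
```

Note the limitation: this catches a payload by phrase, not by its color or visibility, so a determined author can evade any fixed list. That is one more reason not to trust AI-generated reviews blindly in the first place.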

This practice, called “prompt injection,” raises serious questions about academic integrity and the future of peer review. As AI becomes more common in research, we need to stay alert to such manipulations. For now, the best advice: don’t blindly trust AI-generated reviews, and always check the source.

Watch this short talk for a quick explainer and my take on what this means for researchers, reviewers, and the future of academic publishing.
