Why ICML’s New LLM Policies Matter for Peer Review
The International Conference on Machine Learning (ICML) has introduced new policies regarding the use of large language models (LLMs) in the peer review process. These changes come as AI tools become more integrated into research workflows. However, improper use of these tools can undermine the integrity of peer reviews. ICML aims to adapt to this challenge by enforcing strict rules and taking disciplinary actions against violations.

This year, ICML rejected a significant number of papers due to violations of its LLM usage policy. Specifically, they desk-rejected 497 papers, which is about 2% of all submissions. This action was taken against reviewers who had agreed not to use LLMs but were found to have done so.
Understanding ICML’s Two Policies
ICML has established two distinct policies for reviewers regarding LLM usage: Policy A prohibits any use of LLMs, while Policy B allows their use for understanding papers and refining reviews. This dual approach reflects a divide within the academic community about how best to incorporate AI tools into the review process.
Reviewers were given a choice between these two policies during the selection process. Those who opted for Policy A were strictly prohibited from using LLMs, while those who preferred Policy B could utilize them as needed. Importantly, no reviewer who strongly favored Policy B was assigned to Policy A.

Consequences of Violating Policies
Out of the reviewers assigned to Policy A, approximately 795 reviews were flagged for using LLMs despite their agreement not to do so. This resulted in the rejection of all related submissions. The method used for detection involved watermarking submission PDFs with hidden instructions that only an LLM could recognize.
This technique aimed to ensure compliance with the policy but was not foolproof. Reviewers could potentially circumvent these measures if they were aware of the
m. Nevertheless, ICML took strong action against those who violated their agreements, emphasizing the importance of trust in the peer review system.
Impact on Peer Review Integrity
The introduction of these policies is crucial for maintaining academic integrity within peer reviews. By enforcing strict consequences for violations, ICML sends a clear message about the importance of adhering to agreed-upon standards when using AI tools.
For researchers and reviewers alike, this serves as a reminder to be transparent about tool usage and adhere strictly to established guidelines. Maintaining trust is essential as AI continues to evolve within academic circles.
Key takeaways
- ICML has implemented strict rules on LLM usage in peer reviews.
- Two distinct policies reflect differing opinions within the academic community.
- Violations resulted in significant paper rejections this year.
- The integrity of peer reviews relies heavily on adherence to these new standards.
What You Can Do Next
If you are involved in research or reviewing papers, familiarize yourself with ICML’s new policies regarding LLM usage. Ensure that you understand which policy applies to your situation and adhere strictly to it.
Consider discussing these changes with colleagues or peers at conferences like ICML. Engaging in dialogue can help foster a culture of transparency and trust around AI tool usage in academic settings.
FAQ
- What are ICML’s new policies on LLM usage? They include a strict no-use policy (Policy A) and a permissive policy (Policy B).
- How many papers were rejected due to policy violations? A total of 497 papers were desk-rejected this year.
- Why is maintaining integrity important in peer reviews? Trust is essential for ensuring fair evaluations and advancing knowledge within academia.
For the original report, see the source article.
