JeffreyGill
New member
Joined: Mar 7, 2026
Messages: 4
I've been thinking about this for weeks now, and I need someone who understands the technology better than I do to explain it to me. Because if my hunch is correct, we're all walking into a trap we don't see. 
The Context:
We all know how Turnitin works for plagiarism. You submit a paper, it checks against their database of:
- Other student papers
- Published academic work
- Websites and publications
The Question:
Are they building the same kind of database for AI-generated text?
Think about it. Every time a student submits a paper that gets flagged as AI, does that paper go into a training set? Does Turnitin learn from false positives and true positives alike? Are they building a giant library of "this is what AI looks like" that gets smarter with every submission?
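To make that mechanism concrete: what you're describing is just incremental (online) learning, and it isn't exotic at all. Below is a deliberately crude toy sketch in plain Python of a word-frequency "detector" that updates with every labeled submission. Everything here is made up for illustration (the texts, the labels, the scoring rule); nobody outside Turnitin knows what their actual pipeline looks like.

```python
from collections import Counter

class IncrementalDetector:
    """Toy word-frequency detector updated with every labeled submission.
    Purely illustrative -- real detectors use far richer features than words."""

    def __init__(self):
        self.counts = {"ai": Counter(), "human": Counter()}
        self.totals = {"ai": 0, "human": 0}

    def update(self, text, label):
        """Fold one labeled submission into the running statistics."""
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def score(self, text):
        """Crude AI-likelihood score in [0, 1]: ratio of per-word rates."""
        words = text.lower().split()
        ai_rate = sum(self.counts["ai"][w] for w in words) / max(self.totals["ai"], 1)
        hu_rate = sum(self.counts["human"][w] for w in words) / max(self.totals["human"], 1)
        return ai_rate / (ai_rate + hu_rate + 1e-9)

d = IncrementalDetector()
# Hypothetical labeled submissions -- each call makes the model "smarter".
d.update("delve into the multifaceted tapestry of discourse", "ai")
d.update("i stayed up till 3am writing this sorry for typos", "human")
print(round(d.score("delve into the tapestry"), 2))  # prints 1.0
```

The point of the toy: the model never retrains from scratch. Each flagged paper just nudges the counts, so the database and the detector improve together, submission by submission.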
The Implications:
If yes, then using AI to generate anything becomes increasingly dangerous over time. The detectors learn. They adapt. They get better at spotting patterns we can't even see.
A friend of mine (CS major) told me about something called "model collapse" where AI trained on AI-generated text eventually becomes useless. But detectors trained on AI-generated text? They might become more accurate.
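Your friend's point can be shown with a toy. Model collapse happens because sampling from a model under-represents rare outputs, so each generation trained on the previous one loses a bit of the tail. The sketch below fakes that with a bag of words and a hard frequency cutoff standing in for sampling bias (my simplification, not a real training loop):

```python
from collections import Counter

# Toy "model": just a bag of words. Each generation trains on the previous
# generation's output, and the rarest words never survive the sampling.
corpus = ["common"] * 8 + ["frequent"] * 4 + ["uncommon"] * 2 + ["rare"] * 1

diversity = []
for generation in range(3):
    counts = Counter(corpus)
    floor = min(counts.values())
    # stand-in for sampling bias: the least frequent words drop out
    corpus = [w for w in corpus if counts[w] > floor]
    diversity.append(len(set(corpus)))

print(diversity)  # vocabulary shrinks every generation: [3, 2, 1]
```

Which is exactly why a detector trained on the same data wouldn't collapse: its job is to tell two distributions apart, and a generator whose output keeps getting narrower is easier to spot, not harder.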
The Ethics:
Here's where my philosophy minor kicks in. If Turnitin is building this database without telling students, is that ethical? Are we consenting to have our papers used to train the very system that might flag us?
The terms of service probably cover it. They always do. But "probably covered" isn't the same as transparently disclosed.
The Question for Y'all:
Has anyone seen actual documentation about this? Does Turnitin publicly state they're building an AI detection database? Or is this just my paranoid brain connecting dots that don't exist?
I'm genuinely curious. And also slightly terrified.