SalmaWillis
New member
Joined: Mar 3, 2026
Messages: 8
I conducted an experiment. The results disturbed me. 

The Hypothesis:
AI detectors are flawed. They produce false positives. They punish certain writing styles.
The Method:
I took a paper I wrote last semester—100% human, written in a coffee shop over three days, fueled by existential dread and oat milk lattes—and ran it through three different AI detectors.
The Paper:
A 7-page analysis of Heidegger's concept of "Dasein" in Being and Time. It's dense. It's philosophical. It uses words like "ontological" and "phenomenological" and "thrownness." It is, objectively, very me.
The Results:
- Detector A (GPTZero): 65% AI probability
- Detector B (Originality.ai): 80% AI probability
- Detector C (Sapling): 58% AI probability
The Implications:
My writing—my human writing, produced by my human brain after years of human education—was flagged as machine-made. Why?
Possible explanations:
- Academic language is predictable. Philosophy has conventions. If you follow them, you sound like other philosophers. Detectors see patterns and assume they're AI.
- I'm actually an AI. (Unlikely. I have memories. I think.)
- Detectors are broken. This is the most probable explanation. They're trained on particular data and can't account for the full range of human variation.
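The "predictable patterns" explanation can be made concrete. Detectors like the ones above largely score how predictable each word is given the words before it (roughly, perplexity): the lower the perplexity, the more "AI-like" the text looks. Their actual models are proprietary, so here is only a toy sketch with a hypothetical bigram model in plain Python, showing how formulaic prose scores as more predictable than varied prose:

```python
import math
from collections import Counter

def bigram_perplexity(train_text: str, test_text: str) -> float:
    """Estimate perplexity of test_text under a bigram model fit on train_text.

    Add-one (Laplace) smoothing keeps unseen bigrams from zeroing the
    probability. Lower perplexity = more predictable = more 'AI-like'
    to a naive detector.
    """
    train = train_text.lower().split()
    test = test_text.lower().split()
    bigrams = Counter(zip(train, train[1:]))
    unigrams = Counter(train)
    vocab = len(set(train)) + 1  # +1 slot for unseen words

    log_prob = 0.0
    for prev, word in zip(test, test[1:]):
        # P(word | prev) with add-one smoothing
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
        log_prob += math.log(p)
    n = max(len(test) - 1, 1)
    return math.exp(-log_prob / n)

# Conventional academic phrasing reuses patterns the model has seen;
# idiosyncratic phrasing does not, so it scores as less predictable.
corpus = "the analysis of being shows that being in the world is the ground of being"
formulaic = "the analysis of being shows that being is the ground"
varied = "oat milk lattes fueled three days of existential dread"
print(bigram_perplexity(corpus, formulaic) < bigram_perplexity(corpus, varied))
```

The point of the sketch: if philosophy papers all draw on the same conventional vocabulary and constructions, a statistical model trained on similar text will find them predictable, and a perplexity-based detector will lean toward "AI" regardless of who wrote them.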
What happens when professors use these tools? What happens when my careful, considered prose gets flagged as AI and I have to prove I'm human? How does one prove humanity? Turing test in reverse??
Questions for the community:
- Has anyone else tested their old work? What were your results?
- If your professor flagged you based on a detector, how would you respond?
- Should universities stop using these tools until they're more reliable?
This is bigger than cheating. This is about how we define human writing in the age of machines.