AI as Accommodation for Neurodivergence
I just watched a PBS NewsHour segment on generative AI in higher education, and it landed with a strange mix of déjà vu and relief. The piece frames the moment well: the current senior class is the first to have spent almost their whole college career in the age of generative AI, and schools are scrambling because the tech is moving faster than policies (or detection) can keep up.
But the part that hit me wasn’t the “future of education” rhetoric. It was the familiar storyline: gatekeepers on one side, pragmatists on the other, and a whole set of people—often the ones already struggling to fit into academia’s “normal”—left to absorb the impact.
The PBS framing: policing vs. pedagogy
PBS follows a philosophy professor who describes a recognizable shift in student writing: suddenly polished, impersonal, and “business document”-ish. Then we get the enforcement reality: detection tools, time-consuming investigations, and professors describing teaching as turning into policing.
And then the other side of the debate appears: faculty and administrators saying students are going to use these tools anyway, so the only sane approach is teaching responsible use. One professor encourages students to use AI to critique their own work and deepen understanding; Ohio State rolls out an “AI fluency” initiative that requires undergrads across disciplines to learn and use AI tools.
PBS also highlights something that the panic discourse often forgets: AI isn’t only about writing essays. In the segment, it’s used to speed up research processes (like scanning large sets of recordings) and to support creative/technical experimentation.
All of that is real. But for some of us, AI isn’t primarily a cheating tool. It’s an accommodation tool—maybe the best one we’ve ever had.
My version of “academic integrity” is surviving a system not built for me
I wasn’t diagnosed with a learning disability until late—19. That meant my early education was the old-school version of “try harder.” Eventually I got the standard accommodations: extra time on exams, alternative formats, the slow, piecemeal accessibility fixes that help you limp through a system designed around a narrow definition of “good academic work.”
Even with accommodations, I hit the same wall again and again: academia is built around normative reading and writing as the price of entry. If you don’t read quickly, don’t draft cleanly, don’t naturally produce “proper academic prose” and structured arguments, you can have strong ideas and still be treated like you don’t belong.
A small example that still makes me wince: I once wrote a paper and messed up a page reference so “S24” became “224.” That wasn’t a moral failure; it was a simple dyslexic mistake. Yet it didn’t go unpunished, nor was it grasped as an opportunity to teach the rest of the MA class about precision and the importance of proper referencing in a system that equates polish with competence.
So when I hear parts of the AI debate framed like “kids these days just want shortcuts,” I get it—but I also want to flip the lens.
What if AI isn’t a shortcut around thinking—what if it’s a ramp into a building that only installed stairs?
The stat nobody knows what to do with: 86% are already using it
PBS cites a survey: 86% of college students are using AI tools like ChatGPT, Claude, and Google Gemini for schoolwork. (For a widely cited source on the same “86%” claim, see the Digital Education Council Global AI Student Survey.) Academia can argue about whether that’s good, but operationally the world has already moved. The real question is: what do we do now, especially for students and scholars who were historically excluded by the reading/writing bottleneck?
What AI actually does for me (and what it doesn’t)
AI didn’t magically give me better ideas. I’ve been generating ideas for decades. The difference is that AI helps me develop those ideas (without the innate questioning and criticism of normative academia) and translate my thinking into a structure that the “normative academic mind” can actually digest.
Here’s my current workflow in plain terms (a rough code sketch follows the list):
- I start from material I already have (hundreds of thousands of words across years).
- I collate everything related to one concept (say, “integrated autonomy”) into one document.
- I feed that to AI and ask it to turn the drafts, notes, or whatever into an argument structured for the normative academic mind.
- Then we iterate: I push it conceptually and away from generic phrasing, force it to keep my terminology, and rebuild the outline and eventually the prose until the words match what I actually mean (instead of letting the model colonize my voice).
- Then I do the human work that matters: making judgment calls, adding nuance, selecting what to cut, and verifying every factual/citation claim.
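To make that concrete, here’s a minimal sketch of the collate-and-structure steps in Python. Everything here is illustrative: `ask_model` is a hypothetical placeholder for whichever model client you actually use (ChatGPT, Claude, Gemini), and the prompt wording is just one way to phrase the “keep my terminology” constraint.

```python
from pathlib import Path

def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: wire up your preferred LLM client here."""
    raise NotImplementedError

def collate_notes(notes_dir: str, concept: str) -> str:
    """Gather every note file that mentions the concept into one document."""
    chunks = []
    for path in sorted(Path(notes_dir).glob("**/*.md")):
        text = path.read_text(encoding="utf-8")
        if concept.lower() in text.lower():
            chunks.append(f"## {path.name}\n{text}")
    return "\n\n".join(chunks)

def structure_draft(notes_dir: str, concept: str) -> str:
    """Ask the model to restructure my notes without rewriting my voice."""
    material = collate_notes(notes_dir, concept)
    prompt = (
        f"Below are my own notes on '{concept}'. "
        "Reorganize them into a structured academic argument. "
        "Keep my terminology exactly as written; do not invent claims, "
        "citations, or generic phrasing.\n\n" + material
    )
    return ask_model(prompt)

# Usage: outline = structure_draft("notes/", "integrated autonomy")
```

The judgment calls, the cuts, and the fact-checking in that last step stay entirely human; the script only automates the collation and the first structural pass.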
AI also helps with the worst part of academic labor for me: chasing sources and doing initial mapping. Not because I can’t do it, but because it costs me disproportionate time and energy. Used carefully, AI becomes the difference between “I can’t even get into this literature maze” and “I can get oriented, find the genuinely relevant work, and start doing real scholarship.”
But I’m not naive about the trade-offs. The PBS segment is right that detection is messy—and also that AI itself can be wrong. It can hallucinate. It can confidently invent citations. It can homogenize voice. (On detector reliability and bias concerns, see Stanford HAI on detector bias and OpenAI’s note explaining why it discontinued its AI text classifier for low accuracy.)
So the issue isn’t “AI or no AI.” It’s AI with vigilance, transparency, and clear norms.
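One vigilance habit is easy to automate: before anything ships, confirm that every DOI you cite actually resolves. Here’s a minimal sketch against the public Crossref REST API (the endpoint is real; the function names and workflow around it are my own illustration):

```python
import requests

def verify_doi(doi: str) -> bool:
    """Return True only if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

def audit_citations(dois: list[str]) -> list[str]:
    """Return the DOIs that fail lookup, flagged for manual checking."""
    return [doi for doi in dois if not verify_doi(doi)]

# Usage:
# suspect = audit_citations(["10.1038/nature12373", "10.9999/invented.001"])
# A passing lookup only proves the DOI exists, not that the source
# supports the claim; that verification is still human work.
```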
A middle path: integrity through process, not prohibition
Here’s what I wish the AI-in-academia debate would say out loud:
- Detection tools aren’t a foundation for justice. PBS shows false accusations and students describing how arbitrary the “signals” can feel (even punctuation choices).
- If you want integrity, shift toward process-based evaluation: drafts, outlines, version history, oral defenses, and reflective memos that explain how a text was produced (a toy process-log sketch follows this list).
- Teach students how to use AI like a tool: brainstorming, critique, study questions, structure suggestions—with disclosure and accountability—until they get to their own draft.
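And “disclosure and accountability” can be lightweight. As one hypothetical format (my sketch, not an institutional standard), an append-only process log kept next to the draft records which tool was used, for what, and what the human did with the output:

```python
import datetime
import json
from pathlib import Path

LOG = Path("ai_process_log.jsonl")  # hypothetical filename

def log_ai_step(tool: str, purpose: str, human_action: str) -> None:
    """Append one disclosure entry: which tool, used for what, and what
    the human author actually did with the output."""
    entry = {
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "tool": tool,
        "purpose": purpose,
        "human_action": human_action,
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Usage:
# log_ai_step("ChatGPT", "critique of draft 2's argument structure",
#             "rewrote section 3 myself; rejected suggested citations")
```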
This is also where publishing ethics is heading: major guidance emphasizes that AI tools can’t be authors, and humans remain responsible for accuracy, originality, and disclosure of AI use. So why shouldn’t schools assess students the way they will actually work in the real world? See COPE’s guidance on authorship and AI tools and the ICMJE recommendations on disclosing AI-assisted technologies (and, for a major journal policy example, Science journals’ editorial policies).
The real revolution: who gets to participate in knowledge production
My blunt take is this: AI is exposing how exclusionary academia has been. For a long time, the system selected for one cognitive style and then called it “merit.” Now we have a technology that can reduce the penalty for being dyslexic, neurodivergent, non-native in academic English, or simply non-normative in how you structure thought. That doesn’t mean “anything goes.” But it does mean we finally have a chance to build academic culture around what we say we value: knowledge production and knowledge dissemination—without quietly excluding everyone who can’t perform the ritual in exactly the approved way. After all, how much atypical thinking, and how many ideas, are we missing by letting only one type of neurological structuring generate the ideas we live by?
If higher ed is serious, it will stop treating AI as a threat and start treating it as both:
- a challenge to assessment and authorship norms, and
- a once-in-a-generation accessibility lever.
And for people like me? It’s not just changing how I write. It might be the reason my work actually reaches the world.