A SMARTER WAY TO WRITE
  • About
  • Blog
  • Selected Publications
  • Open Source Book

From Basement Shows to Bots: How Punk Rock Prepared Me for Teaching with AI

4/10/2025


Previewing my CCCC 2025 presentation: “Leveraging AI Technology to Support Learning in Writing: A Constructivist Approach with a DIY Ethic”
🔗 April 9–12, 2025 | Baltimore Convention Center

When I was 19 years old, I started a record label. I had no formal training, no business plan, and certainly no roadmap. What I did have was a group of friends, a local record store, and a shared belief in building something ourselves—without waiting for permission.
We booked shows, pressed vinyl, handed out flyers, and mailed zines across the country. We were messy. We were broke. But we were learning. Constantly.
Fast forward nearly three decades, and that same DIY ethic is still at the heart of my work—only now, I apply it in the writing classroom, where I teach students to write, revise, and reflect with the help of another unexpected collaborator: artificial intelligence.

This April, I’ll be presenting at the Conference on College Composition and Communication (CCCC) in Baltimore. My session, “Leveraging AI Technology to Support Learning in Writing: A Constructivist Approach with a DIY Ethic,” explores how we can rethink AI not as a shortcut or threat—but as a scaffold that supports heutagogical, student-centered writing instruction.

From the Invention of Writing to the Calculator: A History of Resistance to Tools 
My presentation opens with a look back.

Plato thought writing would destroy memory. Teachers in the 1800s feared the eraser would make students careless. In the 1970s, the calculator sparked concerns that students would stop thinking.
AI is just the latest in this long history of tool panic.

But in punk—and in pedagogy—the message was always: Don’t wait for permission. Experiment. Figure it out. Mess up. Try again. That’s the ethos I bring to my classroom today.

AI as Audience, Partner, and Provocation
In my writing classes, students don’t use AI to generate essays. They use it to test them. I encourage them to simulate an audience—prompting the AI to respond to their résumé, a job description, or a cover letter. With the right guidance and heuristics, they learn to ask better questions and get more useful answers.
The result? AI becomes a stand-in audience—helping them practice, revise, and reflect.

As Louise Rosenblatt reminds us in her Transactional Theory of Reading, meaning happens in the interaction between reader and text. AI lets students simulate that transaction—anytime, anywhere.

Heutagogy and the Zone of Proximal Development
I draw heavily from Vygotsky’s concept of the Zone of Proximal Development—the idea that students grow best when supported just beyond their current ability.
AI, when used with intention, can serve as that “more knowledgeable other”—providing immediate, low-stakes feedback while allowing students to experiment and revise.
This is heutagogy in action. Students are independent, not isolated.

Final Thought: Strategy Over Selling Out
Sure, sometimes teaching with AI feels like selling out. But in punk—and in teaching—it is not about purity. It is about purpose.

Used with care, AI can amplify our values, not replace them. It can extend our reach as instructors and expand our students’ capacity to think, reflect, and write.

Like punk scenes that thrived on collaboration and critique, our classrooms can be spaces of mutual support, trial and error, and growth—with AI as one more instrument in the band.

BONUS: Check Out the Slides and Resources Used for My Presentation
Hopefully, these slides and resources can help YOU better understand my topic. If you have any questions, by all means feel free to reach out: [email protected]
You can download the slides using the link below:
4cs_presentation.pptx (PPTX, 33,199 KB)


AI as the “More Knowledgeable Other”

4/8/2025

Helping Students Bridge the Logic Gap in Business Writing
In business writing, students often make claims like:
“The company’s debt is healthy because its quick ratio is 2.1.”

But what's missing is the why. What connects the data point to the conclusion? The logical bridge—what argumentation theorist Stephen Toulmin calls the warrant—is often implied, assumed, or completely skipped.
Warrants are the invisible glue of business logic. Without them, even evidence-rich arguments fall flat.
What if we could use AI tools to help students see and strengthen that invisible glue?
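Since the quick ratio anchors the running example, here is a minimal sketch of the arithmetic behind the claim, using the standard textbook definition (liquid assets over current liabilities); the balance-sheet figures are invented purely to reproduce the 2.1 in the example:

```python
def quick_ratio(current_assets: float, inventory: float,
                current_liabilities: float) -> float:
    """Return the quick (acid-test) ratio: liquid assets / current liabilities."""
    return (current_assets - inventory) / current_liabilities

# Invented figures chosen to reproduce the 2.1 from the example claim.
# The unstated warrant: a ratio above ~1.0 is conventionally read as
# "able to cover short-term debt without selling inventory."
ratio = quick_ratio(current_assets=520_000, inventory=100_000,
                    current_liabilities=200_000)
print(round(ratio, 1))  # 2.1
```

The number alone proves nothing; the warrant is the conventional reading of what that number means, and that is exactly the step students skip.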

Bridging the Zone of Proximal Development
This is where Vygotsky’s concept of the Zone of Proximal Development (ZPD) becomes especially relevant. The ZPD is the space between what students can do on their own and what they can do with guidance. To cross it, learners need support from a More Knowledgeable Other (MKO)—a teacher, peer, or scaffold.

Today, with the rise of conversational AI, that MKO does not have to be a person. AI tools like ChatGPT can act as digital MKOs, providing just-in-time support within a student's learning zone (Stojanov, 2023).
When students are learning how to build arguments in business contexts, AI can act as a scaffold, helping them spot where logic is implied but unspoken—and guiding them to make it explicit.

From Evidence to Explanation: AI as Reasoning Coach

Let’s return to our example:

“The company’s debt is healthy. The quick ratio is 2.1.”

A student using AI could be prompted to ask:
  • “What does a quick ratio measure?”
  • “What assumption am I making about financial health?”
  • “Would this ratio still be healthy in a different industry?”

This turns AI into a dialogic partner—not giving answers, but modeling the type of Socratic questioning that reveals gaps in logic (Sraveu & Moore, 2017). It becomes a digital tool for guided inquiry, helping students move from recall to analysis.

This also reflects Rosenblatt’s (1978) reader-response theory, where meaning is not transmitted but constructed through interaction. When students ask AI to simulate the role of the reader, they begin anticipating audience needs and adjusting their reasoning accordingly.

Toulmin Meets Vygotsky: Structuring Thought with Support
Toulmin’s model asks students to clarify:
  • Claim: What are you trying to prove?
  • Evidence: What backs it up?
  • Warrant: Why does the evidence support the claim?

But students do not always know how to generate a warrant on their own. That is where AI fits in.

By guiding students through these steps, AI serves the constructivist function of scaffolding abstract reasoning into teachable, repeatable steps (Schunk, 2019). When prompted correctly, it pushes students beyond surface-level explanation and toward the metacognitive processes that characterize expert reasoning.
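The Toulmin scaffold above can be sketched as a small data structure plus a question generator; this is an illustrative mock-up of the pattern, not a published tool, and the question wording is my own:

```python
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str          # What are you trying to prove?
    evidence: str       # What backs it up?
    warrant: str = ""   # Why does the evidence support the claim? (often missing)

def warrant_prompts(arg: Argument) -> list:
    """Socratic questions that push a writer to state the missing warrant."""
    return [
        f"What does the evidence ({arg.evidence!r}) actually measure?",
        f"What assumption connects that evidence to the claim ({arg.claim!r})?",
        "Would that assumption still hold in a different industry or context?",
    ]

arg = Argument(claim="The company's debt is healthy",
               evidence="The quick ratio is 2.1")
for question in warrant_prompts(arg):
    print("-", question)
```

Questions like these are what a student would paste into a chat session; the point is that the scaffold, not the AI, decides what gets asked.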

As noted in the research, “AI…can enhance writing by supporting students at different cognitive levels,” especially as they move from foundational knowledge toward higher-order thinking (Schunk, 2019; Smith, 2008).

Learning to Think Like Analysts, Not Just Writers
When students internalize the logic behind business metrics like the quick ratio, they move from writing to reasoning. They no longer just summarize—they analyze. They no longer just report—they persuade.

And with AI acting as a more knowledgeable other, they get to practice those moves in a low-stakes, high-feedback environment.

As Bruffee (1984) and Myers (1986) argue, tools that simulate collaborative dialogue—like AI—can extend students’ thinking and improve their capacity to reflect, revise, and clarify meaning.

Let’s Rethink What Writing Instruction Can Be
If we train AI to simulate not just grammar checkers but critical readers—to engage students in reasoning, not just revision—we can create writing classrooms where students learn how to think, not just what to write.
Toulmin gave us the model. Vygotsky gave us the pedagogy. AI can help us bring both to life.


References
Bruffee, K. A. (1984). Collaborative learning and the “conversation of mankind.” College English, 46(7), 635–652. https://doi.org/10.2307/376924
Myers, G. (1986). Reality, consensus, and reform in the rhetoric of composition teaching. College English, 48(2), 154–174. https://doi.org/10.2307/376397
Rosenblatt, L. M. (1978). The reader, the text, the poem: The transactional theory of the literary work. Southern Illinois University Press.
Schunk, D. H. (2019). Learning theories: An educational perspective (8th ed.). Pearson.
Smith, M. K. (2008). Bloom’s taxonomy. The encyclopedia of pedagogy and informal education. https://infed.org/mobi/blooms-taxonomy/
Sraveu, C., & Moore, K. (2017). The Socratic method and cognitive growth in learning communities. Oxford Academic Press.
Stojanov, G. (2023). The PAH continuum and digital learning: Pedagogy, andragogy, and heutagogy in the AI era. Journal of Educational Technology & Society, 26(1), 15–28.
Toulmin, S. (1958). The uses of argument. Cambridge University Press.

“AI Doesn’t Get What Matters in Student Writing”—But It Can

4/2/2025

One of the most common—and frankly, most frustrating—questions I hear from colleagues is this:

“How can I use AI to support my feedback without it just correcting commas or sounding robotic?”

And I get it. I’ve been there too.

You assign a paper. You’re staring down 50 drafts. You think, “Maybe ChatGPT can help.” And then the feedback it offers?
Polite, vague, utterly useless.

“Great flow.” “Consider being more clear.” “Nice tone.”

Meanwhile, the student has no thesis. No organization. No argument. But every sentence is grammatically correct, so the AI shrugs and gives it a thumbs up.

Why That Happens—and Why It Doesn’t Have to
Researchers like Chiu (2023) and Lo (2023) have helped me understand why this happens. Large language models like ChatGPT are trained on probabilities and patterns. They excel at fluency and correctness, but they do not “understand” argument, logic, or rhetorical intent. That’s not a design flaw—it’s just the reality of how these tools work.

Aebi et al. (2024) take this further by showing how AI feedback is often driven by what’s easiest to automate: grammar, syntax, and sentence-level cohesion. But these are just the surface features of writing. As educators, we care about what’s beneath: reasoning, structure, and purpose.

That’s where the mismatch begins.

Enter Heuristics: A Simple but Game-Changing Shift
What changed for me was reading work by Bonner et al. (2023). They argue that the solution isn’t to expect more from AI out of the box—but to give it better scaffolding. Specifically, they recommend using heuristics: simple, structured prompts that help guide the AI’s attention toward meaningful writing features.

Think about it like this: Instead of letting ChatGPT decide what matters, we tell it. We give it a checklist. We align it with our rubrics. We embed our intent into its responses.

Want it to look for a thesis? Ask: “Is there a clear claim in the opening paragraph?”

Want it to check paragraph relevance? Ask: “Does this paragraph support the main argument?”

Loem et al. (2023) tested this with GPT-generated feedback and found that when they gave the AI structured heuristics, the feedback became more aligned with how real instructors evaluate writing. It wasn’t just grammar-checking—it started to sound like a teacher.
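The move from "ask the AI for feedback" to "embed heuristics in the prompt" can be sketched in a few lines; the heuristic wording and function name here are illustrative, not a fixed recipe:

```python
# Illustrative heuristics -- swap in questions from your own rubric.
HEURISTICS = [
    "Is there a clear claim in the opening paragraph?",
    "Does each paragraph support the main argument?",
    "Is the evidence connected to the claim by an explicit warrant?",
]

def build_feedback_prompt(draft: str, heuristics=HEURISTICS) -> str:
    """Embed instructor heuristics in the prompt so the model reviews
    argument and structure, not just grammar."""
    checklist = "\n".join(f"{i}. {h}" for i, h in enumerate(heuristics, 1))
    return (
        "You are a writing instructor. Answer each checklist question "
        "about the draft below, quoting the passage that supports your "
        "answer, and suggest one concrete revision per question.\n\n"
        f"Checklist:\n{checklist}\n\nDraft:\n{draft}"
    )

prompt = build_feedback_prompt(
    "The company's debt is healthy. The quick ratio is 2.1.")
print(prompt)
```

The resulting prompt can be sent to any chat model; what matters is that our checklist, not the model's defaults, directs its attention.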

Why Rubrics Help AI Sound Like You
Another turning point for me came from Shin et al. (2024), who ran a comparative study of rubric-aligned AI feedback for L2 writers. Their findings? When AI tools are guided by writing rubrics—especially ones tied to organization, clarity, and argument—they produce feedback that mirrors what experienced instructors say.

It’s not just more accurate—it’s more useful. Students know what to fix and why.

Utami et al. (2023) underscore this point from the student side: when learners receive feedback tied to specific rubric language (“Your argument is clear but lacks supporting evidence”), they’re more likely to revise with purpose. It builds trust—and it builds skills.

Designing AI to Reflect Your Discipline
But let’s not stop at general writing instruction. One of the best pieces I’ve read lately is by González-Calatayud et al. (2024). They show how AI feedback gets better when it’s trained to recognize disciplinary norms. In a business communication class, for example, we want clarity, actionable tone, and proper formatting—not just “good grammar.”

And Kohnke et al. (2023) take it a step further. They argue that instructors should be designing their own heuristics, tailored to their field and their students. This makes AI a customizable assistant—not a generic copyeditor.

The Real Magic: Student Autonomy
All of this aligns beautifully with what Baidoo-Anu, Owusu-Agyeman, and Wood (2023) describe as a shift from automation to scaffolding. When feedback is structured—through rubrics, heuristics, or even guided AI prompts—students begin to internalize the criteria. They start revising more strategically. They reflect.

Ma and Slater (2015) capture this perfectly when they describe AI as a tool that can “trace the causal path” of rhetorical decisions—if we teach it to. And more importantly, if we teach students how to engage with it critically.

So Here’s My Takeaway
If AI feedback feels superficial, it’s not because AI is “bad.” It’s because it hasn’t been taught what matters.

But we can teach it. Or rather—we can design it.
  • Use rubrics.
  • Embed heuristics.
  • Align with your disciplinary goals.
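Those three moves can be combined in one small scaffold: rubric rows become review instructions the AI must echo back. The criterion names and descriptor language below are illustrative, not a published rubric:

```python
# Illustrative rubric -- replace criteria and descriptors with your own.
RUBRIC = {
    "argument": "states a clear, arguable claim and supports it with evidence",
    "organization": "orders paragraphs logically with explicit transitions",
    "disciplinary tone": "uses concise, actionable business phrasing",
}

def rubric_instructions(rubric: dict) -> str:
    """Turn rubric rows into review instructions an AI assistant must follow,
    so its feedback echoes the instructor's rubric language."""
    rows = "\n".join(f"- {criterion}: {descriptor}"
                     for criterion, descriptor in rubric.items())
    return ("For each criterion, state whether the draft meets it, quote the "
            "rubric language in your feedback, and point to one place in the "
            "draft where the writer can act on it:\n" + rows)

print(rubric_instructions(RUBRIC))
```

Because the feedback reuses the rubric's own wording, students hear the same criteria in the AI's response that they will see on the grade sheet.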
The best AI feedback systems won’t replace us. But they can echo us—scaling our intent, supporting revision, and freeing up time for the kinds of conversations only humans can have.
If you’ve been burned by AI before, I get it. But with the right structure, it can do more than correct. It can actually coach.


References
Aebi, A. A., Roca, F., & Morin, A. (2024). An application of fuzzy logic to evaluate AI in education: Toward multidimensional ethical frameworks. International Journal of Artificial Intelligence in Education, 34(1), 1–23.

Baidoo-Anu, D., Owusu-Agyeman, Y., & Wood, E. (2023). Education in the era of artificial intelligence: Charting new frontiers for ethical and pedagogical integration. AI and Ethics, 3, 1–14.

Chiu, T. K. F. (2023). The impact of generative AI on practices, policies, and research directions in education. In P. Kaliraj & T. Devi (Eds.), Industry 4.0 technologies for education: Transformative technologies and applications (pp. 145–160). Auerbach Publications.

González-Calatayud, V., Esteban-Millat, I., & Mas-Tur, A. (2024). Artificial intelligence for student writing feedback in higher education: A systematic review. Computers and Education: Artificial Intelligence, 5, 100186.

Kohnke, L., Zou, D., & Zhang, R. (2023). Exploring generative artificial intelligence in writing education: Teacher perspectives and user-defined heuristics. Education and Information Technologies, 28, 11973–11994.

Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410.

Loem, M., Kaneko, M., Takase, S., & Okazaki, N. (2023). Exploring effectiveness of GPT-3 in grammatical error correction: A study on performance and controllability in prompt-based methods. arXiv preprint arXiv:2305.18156.

Ma, H., & Slater, T. (2015). Using the developmental path of cause to bridge the gap between AWE scores and writing teachers’ evaluations. Writing & Pedagogy, 7(2–3), 395–422.

Utami, S. P. T., Andayani, Winarni, R., & Sumarwati. (2023). Utilization of artificial intelligence technology in an academic writing class: How do Indonesian students perceive? Contemporary Educational Technology, 15(4), ep450.

Zhu, C., Sun, M., Luo, J., Li, T., & Wang, M. (2023). How to harness the potential of ChatGPT in education? Knowledge Management & E-Learning, 15(2), 133–152.


                                                                Unless Noted, All Content Copyright 2024. A Smarter Way to Write                                                                          