One of the most common—and frankly, most frustrating—questions I hear from colleagues is this:
“How can I use AI to support my feedback without it just correcting commas or sounding robotic?”

And I get it. I’ve been there too. You assign a paper. You’re staring down 50 drafts. You think, “Maybe ChatGPT can help.” And then the feedback it offers? Polite, vague, utterly useless. “Great flow.” “Consider being more clear.” “Nice tone.” Meanwhile, the student has no thesis. No organization. No argument. But every sentence is grammatically correct, so the AI shrugs and gives it a thumbs up.

Why That Happens, and Why It Doesn’t Have To

Researchers like Chiu (2023) and Lo (2023) have helped me understand why this happens. Large language models like ChatGPT are trained on probabilities and patterns. They excel at fluency and correctness, but they do not “understand” argument, logic, or rhetorical intent. That’s not a design flaw; it’s just the reality of how these tools work.

Aebi et al. (2024) take this further by showing how AI feedback is often driven by what’s easiest to automate: grammar, syntax, and sentence-level cohesion. But these are just the surface features of writing. As educators, we care about what’s beneath: reasoning, structure, and purpose. That’s where the mismatch begins.

Enter Heuristics: A Simple but Game-Changing Shift

What changed for me was reading work by Bonner et al. (2023). They argue that the solution isn’t to expect more from AI out of the box, but to give it better scaffolding. Specifically, they recommend using heuristics: simple, structured prompts that guide the AI’s attention toward meaningful writing features.

Think about it like this: instead of letting ChatGPT decide what matters, we tell it. We give it a checklist. We align it with our rubrics. We embed our intent into its responses. Want it to look for a thesis? Ask: “Is there a clear claim in the opening paragraph?” Want it to check paragraph relevance? Ask: “Does this paragraph support the main argument?” (I sketch what this can look like in practice at the end of this section.)

Loem et al. (2023) tested this with GPT-generated feedback and found that when they gave the AI structured heuristics, the feedback became more aligned with how real instructors evaluate writing. It wasn’t just grammar-checking; it started to sound like a teacher.

Why Rubrics Help AI Sound Like You

Another turning point for me came from Shin et al. (2024), who ran a comparative study of rubric-aligned AI feedback for L2 writers. Their findings? When AI tools are guided by writing rubrics, especially ones tied to organization, clarity, and argument, they produce feedback that mirrors what experienced instructors say. It’s not just more accurate; it’s more useful. Students know what to fix and why.

Utami et al. (2023) underscore this point from the student side: when learners receive feedback tied to specific rubric language (“Your argument is clear but lacks supporting evidence”), they’re more likely to revise with purpose. It builds trust, and it builds skills.

Designing AI to Reflect Your Discipline

But let’s not stop at general writing instruction. One of the best pieces I’ve read lately is by González-Calatayud et al. (2024). They show how AI feedback gets better when it’s designed to recognize disciplinary norms. In a business communication class, for example, we want clarity, actionable tone, and proper formatting, not just “good grammar.”

And Kohnke et al. (2023) take it a step further. They argue that instructors should be designing their own heuristics, tailored to their field and their students. That makes AI a customizable assistant, not a generic copyeditor.
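To make that concrete, here is a minimal sketch of a heuristic-guided feedback prompt, written in Python against the OpenAI chat API. Everything specific in it is my own illustrative assumption, not anything prescribed by the studies above: the model name, the checklist wording, and the heuristic_feedback helper. Treat it as a starting template to rewrite around your own rubric, not a finished tool.

```python
# Minimal sketch: heuristic-guided feedback with the OpenAI Python client.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Heuristics drawn from the rubric. Each one points the model at a
# meaning-level feature so it cannot default to comma-correcting.
HEURISTICS = [
    "Is there a clear claim in the opening paragraph? Quote it, or say it is missing.",
    "Does each body paragraph support the main argument? Name any that do not.",
    "Is the evidence specific enough to persuade a skeptical reader?",
    "Does the conclusion follow from the argument, or merely restate the opening?",
]

def heuristic_feedback(draft: str) -> str:
    """Return rubric-aligned feedback on a student draft (illustrative helper)."""
    checklist = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(HEURISTICS))
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice; use whatever model you have access to
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a writing instructor. Do not comment on grammar, "
                    "spelling, or punctuation. Answer each checklist item in one "
                    "or two sentences, citing specific passages from the draft."
                ),
            },
            {
                "role": "user",
                "content": f"Checklist:\n{checklist}\n\nStudent draft:\n{draft}",
            },
        ],
    )
    return response.choices[0].message.content

sample_draft = "Social media affects students in many ways. It is very popular..."
print(heuristic_feedback(sample_draft))
```

Swap in your own rubric language and the same scaffold becomes discipline-specific, in the spirit of Kohnke et al. (2023): a business communication version might ask about actionable tone and document formatting instead of thesis and evidence.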
The Real Magic: Student Autonomy

All of this aligns beautifully with what Baidoo-Anu, Owusu-Agyeman, and Wood (2023) describe as a shift from automation to scaffolding. When feedback is structured, through rubrics, heuristics, or even guided AI prompts, students begin to internalize the criteria. They start revising more strategically. They reflect.

Ma and Slater (2015) capture this perfectly when they describe automated feedback as a tool that can “trace the causal path” of rhetorical decisions, if we teach it to. And more importantly, if we teach students how to engage with it critically.

So Here’s My Takeaway

If AI feedback feels superficial, it’s not because AI is “bad.” It’s because it hasn’t been taught what matters. But we can teach it. Or rather, we can design it.
If you’ve been burned by AI before, I get it. But with the right structure, it can do more than correct. It can actually coach.

References

Aebi, A. A., Roca, F., & Morin, A. (2024). An application of fuzzy logic to evaluate AI in education: Toward multidimensional ethical frameworks. International Journal of Artificial Intelligence in Education, 34(1), 1–23.

Baidoo-Anu, D., Owusu-Agyeman, Y., & Wood, E. (2023). Education in the era of artificial intelligence: Charting new frontiers for ethical and pedagogical integration. AI and Ethics, 3, 1–14.

Chiu, T. K. F. (2023). The impact of generative AI on practices, policies, and research directions in education. In P. Kaliraj & T. Devi (Eds.), Industry 4.0 technologies for education: Transformative technologies and applications (pp. 145–160). Auerbach Publications.

González-Calatayud, V., Esteban-Millat, I., & Mas-Tur, A. (2024). Artificial intelligence for student writing feedback in higher education: A systematic review. Computers and Education: Artificial Intelligence, 5, 100186.

Kohnke, L., Zou, D., & Zhang, R. (2023). Exploring generative artificial intelligence in writing education: Teacher perspectives and user-defined heuristics. Education and Information Technologies, 28, 11973–11994.

Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13(4), 410.

Loem, M., Kaneko, M., Takase, S., & Okazaki, N. (2023). Exploring effectiveness of GPT-3 in grammatical error correction: A study on performance and controllability in prompt-based methods. arXiv preprint arXiv:2305.18156.

Ma, H., & Slater, T. (2015). Using the developmental path of cause to bridge the gap between AWE scores and writing teachers’ evaluations. Writing & Pedagogy, 7(2–3), 395–422.

Utami, S. P. T., Andayani, Winarni, R., & Sumarwati. (2023). Utilization of artificial intelligence technology in an academic writing class: How do Indonesian students perceive? Contemporary Educational Technology, 15(4), ep450.

Zhu, C., Sun, M., Luo, J., Li, T., & Wang, M. (2023). How to harness the potential of ChatGPT in education? Knowledge Management & E-Learning, 15(2), 133–152.