This week’s readings made me rethink what people actually mean when they say “AI.” Annette Vee’s argument stood out to me because it frames AI as something we collaborate with rather than something that replaces us. In that sense, using AI is not that different from using spell-check, Google, or grammar tools. It becomes part of the writing process, but the human is still responsible for the ideas, structure, and meaning. AI can assist, but it does not take over authorship.
At the same time, Josh Sharp challenges the way we even label these systems. He argues that calling them “AI” is misleading because they do not think or understand language. Instead, they predict likely word sequences based on patterns in massive datasets. The lecture slides reinforce this idea by showing that these systems generate language statistically rather than cognitively. This distinction matters because AI outputs can sound polished and confident while still being inaccurate or misleading.
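To make that "statistical, not cognitive" point concrete for myself, here is a toy sketch (my own illustration, not from any of the readings) of next-word prediction. It counts which words follow which in a tiny corpus and then generates text by sampling the statistically likely next word, with no notion of meaning or truth anywhere in the process:

```python
import random
from collections import defaultdict, Counter

# A tiny stand-in for the "massive datasets" Sharp describes.
corpus = (
    "the model predicts the next word "
    "the model learns patterns from text "
    "the next word follows the patterns"
).split()

# Count which words follow which word: a crude stand-in for training.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        # Sample in proportion to observed frequency: pure statistics.
        word = random.choices(list(followers), weights=followers.values())[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Even this trivial model produces fluent-looking fragments while "knowing" nothing, which is exactly why large-scale versions can sound confident and still be wrong.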
This creates an important tension. On one hand, AI tools can be genuinely useful for brainstorming, organizing ideas, and drafting text quickly. On the other hand, their tendency to hallucinate, producing claims that sound plausible but are simply false, shows that they cannot be treated as trustworthy sources of knowledge on their own. The Georgia Tech article adds another layer by connecting AI to hyperreality, the condition in which simulations become indistinguishable from the real thing. As AI-generated images, videos, and writing become more realistic, it gets harder to tell what is human-made and what is machine-generated. Overall, the key issue is not whether AI can write, but how humans should use these systems responsibly while staying aware of their limits.