Teachers are using AI to grade essays. But some experts are raising ethical concerns

By News Room

When Diane Gayeski, a professor of strategic communications at Ithaca College, receives an essay from one of her students, she runs part of it through ChatGPT, asking the AI tool to critique and suggest how to improve the work.

“The best way to look at AI for grading is as a teaching assistant or research assistant who might do a first pass … and it does a pretty good job at that,” she told CNN.

She shows her students the feedback from ChatGPT and how the tool rewrote their essay. “I’ll share what I think about their intro, too, and we’ll talk about it,” she said.

Gayeski requires her class of 15 students to do the same: run their draft through ChatGPT to see where they can make improvements.

The emergence of AI is reshaping education, presenting real benefits, such as automating some tasks to free up time for more personalized instruction, but also some big hazards, from issues around accuracy and plagiarism to maintaining integrity.

Both teachers and students are using the new technology. A report by strategy consulting firm Tyton Partners, sponsored by plagiarism detection platform Turnitin, found that half of college students used AI tools in fall 2023. Fewer faculty members used AI, but the share grew to 22% in fall 2023, up from 9% in spring 2023.

Teachers are turning to AI tools and platforms — such as ChatGPT, Writable, Grammarly and EssayGrader — to assist with grading papers, writing feedback, developing lesson plans and creating assignments. They’re also using the burgeoning tools to create quizzes, polls, videos and interactives to “up the ante” for what’s expected in the classroom.

Students, on the other hand, are leaning on tools such as ChatGPT and Microsoft Copilot — which is built into Word, PowerPoint and other products.

But while some schools have formed policies on how students can or can’t use AI for schoolwork, many do not have guidelines for teachers. The practice of using AI for writing feedback or grading assignments also raises ethical considerations. And parents and students who are already spending hundreds of thousands of dollars on tuition may wonder if an endless feedback loop of AI-generated and AI-graded content in college is worth the time and money.

“If teachers use it solely to grade, and the students are using it solely to produce a final product, it’s not going to work,” said Gayeski.

The time and place for AI

How teachers use AI depends on many factors, particularly when it comes to grading, according to Dorothy Leidner, a professor of business ethics at the University of Virginia. If the material being tested in a large class is largely declarative knowledge — so there is a clear right and wrong — then a teacher grading using the AI “might be even superior to human grading,” she told CNN.

AI would allow teachers to grade papers faster and more consistently and avoid fatigue or boredom, she said.

But Leidner noted that when it comes to smaller classes or assignments with less definitive answers, grading should remain personalized so teachers can provide more specific feedback, get to know a student’s work and, therefore, track their progress over time.

“A teacher should be responsible for grading but can give some responsibility to the AI,” she said.

She suggested teachers use AI to look at certain metrics — such as structure, language use and grammar — and give a numerical score on those figures. But teachers should then grade students’ work themselves when looking for novelty, creativity and depth of insight.

Leslie Layne, who has been teaching ChatGPT best practices in her writing workshop at the University of Lynchburg in Virginia, said she sees the advantages for teachers but also sees drawbacks.

“Using feedback that is not truly from me seems like it is shortchanging that relationship a little,” she said.

She also sees uploading a student’s work to ChatGPT as a “huge ethical consideration” and potentially a breach of their intellectual property. AI tools like ChatGPT use such entries to train their algorithms on everything from patterns of speech to how to make sentences to facts and figures.

Ethics professor Leidner agreed, saying this should particularly be avoided for doctoral dissertations and master’s theses because the student might hope to publish the work.

“It would not be right to upload the material into the AI without making the students aware of this in advance,” she said. “And maybe students should need to provide consent.”

Some teachers are leaning on software called Writable that uses ChatGPT to help grade papers but is “tokenized,” so essays do not include any personal information and are not shared directly with the system.

Teachers upload essays to the platform, recently acquired by education company Houghton Mifflin Harcourt, which then provides suggested feedback for students.

Other educators are using platforms such as Turnitin that boast plagiarism detection tools to help teachers identify when assignments are written by ChatGPT and other AI. But these types of detection tools are far from foolproof; OpenAI shut down its own AI-detection tool last year due to what the company called a “low rate of accuracy.”

Setting standards

Some schools are actively working on policies for both teachers and students. Alan Reid, a research associate in the Center for Research and Reform in Education (CRRE) at Johns Hopkins University, said he recently spent time working with K-12 educators who use GPT tools to create end-of-quarter personalized comments on report cards.

But like Layne, he acknowledged the technology’s ability to write insightful feedback remains “limited.”

He currently sits on a committee at his college that’s authoring an AI policy for faculty and staff; discussions are ongoing, not just for how teachers use AI in the classroom but how it’s used by educators in general.

He acknowledges schools are having conversations about using generative AI tools to create things like promotion and tenure files, performance reviews, and job postings.

Nicolas Frank, an associate professor of philosophy at the University of Lynchburg, said universities and professors need to be on the same page when it comes to policies but also need to remain cautious.

“There is a lot of danger in making policies about AI at this stage,” he said.

He worries it’s still too early to understand how AI will be integrated into everyday life. He is also concerned that some administrators who don’t teach in classrooms may craft policy that misses nuances of instruction.

“That may create a danger of oversimplifying the problems with AI use in grading and instruction,” he said. “Oversimplification is how bad policy is made.”

To start, he said educators can identify clear abuses of AI and begin policy-making around those.

Leidner, meanwhile, said universities can be very high level with their guidance, such as making transparency a priority — so students have a right to know when AI is being used to grade their work — and identifying what types of information should never be uploaded into an AI or asked of an AI.

But she said universities must also be open to “regularly reevaluating as the technology and uses evolve.”
