Plagiarism is, by its nature, a complex and nuanced topic. The word covers everything from poorly paraphrasing a few sentences to copying and pasting a work wholesale.
This nuance can make discussing plagiarism very difficult, especially in political environments, where nuance is easily lost. However, that nuance is important because it dictates how we should respond to plagiarism.
This problem is as old as plagiarism itself. It gets even more complicated when you consider the varying standards of citation, how attitudes about plagiarism have shifted over time, and how technology has changed the way we share information.
That said, we are currently facing a similar complexity crisis regarding the use of AI. Generative AI is not a monolith. There are myriad systems that work with (or replace) human authors in various ways.
To be clear, this technology is so new that the social standards for what is and is not acceptable are still being sorted out. It may be decades before we fully understand what is and is not acceptable with AI.
In the meantime, it’s difficult enough to discuss what it means to “Use AI” or for something to be “AI Generated.” As AI finds its way into more spaces, deciding how large a role AI played in a work’s creation becomes increasingly challenging.
To expedite this, I’ve decided to create a rough “AI Gradient.” The goal isn’t to make an ultimate determination of what is “right” or “wrong”; it is simply to offer a unified way to talk about the use of AI.
To that end, here is the gradient of AI usage as I see it today.
The AI Usage Gradient
Note: For this article, I’m focusing on writing. However, you can likely define similar tiers for other kinds of generative AI.
1. Entirely Human Written: A work is completely human-written without automated assistance, including spell-checking or grammar correction. Humans do all the writing and editing. This is extremely rare today.
2. Grammar and Spell Checking: Automated tools check for grammar and spelling errors. However, the corrections are limited to fixing words, adding/removing punctuation, and minor sentence changes. This is what most modern but non-AI word processors do.
3. Sentence Rewriting: At this level, automated tools are used to rewrite sentences or passages to improve clarity or fix grammatical errors. Typically, this is done by rearranging the human-written words without changing the passage’s meaning. Advanced grammar checkers, such as Grammarly, commonly do this, though they do not consider it an “AI” feature. This can also include some use of autocomplete or predictive text, as long as it is used solely as a typing aid, not a writing aid.
4. Paragraph or Section Rewriting: Now automated tools rewrite lengthier passages for clarity, brevity, or to change the tone. This is also a feature of tools like Grammarly but is more commonly advertised as an AI feature. Though the automated tool starts with human words, the output may be very different. However, the intent is still to retain the original meaning.
5. Using AI to Write Short Passages: Up to this point, AI has been used only in an editing capacity. Here, however, authors, while still producing most of the text, use generative AI to write certain passages. This includes AI suggestions that go beyond typing aids and become, instead, writing aids.
6. Using AI to Write Paragraphs: This level features more mingling of AI and human writing. It may involve using AI to “paraphrase” outside text or to generate large blocks of writing. Much of the work may still be human-written, but large chunks were generated using AI.
7. Using AI to Write Large Sections: In longer works, AI can be used to write entire sections. Those sections may be human-edited, but the original verbiage and meaning were generated using AI. Other portions are still human-written.
8. AI-Generated Work with Heavy Human Editing: Here, the roles are reversed: the AI is the primary author, and the human takes on the role of editor. However, the editing is significant, correcting errors and issues with the AI-generated text.
9. AI-Generated Work with Limited Human Editing: The work is almost entirely AI-generated, with only limited human editing. That editing is often done either to fix glaring issues with the AI’s output or to mask the fact that the work is AI-generated.
10. Entirely AI-Generated: Finally, the work is entirely AI-generated with no human editing. This is purely a work copied and pasted from an AI system.
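For readers who prefer something more concrete, the gradient can be modeled as a simple ordered scale. The sketch below is purely illustrative; the level names and the `within_policy` helper are my own invention, not part of any standard or tool. It simply shows how the tiers, being ordered, could support the kind of policy ceiling discussed later (a teacher saying “no higher than level four”):

```python
from enum import IntEnum

class AIUsageLevel(IntEnum):
    """The ten levels of the AI Usage Gradient, as an ordered scale."""
    ENTIRELY_HUMAN_WRITTEN = 1
    GRAMMAR_AND_SPELL_CHECKING = 2
    SENTENCE_REWRITING = 3
    PARAGRAPH_OR_SECTION_REWRITING = 4
    AI_WRITES_SHORT_PASSAGES = 5
    AI_WRITES_PARAGRAPHS = 6
    AI_WRITES_LARGE_SECTIONS = 7
    AI_GENERATED_HEAVY_HUMAN_EDITING = 8
    AI_GENERATED_LIMITED_HUMAN_EDITING = 9
    ENTIRELY_AI_GENERATED = 10

def within_policy(work_level: AIUsageLevel, max_allowed: AIUsageLevel) -> bool:
    """Check a work against a policy ceiling, e.g. an assignment that
    permits editing tools but caps usage at a given level."""
    return work_level <= max_allowed

# Example: sentence rewriting (3) is fine under a level-4 ceiling.
print(within_policy(AIUsageLevel.SENTENCE_REWRITING,
                    AIUsageLevel.PARAGRAPH_OR_SECTION_REWRITING))  # True
```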
Limitations and Issues
To be clear, this is not a precise gradient. Many of the levels overlap, and there’s a great deal of room to debate where a particular case falls. The goal isn’t to be definitive but to have a way to discuss these cases.
That said, there are a few things that I deliberately omitted.
First, it mentions nothing about using an AI system to generate ideas for a work or to aid with research. Ideas can come from almost anywhere, and using an AI to brainstorm isn’t much different from searching for ideas on the internet or asking another human. As long as the expression of that idea was done by a human, I don’t see using an AI for idea generation as problematic.
Nonetheless, some find that problematic. For those, consider this a gradient focused on the expression of the idea, not the idea itself.
Research is similar. Using an AI to find sources isn’t much different from using a search engine. Humans still need to examine, evaluate, and interpret those sources. It is that evaluation and interpretation that matter most. If the author does those tasks well, how they found the sources is less important.
Ultimately, this is meant as a starting point, a place to begin the conversation. If there ever is a definitive gradient for AI usage, this will not be it. Most likely, this will need to change and adapt over time.
My Thoughts
Looking at the gradient, I don’t think many people will have issues with one through three. The technology for much of this existed well before generative AI. All AI has done is improve the quality of those editing tools.
Things get much murkier with four and five. At some point, AI goes from being a writing aid to replacing the human author. It stops sitting in the editor’s chair and becomes a co-author of the piece.
I think this is where most people would begin to have an issue with AI, especially in academic settings, and where AI detectors would likely begin to flag the work.
Six through ten are where the AI goes from being a co-author to being the primary author. These levels make the human the editor, not the author, of the piece and, for that reason, raise serious questions of authorship.
However, as I said, the social norms around AI are not set in stone. I expect we’ll become more comfortable with the middle of the spectrum as time passes.
Whether there will be widespread acceptance of fully AI-generated work is a different matter. But such acceptance is unlikely if the work is presented as a human creation.
Bottom Line
The point isn’t to draw a bright line where two is good or seven is bad. The goal is to create a common gradient for discussing these matters.
For example, a teacher or editor could specify that a particular assignment may go as high as a three or a four on this scale, but no higher. The gradient can also be used when evaluating works suspected of having been written with the aid of AI.
I don’t want to make a judgment with this gradient. Instead, I want to create a tool for discussing how we make the judgments we make.
When discussing AI writing, we all need to be on the same page. We can’t talk about these issues if we are all working from different definitions.
In the end, my hope is that this helps with talking about AI, not necessarily casting judgment on it.
We have to reach our own answers on what is and is not acceptable in this space. But, to get there, we have to get better at talking about it.