I have been playing around with ChatGPT. There are many things it does well; I have asked it for advice, for example, about specific incidents, and it has given me options I had not considered.
ChatGPT and I have also written poetry. I quickly figured out, after two sessions of free verse, that it has a formula for that, so free verse from ChatGPT ends up looking much the same after a while. Some of it makes sense, some not so much.
It can't do more structured poetry forms, like sestinas or villanelles, with any real accuracy. It loses the syllable counts, doesn't always make sense, and sometimes seems so out of whack that, aside from using it for end-rhyme ideas, I can't see ChatGPT being much use there.
It can take a plot outline and turn it into a story, but is the story then mine? I don't think so, not in a moral sense. I saved what it wrote, but I put it in a file with the idea that one day I may take it back out and totally rewrite it. Then I started to wonder: what if there's a good line or two in the story that I leave? Is that okay?
My internal jury is still out on that one. I read an article the other day that said at the end something like, "This article was created by an AI tool and thoroughly reviewed by a human." I thought that the disclaimer was good.
I have used AI for images on my blog. I generally mark them as such, at least the first time. The logo I use for Saturday 9 was AI generated. I noted it the first time but not in subsequent uses.
I have also used AI for Thursday 13. Usually, I do this by asking for a list of 25 things on a specific topic; then I choose the 13 I want and add information to them. I haven't been noting the use of AI on these, but I think that in the future I will put up a disclaimer.
What I would not do is what a friend of mine said she was doing. She is letting AI write her articles entirely and then selling them. I think personal use on a blog, especially one like mine that has no ads, is not a big deal, but if I am selling my work, then it needs to be my work. She is going so far as to tape an interview, put the recording through a transcription program, and then run the transcript through AI to spit out an article.
In the first place, I would never even have thought to do that, and in the second place, I don't think that is a moral way to use AI. She is representing this as her work when it is not. All she's doing is letting AI do the work and turning it in.
I have started reading articles about AI and morality because this is of great concern to me. This article was a good start.
How are teachers handling AI? I don't know, as I am no longer in a classroom environment. I do know that the one time I played with ChatGPT and asked it to write an article for me, it made up quotes, non-existent universities, and so on, so I would never trust AI for something like that.
And now, just for the heck of it, I'm going to ask ChatGPT about the morality of using AI in an article I am selling. Let's see what it says:
From ChatGPT:
The morality of using AI to write an article that you then sell as your own work depends on several factors, including transparency, context, intent, and any agreements or expectations involved. Here are the key ethical considerations:
1. Disclosure and Transparency
- Ethically Questionable: Claiming full authorship of something entirely written by AI without disclosing its role may be considered dishonest, especially if others (editors, clients, readers) expect human authorship.
- More Acceptable: If the use of AI is disclosed—either explicitly or implicitly (e.g., in a terms-of-service agreement or contributor note)—the ethical concerns are reduced.
2. Contractual and Legal Obligations
- If you're writing under contract or for a platform that expects original human-created work, using AI without disclosure may violate terms and be both unethical and legally problematic.
- Some publications, especially in journalism or academia, have strict rules about ghostwriting or non-human authorship.
3. Creative Integrity and Professional Identity
- Selling AI-generated work as your own might undermine your credibility if discovered—especially in fields where creativity, voice, and critical thinking are core values.
- If you're marketing yourself as a writer but letting AI do all the writing, there's a mismatch between claim and reality.
4. Value Provided
- If you significantly revise, guide, or curate the AI's output (e.g., prompt engineering, structural editing, style tuning), you're still applying your skill and judgment—this is often viewed as collaboration rather than deception.
- If you're simply passing along raw AI content, it's harder to justify claiming authorship, particularly in moral or professional terms.
Summary:
Using AI to help write or co-write is increasingly common and can be ethically sound if you:
- Add your own input or editing;
- Are honest when required by contract or context;
- Don't mislead others about the nature of the work.
However, presenting entirely AI-generated content as your own original human creation, especially for profit, without any disclosure, is ethically shaky—more so in contexts where human authorship is assumed or valued.
(No disclaimer because it's in the post that I used ChatGPT.)
I recently wrote a post about AI, and I was amazed when I asked ChatGPT to write a post in the style of the Plastic Mancunian about chess being rubbish. Now that I am retired, I intend to look deeper into it, although I never really used it in my job as a software engineer. If you fancy reading it, the post is https://plasmanc.blogspot.com/2024/10/the-fake-plastic-mancunian.html
:o)
Cheers
PM
As a teacher, I have spent quite a bit of time modeling appropriate usage of AI, particularly for help with scaffolding writing and creating an outline. I even included a piece about the use of AI in writing class assignments in my academic honesty policy. It's convenient, I'll give it that.
I don't trust AI at all.