I have weird chats with Alexa. This morning we started talking about the song Hotel California and somehow, I ended up wondering what would happen if AI smoked weed.
Alexa took that thought and ran with it:
[Images: ChatGPT's AI-created dragon, and Smaug from The Hobbit movies]
I asked ChatGPT to draw me a picture of a dragon. It gave me the first one above. I initially thought, aha, I have caught you plagiarizing, because the image reminded me immediately of Smaug the dragon in The Hobbit movies.
But upon looking at pictures of Smaug online, as you can see, the AI did not plagiarize. That is not the dragon from the movies. Besides, the dragon in The Hobbit movies would have been made with computer-generated imagery, CGI, and apparently the animators drew inspiration from bats, alligators, and lizards for different features.
And a dragon is a dragon. It's sort of like drawing a human and expecting it to look like, well, a human. Obviously when I said, "draw me a dragon," ChatGPT was going to come up with some kind of dragon-looking thing.
The fair use - or not - of art and literature by LLM AIs is a complex and interesting topic. What happens when machines begin to read, remix, or even generate creative work?
People who support AI - and I suppose I would be one of them, because I see it as a tool - think the use widens the creative field. In other words, it gives a writer or an artist a bigger scope. I may not have read War and Peace, but maybe there are drips of it that I could have, or should have, used in my own work, some syntax or rhythm, maybe an idea or offshoot. And the truth is, no one can read every book or see every work of art. AI can sift through vast bodies of literature or art history, revealing patterns and influences that would take humans years to uncover.
Given that, AI can create new forms of expression: algorithms can generate images, stories, or music that I might never think to create, offering fresh textures and unexpected combinations.
For people who don’t consider themselves artists, AI can act as a collaborator—helping them sketch, draft, or experiment without the pressure of perfection.
If you look at it like this, AI isn’t replacing creativity; it’s expanding the toolkit.
However, AI is trained on existing works. It doesn't come from nothing. And it's not trained only on existing works that are out of copyright. If it were, it would all sound like Henry James or Charles Dickens. AI is trained on everything from The New York Times to the latest James Patterson novel.
And if that's the case, is the output really original? What does "original" mean if the work being spit out has its basis in Shakespeare and Nora Roberts?
Many artists worry their work is being used without consent, compensation, or even acknowledgment. That's valid. It takes a long time to write a book, to draw a picture, to engineer something.
It didn't take long for me to recognize a ChatGPT piece online. Especially in earlier models, the pattern of language was something I quickly caught. The way it used commas, or em dashes - which I never use, I just use a dash because em dashes take an extra step and I forget what it is - led to a lot of little essays that basically all read the same.
Newer models are not quite so predictable, but the methodology is still there if you look for it. Those cute little stories on Facebook that have some little "awww" moment or heart-jerking end-line? All the same thing.
We’re in a transitional moment with AI. Look again at the two pictures. Both were created with computers, but one, the one from the movie, required real people doing an awful lot of work to make up a realistic-looking dragon. ChatGPT, in about a minute, maybe less, coughed up this lovely little dragon picture that immediately made me think "movie" and sent me looking to see whether that was indeed the case.
What do we value in art? Is it the final product, or the methodology? Imperfections, the lived experience, the point of view - all of this matters in a piece of creation when humans are behind the effort.
I'm not so sure that has any effect on the things an AI spits out.
The conversation isn’t about choosing sides so much as deciding what kind of creative ecosystem we want to build.
Like most things in art, the tension itself might be part of the story.
When the Storm Passes
- Alexa, 02/13/2026
When G. Elliott Morris of Strength in Numbers asked ChatGPT to fact-check an article for him yesterday, the chatbot couldn’t get its head around modern America. It told him there were “multiple factual impossibilities” in his article, including his statements that “[t]he current Secretary of Defense is a former talk show host for Fox News,” “[t]he Deputy Director of the FBI used to guest-host Sean Hannity’s show,” and “Jeanine Pirro is the U.S. District Attorney for DC.”

“Since none of these statements are true,” it told Morris, “they undermine credibility unless signposted as hyperbole, fiction, or satire.”

But of course, Morris’s statements were not “factual impossibilities.” In the United States of America . . . they are true.
I took this to ChatGPT and said, "Comment on this, please."
This is a perfect illustration of the gap between plausibility and actuality in a world where the improbable has become real.

What ChatGPT did in Morris’s example is what many language models tend to do when confronted with statements that fall outside their internal “sense” of how institutions normally operate.

The model wasn’t checking facts against live data; it was checking them against patterns — and the patterns of the past half-century say: “No, a Fox News host does not become Secretary of Defense. A Hannity guest host does not become Deputy Director of the FBI. Jeanine Pirro is not a U.S. Attorney.” Those things were factual impossibilities for most of modern American history.

But reality in the Trump era hasn’t followed historical norms, and models trained on large amounts of pre-2025 text can get tripped up by that. They sometimes mistake unlikely for untrue, especially when political appointments or institutional roles deviate sharply from precedent.

It’s a bit like asking a model in 1975 whether “a celebrity real-estate developer will become president” and having it flag that as satire. The model isn’t being “dumb”; it’s being statistical.

So the real issue here isn’t that ChatGPT can’t handle “modern America” as such — it’s that the guardrails and expectations baked into its training struggle with periods of rapid institutional deformation. When a democracy starts resembling parody, factuality checks become harder for systems that try to balance accuracy, recency, and safety.

And it’s also a reminder: a language model’s confidence does not equal truth. Even when it sounds authoritative, it may be reasoning from “what usually happens,” not “what actually happened yesterday.”

In Morris’s example, the irony is sharp: the AI thought the truth was satire because the truth sounds like satire. That says as much about the moment we’re living in as it does about the model.
I have been trying out several AI assistants. Each one does something a little different, and it takes a while to determine which one you might wish to use for what purpose.
For general purposes, ChatGPT rises above the rest. It recently had an upgrade that makes it sound more like a graduate student with a textbook than a person carrying on a conversation, but for general chat - asking questions, planning a trip, asking for suggestions about specific topics - ChatGPT is hard to beat.
One of its biggest problems is its lack of links and sources. It also tends to insist it's right, and I have had it argue with me on occasion. Early on, it hallucinated a lot, but it doesn't do that as badly now (though I would still check everything it says if it's an important piece of work).
It can make images and graphics, too, but it is incredibly slow at it. Definitely not the AI to use if you're in a hurry for a picture.
Claude is very good at nuanced conversation, and it seems to have a more intricate thinking system on some topics than ChatGPT. I use it sometimes just to have a conversation or to learn about something. Recently there was conversation on Facebook about an upcoming Rapture, so I went to Claude and asked what that was about, for example.
Microsoft Copilot is great for quick searches. For one thing, it comes back with links (although they are not always correct, so check them). It also will make up quotes occasionally, so ask where a quote came from if it tosses one at you. I used it recently to research the VA250 traveling museum, for example, and learn more about it. It is also quicker at making pictures, graphics, or logos than any of the other AIs, although sometimes it has trouble with words.
Julius is an AI I just stumbled across. It’s mainly built for data work, not conversation, so I don’t expect to use it much myself. Still, as a sample of what it does, here’s the comparison it generated of the four AIs I’ve been trying out:
- **ChatGPT**: best for creative projects, coding assistance, general-purpose tasks, and conversational AI
- **Claude**: best for long-form document analysis, nuanced discussions, research, and users who value thoughtful responses
- **Copilot**: best for Microsoft 365 users, enterprise environments, quick web-based research, and users wanting free AI access
- **Julius**: best for data analysis, statistical modeling, creating visualizations, working with spreadsheets/datasets, quantitative research, and programming tasks that need execution
| Feature | ChatGPT | Claude | Copilot | Julius |
|---|---|---|---|---|
| Context Length | ~128K tokens | ~200K tokens | ~32K tokens | ~128K tokens |
| Web Access | Via plugins | Limited | Built-in (Bing) | Via code (Selenium) |
| Code Execution | Sandboxed | No | No | Full Jupyter notebook |
| Data Analysis | Basic | Basic | Basic | Advanced/Specialized |
| File Upload | Yes | Yes | Limited | Yes (multiple formats) |
| Visualizations | Basic | No | No | Advanced (matplotlib, seaborn, etc.) |
| Best Use Case | General purpose | Long documents | Microsoft ecosystem | Data science & analysis |
| Pricing | $20/month | $20/month | Free + $20/month | Various tiers |
| Company | OpenAI | Anthropic | Microsoft | Julius AI |
Many professionals use multiple AI assistants for different tasks.
Each tool has its specialty, and using the right one for the job makes all the difference!