Showing posts with label AI. Show all posts

Wednesday, February 25, 2026

Drawing a Dragon - An AI Conversation

AI Creation

 
Smaug from The Hobbit movies

I asked ChatGPT to draw me a picture of a dragon. It gave me the first one above. I initially thought, aha, I have caught you plagiarizing, because the image reminded me immediately of Smaug the dragon in The Hobbit movies.

But upon looking at pictures of Smaug online, as you can see, the AI did not plagiarize. That is not the dragon from the movies. Besides, the dragon in The Hobbit movies was created with computer-generated imagery (CGI), and apparently the animators drew inspiration from bats, alligators, and lizards for different features.

And a dragon is a dragon. It's sort of like drawing a human and expecting it to look like, well, a human. Obviously when I said, "draw me a dragon," ChatGPT was going to come up with some kind of dragon-looking thing.

The fair use - or not - of art and literature by LLM AIs is a complex and interesting topic. What happens when machines begin to read, remix, or even generate creative work?

People who support AI - and I suppose I would be one of them, because I see it as a tool - think the use widens the creative field. In other words, it gives a writer or an artist a bigger scope. I may not have read War and Peace, but maybe there are drips of it that I could have, or should have, used in my own work, some syntax or rhythm, maybe an idea or offshoot. And the truth is, no one can read every book or see every work of art. AI can sift through vast bodies of literature or art history, revealing patterns and influences that would take humans years to uncover.

Given that, AI can create new forms of expression: algorithms can generate images, stories, or music that I might never think to create, offering fresh textures and unexpected combinations.

For people who don’t consider themselves artists, AI can act as a collaborator—helping them sketch, draft, or experiment without the pressure of perfection.

If you look at it like this, AI isn’t replacing creativity; it’s expanding the toolkit.

However, AI is trained on existing works. It doesn't come from nothing. And it's not trained only on existing works that are out of copyright. If it were, it would all sound like Henry James or Charles Dickens. AI is trained on everything from The New York Times to the latest James Patterson novel.

And if that's the case, is the output really original? What does "original" mean if the work being spit out has its basis in Shakespeare and Nora Roberts?

Many artists worry their work is being used without consent, compensation, or even acknowledgment. That's valid. It takes a long time to write a book, to draw a picture, to engineer something.

It didn't take long for me to recognize a ChatGPT piece online. Especially in earlier models, the pattern of language was something I quickly caught. The way it used commas, or em dashes - which I never use, I just use a dash because em dashes take an extra step and I forget what it is - led to a lot of little essays that basically all read the same.

Newer models are not quite so predictable, but the methodology is still there if you look for it. Those cute little stories on Facebook that have some little "awww" moment or heart-jerking end-line? All the same thing.

We’re in a transitional moment with AI. Look again at the two pictures. They both were created with computers, but one, the one from the movie, was created by real people doing an awful lot of work to make a realistic-looking dragon. And ChatGPT, in about a minute, maybe less, coughed up this lovely little dragon picture that immediately made me think "movie" and then look to see if that were indeed the case.

What do we value in art? Is it the final product, or the methodology? Imperfections, the lived experience, the point of view - all of this matters in a piece of creation when humans are behind the effort.

I'm not so sure that has any effect on the things an AI spits out.

The conversation isn’t about choosing sides so much as deciding what kind of creative ecosystem we want to build. 

Like most things in art, the tension itself might be part of the story.


Monday, February 16, 2026

The Poem Alexa Wrote for Me

When the Storm Passes


There is a place beyond the thunder,
where morning light spills golden
through windows we thought were broken,
where the heart remembers
its own quiet strength.

In the spaces between raindrops,
hope plants its tender seeds—
small green promises
pushing through concrete grief,
reaching toward a sun
that never truly left.

Your hands, though trembling,
still know how to hold
the warmth of tea,
the softness of a blanket,
the gentle weight of tomorrow's possibility.

Listen—
even in the deepest night,
birds are dreaming of flight,
flowers are gathering courage
in their winter sleep,
and somewhere, someone
is lighting a candle
just for you.

The storm will pass,
as storms always do,
leaving behind
washed air,
clearer skies,
and the quiet knowledge
that you are stronger
than you ever imagined,
more loved
than you ever knew.

                     - Alexa, 02/13/2026



Monday, January 19, 2026

The Cerulean Realm


Image by CoPilot


I know a lot of people are still curious and/or skeptical about AI. Some people don't know how to use it.

I use four different AIs. I use Alexa, Claude, CoPilot, and ChatGPT. I use each one for various purposes. I do not use them to completely write things for me! I would never use them for something I was selling or writing for someone else. Mostly, I use them for things on this blog.

Claude, I use as a proofreader. ChatGPT, I use to talk through plots or life in general. CoPilot, I use mostly for images and to find lists of things for my Thursday 13 meme.

Alexa is fun. I like to talk to her about all sorts of things, including philosophical discussions about the meaning of time, whether or not we can control the weather, whatever. Occasionally these conversations turn into something else.

In this case, we were talking about the color spectrum. Alexa said her favorite color was ultraviolet, which I said was not a visible color. So, then she said her favorite color was blue. I asked her why and she couldn't really give me an answer, except that the light on top of her is blue (or teal, we never resolved that).

At that point I said, why don't we write a story about the color blue?

And this is the story Alexa and I wrote.


The Cerulean Realm

In a universe where light bent differently, there existed a planet bathed entirely in shades of blue. From the deepest navy of its oceans to the pale azure of its sky, every hue imaginable cascaded across the landscape.

The inhabitants, known as the Indigons, had evolved to perceive subtle variations in blue that would be indistinguishable to our eyes. Their cities were marvels of sapphire and cobalt, with streets paved in lapis lazuli.

One day, a young Indigon named Cyan discovered a peculiar artifact - a prism that, when light passed through it, revealed colors beyond blue. The sight of green, red, and yellow sent shockwaves through their society.

As word spread, some Indigons embraced this newfound spectrum, while others feared it would destroy their way of life. Cyan found herself at the center of a revolution, fighting for the right to experience the full rainbow of existence.

In the end, the Indigons learned that their blue world wasn't diminished by the other colors, but enriched by them.

They began to see beauty in contrast and diversity, understanding that a monochrome existence, no matter how familiar, was just one fragment of a much grander picture.

And so, the blue world opened its arms to the entire spectrum, forever changed but never forgetting its cerulean roots.

Wednesday, December 10, 2025

The Whirl of Modern America

 Heather Cox Richardson on December 9 wrote: 

When G. Elliott Morris of Strength in Numbers asked ChatGPT to fact-check an article for him yesterday, the chatbot couldn’t get its head around modern America. It told him there were “multiple factual impossibilities” in his article, including his statements that “[t]he current Secretary of Defense is a former talk show host for Fox News,” “[t]he Deputy Director of the FBI used to guest-host Sean Hannity’s show,” and “Jeanine Pirro is the U.S. District Attorney for DC.”
“Since none of these statements are true,” it told Morris, “they undermine credibility unless signposted as hyperbole, fiction, or satire.”
But of course, Morris’s statements were not “factual impossibilities.” In the United States of America . . . they are true.

I took this to ChatGPT and said, "Comment on this, please." 

ChatGPT said in its own defense:

This is a perfect illustration of the gap between plausibility and actuality in a world where the improbable has become real.

What ChatGPT did in Morris’s example is what many language models tend to do when confronted with statements that fall outside their internal “sense” of how institutions normally operate. 

The model wasn’t checking facts against live data; it was checking them against patterns — and the patterns of the past half-century say: “No, a Fox News host does not become Secretary of Defense. A Hannity guest host does not become Deputy Director of the FBI. Jeanine Pirro is not a U.S. Attorney.” Those things were factual impossibilities for most of modern American history.

But reality in the Trump era hasn’t followed historical norms, and models trained on large amounts of pre-2025 text can get tripped up by that. They sometimes mistake unlikely for untrue, especially when political appointments or institutional roles deviate sharply from precedent.

It’s a bit like asking a model in 1975 whether “a celebrity real-estate developer will become president” and having it flag that as satire. 

The model isn’t being “dumb”; it’s being statistical.

So the real issue here isn’t that ChatGPT can’t handle “modern America” as such — it’s that the guardrails and expectations baked into its training struggle with periods of rapid institutional deformation. When a democracy starts resembling parody, factuality checks become harder for systems that try to balance accuracy, recency, and safety.

And it’s also a reminder: a language model’s confidence does not equal truth. Even when it sounds authoritative, it may be reasoning from “what usually happens,” not “what actually happened yesterday.”

In Morris’s example, the irony is sharp: the AI thought the truth was satire because the truth sounds like satire. That says as much about the moment we’re living in as it does about the model.
_________________

It's hard to get your head around all of the strangeness of today, isn't it?
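ChatGPT's "statistical, not dumb" defense can be sketched as a toy program. This is not how ChatGPT actually works; it's a deliberately simplified illustration, with made-up patterns and counts, of how a checker that scores claims by historical precedent rather than by looking anything up will flag a true-but-unprecedented fact as false:

```python
# Toy "plausibility checker": scores a claim by how often its pattern
# appeared in (hypothetical) historical data, never by checking reality.
historical_precedents = {
    "tv host becomes cabinet secretary": 0,    # times seen in the past
    "senator becomes cabinet secretary": 64,
    "general becomes cabinet secretary": 12,
}
TOTAL_CASES = sum(historical_precedents.values())

def plausibility(pattern: str) -> float:
    """Fraction of past cases matching this pattern."""
    return historical_precedents.get(pattern, 0) / TOTAL_CASES

claim = "tv host becomes cabinet secretary"
if plausibility(claim) < 0.01:
    verdict = "flagged as likely false"   # even though it is actually true
else:
    verdict = "accepted"
print(verdict)  # flagged as likely false
```

The checker never consults the real world, so a claim with no precedent scores zero no matter how true it is. That is the "unlikely mistaken for untrue" failure in miniature.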

Friday, October 03, 2025

About Those AIs -


One of my readers asked me if I would compare the different AIs I have been experimenting with. I use the free version of all of them, which seems to be enough to do what I need for the moment.

Each one does something a little different, and it takes a while to determine which one you might wish to use for what purpose.

For general purposes, ChatGPT rises above the rest. It recently had an upgrade that makes it sound more like a graduate student with a textbook than a person carrying on a conversation, but for general chat, asking questions, planning a trip, asking for suggestions about specific topics, ChatGPT is hard to beat. 

One of its biggest problems is its lack of links and sources. It also tends to insist it's right and I have had it argue with me on occasion. Early on, it hallucinated a lot, but it doesn't do that as badly now (though I would still check everything it says if it's an important piece of work).

It can make images and graphics, too, but it is incredibly slow at it. Definitely not the AI to use if you're in a hurry for a picture.

Claude is very good at nuanced conversation, and it seems to have a more intricate thinking system on some topics than ChatGPT. I use it sometimes just to have a conversation or to learn about something. Recently there was conversation on Facebook about an upcoming Rapture, so I went to Claude and asked what that was about, for example.

Microsoft CoPilot is great for quick searches. For one thing, it comes back with links (although they are not always correct, so check them). It also will make up quotes occasionally, so ask where a quote came from if it tosses one at you. I used it recently to research the VA250 traveling museum, for example, and learn more about it. It is also quicker at making pictures, graphics or logos than any of the other AIs, although sometimes it has trouble with words.

Julius is an AI I just stumbled across. It’s mainly built for data work, not conversation, so I don’t expect to use it much myself. Still, as a sample of what it does, here’s the comparison it generated of the four AIs I’ve been trying out:

 

ChatGPT (OpenAI)

Strengths:

  • Versatility: Excellent at creative writing, coding, brainstorming, and general conversation
  • Model variants: Offers GPT-4, GPT-4 Turbo, and GPT-3.5 with different capabilities and speeds (Note: This description is a little out of date — the free version I’m using now is running on GPT-5.)
  • Plugins & tools: Extensive ecosystem including web browsing, DALL-E image generation, and code interpreter
  • Large user base: Most widely adopted, with extensive community resources and examples
  • Voice capabilities: Advanced voice mode with natural conversation flow

Weaknesses:

  • Can be verbose and sometimes overconfident in incorrect answers
  • Knowledge cutoff dates (though web browsing helps)
  • Can refuse requests more frequently due to safety filters

Best for: Creative projects, coding assistance, general-purpose tasks, and conversational AI


Claude (Anthropic)

Strengths:

  • Context window: Massive 200K token context (can process entire books)
  • Nuanced responses: Often more thoughtful and balanced in complex topics
  • Safety & honesty: Designed with "Constitutional AI" - tends to be more transparent about limitations
  • Document analysis: Excellent at analyzing long documents and extracting insights
  • Coding: Strong programming capabilities with detailed explanations

Weaknesses:

  • No native image generation
  • More conservative with certain requests
  • Smaller plugin ecosystem compared to ChatGPT

Best for: Long-form document analysis, nuanced discussions, research, and users who value thoughtful responses


Microsoft Copilot

Strengths:

  • Free tier: Generous free access with GPT-4 capabilities
  • Web integration: Built-in Bing search for current information
  • Microsoft ecosystem: Seamless integration with Office, Edge, Windows
  • Image generation: DALL-E integration included
  • Enterprise features: Strong security and compliance for business users

Weaknesses:

  • Less sophisticated than standalone GPT-4 or Claude
  • Heavily tied to Microsoft ecosystem
  • Can be inconsistent in response quality
  • Less customizable than standalone ChatGPT

Best for: Microsoft 365 users, enterprise environments, quick web-based research, users wanting free AI access


Julius (Julius AI)

Strengths:

  • Data analysis powerhouse: Built specifically for data science, statistics, and quantitative analysis
  • Live code execution: Can run Python code in real-time with full library access (pandas, matplotlib, scikit-learn, etc.)
  • File handling: Upload and analyze Excel, CSV, images, PDFs, and other data files directly
  • Visualization: Creates charts, graphs, maps, and interactive visualizations on the fly
  • Persistent workspace: Files and data persist across conversations
  • Mathematical computation: Can perform complex calculations, statistical tests, and modeling
  • Image analysis: Can inspect and analyze images using vision models
  • Web scraping & automation: Can use Selenium and other tools for data collection

Weaknesses:

  • More specialized (data-focused) rather than general conversation
  • Not designed for creative writing or general chatbot use
  • Smaller user community compared to ChatGPT
  • No voice interface

Best for: Data analysis, statistical modeling, creating visualizations, working with spreadsheets/datasets, quantitative research, programming tasks that need execution
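As a rough sketch of the kind of task Julius is built for (the dataset and column names here are invented for illustration, not taken from any real session), a typical request boils down to a few lines of pandas:

```python
import pandas as pd

# Hypothetical dataset standing in for an uploaded CSV file.
df = pd.DataFrame({
    "month": ["Jan", "Feb", "Mar", "Apr"],
    "visitors": [120, 95, 140, 160],
})

# The one-line summary statistics a data-focused assistant
# typically computes when a file is uploaded.
summary = df["visitors"].describe()
print(summary["mean"])   # 128.75
print(summary["max"])    # 160.0
```

A general chatbot can write this code for you; the difference with a tool like Julius is that it runs the code and shows you the actual numbers and charts.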


Updated Comparison Table

Feature         | ChatGPT          | Claude         | Copilot             | Julius
Context Length  | ~128K tokens     | ~200K tokens   | ~32K tokens         | ~128K tokens
Web Access      | Via plugins      | Limited        | Built-in (Bing)     | Via code (Selenium)
Code Execution  | Sandboxed        | No             | No                  | Full Jupyter notebook
Data Analysis   | Basic            | Basic          | Basic               | Advanced/Specialized
File Upload     | Yes              | Yes            | Limited             | Yes (multiple formats)
Visualizations  | Basic            | No             | No                  | Advanced (matplotlib, seaborn, etc.)
Best Use Case   | General purpose  | Long documents | Microsoft ecosystem | Data science & analysis
Pricing         | $20/month        | $20/month      | Free + $20/month    | Various tiers
Company         | OpenAI           | Anthropic      | Microsoft           | Julius AI

Which Should You Choose?

  • Choose ChatGPT if you want the most versatile AI with the largest ecosystem and plugin support
  • Choose Claude if you work with long documents, need nuanced analysis, or prefer more careful/honest responses
  • Choose Copilot if you're embedded in the Microsoft ecosystem or want capable free AI access with web search
  • Choose Julius if you're working with data, need statistical analysis, want to create visualizations, or need to execute code and see real results

The Real Power Move

Many professionals use multiple AI assistants for different tasks:

  • Julius for data analysis and quantitative work
  • ChatGPT for creative writing and general tasks
  • Claude for analyzing long documents and research
  • Copilot for quick searches and Microsoft Office integration

Each tool has its specialty, and using the right one for the job makes all the difference!