AI and I
By Charlie Houston
The Drudgery of Research
Back in the pre-Google and pre-artificial intelligence times, research could be torment. Some questions could be answered through encyclopedias; others required hours in libraries, poring through tome after tome. Either way, information had to be gathered, sorted, edited, and finally shaped into something readable. It was time-consuming, sometimes painfully so, but I usually enjoyed learning random things.
Some Research, Pre-AI
In fifth grade I wrote a paper on paranoid schizophrenia. I found the topic fascinating, which apparently showed in the A+ and glowing remarks I received. I’m still not sure what choosing insanity as a topic said about me.
Years later, while working at a law firm during graduate school, I was tasked with researching the metallurgy of helicopter tail rotor blades. The firm’s client, a major manufacturer, was defending wrongful death cases involving in-flight tail rotor failures.
That assignment meant dense technical reading and careful synthesis—no shortcuts available. I was a budding MBA with a liberal arts background, not a scientist, but I was able to present a cogent report.
In my development career, I once had to prepare a paper comparing modern deep-cell parabolic lenses for fluorescent ceiling fixtures with older flat acrylic lenses. The goal was to quiet a particularly stubborn construction manager. I reported that the deep-cell fixtures produced a more effective “bat wing” light distribution, and that design became our standard.
Had AI existed then, much of that research could have been done in minutes. Today I use AI often, though I’m no expert. This is simply my ongoing experience with it.
Is AI Accurate?
My first foray into AI was with ChatGPT. I started with a basic question: “What are the largest cities in Austria?” The answer came back quickly and seemed solid. Then I tried something more involved: “Compare Roman and Norse mythology.” Again, the response was fast and impressively organized. As a newcomer, I was amazed.
Then I heard that AI could be wrong. I concocted a test. I asked ChatGPT a legal question: “In Virginia, what is the liability of a farm owner in a slip-and-fall case?” Its response looked authoritative, replete with case citations. But when I checked several of those citations, most were incorrect or irrelevant. These so-called “hallucinations” are a known limitation of AI, but seeing them firsthand was jarring.
That raised two obvious questions: How much trust should one place in AI? What AI source is best?
Four AI Engines
I gave ChatGPT another chance. I asked whether the phrase “presumed innocent until proven guilty” is technically accurate. It responded with a standard explanation: “a foundational principle of criminal law meaning the accused is legally considered innocent until proven otherwise.” That answer is familiar—but not precise.
I prefer a clearer statement: “presumed not guilty until adjudicated guilty.” Consider an example. A man stands over a body, knife in hand, blood everywhere. Police arrive and arrest him on the spot. Do people actually believe he is “innocent”? Of course not. They assume guilt—but also recognize that guilt must be proven in court. That distinction matters.
Gemini, built into Google, was next. It described the phrase as a legal fiction and discussed burden of proof. That was partially useful, but still somewhat vague.
Claude.ai did better: it explained that the presumption of innocence is a procedural rule governing how trials are conducted, not a statement about what anyone truly believes. That was much closer to the mark.
Perplexity.ai began with strong framing: “the phrase is legal shorthand and inherently imprecise; courts do not declare factual innocence, only that guilt was not proven.” That, to me, was the most satisfying AI answer, even if I still prefer my own phrasing.
Research vs. Analysis
AI is excellent at gathering and summarizing information, but its outputs require a degree of skepticism. Traditional Google search gives you links; AI tools give you synthesized answers. Both are useful, but what I often want is analysis—something closer to judgment.
So I tried a more subjective question.
For several years, Emily and I shared a hobby: driving sports cars on the Summit Point tracks in West Virginia. (I once hit 165 mph.) My last track car, a heavily modified Porsche Cayman S, was a thrill. Then its front suspension broke in a turn, and the car was totaled.
That incident helped me decide to step away from track driving. The hobby had begun to feel like piling up hundred-dollar bills, dousing them with gasoline, and lighting a match.
Still, I miss sporty driving. So I posed a question to the four AI systems: “Which Porsche Cayman model is most likely to hold its value—a 2008 S, a 2010 S, or a 2014 base?”
The answers:
ChatGPT: 2008 Cayman S
Gemini: 2010 Cayman S
Claude: 2008 Cayman S
Perplexity: 2014 Cayman base
Four AI systems, three different answers. That tells you something. There’s more: AI can inform a decision, but it doesn’t make one. In the end, factoring in price, performance, and comfort, I’ve settled—tentatively—on the 2014.
Curiosity
I’ve always been curious, and AI has become a convenient outlet for that. These days I constantly ask questions: Should olives be refrigerated after opening? What are Europe’s largest metropolitan areas? When was Henry IV of France born? What is plasma physics? What distinguishes “presumed” from “assumed”? (They are not the same.)
My questions never really stop. AI makes it easy to indulge that curiosity.
AI and Writing
For this piece, I stayed with Microsoft Word through multiple drafts. Then I ran it through Perplexity to see what it might suggest. It offered several edits, most of which I accepted. I also asked it to shorten the piece by about 200 words. It did so effectively—but I still had to do some editing, mainly for style.
AI helps, but it doesn’t deliver a polished document. There is always a perfect word and a perfect sentence, and finding them is still a human task. This is, then, the seventh draft.
Charlie Houston once played elaborate word games with his father and later competed in the National Crossword Puzzle Tournament. His wife tries to ignore his idiosyncrasies.
Perplexity.ai suggested this article reflects “intellectual humility.” Those who know Charlie would likely disagree.