
Man vs. Machine

Jonathan Clements

COULD HUMBLEDOLLAR be replaced by a website chock-full of articles created using artificial intelligence? The short answer: It would be remarkably easy—and I fear readers wouldn’t object, especially if they didn’t know how the articles were generated.

To show what’s possible, I requested eight personal-finance articles from three freely available artificial intelligence (AI) tools, ChatGPT, Google’s Gemini and Microsoft’s Copilot. The first of those articles is published today, with the other seven appearing over the next four days.

The most onerous part was coming up with topics for the eight articles—and, trust me, it wasn’t exactly heavy lifting. In fact, the mischievous part of me rather enjoyed it. I did no editing on the resulting articles beyond some formatting changes and a few slight tweaks to fit HumbleDollar’s style, such as turning “healthcare” into “health care.” None of this took much time. Indeed, if I didn’t take too many bathroom breaks, I figure I could have created and scheduled an entire month’s worth of articles in a single day.

Is there a personal-finance site that already does this? I don’t know. Should we be concerned about the possibility? In the articles I requested, I didn’t see any obviously wrong financial information.

Instead, I have a different concern: These AI engines are more than happy to create fabricated stories. Take a look at today’s article. I cooked up the whole stuck-in-an-elevator-with-Jack-Bogle scenario in my head, and the AI engines took the notion and ran with it.

Regular readers will know that HumbleDollar articles are frequently built around the financial experiences of the site’s writers. As I often tell contributors, “You may not be a personal-finance expert, but you are an expert on your own life.” Now, however, it seems we can substitute fiction for real-life experience. Should we be bothered? I am. But I may be in a minority. In fact, I love the Jack-in-the-elevator story. I just wish it were true.

When I started playing around with the AI engines, I toyed with having them generate a dozen articles, and then running them over the course of a week using fictitious bylines, to see if anybody would notice. I even considered creating a new contributor, and posting regular articles by him or her, to see if readers caught on.

But I didn’t do this—because I consider it unethical to pass off machine-written articles as penned by humans. Still, I suspect I could have got away with it.

I found that ChatGPT and Google Gemini were more likely to spit out the sort of articles that HumbleDollar publishes, while Microsoft Copilot resisted the personal journalism that this site is known for. That explains why just one of the eight articles you’ll see in the days ahead comes from Copilot.

Is there a role for AI in personal-finance journalism? I could see using AI to generate the first draft of articles, which would then be scrubbed to remove inaccurate or fictitious information. But I think good ethics would necessitate disclosing that the article’s first draft was created using AI. 

Ethics aside, are there any drawbacks to AI-generated articles? The writing style leaves a little to be desired, but I’m probably pickier than most.

In fact, I’ve tried using all three engines to “lightly edit” a few of the less polished article submissions I’ve received. I was most happy with the output from ChatGPT. Even then, in the “light” editing process, the writer’s voice tends to disappear.

Is AI some sort of existential threat? It doesn’t strike me that way. It’s a tool—one that’s new and hence generating lots of debate, but which will soon be as commonplace as cellphones and Alexa devices. Still, in my little world of personal-finance writing, AI undoubtedly offers the chance to behave badly. Humans being humans, I’m entirely confident that this temptation will prove irresistible to some.

Jonathan Clements is the founder and editor of HumbleDollar. Follow him on X @ClementsMoney and on Facebook, and check out his earlier articles.

Want to receive our weekly newsletter? Sign up now. How about our daily alert about the site's latest posts? Join the list.

53 Comments
Fund Daddy
4 months ago

This discussion reminds me of 1981, when I graduated with a computer science degree; I then worked as an IT developer for over 30 years. IT has gotten better over the years, and AI will get better too, just wait and see. When it does, it will be fast and it will replace humans too.
HD has great contributors, but many sites have been regurgitating articles and opinions for years as clickbait.

David Firth
4 months ago

As a professor of Management Information Systems, I teach and write about when to use AI. My current thinking is that it comes down to motivation, both the motivation of the producer, and the motivation of the consumer.
Here, and as we’re seeing clearly with the generative AI articles Jonathan is posting this week, generative AI is producing articles that are “more like a brochure” and, for us regulars to HD, boring, because they lack the infusion of human foibles.
If Jonathan just wanted to save time writing articles (one end of the producer’s motivation scale), then generative AI is a good idea FOR HIM. But if he (and others) want to tell a personal story infused with its inevitable personal issues (the other end of the producer’s scale), then articles need to be actually written.
On the consumer axis, am I just looking for information? If so, generative AI is pretty good, and the articles Jonathan is posting might get it done pretty well. But most of us here at Humble Dollar are not just here for information. We are here to learn. We are here to hear about the human struggle. For that, we need to hear from other humans.

William Housley
4 months ago

We are starting to ask AI to create. I think this is a misuse of AI. Capt. Kirk is a better example of how to use AI: He would ask the computer for information to help in the decision-making process. The computer did not make decisions, nor do I remember James asking the computer to create.

smr1082
4 months ago

AI is learning from data sets made available. By analyzing all Humble Dollar articles it can certainly simulate a typical article. It would have trouble with creating new personal experiences, which are unique. With time and more data sets, even this gap will be bridged.

I can see a digital twin of Jonathan Clements well trained to write articles, do video interviews and podcasts while he is sipping wine at a beach in Florida. That day is not far off.

SanLouisKid
4 months ago
Reply to  smr1082

Hopefully AI will create Bitcoin that we can donate to Humble Dollar to maintain Jonathan on that beach in Florida.

Jonathan Clements
Admin
4 months ago
Reply to  smr1082

For my digital twin, I’d recommend a moderately priced New Zealand sauvignon blanc.

OldITGuy
4 months ago

I’m no expert on AI, but having just returned from a two-week road trip in northern California (visiting the redwoods), it occurs to me that currently AI is a bit like the driving-assistance technology on my 2019 car. Very useful when used by someone who understands its limitations, but actually of negative value when used inappropriately or with unrealistic expectations.

Mark Schwartz
4 months ago

AI doesn’t have any skin in the game. It obviously lacks a personal touch and a sense of reality. It’s OK for looking up the current treatments for lymphoma, or how to change the impossible spark plugs on my Subaru boxer engine, but without humans providing their true life stories it’s dead information. I fear AI might one day be mistaken for real life, and conflicts started over computer-generated words. I’m not sure how that will all be filtered in the future, but the use of AI- and TV-generated figures speaking as real people is scary. Keep HD real with human-input stories.

Gozo Rabat
4 months ago

In reading this ChatGPT piece, I sadly noticed how its “stilted” qualities remind me of my own writing efforts. Specifically, of those qualities that deaden my work.

On one hand, this AI stuff has only been at it for a few years, while I am deep into my seventies. On the other hand, by the time I’m over and done, and AI maybe reaches its greatest creative stride, maybe by then, I’ll be dead and gone and in my grave.

And, be-Jeebers! Whenever my cold, dead body finds itself there in that grave, I sure hope it won’t find a Google or Alexa or Siri reminder buried there with me, too. Still bugging me to activate and use their d@mn AI.

Regards,
(($; -)}™
Gozo

Andrew Clarke
4 months ago

Note: I copied this reply from “The Lift I Needed.”

Jonathan,

Interesting experiment.

As other commenters have noted, the AI-generated article is stilted, without the personality that makes HumbleDollar engaging. Even so, it’s remarkable that a simple prompt could produce a more coherent and useful article than many people could write.

I would guess that AIs will increasingly be used for basic investor education (“what is the difference between a Roth and Traditional IRA?”) in journalism and, especially, corporate communications, with oversight by a skilled editor.
This trend will eliminate entry-level jobs that allowed writers to learn about personal finance and, over time, develop real expertise.

But on an optimistic note, ChatGPTs might elevate the value of idiosyncratic, first-person accounts such as those on HumbleDollar.

I just hope there are enough readers who appreciate the difference between a bland, AI-generated (and maybe fabricated) story and a piece that reflects a real human experience.

Andy

James Mahaney
4 months ago

The advent of the internet changed the trajectory of my career and life. Having information available that you didn’t have to search through individual books for? Acquiring knowledge was made so much easier! Think of how visiting Humble Dollar has increased your knowledge. AI is going to make knowledge even quicker to acquire. For those who are hungry to learn, it will be amazing. Yes, early versions like ChatGPT 3.5 hallucinated at times and weren’t fast. But here we are just 18 months later, and GPT-4o is much more accurate and much, much faster. IMO, Type AI is even better as an editor, as it was built to specialize in writing and editing and is layered on top of GPT-4o or Claude. Count me as highly optimistic about what is to come, and I believe it will level the playing field as knowledge becomes easier to access.

Jeff Bond
4 months ago

I love all the comments on this. I’m glad HD readers are so engaged in the topic. My concern about AI in general, and this piece in particular, is that nowhere did ChatGPT label the story as fanciful or fictionalized. I’d wager the metadata doesn’t, either. There needs to be some form of traceability for the construction of a piece like this, so the reader has a reference to the facts, if they exist.

Peter Bengelsdorf
4 months ago

I’ve been doing similar experiments, creating lesson material for English learners. This has made me familiar with the chatbots’ essay style. Once you get over marveling at the machines’ newfound ability, you begin to see that all the AI essays have an eerie similarity. That includes the one you posted.

I’ve also encountered the bots’ (admitted and widely reported) tendency to be inaccurate, and to fabricate without bothering to make clear they’ve done so. The fears I’ve been reading about AI are real, but another one has been percolating in my mind: How deep will the inevitable crash be when, as with all new technologies, the hype around this one falls short of the billions of dollars invested in it? Subtract AI from the recent history of the markets, and things don’t look good at all.

R Quinn
4 months ago

I have tried AI many times and have fact-checked as best as possible, and have yet to find an inaccuracy, let alone a fabrication. In fact, sometimes I get the facts first to be sure.

Cammer Michael
4 months ago
Reply to  R Quinn

In late 2022 I played with ChatGPT 3.5 for science. I asked it questions with known answers. It usually answered mostly correctly, but threw in a wrong answer. Without identifying the wrong answer, I would tell it that something was wrong, and then it would usually be able to correct itself. This still occurs with ChatGPT-4, including the paid professional version, although less often, and the writing style is less bland.
An area where it is often wrong is citing the work of others. It produces mostly correct citations, but with the spelling of an author’s name or a word in the title a little off. Sometimes it makes up something completely wrong.
So where there are specific facts, its statistics-based generation is not good enough. It doesn’t yet know when it needs to switch from AI to pure search.

Peter Bengelsdorf
4 months ago
Reply to  R Quinn

Here’s an entertaining example. When I prompted, “Show me five examples of famous comedians making jokes that include the past perfect verb tense,” the results may charitably be described as whacky. The examples cited real comedians but the jokes were fabricated. The bots seemed to be trying to impress me with their ability to mimic the style of the comedians. In addition, while the bots could readily define the past perfect tense, they had trouble identifying it.

It reminds me of another much-hyped feature of machine learning — language translation. So much promise! So much left to be desired! It is making progress, to be sure, but not at the rate stock investors tend to look for.

Mot Det
4 months ago

I don’t necessarily see AI as a big issue, per se. At their best, LLMs will mostly regurgitate and collate valid information, albeit without the human element/storyline that makes articles in HD more than just reading Wikipedia or textbooks.

On the other hand, when LLMs hallucinate, they will either crank out complete nonsense or contrarian views, which is part and parcel of any information source with sufficient diversity.

Case in point might be Dave Ramsey’s rant from last year claiming that his safe withdrawal rate is 8% because inflation was running at 4% and he could easily get a 12% average return from his portfolio. Is that fundamentally any different from an LLM hallucinating and ignoring not only sequence-of-return risk, but also just bad statistics?

Note that LLMs tend to suck at doing actual math and have to be modified to be able to handle math problems. But math engines require valid inputs to produce valid results, and the converse is still garbage in, garbage out, so if the attached LLM is hallucinating, even doing proper math will simply result in another Dave Ramsey meltdown.
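To make the sequence-of-return point concrete, here’s a minimal sketch in Python. The helper function, the starting balance and both return sequences are made up purely for illustration, not taken from Ramsey or from HumbleDollar: two hypothetical sequences share the same 12% arithmetic average, yet an 8% withdrawal strategy ends up in very different places depending on whether the losses arrive early or late.

```python
# Illustrative only: why a 12% "average return" doesn't make an 8% withdrawal
# rate safe. Same ten annual returns, two different orderings.

def run_portfolio(returns, start_balance=1_000_000, withdrawal_rate=0.08):
    """Withdraw a fixed 8% of the starting balance each year, then apply that
    year's return. Returns the ending balance, floored at zero."""
    withdrawal = start_balance * withdrawal_rate
    balance = start_balance
    for r in returns:
        balance = max(0.0, (balance - withdrawal) * (1 + r))
    return balance

# Both sequences sum to 120% over ten years, i.e. a 12% arithmetic average;
# only the order differs.
gains_first = [0.35, 0.30, 0.20, 0.15, 0.12, 0.12, 0.10, 0.05, -0.08, -0.11]
losses_first = list(reversed(gains_first))

print(f"Gains first:  ${run_portfolio(gains_first):,.0f}")
print(f"Losses first: ${run_portfolio(losses_first):,.0f}")
```

With these made-up numbers, the portfolio that takes its losses first ends the decade with well under half the balance of the one that takes them last, even though the "average return" is identical in both cases.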

Philip Stein
4 months ago

The AI-generated article is impressive, but I’m glad Jonathan reviewed its contents before posting it. I fear we’ll be in for a great deal of mischief when some people start directly posting what an AI chatbot spits out, without having the contents reviewed by an expert beforehand.

In a manner similar to peer review of scientific papers before publication, I would hope AI-generated content of important subjects like personal finance would also undergo some sort of peer review to at least identify and remove “hallucinations.” But I know it’s naive to think this will always be the case.

It’s well-known that generative AI is “trained” on vast amounts of data—and it is truly astounding how AI can summarize large volumes of material for easy consumption. But AI can only present to us what is already known. Everything described in the “meet-John-Bogle-in-the-elevator” story was already written by Bogle beforehand.

What would Bogle’s wisdom have revealed to us had he lived longer? AI can’t tell us. Wisdom and experience can sometimes connect dots and discover truths that were never previously considered. AI chatbots can summarize for us what is known—they can’t conceptualize and create new knowledge. Can you imagine an AI chatbot “discovering” the Theory of Relativity before Einstein wrote about it?

Generative AI is a powerful tool, but in the wrong hands it has the potential to do harm. I fear that until society can tame this technology, we’ve grabbed a tiger by the tail.

Cammer Michael
4 months ago
Reply to  Philip Stein

You haven’t convinced me that AI cannot come up with something new. My experience is that it already has.

Philip Stein
4 months ago
Reply to  Cammer Michael

I assume that by “something new” you are referring to AI discovering patterns in data that weren’t previously identified. That is impressive, yet it involves examining data that has already been collected.

Einstein began his studies based on the prior knowledge of others—and, presumably, AI could have had access to the same information. But if Einstein had never lived, could AI have come up with a Theory of Relativity on its own? I remain skeptical.

Winston Smith
4 months ago

Jonathan,

2 points …

(1)
I spent my working life playing with computers. There are limitations to what even the best/strongest program can do.
I personally don’t like the results of my using AI.

(2)
For me, the best part of HD is reading the posters’ own financial journeys. While the ChatGPT piece was a cute little story, it missed the years and tears of personal life experience.

Please don’t go down the ‘blog written by AI path’.

jerry pinkard
4 months ago

Jonathan, not all journalists have your ethics. I could see some unscrupulous writer making a living this way. Shame on him. I much prefer the originality of real people.

Doc Savage
4 months ago

I’ve wondered a lot about whether articles I’m reading are AI-generated. I notice lately that articles in WSJ sometimes appear ‘thin’ or written as if they’re explaining a topic in a very pat or simplistic way. I’ve wondered if it’s either AI-composed or simply that rigorous and ‘dimensional’ composition is no longer emphasized in higher education these days.

SanLouisKid
4 months ago
Reply to  Doc Savage

I was using a chat connection with a company recently and needed to connect with a real person instead of a “bot”. I asked, “What color shirt are you wearing?” to help establish a human connection. That worked.

Cammer Michael
4 months ago
Reply to  Doc Savage

The WSJ has gotten really thin and sloppy lately. But I suspect this has more to do with expectations set by management.

SanLouisKid
4 months ago

Sometimes I drive down the road and think about what a “driverless car” would need to observe and analyze in order to be safe. I look at traffic at the next light to see if anyone is edging out into the intersection. Are they making a “right on red” into a clear lane or in front of another car? Is someone in a car around me weaving a bit (drunk, stupid or on their cellphone), and is someone coming up fast behind me going to run a light and hit a car and maybe involve me? I could go on, but try it yourself and see if you could be replaced by a “computer.”

That’s some of what I feel about AI replacing people. As Jonathan pointed out, it can give you the mechanics of personal finance, but it can’t give you the nuances. The experiences I’ve had have created my mindset and ability to handle money.

On the other hand, if AI read all the articles on HD it could be pretty darn smart. (smile)

Doc Savage
4 months ago
Reply to  SanLouisKid

Good thought. I spend a lot of my driving time observing the drivers around me and far ahead of me to gauge their intentions or to pick up on the nuances in their driving attitude. I’ve also wondered if self-driving cars could do the same.

David Lancaster
4 months ago
Reply to  Doc Savage

I hate following any vehicle I can’t see beyond, as I want to know what the driver in front of them is doing, so I can further anticipate a problem evolving.

mytimetotravel
4 months ago

Surely much of the interest of Humble Dollar articles lies in the personal voice and personal experience of the writer. AI can’t write “in the style of” unless the person being imitated has already written a bunch of articles. Even then, it has a tendency to sound “flat” to me. And as you say, it’s not generating the topics. However, when it comes to AI, I’m a Luddite; I’m tired of the tech bros moving fast and breaking things.

Dave Melick
4 months ago

Jonathan: great topic to write about as AI seems to be everywhere these days. Many believe it to be a “new thing”, but less-capable iterations of AI have been around for years. Thanks for experimenting with AI in the upcoming articles!

stelea99
4 months ago

Jonathan, we should probably be quite fearful of the effect AI is going to have on the number of people working. Of course, the more people who experiment with AI, the better its output will become, creating the tool that will eventually really transform the world of work. As a retired investor, I cheer the investment gains that might come from AI. I worry about what may happen to the workforce, which already has such a big division between the skilled and the unskilled.

USAA just sent me an offer to sign up for voice recognition security on my account there. I, of course, won’t do that for the same reason that I never did so at Schwab. AI can deep fake any voice.

Phil Scott
4 months ago
Reply to  stelea99

For sure! Turned down USAA’s invite for the same reason. I deposit ~3 hard-copy checks/yr, and USAA uses snail mail or a phone app. I did the phone app and am seriously considering an “Uninstall” until the next time I need the deposit routine. The phone app opens all accounts and functions. My wife and I use Norton AV on our computers and phones, but I’m unsure how thorough that would be if confronted by a dedicated AI hacker (is that an oxymoron?).

SanLouisKid
4 months ago
Reply to  stelea99

Number of people working and the reduction of physical activity. One of our favorite cartoon movies is WALL-E which observes society being replaced by robots and artificial intelligence. Maybe memberships (and investment opportunities) in gyms will increase.

Mark Gardner
4 months ago

Generative artificial intelligence will enhance the analytical and communication capabilities of experts, for sure. It enhances mine every day, allowing me to do my job more effectively and to venture into areas I would naturally have shied away from.

However, the sad reality is that it will also accelerate the spread of disinformation and propaganda that many Americans will fall prey to and they will make decisions against their own interests, whether it be with their investments or their vote. We’ll have to endure this pain before we realize its benefits.

Jonathan Clements
Admin
4 months ago
Reply to  Mark Gardner

I think that, thanks to the internet, there is a parallel and unfortunate development: It’s easier to misinform on a mass scale than ever before, and the rewards for such unethical behavior seem to far exceed those for trying to promote truth. This isn’t AI’s fault, but AI obviously facilitates bad actors. Frankly — and sorry to get on a soapbox here — I don’t know how these scumbags live with themselves, and I hope they choke on the dollars, the fame and the votes they accrue.

William Housley
4 months ago

Spelin and gramer I have never been good at. I use ChatGPT as me editor. 🙂

Mitchell Schoenbrun
4 months ago

“…and I fear readers wouldn’t object, especially if they didn’t know how the articles were generated.”

Uh oh! Seems like you might have opened a can of worms.

ChatGPT: Please write me an article in the style of Jonathan Clements on the potential use of public AI sites such as ChatGPT and Copilot in the writing of interesting financial articles.

🙂

Jonathan Clements
Admin
4 months ago

I tried that framing, and discovered that “in the style of Jonathan Clements” is embarrassingly breezy.

Jeff
4 months ago

Jonathan, I personally love the “experiment.” We wrestled with a similar problem when teaching our med students: We realized practice case problems might easily be solved by AI methods. Our solution was to embrace the technology and refocus our teaching methods to challenge the students to learn in other (dare I say complementary) ways.

AnthonyClan
4 months ago

My concern is, if the younger generation relies too much on AI, how will they learn to write? Few memorize facts anymore: “Why? I can just Google it…”
We need experts in all fields to monitor AI results. Experts will flourish with AI, their underlings “rightsized” as AI takes over their work. Where will the future experts come from if budding experts are all relying on AI?

Jonathan Clements
Admin
4 months ago
Reply to  AnthonyClan

When I was a schoolboy and calculators started appearing, I remember folks lamenting that kids would no longer know how to do math. Hasn’t been a problem. I think these technologies allow us to focus our mental energies on other issues — for instance, instead of memorizing facts, we can think more about how to analyze them.

Joe Cyax
4 months ago
Reply to  AnthonyClan

 I believe you to be spot-on on two points:

1. “Few memorize facts anymore, “why – I can just Google it…” “: The thing is, it is not really possible to actually “think” without facts already in your head. Of course, real thinking, being laborious, is perhaps what many are trying to get away from.

2. “We need experts in all fields to monitor AI results”: Even beyond that, we need experts in all fields to actually generate the facts and ideas that AI draws on from the existing body of knowledge (on the web). As AI takes on more and more tasks, it may descend into an echo chamber, where less and less human-generated data is used by AI to generate more and more.

Scary times ahead.

Dan Smith
4 months ago

It’s sort of like my generation’s use of CliffsNotes to cheat on book reports.
Tom Walton, the retired editor of our newspaper, wrote a tongue-in-cheek article about AI several months ago: https://www.toledoblade.com/opinion/columnists/2023/09/10/walton-half-a-brain-but-a-good-heart/stories/20230910033

richard curley
4 months ago

I’m concerned both with the ethical issues raised by AI being used in this manner (some of which Jonathan mentions) and by the prospect that even fewer people will learn to write their own prose. Writing and reading are also a chance for humans to connect, and inserting computers into the process just creates more isolation, in at least some ways.

Rick Connor
4 months ago

Interesting experiment. I’ve played with ChatGPT a bit for some business writing. I found it got me about 80% of the way there. I had to edit in specifics and changed a few things. I haven’t tried the other products but will.

In my career, we frequently reused materials as a start or example in our writing, especially in proposals. This was always our proprietary material, but I saw that other companies occasionally borrowed our material. Regardless of the technology, unethical people find a way to be unethical.

Nick Politakis
4 months ago

I really don’t know how I feel about the issue. I know that I like reading real life stories and not imaginary ones.

Harold Tynes
4 months ago

I guess that many articles in my local paper and others are already AI-written. As a sports fan, I’ve noticed that the agate-type baseball transactions and short game summaries appear automated. I used ChatGPT as a prompt for writing. Thanks for doing this. I have not used the other tools, so your conclusions may guide me.

Will
4 months ago

Great experiment! I look forward to reading all eight.

Jo Bo
4 months ago

I agree, AI is already a useful tool in many arenas. Nonetheless, this reader never opens financial news written by the ever-present MarketWatch Automation.

AKROGER SHOPPER
4 months ago

Jonathan, this reminds me of the commercial: Is it real, or is it Memorex? Who would have thought having articles fabricated would become commonplace. Thanks for the heads-up on the experiment.

David Lancaster
4 months ago

“…it seems we can substitute fiction for real-life experience. Should we be bothered? I am.”

…and so am I. I no longer trust that any important financial information emailed to me is real, and I’m now in the habit of going to the institution’s website rather than clicking on the attached link.

R Quinn
4 months ago

Based on what I read every day on social media sites, I’d say artificial intelligence would be an improvement, as it would contain more intelligence in many cases.

Edmund Marsh
4 months ago

Regarding your last statement, G.K. Chesterton famously made the same observation. Even if we lived in Paradise, we’d probably find a way to mess up a good thing.
