MY DAYS WRITING for HumbleDollar may be numbered. I recently started playing with Google’s Bard, OpenAI’s ChatGPT and Microsoft’s version of the ChatGPT artificial intelligence (AI) platform, and was curious to see how they might perform in providing basic financial guidance. Their answers were generally sensible and aligned with HumbleDollar’s approach—though also occasionally flawed.
You might think that AI can’t possibly replace articles penned by contributors, since the charm of HumbleDollar is the contributors’ personal stories. Yet, if asked for a 500- or 700-word story, AI platforms can readily deliver original pieces of writing on any topic. While these stories are fiction, they can seem surprisingly realistic.
So far, I’ve found the most useful feature of these AI platforms is their editing ability. I’ve asked the platforms to edit a few of my finance and sports articles and, in every case, the platforms have improved my prose. They tend to make the writing more concise, choose more descriptive words and, in some cases, even suggest relevant content or context.
These platforms are built using large language models, so editing to deliver fluid prose is one of their strengths. You can use them to help you with almost any writing task. They can edit resumes, craft a job query, prepare interview questions, create training materials, complete homework (unfortunately), write poems and compose a love letter.
I’ve also tried other topics. The platforms were especially helpful with planning vacations and hiking trips. They do a good job of finding routes and offering travel alternatives. As with a regular search engine, the more specific the query, the more useful the output. Here are some AI responses to a handful of basic financial queries:
When answering these financial queries, both platforms almost always included a caveat to consult with a financial advisor. Likewise, when signing up, the platforms highlight that these AI systems are in the early development stages and that the responses may contain incorrect information. So far, I prefer Bard’s content delivery and ChatGPT’s editing capabilities.
Despite the occasional flaws, I would still recommend Bard or ChatGPT as useful tools, including for questions about personal finance and retirement basics. While they certainly won’t replace our humble editor any time soon, they can help with writing, editing and planning.
Interested in trying Google’s Bard, OpenAI’s ChatGPT or Microsoft’s ChatGPT? Using them isn’t as simple as typing in an internet address. Instead, you’ll have to request access. From my computer, I used the Google Chrome browser for Bard and the Microsoft Edge browser for the two versions of ChatGPT. Bard is also available using Apple’s iOS software.
Author’s note: This article used AI editing assistance, and no egos were hurt in the process.
John Yeigh is an author, speaker, coach, youth sports advocate and businessman with more than 30 years of publishing experience in the sports, finance and scientific fields. His book “Win the Youth Sports Game” was published in 2021. John retired in 2017 from the oil industry, where he negotiated financial details for multi-billion-dollar international projects. Check out his earlier articles.
Intelligence is a much different attribute than wisdom. And artificial intelligence is a much different attribute than artificial wisdom, or any wisdom, for that matter. I’m happy to use AI to find out facts. I’m satisfied to use AI to explain things where the explanations are straightforward. But once you need to inject values, consider the influence of experience and then weigh your assessment based on lots of personal nuance, I think I want to do that myself.
I’ve wanted to try AI and, thanks to your article, I did. I couldn’t get into Bard; I had better luck with ChatGPT. I used it for my newsletter, where I commented on Jonathan’s book, “My Money Journey,” and had it rewrite my comments. It did a nice job, but it didn’t sound like me.
Sonja – Likewise, HumbleDollar articles tend to sound somewhat like our editor, but delivering clear and concise writing is a winner no matter the editing source. This AI-edited article was among the least-edited articles after submission, and AI editing helped me even more in refining a 4,000-word adventure article. While I only use bits of the AI-suggested edits, I’m never going back.
This is an almost perfect companion to the prior article “Second Act” (https://humbledollar.com/2023/05/second-act/) about the need for life coaches.
I am a college professor in the field of Management Information Systems, which is the intersection of business and technology. I teach college sophomores (mostly 19- to 21-year-olds) about the impact of technology and how it affects “who loses a job and who gains a job.” One theme I consistently push hard on is that, despite amazing technological advances such as AI, people still want to talk to people about their specific problems and specific issues.
A life coach can do exactly this.
Another interesting aspect of AI and the media coverage I’m seeing, including this very good HumbleDollar article, is that it’s mostly about the “how” and says little about the “when.”
I developed a framework, written with one of the sophomores from my class last semester, that I’m in the process of publishing in one of our quick-to-market scholarly journals. It focuses on the “when.”
Although the discussion was specifically about getting an MBA, I think the most interesting finding from my work came in a discussion with a weekend MBA class about whether the university is just there to get you the degree (in which case, use ChatGPT as often as possible) or to teach you a craft or skill, and, at the same time, whether your own purpose is just to get a degree or to learn a craft or skill. This thinking can be taken down to the level of each class, and then to each individual assignment within a class.
Where this becomes super interesting (to me at least!) is at the intersection between these two end points (just-for-the-degree / for-the-skillset).
For instance, did John Yeigh write this article just to get published in Humble Dollar or to provide much-needed advice to improve the reader’s skillset?
David
David – here is the anatomy of this article’s six-week journey and the reasoning behind it:
Early April: Started exploring AI – 1) out of general curiosity and 2) to keep pace with my Gen Z kids (I’m actually ahead).
Queried AI to edit articles, and AI particularly helped tweak an adventure travel article.
Queried HumbleDollar-type finance concepts, where AI mostly passed but occasionally failed big. This was completely new info for HD readership, so the seeds of a worthy article were born.
Mid-to-late April: Wrote, submitted and edited.
Mid-May: Published.
I write on multiple subject areas where I can hopefully add something new, which is personally enriching. I have published more than 100 articles since 1989, plus a book. Magazine articles in the pre-web days earned real money, but writing these days is mostly for self-actualization.
I’ve grown tired of the media’s obsession with AI. Same with crypto. Until there’s some breakthrough, I’ll only scan headlines about them. Classic information overload.
You don’t consider the new large language models a breakthrough?
My concern is the privacy issue. Google and Microsoft collect so much private information that I am loath to contribute more by using their AI platforms. What do they do with the information we give them?
Google’s Bard is not available in Europe, probably due to the region’s stronger privacy rules.
ChatGPT now gives you the ability to turn off chat history, which will (in theory) keep the company from saving any prompts you enter. Bard offers ways to pause or delete your activity as well.
Whether you trust them to do as they say on privacy? That is up to you.
The scary thing is that the AIs will get better and better over time. Experts’ predictions for the arrival of AGI (artificial general intelligence) have gotten closer and closer with the recent developments from ChatGPT, Bard and Bing. (Technically, Bing is built on OpenAI’s ChatGPT, given Microsoft’s major investment in OpenAI.)
The amount of white collar worker dislocation over the coming two decades might require a dramatic rethink of how society is structured.
Interesting times indeed.
This is fascinating stuff. Curious, I just tried rytr.me: I put in the topic and several keywords, and the app wrote an article designed for a blog. Pretty generic stuff, but if I published it, is it plagiarism?
The plagiarism angle is still being decided by the courts for music, art, and (as you point out) text generated by AI.
Timely topic. NPR had a piece this morning about congressional hearings starting today on AI and likely guardrail-type legislation to limit AI-powered deception and fraud. Per the NPR piece, the U.S. is trailing the EU in legislation and in protecting personal information.
https://www.npr.org/2023/05/16/1176371658/capitol-hill-hearings-to-take-a-closer-look-at-guardrails-for-artificial-intelli
Thanks John
Nice article. I also have played around with Bard and Bing, and I have three comments. First, it’s interesting to see the different responses from Bard and Bing to the same query. Second, I’ve found some situations where the AIs really are much more efficient than standard search queries. For example, Bard is really helpful and detailed when answering questions about how to do something in Google Sheets that I hadn’t done before. It’s much more useful and interactive than the help function or traditional web searches. Third, I’ve found both products very useful for consolidating information from multiple sources into one table. As I write this, I’ve found the Bing smartphone app’s ability to listen to my question and respond verbally really useful when reading the answer on the smartphone is difficult (such as in bright lighting or when I don’t have my reading glasses). But I have found that you get better responses from both if you take a second to state some background information about what you’re asking and then form a well-crafted question. It seems to help them zero in on what you’re looking for more quickly and precisely.
All good points. I will try the “how-to” approach. In the past, I have googled a variety of ways to ask for info until I got to the right area and then narrowed it down article by article. Sounds like I will need to prune while using AI, too, but it may be faster. Thanks.
OldITGuy – thanks for even more clarification on how best to use these tools. In a similar manner, I found some initial output lacking. Those gaps led me to home in on improved follow-up queries.
How were those whose work was used to build that generative AI model compensated? AI has an issue here, as well as with transparency and bias. Solutions are not quick or easy.
You are correct that copyright and intellectual property rights have long been web issues, now exacerbated by AI. For example, my book was offered for sale by 20 pirate sellers two weeks before the publisher’s launch date. Today there are too many illegal sellers who have stolen our copyright to chase – we shut down some initially, but the sellers just popped up with the same ad copy under new names.
It is tough for artists, musicians and writers to make a buck these days. The financial playbook has changed to touring, giving paid speeches, building a brand to get ad revenue, and upselling branded products.
I sense there’s an issue here that I’m unaware of. What prompts your question, David?
John’s piece is excellent; I’m making broader points to be mindful of as we all rush headlong into using these shiny new AI things.
These big AI inference models, such as GPT, are built from source datasets. Many AI experiences don’t allow you to easily discover the sources of that data, so you can determine for yourself whether they are fully trustworthy.
Also, John’s piece shows the value derived from the GPT model, but from a compensation perspective, that value is not linked back to the work of those who produced the data from which the model is built. When you publish a piece on HD, do the GPT creators, who crawl the site for your article or Guide data, agree to compensate you for your content, as advertisers do today?
Finally, all datasets have skew or bias, along any dimension you examine, and the industry has only the beginnings of reliable methods for detecting and mitigating such bias so the data can still be used while producing fair, reasonable and unbiased results.
These are all solvable problems with time.
David can answer for himself, but I have read that a lot of “content creators” (i.e., writers) are upset that their output was used to train AIs without their consent and certainly without payment. The developers basically “scraped” the web for everything they could access, HumbleDollar included, I’m sure. It is the same model as selling users’ data without compensation.
John, thanks for an interesting article. I have not ventured onto AI platforms, but you have piqued my interest. One of my concerns about writing articles is whether I’m providing anything new, creative, meaningful… Much of what appears in the financial press is a consolidation of existing information. When I research a topic, I often find that a solid article on the topic already exists. I try to bring a new twist where I can, but one of my main reasons for writing is the hope that I can expose new readers to important topics.
Interesting test drive, John. My question will reflect my general ignorance of the topic, but for the financial questions, do you know how far the programs are able to reach for their answers? Are they influenced by the biases of the programmers, or do they access research and objective data to draw conclusions?
I’m no expert, but my understanding is that they’re using data from the web. Both Bard and Bing will show you the sources of the data they used to answer a query if you ask them to. You can then ask them to use only certain data sources and refine their answer, and they’ll both do so. For example, in response to a health question, I asked both to show me their data sources. Then I asked them to refine their answers based on data from the Mayo Clinic, Cleveland Clinic and Harvard Health Publishing, and they both did so. I hope this helps answer your question.
Very helpful, thanks!
I honestly don’t know, but Google’s and Microsoft’s leaders have clearly indicated an intent to deliver the most reliable info. My basic financial queries generally delivered answers reflecting sensible norms. Still, AI locates info with different biases; witness, for example, the two different conservative portfolio allocations. Yet, if there weren’t different biases, we couldn’t have our daily debates on HumbleDollar :).
So, we’ll have to use our own judgment in the end, just as we do now. Non-buyer be wary.
John, in the examples you mention, it seems AI was effectively consolidating existing information and previously developed ideas and concepts. I wonder how far we are from having it create new things or challenge existing ideas.
Indeed, the AI platforms consolidate existing content and often summarize it eloquently. They also often provide footnoted references, especially Bing. However, as my IRMAA marginal premium query showed, all the programs delivered incorrect information as if it were fact.
In some respects, AI is already creating, because it is researching, ordering, sorting, rationalizing and delivering new content. We just need a bit of caveat emptor and must fact-check it ourselves.