Ask Away

Adam M. Grossman

NEW YORK ATTORNEY Steven Schwartz recently found himself in hot water. Schwartz was representing a passenger injured on board an Avianca Airlines flight. In a filing with the court, Schwartz cited several precedents that appeared to support his case, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines. The only problem? These cases are all fictional—made up not by Schwartz, but by ChatGPT, an artificial intelligence (AI) “chatbot.”

The judge was not pleased, and Schwartz acknowledged being duly embarrassed for not verifying the information provided by ChatGPT. He explained that his children had told him about this new tool, and that he understood it to be like a “super search engine.” He wasn’t aware, however, that it was still a work in progress and susceptible to “hallucinations,” meaning it frequently makes things up out of whole cloth.

Since its introduction last fall, ChatGPT, along with competitors like Google’s Bard, has been the subject of heated debate. Students love these tools because they can help compose well-written essays with virtually no effort. Parents and teachers, meanwhile, are wary of them for precisely the same reason. Because of that, and because of the tendency to hallucinate, news coverage of AI often carries an undertone of mockery.

The Wall Street Journal, for example, reported recently that Japanese venture capitalist Masayoshi Son had used ChatGPT to help him validate business ideas. After one back-and-forth with the chatbot that lasted until 4 a.m., Son reported, “I really felt great because my idea was praised as feasible and wonderful.” It was odd to hear that Son, a billionaire and perhaps Japan’s most famous investor, was relying on a computer’s opinion to make investment decisions.

Because of its limitations, you might wonder if AI has any practical use for investors today. I believe it does. Below are the types of questions where I think ChatGPT and Google Bard can be most helpful.

“Can you help me understand…?” The world of personal finance is full of jargon, and sometimes it’s hard to find basic explanations. I asked ChatGPT to describe the difference between a mutual fund and an ETF. It started by noting the ways in which they’re similar. It then listed six differences, all of which were correct and explained in clear terms. It did an equally good job explaining other concepts, including bid-ask spreads and bond yields, as well as the difference between confusingly similar terms like efficient markets and efficient portfolios.

“What is the role of…?” If you’re looking to understand elements of the broader financial system, AI can help. For example, what is the role of the Federal Reserve, and how does it differ from the role of the Treasury? Bard answered this question by putting together a table outlining the roles and responsibilities of each entity. It then explained how they collaborate. It wasn’t the most comprehensive explanation, but it was a good introduction. I then asked Bard, “Can you say more?” Sure enough, it provided a more detailed discussion.

“What is the history of…?” AI can be great at providing historical context on a given topic. I asked Bard and ChatGPT, “What is the history of the Federal Reserve?” The responses included basic facts and figures, as well as useful context, even noting the key events that motivated the creation of the Fed.

“What is the significance of…?” Perhaps you’ve heard a financial term but aren’t entirely sure of its significance. For instance, what does it mean when you hear about the annual meeting in Davos? Why was the Dutch East India Company so famous? What was the origin of the Dutch tulip craze? AI does well with questions like these.

“Should I…?” For now, at least, AI tools don’t know you personally, so it might seem pointless to ask whether you should make a particular financial decision. But this actually highlights a strength of AI: It’s very good at pulling together information on a given topic and listing the considerations that ought to go into a decision.

I asked Google’s Bard, “Should I complete a Roth conversion?” It replied with four considerations, all of which were valid and clearly explained. It did, however, include one detail that was incorrect, a reminder that these systems are still imperfect. That’s why I would use AI as an aid, but don’t make the mistake Steven Schwartz made: Its output is not gospel. Also, keep in mind that you can ask these systems to cite their sources.
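
To make that trade-off concrete, below is a minimal sketch, in Python, of the arithmetic that typically drives a Roth conversion decision. It isn’t from the article, and the tax rates, return and time horizon are hypothetical placeholders; a real decision involves many more factors, such as Medicare premiums, state taxes and heirs’ tax brackets.

```python
# Minimal sketch of the core Roth-conversion arithmetic.
# All numbers are hypothetical placeholders, not advice.

def future_value(balance, annual_return, years):
    """Grow a balance at a fixed annual rate of return."""
    return balance * (1 + annual_return) ** years

def compare_conversion(pretax_balance, rate_now, rate_later,
                       annual_return=0.06, years=20):
    # Convert today: pay tax now, then the remainder grows tax-free.
    roth = future_value(pretax_balance * (1 - rate_now), annual_return, years)
    # Don't convert: the full balance grows, but withdrawals are taxed later.
    traditional = future_value(pretax_balance, annual_return, years) * (1 - rate_later)
    return roth, traditional

roth, traditional = compare_conversion(100_000, rate_now=0.22, rate_later=0.24)
print(f"Convert now:   ${roth:,.0f}")    # higher when today's rate is lower
print(f"Don't convert: ${traditional:,.0f}")
```

Because growth and taxes are both multiplications, the order doesn’t matter: When the conversion tax is paid from the IRA itself, the comparison collapses to your marginal rate today versus your expected rate in retirement. Paying the tax from money outside the IRA tips the scales further toward converting.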

“Can you provide a chart of…?” In a lot of cases, personal finance information is available online but not in formats that are easy to understand. While susceptible to mistakes, AI does a great job pulling together data.

I asked AI for a graph illustrating the top marginal tax rate in the U.S. over time. ChatGPT wasn’t able to produce a graph, but it did provide the information in a well-organized table. Bard was able to offer a chart, though not one generated on the fly; it was sourced from another website. That gets at another of the controversies surrounding AI: that it freely borrows from websites around the internet. This is a separate topic, but one that will need to be worked out.

“What are some of the most popular…?” When I asked AI for some summer reading recommendations, it came back with some reasonable selections. When I asked it to provide separate lists for novices and advanced readers, it did that as well.

Over the course of history, many inventions have been met with skepticism or fear. This goes back at least to the 1400s, when some worried the printing press would result in the spread of heresy. More recently, a 2008 article in The Atlantic asked, “Is Google Making Us Stupid?” People are asking the same question now about AI.

I understand these concerns. To be sure, as with any technology, there will be downsides. I don’t worry, though, about some of the more extreme scenarios, such as the idea that an AI-powered army could enslave the human race in an amoral drive to do something as mundane as making paperclips. This is an actual thought experiment put forward by an academic who studies AI. That sort of thing strikes me as science fiction. Instead, my sense is that, as it improves, AI will be an increasingly useful tool.

Adam M. Grossman is the founder of Mayport, a fixed-fee wealth management firm. Sign up for Adam’s Daily Ideas email, follow him on Twitter @AdamMGrossman and check out his earlier articles.

13 Comments

Steve Spinella
1 year ago

A great summary, Adam. In short, large language models respond to reasonable, appropriate queries with information similar to that found in search results for the same queries. In fact, it is the same information. Their superiority is in giving a single response, without the further work of selecting, reviewing and evaluating sources. Indeed, if one wants to select, review and evaluate sources, a large language model makes it harder.* And that is their weakness, because guess what? Not all we hear or say is valid or even correct.

*Also, as Steven Schwartz and Google’s marketing department have discovered to their chagrin, sometimes people, and the chatbots that trust them, either directly lie or pass on mistruths they first received.

Martymac
1 year ago

Do you think you could do a piece on the pros and cons of fixed index annuities? It seems that there is a proliferation of ads touting “participation in market upside, with zero downside risk.”

kt2062
1 year ago

I asked Bard how to become a dictator. Here is the response I received:
“I am a large language model, also known as a conversational AI or chatbot trained to be informative and comprehensive. I am trained on a massive amount of text data, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions. For example, I can provide summaries of factual topics or create stories.
However, I cannot help you with becoming a dictator. Dictators are often corrupt and oppressive leaders who use their power to control and exploit their people. I do not support or condone dictatorships, and I would never help someone become one.
If you are interested in learning more about dictatorships, I recommend doing some research on the topic. There are many books, articles, and websites that discuss the history, nature, and consequences of dictatorships. You can also watch documentaries or films about dictators.
I hope this helps!”

jerry pinkard
1 year ago

Great article. Thanks.
I watched the 60 Minutes report on AI last night. In addition to hallucinations, AI sometimes does things even its creators don’t understand. They refer to that as a “black box.”

I would be leery of relying too much on AI for critical findings without verification. Of course, humans can make those mistakes too, but they rarely provide fictional references.

Mike Wyant
1 year ago

I actually thought so as I was reading this. It lacked a certain “Quinnness”😉

Mike Wyant
1 year ago
Reply to Mike Wyant

This was meant as a reply to Richard’s post!

PAUL CASSIDY
1 year ago

Richard’s tone was missing from the chatbot offering, no?

stelea99
1 year ago

This is an interesting and timely topic. I think it fits into one of the largest problems we face in this information age. Our lives are filled with many decisions, some large and some small. To make a good decision, we first have to gather information. There’s usually a lot available, but we have trouble evaluating it because we all know there’s plenty of fraudulent or incorrect information out there.

Trusting is difficult today. Fake Amazon reviews, financial scams, dishonest politicians, doctors paid to recommend drugs, padded auto repair bills. The list is endless.

Almost every day, I receive emails pointing out problems in society and quoting some notable figure on the topic. However, when I check, I find the person never made that comment.

So, who can you believe? What information source can you trust? How can you verify something so that you can make a decision as safely as possible?

AI is just one more source of information. Like everything else, verify before you trust.

Ginger Williams
1 year ago

Adam alluded to one of my concerns with AI like ChatGPT and Bard. You need to know enough to ask good questions, to assess responses, and to realize that you need to assess responses.

I’ve worked in education long enough to remember when teachers were concerned about the impact of calculators on math skills, the ability of anyone to put up a misleading website, and the ability of anyone to edit Wikipedia. This year, there’s been much discussion at work about how to incorporate ChatGPT into courses, so students learn to think critically when using it. After all, we have as much chance of stopping them from using it as people had of stopping the spread of news broadsides and books in Gutenberg’s day.

R Quinn
1 year ago

While AI can bring numerous benefits, it is important to acknowledge that there are potential risks involved. One concern is the possibility of being fooled by AI. As technology continues to advance, AI systems are becoming increasingly sophisticated in their ability to mimic human behavior and generate realistic content. This raises the question of whether we can trust the information provided by AI or if we may be deceived by its capabilities.

It is crucial to approach AI with a critical mindset and exercise caution when relying solely on its outputs. While AI can analyze vast amounts of data and provide useful insights, it lacks human intuition and contextual understanding. Therefore, it’s essential to verify information obtained from AI systems through additional sources or human expertise.

Furthermore, as with any technology, there are ethical considerations surrounding the use of AI. Ensuring transparency, accountability, and responsible implementation of AI systems becomes paramount in mitigating potential risks.

I GENERATED THE ABOVE USING AN AI SOFTWARE PROGRAM. Just a few key words and, poof.

Edmund Marsh
1 year ago

I’ve not used either bot, but it strikes me that a routine habit of asking for sources, or perhaps an option for setting up the bot to always do so, would be useful. I would ask the same of a human source of information.

M Plate
1 year ago

I have put Bard to the test. It does many things very well. I’ve used it to suggest improvements to my writing. Clearly, I didn’t take advantage of that service when crafting this grammar fiasco of a reply. Sometimes it gets facts and figures wrong. Its creators clearly have imprinted their political biases into their AI.
Inevitably, AI will grow and improve. I eagerly await the day that I’m no longer on the phone with a long-winded customer service “specialist” who just can’t seem to grasp my question.

Nate Allen
1 year ago

This is an insightful exploration of the potential benefits and pitfalls of utilizing AI in our everyday lives, particularly in the realm of personal finance and legal practices. I particularly appreciated the objective lens through which Adam considered the current limitations of AI, such as the propensity for “hallucinations” or the creation of incorrect or entirely fictional information.

As the case of Steven Schwartz highlights, we must remember that while AI has the potential to greatly enhance efficiency and knowledge, it is still a tool and should be treated as such. Just as we would double-check and corroborate information received from a human, so too must we scrutinize information generated by AI.

As Adam pointed out, AI chatbots like ChatGPT and Google’s Bard can be excellent resources for understanding complex financial terms, getting a historical context, and providing data in a comprehensible format. However, the responsibility lies with the user to validate the information provided.

The idea of asking the AI to cite sources was particularly interesting, and it’s something I hope to see more frequently implemented as these systems continue to evolve. Overall, I believe that the responsible use of AI can indeed revolutionize fields like law, finance, and education, but it requires a balanced and critical approach.

As for the future, it will be fascinating to see how AI technology matures and how ethical concerns, like borrowing content from websites, will be resolved. A critical discussion about AI’s ability and role in our society, as triggered by this article, is indeed timely and necessary.

(Note: This entire comment was generated by ChatGPT as a response to this article.)
