NEW YORK ATTORNEY Steven Schwartz recently found himself in hot water. Schwartz was representing a passenger injured on board an Avianca Airlines flight. In a filing with the court, Schwartz cited several precedents that appeared to support his case, including Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines. The only problem? These cases are all fictional—made up not by Schwartz, but by ChatGPT, an artificial intelligence (AI) “chatbot.”
The judge was not pleased, and Schwartz acknowledged being duly embarrassed for not verifying the information provided by ChatGPT. Schwartz explained that his children had told him about this new tool, and he understood it to be like a “super search engine.” Schwartz wasn’t aware, however, that the tool was still a work in progress and susceptible to “hallucinations”: it frequently makes things up out of whole cloth.
Since its introduction last fall, ChatGPT—along with competitors like Google’s Bard—has been the subject of heated debate. Students love it because it can help them compose well-written essays with virtually no effort. Parents and teachers, meanwhile, are wary of it precisely because of these capabilities. For that reason, and because of its tendency to hallucinate, news coverage of AI often includes an undertone of mockery.
The Wall Street Journal, for example, reported recently that Japanese venture capitalist Masayoshi Son had used ChatGPT to help him validate business ideas. After one back-and-forth with the chatbot that lasted until 4 a.m., Son reported, “I really felt great because my idea was praised as feasible and wonderful.” Coming from a billionaire who is perhaps Japan’s most famous investor, this reliance on a computer’s opinion to make investment decisions seemed odd.
Because of its limitations, you might wonder if AI has any practical use for investors today. I believe it does. Below are the types of questions where I think ChatGPT and Google Bard can be most helpful.
“Can you help me understand…?” The world of personal finance is full of jargon, and sometimes it’s hard to find basic explanations. I asked ChatGPT to describe the difference between a mutual fund and an ETF. It started by noting the ways in which they’re similar. It then listed six differences, all of which were correct and explained in clear terms. It did an equally good job explaining other terms, including bid-ask spreads, bond yields, and the difference between confusingly similar terms like efficient markets and efficient portfolios.
“What is the role of…?” If you’re looking to understand elements of the broader financial system, AI can help. For example, what is the role of the Federal Reserve, and how does it differ from the role of the Treasury? Bard answered this question by putting together a table outlining the roles and responsibilities of each entity. It then explained how they collaborate. It wasn’t the most comprehensive explanation, but it was a good introduction. I then asked Bard, “Can you say more?” Sure enough, it provided a more detailed discussion.
“What is the history of…?” AI can be great at providing historical context on a given topic. I asked Bard and ChatGPT, “What is the history of the Federal Reserve?” The responses included both basic facts and figures, as well as some historical context, even noting key events that motivated the creation of the Fed.
“What is the significance of…?” Perhaps you’ve heard a financial term but aren’t entirely sure of its significance. For instance, what does it mean when you hear about the annual meeting in Davos? Why was the Dutch East India Company so famous? What was the origin of the Dutch tulip craze? AI does well with questions like these.
“Should I…?” For now, at least, AI tools don’t necessarily know you, so it might seem problematic to ask whether you should make a particular financial decision. But this actually highlights a strength of AI: It’s very good at pulling together information on a given topic and listing the considerations that ought to go into a decision.
I asked Google’s Bard, “Should I complete a Roth conversion?” It replied with four considerations, all of which were valid and explained clearly. It did, however, include one detail that was incorrect. This is a reminder that these systems are still imperfect. That’s why I would use AI as an aid—but don’t make the mistake Steven Schwartz made. Its output is not gospel. Keep in mind, too, that you can ask these systems to cite their sources.
“Can you provide a chart of…?” In a lot of cases, personal finance information is available online but not in formats that are easy to understand. While susceptible to mistakes, AI does a great job pulling together data.
I asked AI for a graph illustrating the top marginal tax bracket in the U.S. over time. ChatGPT wasn’t able to produce a graph, but it did provide that information in a well-organized table. Bard was able to offer a chart—not generated on the fly, but sourced from another website. That gets at another of the controversies surrounding AI—that it freely borrows from websites around the internet. This is a separate topic but one that will need to be worked out.
“What are some of the most popular…?” When I asked AI for some summer reading recommendations, it came back with some reasonable selections. When I asked it to provide separate lists for novices and advanced readers, it did that as well.
Over the course of history, many inventions have been met with skepticism or fear. This goes back at least to the 1400s, when some worried the printing press would result in the spread of heresy. More recently, a 2008 article in The Atlantic asked, “Is Google Making Us Stupid?” People are asking the same question now about AI.
I understand these concerns. To be sure, as with any technology, there will be downsides. I don’t worry, though, about some of the more extreme scenarios, such as the idea that an AI-powered army could enslave the human race in an amoral drive to do something as mundane as making paperclips. This is an actual theory put forward by an academic who has been studying AI. That sort of thing strikes me as science fiction. Instead, my sense is that, as it improves, AI will be an increasingly useful tool.