
Three Examples Of Bad AI Information You Can Get When Investing

by admin

Investors as well as those interested in the impact of economics on the country and world — a large statistical universe — often look for data. The current artificial intelligence hype and inclusion of such capabilities into software, computer operating systems, and web browsers would seem welcome effort-saving additions. However, there is a problem: bad AI information.

Here are three simple examples of how relying on the accuracy of what is handed to you can lead you to incorporate errors, whether tiny or significant, into your work, deliberations, and decisions. But first, some background.

Not All AI Is The Same

The term artificial intelligence, or AI, is thrown about without discernment, and that sloppiness can lead to significant mistakes.

Practical work on AI has existed since at least the 1950s. The category incorporates many types of technology. All have their uses and have been incorporated into software and even hardware for decades. The spellchecker that is so handy in word processing is a form of AI. But even such a tool, long developed and refined, can make mistakes. Machine learning can identify patterns, but not always correctly.

No tool is perfect. Wield a hammer in framing a wall and you might leave dents in wood or welts on your hand if your aim lacks accuracy. Could you use the claw end to break through wood? Absolutely, and yet you’re far better off reaching for a saw.

The current AI fad is the generative form, like ChatGPT. Some claim these systems are a prototype of general intelligence. They aren’t. Rather, these programs employ laudably sophisticated statistical capabilities exercised on huge amounts of data — writing, visual imagery, video, audio, or computer code — and look for patterns. What word or graphic element typically follows a specific other in a given context?
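To make the word-prediction idea concrete, here is a deliberately tiny illustration in Python: a bigram counter that tallies which word most often follows another in a sample sentence. It is only a toy; real generative models use far more sophisticated statistics and vastly more data, but the underlying move is the same kind of pattern-counting.

    # A toy "what word typically follows another" counter. Real generative
    # models are vastly more sophisticated, but the core idea is statistical
    # pattern-matching over text.
    from collections import Counter, defaultdict

    sample = ("the fed sets rates and the fed watches inflation "
              "and the market watches the fed")
    words = sample.split()

    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

    # The most common word after "the" in this tiny sample:
    print(following["the"].most_common(1))  # [('fed', 3)]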

The results from these generative systems can seem magical, even like a solution to the so-called Turing test, a thought experiment by the mathematician Alan Turing. If a third-party judge following a blind conversation between a computer and a human can’t tell which is which, the machine passes the test.

Semblance, however, is not identity. A convincing trompe l’oeil painting remains a flat surface, not a three-dimensional object. Generative AI frequently creates hallucinations: fabricated information that might seem correct but isn’t. That can go as far as inventing research papers or legal citations that don’t exist.

There are other reasons, such as the data sources used or the timing of when that data was gathered, why generative AI can provide bad information.

Examples Of Bad AI Information

This topic started for me on March 18, 2025, when I double-checked the current federal funds rate range, the benchmark interest rate the Federal Reserve sets. Searching with Bing in Microsoft’s Edge browser, I got a list of potential sources. At the top of the first page was an AI summary. Here’s what it showed.

Notice the conflicting information: the rate range is shown as 4.50% to 4.75% at the top, yet the description lists the target range, which should be the same figure, as 5.25% to 5.50%.

Neither of these answers was correct. The current rate as of this writing is 4.25% to 4.50%, according to the Federal Reserve. That range was set in December 2024 and left untouched at the January 2025 meeting of the Federal Open Market Committee, the part of the Fed that sets rates.
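Anyone who wants to confirm the figure without trusting a search summary can pull the target range straight from the Fed’s FRED database. Here is a minimal sketch, assuming the third-party pandas-datareader package is installed and using FRED’s DFEDTARL and DFEDTARU series for the lower and upper limits of the range.

    # Pull the federal funds target range directly from FRED rather than
    # relying on an AI search summary. Assumes pandas-datareader is installed.
    from datetime import date, timedelta

    import pandas_datareader.data as web

    start = date.today() - timedelta(days=30)
    rates = web.DataReader(["DFEDTARL", "DFEDTARU"], "fred", start).dropna()

    latest_day = rates.index[-1].date()
    lower, upper = rates.iloc[-1]["DFEDTARL"], rates.iloc[-1]["DFEDTARU"]
    print(f"Fed funds target range as of {latest_day}: {lower:.2f}% to {upper:.2f}%")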

The AI display was seriously wrong on a basic economic and financial figure.

I wondered what else might be awry. Next, I looked up the yield on the 10-year Treasury note, another standard reference point in finance. It is a common proxy for the risk-free component of longer-term interest rates.

Here’s the Bing AI answer, which should have been the closing value on March 17.

And here is the March 17 value (highlighted) from the Department of the Treasury.

The difference isn’t large, only 0.03 percentage points, or 3 basis points, but it is meaningful in bond trading, especially if it is part of a possible upward trend. The question is what source the software used for its information. The AI-quoted 4.28% figure was last seen as an end-of-day rate on March 11.
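The same kind of cross-check works for the 10-year yield. The sketch below, again only an illustration that assumes pandas-datareader is installed, pulls FRED’s DGS10 series (the 10-year constant-maturity Treasury yield) and reports how far it sits from the figure the AI summary quoted.

    # Compare an AI-quoted 10-year Treasury yield against FRED's DGS10
    # series (10-year constant-maturity yield).
    from datetime import date, timedelta

    import pandas_datareader.data as web

    ai_quoted_yield = 4.28  # percent; the figure the AI summary displayed

    start = date.today() - timedelta(days=14)
    dgs10 = web.DataReader("DGS10", "fred", start).dropna()

    official = dgs10["DGS10"].iloc[-1]
    gap_bps = (official - ai_quoted_yield) * 100  # 1 pct point = 100 basis points

    print(f"Official 10-year yield ({dgs10.index[-1].date()}): {official:.2f}%")
    print(f"AI-quoted yield: {ai_quoted_yield:.2f}%  (gap: {gap_bps:+.0f} bps)")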

The third example was the Tesla stock price on the morning of March 18. This was more subtle. Here’s what the AI source showed.

Here is a screenshot of Yahoo Finance.

Share prices can shift quickly, and an opening figure could easily differ between two displays, particularly as the information is often delayed by 15 or 20 minutes. And yet, the March 17 closing price should have been the same in both. The AI-provided figure was $228.03. Yahoo Finance showed $238.01. I double-checked using S&P Global Market Intelligence, which also showed $238.01. Again, was the problem one of differing data sources or of timing?
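One way to settle that kind of discrepancy is to pull the prior close from a second source yourself. Here is a minimal sketch using the third-party yfinance package, which draws on Yahoo Finance data; it prints the last few closing prices for TSLA so they can be lined up against whatever an AI summary reports.

    # Cross-check recent TSLA closing prices via the third-party yfinance
    # package (pip install yfinance) against an AI-quoted figure.
    import yfinance as yf

    ai_quoted_close = 228.03  # the March 17 close the AI summary displayed

    closes = yf.Ticker("TSLA").history(period="5d")["Close"]

    print("Recent TSLA closes per Yahoo Finance data:")
    for day, price in closes.items():
        print(f"  {day.date()}: ${price:.2f}")

    print(f"AI-quoted close: ${ai_quoted_close:.2f}")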

There may be types of AI that work well in investing, and some of the technologies might help with managing portfolios. But it is clear that automatically trusting the data software hands you can mean building decisions on bad AI information, and that can be highly risky.
