The Hidden Dangers of AI in Finance
This post The Hidden Dangers of AI in Finance appeared first on Daily Reckoning.
Jim Rickards recently published a compelling article on AI risk for Insider Intel subscribers.
In it, Jim discusses a different way in which AI could crash markets, one entirely separate from the DeepSeek, China, and NVIDIA angle we've been covering for the past week.
Today we’re going to review his key points and explore them in detail.
Here’s Jim:
“The ultimate danger arises when a large cohort of asset managers controlling trillions of dollars of assets all employ the same or similar AI algorithms in a risk management role. An individual robot working for a particular asset manager tells the manager to sell stocks in a crashing market. In some cases, the robot may be authorized to initiate a sale without further human intervention.
Taken separately, that may be the best course of action for a single manager. In the aggregate, a selling cascade with no offsetting buy orders from active managers, specialists or speculators takes stock prices straight down. Amplification through feedback loops makes matters worse.
Individual AI systems have various trigger points for selling. Not all will be triggered at once, yet all will be triggered eventually as selling begets more selling, which triggers more automated systems that add to the selling pressure, and so on. There are no contrarians among the robots. Building sentiment into systems is still at a primitive stage.”
This is a good example of why I read Jim’s work. He always approaches issues from a unique and thoughtful angle.
This risk is clearly real. We are now at the point where trading firms are integrating large language models (LLMs) into their proprietary algorithms.
What happens if a majority of trading firms are using the same AI software to drive their trading? For example, it’s likely that many money managers have integrated OpenAI’s ChatGPT models into their algos.
Now that DeepSeek R1 is the new shiny object, maybe a significant portion of firms are switching to that model.
Perhaps DeepSeek approaches trading in a completely different way. What happens if ChatGPT interprets data bullishly, but DeepSeek sees the same information as bearish?
This means that the release of a new cutting-edge model could actually change the market’s direction. It’s newer, it’s smarter, and it thinks stocks are overvalued by 40%! Sell!
Or perhaps even more disturbing, what if all the leading models come to very similar conclusions? Everybody deciding to buy or sell at the same time is a recipe for trouble. It would magnify moves on the way up and down, and could create a nasty feedback loop (as Jim points out).
70% of Markets
Today, algorithmic (software-driven) trading accounts for an estimated 65-70% of U.S. stock market volume, and that share is still growing.
We don’t know how deeply AI models (specifically LLMs) are integrated into trading algorithms at this moment. Investment software is notoriously secretive. But my guess is it’s a significant part of the market and growing fast.
For example, we can assume that large trading firms are using LLMs for market sentiment analysis. They're reading and digesting social media platforms like LinkedIn and X to figure out whether the public is scared or greedy. That data is then fed into their core algos.
AI sentiment analysis has the potential to be a very useful tool for traders. However, the risks Jim mentions do apply. We will likely see increased “herding” behavior as a result.
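Here's why identical models produce herding: if many firms feed the same model's sentiment score into broadly similar buy/sell thresholds, their orders land on the same side of the trade at the same time. The sketch below is a stand-in, not real trading code; the keyword-counting `sentiment_score` function is a crude hypothetical proxy for an LLM sentiment call, and the firm thresholds are invented.

```python
def sentiment_score(headlines):
    """Hypothetical stand-in for an LLM sentiment call: counts
    fearful vs. greedy keywords and returns a score in [-1, 1]."""
    fear = {"crash", "panic", "selloff", "fear"}
    greed = {"rally", "boom", "surge", "euphoria"}
    words = " ".join(headlines).lower().split()
    f = sum(w in fear for w in words)
    g = sum(w in greed for w in words)
    return 0.0 if f + g == 0 else (g - f) / (f + g)

def firm_decision(score, sell_below=-0.2, buy_above=0.2):
    """Map a sentiment score to an order, given a firm's thresholds."""
    if score <= sell_below:
        return "SELL"
    if score >= buy_above:
        return "BUY"
    return "HOLD"

headlines = ["Markets in panic as selloff deepens", "Fear grips Wall Street"]
score = sentiment_score(headlines)
# Ten firms with slightly different thresholds, but the SAME model and
# the same score: every order lands on the same side of the market.
decisions = [firm_decision(score, sell_below=-0.1 - 0.02 * i) for i in range(10)]
print(score, decisions)
```

The firms' thresholds differ, but because they all consume one shared signal, the dispersion of opinion that contrarians normally provide disappears, which is the herding risk in miniature.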
Sentiment analysis is one thing. But are we at the point where LLMs are actually making trading decisions at a large percentage of firms? My guess is that if we aren’t there yet, we will be soon. Using AI models is an order of magnitude cheaper than hiring Harvard MBAs and Princeton PhDs.
As I mentioned earlier, AI users tend to latch onto the latest, hottest model. ChatGPT dominated early on, then Anthropic’s Claude became the trending new thing. Now, China’s DeepSeek is making waves.
There’s a very real risk of herding reactions if everyone is trading off similar analysis. And if everyone is constantly switching to the hot new model, there are countless ways in which it could affect markets going forward.
Mr. Rickards closes his piece with a powerful metaphor:
“You might want to re-read Frankenstein by Mary Shelley. The novel centers on the creature made by Dr. Victor Frankenstein. The term “creature” is not incidental; it’s intended to evoke the term Creator. Contrary to movie portrayals of the creature as a brutish, homicidal monster, the literary version was actually highly intelligent and learned French, read Shakespeare, and was able to engage in long philosophical discourses.
Of course, such natural language processing and self-learning are exactly what AI is about. The creature can be thought of as the first fully realized AI system in literature. The central dilemma in Frankenstein was not whether the creature was intelligent (it was). The dilemma was whether it had a soul. You can draw your own conclusions. My view was that the creature did not have a soul … but perhaps it deserved one. Now we face the same dilemma with AI.”
One way to prepare is to hedge the digital world with the analog (gold and silver), as Jim suggested yesterday. I especially like silver here (which is +3.6% on the day as I write this and looking good).
Insider Intel subscribers can read Jim's full AI analysis here. And if this topic interests you, I strongly recommend checking out Jim's new book: MoneyGPT: AI and the Threat to the Global Economy. The book couldn't be more relevant.