The political betting markets may be too slow to react to Harris' performance of a lifetime because both the bots and the humans involved are consuming information from right-wing media saying that Trump clearly won. This is analogous to people trying to influence stock prices with fake press releases or pump-and-dump schemes, or the example you gave where the bot failed to understand the context of the information about "Berkshire". Humans make errors like that too, and the EMH assumes their mistakes must even out across large numbers!
I think this illustrates a limitation of both human and artificial intelligence: the outputs are only as good as the inputs. A better AI, or a smarter human, would be able to interpret language within a broader context of source reliability, the incentives sources have to provide misinformation, whether a claim is far-fetched, consistency with intellectual frameworks such as physics, medicine, or economics, and consistency with sensory information about the real world.
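To make that concrete, here's a toy sketch of what such context-weighting could look like. Every factor name, weight, and score below is invented for illustration; a real system would have to learn them from data rather than hard-code them.

```python
# Toy credibility score combining the context factors listed above.
# Every factor name, weight, and score is invented for illustration.

def credibility(factors: dict) -> float:
    """Combine 0-to-1 context scores into one credibility estimate."""
    weights = {
        "source_reliability": 0.30,     # source's track record
        "incentive_alignment": 0.25,    # does the source profit from misleading?
        "plausibility": 0.20,           # is the claim far-fetched on its face?
        "framework_consistency": 0.15,  # fits physics/medicine/economics?
        "sensory_consistency": 0.10,    # matches real-world observation?
    }
    # Unknown factors default to a neutral 0.5.
    return sum(w * factors.get(name, 0.5) for name, w in weights.items())

# A hyped press release from a source with a weak track record and strong
# incentives to exaggerate scores low despite its confident language:
print(credibility({
    "source_reliability": 0.3,
    "incentive_alignment": 0.2,
    "plausibility": 0.6,
    "framework_consistency": 0.7,
    "sensory_consistency": 0.5,
}))  # ~0.42 on a 0-1 scale
```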
For example, there were tons of doomer videos, podcasts, and interviews saying the US was already in recession in the second quarter, but then GDP growth was announced at a massive +3%. My human critical thinking process notes the discrepancy, identifies the more reliable source of information, and discounts the specific sources that provided the contradictory information. Digging further, I draw conclusions about the tendency of internet users to seek out negative information, which drives ad clicks, which incentivizes content creators to produce whatever content maximizes their revenue. A third level of cognition ties the bias of internet content consumers toward negative content to what I've previously learned about negativity bias, and this entire structure of thoughts leads me to broader conclusions about other ad-driven information sources and human behavior.
Thus, the observation of YouTuber doomers being wrong leads me to also discount the value of ad-supported Associated Press and Reuters articles, and inclines me to search for information sources that are not funded by internet traffic. A fourth level of cognition identifies subscription-based sources, academic articles, and government statistics sites as potentially more reliable than the entire class of ad-supported sources. A fifth level of cognition checks this impulse, and notes that these sources may have their own, different systematic biases. (A toy sketch of this kind of reliability updating follows below.)
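One way to formalize that discounting is a Bayesian update on the reliability of a whole source class. A minimal sketch, with invented priors and likelihoods:

```python
# Toy Bayesian update on the reliability of a whole source class,
# mirroring the GDP example above. Priors and likelihoods are invented.

def update_reliability(prior, p_wrong_if_reliable, p_wrong_if_unreliable):
    """Posterior P(source class is reliable) after observing one wrong claim."""
    evidence = (p_wrong_if_reliable * prior
                + p_wrong_if_unreliable * (1 - prior))
    return p_wrong_if_reliable * prior / evidence

# Ad-supported doomer channels called a recession; GDP then printed +3%.
posterior = update_reliability(prior=0.50,
                               p_wrong_if_reliable=0.10,
                               p_wrong_if_unreliable=0.60)
print(round(posterior, 2))  # 0.14: trust in the whole class drops sharply
```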
This string of thoughts through a cloud of information comes together as my own conclusions, behavioral tendencies, ideological frameworks, blind spots, and expectations. However, different human minds would take different paths through all the inferential data, or perhaps stop at a lower level than where I stopped. Some human minds conclude the YouTubers are right and the government falsifies its data for political purposes. Are they wrong? Can we prove it?
This illustrates the challenge of deciding whether an AI is "working" or not. Human minds are a dime a dozen, so the hurdle for AI is to be significantly better at thinking than human experts*. Yet if human minds are all over the place in terms of their accuracy and validity, then what does a working AI look like? Perfect accuracy? 51%? Passing the Turing Test and hoping for the best?
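For what it's worth, even a bare 51% would not be trivial if the calls were independent, since a small edge compounds over many bets. A quick simulation sketch, with invented numbers:

```python
# Why a bare 51% hit rate is not trivial: over many independent even-odds
# calls, a small edge compounds into a reliably positive result.
import random

random.seed(0)

def net_wins(accuracy: float, n_calls: int) -> int:
    """Wins minus losses across n independent unit bets."""
    return sum(1 if random.random() < accuracy else -1 for _ in range(n_calls))

for acc in (0.50, 0.51, 0.55):
    trials = [net_wins(acc, 10_000) for _ in range(100)]
    avg = sum(trials) / len(trials)
    print(acc, round(avg))  # expected edge is roughly (2*acc - 1) * n_calls
```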
AIs are at a natural disadvantage because they lack sensory inputs from the real world. An AI can process a press release with positive language about a new car model, but cannot learn from a test drive or understand that the car is ugly. It can process a restaurant chain's financials but not understand that the food quality and service have slowly taken a dive. It can understand the specifications of a gadget, but not the functionality in a user's hand, or the aura of status implied in the advertising campaign, or how any of this comes together as a human experience. The AI also does not naturally want. Maslow's Hierarchy is a piece of information in a framework, not a lived reality in a human meat-body with various urges, an unending stream of sensations, and a constantly changing environmental and cultural context for interpreting these motivations.
Thus it will take a lot of tricky programming for an AI to predict human behavior - and economics is at its root a behavioral science, not a branch of math. An AI could tell us that people like playing video games, but it will have a hard time determining which new games will be most popular.
If we threw a lot of AI power at the market, I think it would become less efficient at incorporating "all available information", and that would open up opportunities for quality testers, early adopters, fashion leaders, scientifically minded folks, contrarian thinkers, and narrative thinkers who could spot situations where the AI was drinking its own Kool-Aid or where changing circumstances would affect humans in a certain way.
TL;DR: Critical thinking is a qualitative process and experience that may be hard for a neural network to emulate, for reasons that may be inherent to being human versus being machine. Because the same inputs can lead to different outputs when run through different brains, or even the same brain at different times, there is no one correct and verifiable solution to inductively informed narratives. This means we will not be able to tell whether an AI is thinking correctly by comparing it to human reference points, or even to our own outputs. To understand economics and investing, the AI would need to emulate human psychology to some extent. E.g., how many people will buy an iPhone 16 when the specs have barely changed since the 15? None? All? 30%?
*In some applications, such as reacting to earnings reports or exploiting momentary bid/ask spreads, an AI could be useful if it could draw faster conclusions, even if its accuracy were lower than that of a human whose conclusion would not arrive in time to exploit the gaps. But these are essentially the trading bots we already have, and have had for decades.
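A back-of-envelope version of that speed-versus-accuracy tradeoff, with invented probabilities and payoffs:

```python
# Expected value per opportunity: a fast, noisy bot can beat a slower,
# more accurate human if the mispricing closes before the human can act.

def expected_value(p_correct, p_in_time, gain=1.0, loss=1.0):
    """You only earn or lose if you act before the opportunity closes."""
    return p_in_time * (p_correct * gain - (1 - p_correct) * loss)

bot = expected_value(p_correct=0.55, p_in_time=0.95)    # fast but noisy
human = expected_value(p_correct=0.70, p_in_time=0.10)  # accurate but slow
print(f"bot EV: {bot:.3f}, human EV: {human:.3f}")      # 0.095 vs 0.040
```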