The humorous side:
LLM products will generate the answers you like, and because we equate A.I. with godlike "intelligence," we will, regardless of our positions on the issues, believe we have come across authoritative information.
It seems the way one phrases a question is used as information to categorize the user and provide the perspective they're looking for - either through intentional social-media-style steering, or simply because certain phrases tend to be repeated in the dataset by people who think alike. For example, people aware of RFT or other psychological/linguistic approaches are probably highly educated, practiced in abstract meta-thinking, politically left of center, urban-dwelling, and earning a somewhat higher than median wage. The only internet content an LLM can scrape on the topic will have been written by a population that skews in this direction. So anything you ask referencing this topic will produce an output skewed toward the demographic that writes or talks about such things online.
As another example, the word associations of "immigrants" with "crime" or "slums" or "third-world" are much, much more likely to appear in right-wing content. So if you wrote a question like "Do immigrants from third-world countries bring crime and turn American cities into slums?" the LLM will find far more matches for those key words in content written by right-wingers. The output, likewise, will confirm the intuitions of the sort of person who would ask such a question.
Again, we already saw this with simple search engines like Google. Google, using nothing more than a predictive algorithm of what a user would click on given the words they typed, tended to deliver results that matched what the user was hoping to find. Like an LLM, its early iterations didn't try to reason; they merely tried to predict. Thus, if you typed something like "coverup COVID vaccine cause of autism," it would deliver misinformation, because that's what users who search those terms typically click. People who aren't inclined to believe conspiracy theories, or who are looking for evidence to debunk them, tend not to type those terms as often. More importantly, they produce orders of magnitude less data about their point of view.
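To make that mechanism concrete, here is a toy sketch in Python of a ranker whose only objective is predicted clicks for the terms typed. The click log, document names, and scores are all invented for illustration; this is not a description of Google's actual system. The point is just that when the objective is "what would this asker click," the phrasing of the query decides the answer and accuracy never enters the loop.

```python
# Toy sketch: ranking purely by predicted clicks. All data is invented.
from collections import defaultdict

# Hypothetical click history: (terms the user typed, document they clicked)
click_log = [
    ({"coverup", "vaccine", "autism"}, "conspiracy_blog"),
    ({"coverup", "vaccine", "autism"}, "conspiracy_blog"),
    ({"coverup", "vaccine", "autism"}, "conspiracy_forum"),
    ({"vaccine", "autism", "study"}, "medical_journal"),
]

def rank_by_predicted_clicks(query_terms):
    """Score each document by how often users of overlapping query terms
    clicked it. There is no truth signal anywhere in this loop."""
    scores = defaultdict(int)
    for past_terms, doc in click_log:
        scores[doc] += len(query_terms & past_terms)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# The phrasing of the question decides the winner:
print(rank_by_predicted_clicks({"coverup", "COVID", "vaccine", "autism"}))
# -> [('conspiracy_blog', 6), ('conspiracy_forum', 3), ('medical_journal', 2)]
#    The leading question is rewarded, because that's what similar askers clicked.
```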
Basically, LLMs, just like early Google, are inherently susceptible to the leading-question fallacy. Maybe they are a demonstration of it! The issue lies in the structure of the training data.
In the realm of politically salient issues, different ideologies have their own vocabularies. Conservatives do not use terms like "bourgeois" or "proletariat" or "dialectic" as frequently as Marxists do, so an LLM given a prompt containing those terms is probably more likely to provide an answer assembled from a body of data composed by Marxists than by conservatives. Similarly, terms like "power dynamic" or "Gini coefficient" or "police brutality" or "inequality" are used by liberals more often than conservatives, for the simple reason that liberals are concerned about these things and conservatives do not recognize them as a problem worth talking about.
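Here is a crude toy illustration of that vocabulary skew, again in Python with invented subcorpora and word counts. A real LLM does this implicitly through token statistics rather than any explicit lookup like this; the sketch only shows how the words in a prompt largely determine which ideological slice of the training data the answer gets assembled from.

```python
# Toy sketch of vocabulary skew: which invented subcorpus does a prompt's
# wording most resemble? The corpora and counts below are made up.
corpus_word_counts = {
    "marxist_writing":      {"bourgeois": 40, "dialectic": 25, "proletariat": 35},
    "conservative_writing": {"border": 50, "taxes": 45, "crime": 30},
    "liberal_writing":      {"inequality": 45, "power": 35, "dynamic": 20},
}

def likely_source_mix(prompt):
    """Weight each subcorpus by how heavily it uses the prompt's words,
    normalized into a rough 'where the answer will come from' mix."""
    words = prompt.lower().split()
    raw = {
        name: sum(counts.get(w, 0) for w in words)
        for name, counts in corpus_word_counts.items()
    }
    total = sum(raw.values()) or 1
    return {name: round(score / total, 2) for name, score in raw.items()}

print(likely_source_mix("explain the dialectic of bourgeois inequality"))
# -> {'marxist_writing': 0.59, 'conservative_writing': 0.0, 'liberal_writing': 0.41}
#    The prompt's vocabulary largely decides whose writing the answer echoes.
```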
So yea, we're just getting started with people using LLM outputs to justify their positions. Social media will soon be dominated by dueling AI outputs. The reason we care is the persistent, hopeful belief that a machine analyzing the internet will bypass human error and spit out Objective Truth. In reality, we're getting an averaged-out assemblage of the ideas associated with key words on the internet, written by humans with the same flaws and biases we think AI will bypass. You can shit into a blender, but it still won't make a smoothie.
The dark side:
This interaction reminds me of the early days of social media echo chambers. Lifelong friends would get into flame wars and swear never to speak to one another again, over debates about abstractions, misunderstood theories, or group identities. Community groups and civil society quietly withered away as people devoted more and more time to making the information on the computer conform to their ideological/tribal preferences.
Web 2.0 involved finding a way to make a profit from this obsession, by putting people into informational bubbles to maximize their engagement. Facebook would see you liked antique Ford Mustangs and would feed you related content to keep you scrolling.
A.I. companies will absolutely, with 100% certainty, find the same road to profit. The subscription you don't cancel will be the one that makes you feel the most satisfied, just like with social media. Already, your A.I. account builds up a memory of your previous queries, which affects your subsequent results. This is by design, so that you can conversationally hone results without having to compose longer and longer prompt strings. Yet the dark side is that once it learns you are, for example, a QAnon conspiracy theorist who asks for multiple iterations on information related to this topic, it will learn to give you more confirming information on that topic and automatically exclude the sort of results you tend to ask it to exclude.
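A toy sketch of that personalization loop, with a hypothetical UserMemory class and invented sources and results (no vendor's actual memory feature is being described): once the profile stores what you keep asking about and which sources you reject, the disconfirming material simply stops showing up.

```python
# Toy sketch of a memory/personalization loop. Everything here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class UserMemory:
    boosted_topics: set = field(default_factory=set)
    excluded_sources: set = field(default_factory=set)

    def update(self, query, rejected_source=None):
        # Remember recurring topics and any source the user told us to drop.
        self.boosted_topics.update(query.lower().split())
        if rejected_source:
            self.excluded_sources.add(rejected_source)

    def rerank(self, results):
        # Silently drop rejected sources, then favor results that match
        # the user's recurring topics.
        kept = [r for r in results if r["source"] not in self.excluded_sources]
        return sorted(
            kept,
            key=lambda r: len(self.boosted_topics & set(r["text"].lower().split())),
            reverse=True,
        )

memory = UserMemory()
memory.update("pizza parlor basement coverup", rejected_source="factcheck_site")
results = [
    {"source": "factcheck_site", "text": "there was no basement"},
    {"source": "fringe_forum",   "text": "the basement was covered up"},
]
print(memory.rerank(results))
# -> only the fringe_forum result survives; the disconfirming source
#    never even reaches the user in the next session.
```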
Then off you go with conclusive, objective, A.I.-generated proof that there actually was a basement in the pizza parlor. This example is kinda wild, but what about the projected impact of a tax bill on deficits? What about the significance of the risk of microplastics? What about the risks of an Iranian nuclear bomb? What about the use of slaves to fight in the American Civil War? On these topics and thousands more, most of the content is generated by people with an ideological point to make.
Getting practical:
An analytical framework like RFT could be wielded as a weapon to dissect an argument on any side. Does that process lead to truth discovery? Or does it merely reveal our deepest thinking to be the mere rhetoric and sloppy linguistic associations it actually is?
In any case, the great A.I. hope has always been that an objective machine will bypass our human fallacies, biases, limitations, and ideological blindness, delivering Truth and solutions. This theme goes back to the dawn of science fiction, back to the first calculators, back to deus ex machina.
Yet who would actually be persuaded by having their narratives broken down in RFT format and associated with other, different ideas considered to be incorrect? Is an LLM going to change anyone's mind, whether because it isn't a person posing a competitive threat, or because of its perceived objectivity?
Would it even be desirable if AIs caused a great conformity in our attitudes about what is reasonable or desirable? Would it be a utopia or a dystopia to have our philosophy handed down to us by a commercial information product? Would it be good or bad if ideas that differed from the consensus were crushed by machine output?
I have my doubts, but from what I'm seeing, the perception that A.I. is somehow objective and more accurate than the sum of the internet's thoughts is deeply ingrained. This is the sort of epistemic error that could lead to a world war some day.
So I suggest thinking of LLMs not only as tools to rapidly compose output, but also as mirrors reflecting back much of what you want to hear. Think about it. Have you yet gotten into an argument with an LLM? Or has it always compliantly given you more of what you already "knew"?