
Human Events Daily's Jack Posobiec hosted tech expert Larry Ward of Human Events and Libby Emmons, editor-in-chief of The Post Millennial, to talk about AI bias, the second-rate credibility of large language models (LLMs), and growing concerns about liberal slant in major AI systems.
Posobiec opened the conversation by touching on one of the biggest issues with how AI is trained today: “I’m not a computer programmer, but I do understand some of the basics,” he said.
“It’s garbage in, garbage out. It’s one of the main issues of computing and algorithms or systemic modelling and that’s the same issue that a lot of the AI has now—not just the bias in terms of being fed these far-left sources like Wikipedia or the New York Times or so much liberal corporate media that’s out there.”
He added that beyond ideological input, the quality of AI training data is degrading. “AI is degrading as the models continue to be trained on the internet because there’s so much AI-generated content that’s now appearing on the internet. You have AI that’s being trained on AI, which degrades the overall value of the model itself.” This is known as "model collapse."
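The feedback loop Posobiec describes can be shown with a toy simulation. The sketch below is purely illustrative and not from the episode: the "model" here is nothing more than the word frequencies of the previous generation's text, but the erosion dynamic is the one model-collapse research points to, since any rare word the model fails to reproduce is lost for good.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-written" text over a 1,000-word vocabulary
# with a long tail of rare words (Zipf-like frequencies).
vocab_size = 1_000
ranks = np.arange(1, vocab_size + 1)
probs = (1.0 / ranks) / (1.0 / ranks).sum()
corpus = rng.choice(vocab_size, size=20_000, p=probs)

for generation in range(10):
    print(f"gen {generation}: {len(np.unique(corpus))} distinct words survive")
    # "Train" a toy model: just the empirical word frequencies.
    counts = np.bincount(corpus, minlength=vocab_size)
    model = counts / counts.sum()
    # "Publish" synthetic text from the model, then train the next
    # generation on that synthetic text alone. Any word the model
    # never emits can never come back, so the tail erodes each round.
    corpus = rng.choice(vocab_size, size=20_000, p=model)
```

Each round, the distinct-word count can only fall, which is the toy version of Posobiec's point: models trained on model output converge on a blander, narrower distribution.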
Larry Ward agreed. “You’re 100% right and it’s going to continue to get worse,” he said. Ward pointed out that the business deals driving AI development further compound the bias. “These AI companies, these large language models are making deals—you saw a huge deal that was put out there between Amazon and the New York Times, where they’re paying the New York Times an undisclosed sum of money—probably hundreds of millions—to train its models on the New York Times.”
According to Ward, the trend is systematically excluding conservative viewpoints: “These companies are going to mainstream or very liberal places and paying them a lot of money so that they can train their model on their content. What they’re not doing is searching for conservative publications and voices like Human Events or The Post Millennial or the Washington Examiner, et cetera and so on.”
Ward continued: “Silicon Valley has destroyed the financial wherewithal of a lot of these companies on the right, because they demonetize them, they throttle them, they’ve choked them and they put them in deep financial stress.”
“Trust is the number one asset. It’s the number one investment that AI companies need to make in order to yield a high return and right now, who’s going to trust an AI company like OpenAI or Google that puts George Washington up as a black president, or you type into OpenAI and it says Joe Biden is still the president? There’s lots and lots of evidence, overall, that they are just off the rails in terms of their liberal bias.”
Posobiec emphasized that the goal isn’t to shift AI to the right, but to make it fair and balanced: “You’re not talking about making AI conservative, you’re talking about making it viewpoint neutral so that true information is able to get through the screen.”
Ward agreed, warning of the real-world consequences of biased AI during major events. “If we went back to 2020 when COVID was around, and these AI models were just trained on the liberal bias, what would we have? We’d have the AI telling everybody they have to wear masks and go out and get vaccines… and to participate in Black Lives Matter protests. That’s what it would be telling the American people. How many more people would have been fooled into some of the nonsense that went on during COVID?”
He continued, “We have to look at this: it’s a national security risk. These biased AI systems pose a national threat because they create blind spots in everything from policy analysis to threat assessment. We have to have a neutral view, we have to have both conservative and liberal perspectives and quite frankly, these AI companies should put their money where their mouth is and contact these conservative publications to pay the market rate as soon as humanly possible.”
Later in the episode, Posobiec brought in Libby Emmons, who pointed to a growing problem in the field—AI hallucinations. “We’ve seen some recent developments in AI that have been stunning, that back up my view that AI is not a tool that should be used by someone who doesn’t know how to use that tool,” Emmons said.
She pointed to a recent incident involving fake book recommendations.
“The situation recently, where a commissioned author wrote a summer reading list and used AI to get recommendations for that summer reading list, and the AI spit out a bunch of fake books by real authors. The author didn’t check it, his editors didn’t check it, no fact-checkers checked it, and the whole thing ran in the Chicago Sun-Times and Philadelphia Inquirer.”
Emmons also referenced a legal case highlighting similar issues. “A federal judge is seeking to hit a law firm in Alabama with sanctions after that law firm filed a brief with fake citations of fake cases in order to defend themselves. And that’s not the first time that that’s happened.”
“What we have going on here is what are being called AI hallucinations, otherwise known as complete and total lies and fabrications,” she said. “Where AI is asked a question and it just makes stuff up, and then people too lazy to check what the AI has spit back out at them just run with it, completely unaware and apparently unperturbed that it’s just spreading lies.”
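The failure mode Emmons describes is checkable by machine as well as by hand. As a hedged illustration, here is a sketch of the verification pass the reading-list pipeline evidently skipped; the book list is a stand-in (though “Tidewater Dreams” was reportedly among the invented titles in that incident), and only Open Library's public search endpoint is assumed.

```python
import json
import urllib.parse
import urllib.request

def book_exists(title: str, author: str) -> bool:
    """Ask Open Library's public search API whether any record
    matches this title/author pair."""
    query = urllib.parse.urlencode(
        {"title": title, "author": author, "limit": "1"}
    )
    url = f"https://openlibrary.org/search.json?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp).get("numFound", 0) > 0

# A stand-in for an AI-generated reading list: one real book and
# one invented title attributed to a real author.
reading_list = [
    ("The Old Man and the Sea", "Ernest Hemingway"),
    ("Tidewater Dreams", "Isabel Allende"),
]

for title, author in reading_list:
    verdict = "found" if book_exists(title, author) else "NOT FOUND, flag for review"
    print(f"{title!r} by {author}: {verdict}")
```

A catalog miss doesn't prove a book is fake, but it is a cheap tripwire: anything the lookup can't find gets a human check before print.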