AI Already Controls Us

madi thomas
4 min read · May 2, 2022

“What kind of music do you like?” I could list off the genres or artists that I like, or I could be honest. “Whatever Spotify recommends.”

That isn’t the only way AI sneaks its way into my life. I’m a frequent OkCupid user, so algorithms play a large part in who I date. They also decide what I watch on YouTube (70 percent of all YouTube watch time is recommended content). My makeup is often inspired by pictures that popped up on my Instagram Explore page that morning. Any random question that pops into my head gets answered by whatever Google shows on its first page.

Curated Content

In 2020, a large part of the content we consume is curated by algorithms. TikTok’s entire platform is built around recommended content. Amazon automatically recommends purchases similar to your previous ones. YouTube, which relies heavily on recommended content, is quickly becoming a more popular platform than TV. Online browsing is incredibly passive, so people aren’t especially critical of what gets recommended to them. They just keep scrolling and tapping.

One way this has already backfired is children’s content on YouTube. Creators gamed the system and uploaded bizarre, disturbing videos featuring children’s characters. These videos racked up millions of views as kids kept watching whatever was automatically recommended next. The growing influence of AI over what we see deserves heavy scrutiny before these systems spread further. In YouTube’s case, the problem is already apparent. When Google’s DeepMind looked into the impact of recommendation algorithms, it concluded that they can give rise to echo chambers. Quoting DeepMind’s Twitter: “Feedback loops in recommendation systems can give rise to ‘echo chambers’ and ‘filter bubbles’, which can narrow a user’s content exposure, and ultimately shift their world view.”

Algorithmic Bias

Even outside of social media, algorithms shape our lives. Automated filtering systems frequently play a part in whether you get hired, get a loan, or get into college. In a spectacle of failure, Amazon accidentally built a biased hiring algorithm in 2018. It analyzed data on current Amazon employees and searched the candidate pool for more people who fit that profile. Spoiler alert: the algorithm only picked men as new hires. Amazon chose not to use the program.
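To see the mechanics, here’s a toy sketch in Python (made-up data, nothing like Amazon’s actual system): train a model on hiring decisions that historically favored men, and it learns to favor men too.

```python
# Toy illustration of bias transfer (hypothetical data, not Amazon's system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)            # candidate skill score
gender = rng.integers(0, 2, size=n)   # 1 = male, 0 = female
# Historical labels: past recruiters favored men regardless of skill.
hired = (skill + 2.0 * gender + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two equally skilled candidates, differing only in gender:
probs = model.predict_proba([[1.0, 1], [1.0, 0]])[:, 1]
print(probs)  # the male candidate gets a far higher "hire" probability
```

Note that the model was never told to discriminate; it simply learned that gender predicted the historical label.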

Humans are biased. Naturally, that bias finds its way into our technology. To avoid this, we need to make sure the engineers and data scientists building these systems are trained to avoid transferring human cognitive biases into their work. This article goes further into what companies are currently doing wrong regarding bias and what could be improved.
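What would that training look like in practice? One common first-pass check is comparing a model’s selection rate across groups, sometimes called demographic parity. A minimal sketch with hypothetical numbers:

```python
# Audit sketch: compare positive-prediction rates across groups.
# All names and numbers here are hypothetical.
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of positive predictions for each group label."""
    return {str(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}

preds = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])  # 1 = "recommend hire"
genders = np.array(["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"])

print(selection_rates(preds, genders))  # {'f': 0.2, 'm': 0.8} -> a red flag
```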

Release the Robot Army

Bots have begun to manipulate online discourse. On Twitter, 40% of tweets about Covid-19 are from bots. The danger here lies in the fact that people are conformist. If you see a tweet has a lot of likes and retweets, you’re more likely to take it at face value. If you’re a foreign actor interested in creating chaos, you could get bots to post about a conspiracy theory. When people see these fake posts have tons of likes and retweets from the bot network, they buy into it. Researchers at Carnegie Mellon discovered that 62% of the top 1,000 Covid-19 retweeters are bots. That’s a lot of influence.
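Why do bots crowd the top of the retweet leaderboard when they’re a minority of accounts? Sheer volume. A back-of-the-envelope simulation (every rate here is invented for illustration):

```python
# Toy simulation: a small, hyperactive bot network dominates the
# "top retweeters" list even though bots are only 5% of accounts.
import numpy as np

rng = np.random.default_rng(42)
n_humans, n_bots = 95_000, 5_000

human_rts = rng.poisson(2, n_humans)   # a human retweets a topic a few times
bot_rts = rng.poisson(40, n_bots)      # a bot retweets it constantly

counts = np.concatenate([human_rts, bot_rts])
is_bot = np.concatenate([np.zeros(n_humans, bool), np.ones(n_bots, bool)])

top_1000 = np.argsort(counts)[::-1][:1000]
print(f"Bots among the top 1,000 retweeters: {is_bot[top_1000].mean():.0%}")
# With these made-up rates, bots fill essentially the entire leaderboard.
```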

In 2016, the Russian Internet Research Agency created fake news sites that were then shared by Americans. Though we’re not sure how much Russian interference affected the election, keep in mind they were working with a limited labor supply. Imagine the amount of fake news that could be generated if your employees were robots! Automating that process has enabled malicious actors to spread disinformation quickly and cheaply.

Enter GPT-2, an advanced language model released by OpenAI last year. I messed around with https://talktotransformer.com/, a text generator that uses GPT-2: put in a prompt and the neural net guesses what comes next.

My input: “Hillary Clinton is a lizard…”

The output: “By “a lizard,” we mean the Obama/Clinton “regime.” Many people have made the “non-classified” inference, which leads people to the conclusion that Obama is a lizard as well.

In terms of propaganda, the general argument for Hillary Clinton is that she is a bold and forceful leader. This “boldness” is needed in the Ukraine, Syria, and ISIS, however, these are unstable, unstable regions where secret intelligence is crucial to containing a global threat. In short, Hillary Clinton is a reptilian.”

Conspiratorial silliness aside, that reads like a human wrote it. Who knows what this tech will look like in 5 or 10 years? What would it look like deployed at massive scale? We wouldn’t only be worried about malicious state actors like Russia manipulating public opinion; anyone could do it.
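You can poke at GPT-2 yourself without the website. Here’s a minimal sketch using Hugging Face’s transformers library (not talktotransformer’s exact setup; the model size and sampling settings are my own guesses):

```python
# Minimal GPT-2 sampling sketch with Hugging Face transformers.
# Assumes: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Hillary Clinton is a lizard..."
outputs = generator(
    prompt,
    max_length=80,           # total tokens, prompt included
    do_sample=True,          # sample instead of greedy decoding
    temperature=0.9,         # higher = more surprising text
    num_return_sequences=1,
)
print(outputs[0]["generated_text"])
```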

Reality? Huh?

Some say we’re already a post-truth society. Maybe truth isn’t gone yet, but we’re on the verge of losing it. Americans trust institutions less and less, and the political divide is wider than ever. Once information becomes arbitrary, we lose any shared sense of reality, and it becomes nearly impossible to discuss anything with people who believe differently. Now imagine that, but cranked up to 11.

Language models are improving fast. It may one day be impossible to tell which text is human and which is machine-generated. As we speak, there are even neural nets pitted against each other, one trying to fool the other into accepting its output as real, each improving by beating its opponent.
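That adversarial setup is the idea behind generative adversarial networks (GANs). A minimal toy sketch in PyTorch (a numeric GAN rather than a text model; the architecture and hyperparameters are arbitrary):

```python
# Toy GAN: a generator learns to produce numbers near 3.0, while a
# discriminator learns to tell them apart from real samples.
import torch
import torch.nn as nn

g = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
d = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(d.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n: int) -> torch.Tensor:
    return torch.randn(n, 1) * 0.5 + 3.0  # "real" data clusters around 3.0

for step in range(2000):
    fake = g(torch.randn(64, 8))
    # Discriminator update: label real samples 1, generated samples 0.
    d_loss = (bce(d(real_batch(64)), torch.ones(64, 1))
              + bce(d(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Generator update: try to make the discriminator output 1 on fakes.
    g_loss = bce(d(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(g(torch.randn(5, 8)).detach().flatten())  # should drift toward 3.0
```

Swap numbers for sentences and “near 3.0” for “indistinguishable from human writing,” and you have the arms race described above.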

This is where the real chaos comes in. If everyone is a suspected bot, why listen to anyone on the internet? Text posts, at least, become completely non-credible. A peek into this future is already available: check out r/SubSimulatorGPT2, a subreddit populated entirely by bots.

We’ve all read about a future where AI is smarter than us. Unfortunately, it doesn’t have to be more intelligent than us to wreak havoc. AI is already manipulating humans.
