If Google Controls What We See, Can They Control How We Think?

 

In November 2022, Google settled a location-tracking case with 40 states for $392 million. Forty attorneys general sued Google for deceiving users since 2014 by accessing their locations without authorization. While Google offered privacy controls, it was also tracking users through sensors on their devices that combine GPS, cell tower, Wi-Fi, and Bluetooth signals to pinpoint a person’s location, both outside and inside buildings.

The law caught up with Google last year because of one man’s shocking discovery. According to TechRadar, “An investigation by privacy-focused browser Brave found that Google used hidden web pages to collect user data and create profiles that would let users be subjected to targeted adverts.” Brave’s chief policy officer Johnny Ryan stated that he tracked his own data being accessed by Google’s advertising exchange platform Authorized Buyers, formerly known as DoubleClick.

To reproduce Ryan’s experience, Brave conducted a study, collecting data from hundreds of people. The study found that Google created a secret web-page identifier that was unique to each user. TechRadar explains, “These identifiers, which noted the user’s location and time of browsing, were found to have been shared with multiple advertising companies in an attempt to boost the effectiveness of targeted advertising.” Google misled its users by embedding a hidden preference for location sharing and turning it on automatically. This constant access to users’ locations is dangerous because location data can expose a person’s routines and identity, which lets Google sell advertisements catered to each user.

 

How Has Google Changed Information Consumption and Distribution?

When my parents were growing up, they read encyclopedias or went to the library to find information for projects, research papers, or arguments with friends and relatives. The information always came from a single tangible source. Nowadays, we have everything at our fingertips with Google. Even though Google isn’t the only search engine in the world, it is by far the largest, holding nearly 84% of the market.

The expression ‘Google it’ has become as prevalent as ‘look it up’. Google is so successful because it finds answers for you quickly and it’s free; however, nothing is truly free. What we forget while exploring Google’s interface is that there is a transaction between the consumer and Google: Google finds all the information you ask for, but you give up your ideas, values, locations, habits, interests, and quarrels to its algorithm.

Google uses its algorithm and user data to predict user behavior and interests. It takes your data and sells it to large corporations so they can build better advertisements for a more carefully targeted audience. In addition to advertising, Google rents the space at the top of search results to the highest-paying companies while smaller sites sit far below, which reduces the diversity of information users are exposed to.

 

How Has Google Affected Our Generation?

All the time, I hear the weirdest things. I have heard someone say that Joe Biden is a robot and another person say that Donald Trump is a spy for Vladimir Putin; this sounds like crazy stuff, right? But the people who share these wild accusations are convinced they have the truth because they heard or read about it somewhere. So I typically ask the person telling me their wild new fact where they got the information, and they tell me it was on the news, social media, or Google.

While all three of those channels can carry both accurate and false information, Google’s search engine is not itself a source of information; it simply guides you to sources. That can be hard to recognize when you are looking for a quick answer and Google hands one to you in seconds. I deal with this issue frequently: when I look for answers, I find myself grabbing the first one on the internet that looks right rather than finding the most trustworthy source.

 

But Why Do We Not Check Our Sources?

Part of the problem is that we trust Google search results, assuming they will always give us the answer we are looking for. However, the engine spits out answers that could be true or false, because it does not care which they are. For example, after searching for Putin’s age, we are handed an answer without a source. The answer looks right and it showed up quickly, so it is probably true, right?

We give internet platforms too much trust and bring our own confirmation bias: we decide the information we read is true because we want it to be true. That becomes a serious problem for our opinions and ideologies if we only ever see one side of every story. If we cannot trace our information, or refuse to, we cannot differentiate between truth and opinion. We give up our ability to think critically by allowing Google to control what we see.

 

Why Is This Location-Tracking Lawsuit Alarming?

Google has ingrained itself into our lives and dominates the digital realm. Social media platforms like TikTok and Meta also dominate the internet. Their use of algorithms to control what people see keeps users addicted to the service, consuming the same one-sided ideologies the algorithm knows they want to see.

 

TikTok’s Algorithm: TikTok is the most dangerous social media platform for my generation because its algorithm is highly addictive for children. No child should be watching a hundred unfiltered internet videos an hour; it is dangerous, manipulative, and confusing for them so early in their lives. Once they react to one video, the same type of video keeps showing up. As the kid watches more videos and takes what they see as truth, the polarizing and radicalizing information becomes part of their identity and weighs on their thinking.
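As a rough illustration of that feedback loop, here is a minimal, hypothetical sketch: each reaction bumps the recommender’s weight for a topic, so the next batch of videos skews further toward it. The topics, weights, and boost value are invented for illustration only; TikTok’s real recommender is not public and is far more complex.

```python
# Hypothetical sketch of a reaction-driven feedback loop (illustrative only).
import random

interest_weights = {"dance": 1.0, "sports": 1.0, "conspiracy": 1.0}

def record_reaction(topic: str, boost: float = 1.0) -> None:
    # One reaction nudges all future recommendations toward this topic.
    interest_weights[topic] += boost

def next_videos(n: int = 5) -> list[str]:
    # Sample the next batch in proportion to the accumulated interest weights.
    topics = list(interest_weights)
    weights = list(interest_weights.values())
    return random.choices(topics, weights=weights, k=n)

record_reaction("conspiracy")  # the viewer lingers on one conspiracy clip
record_reaction("conspiracy")
print(next_videos())           # "conspiracy" now dominates the sample
```

After only two reactions, one topic is three times as likely to appear as either of the others, which is the self-reinforcing spiral described above.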

 

Meta’s Algorithm: Meta’s algorithm is even more haunting, as it pushes the most emotion-driving posts in front of users. While TikTok’s user base is younger than Meta’s, Meta may use a similar algorithm for Instagram. Every time a user refreshes their feed, the algorithm takes the stored posts that score highest in activity and reactions and arranges them in descending, non-chronological order of predicted interest for each user. As users keep spinning the wheel of info-fortune, the algorithm collects more data and updates the content to match their most recent interests.

 

While Meta’s algorithm makes sense economically, since engagement keeps users on the platform, the societal effects are devastating. According to the Washington Post, “Starting in 2017, Meta’s ranking algorithm treated emoji reactions as five times more valuable than ‘likes,’ internal documents reveal. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Meta’s business.” The algorithm was so effective that it helped Meta grow nearly 55% in value in 2017.
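To make the mechanics concrete, here is a minimal, hypothetical sketch of the kind of engagement-weighted ranking the Post describes: emoji reactions count five times as much as likes, and the feed shows the highest-scoring posts first. Only the 5x figure comes from the reporting quoted above; the field names, sample posts, and everything else are invented and are not Meta’s actual code.

```python
# Toy sketch of engagement-weighted feed ranking (illustrative only).
# The 5x emoji weight is the figure from the Washington Post reporting;
# all field names and sample posts are invented.

def engagement_score(post: dict) -> int:
    LIKE_WEIGHT = 1
    EMOJI_WEIGHT = 5  # emoji reactions valued five times more than likes
    return LIKE_WEIGHT * post["likes"] + EMOJI_WEIGHT * post["emoji_reactions"]

def rank_feed(posts: list[dict]) -> list[dict]:
    # Non-chronological: highest engagement score shown first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": "calm_news_update", "likes": 900, "emoji_reactions": 20},
    {"id": "outrage_bait",     "likes": 100, "emoji_reactions": 300},
])
print([p["id"] for p in feed])  # ['outrage_bait', 'calm_news_update']
```

Even with far fewer likes, the post that provokes strong reactions floats to the top, which is exactly the dynamic the internal documents warned about.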

 

However, Meta recognized a critical flaw, since favoring “controversial” posts “could open ‘the door to more spam/abuse/clickbait inadvertently.’” In 2019, Meta favored profits over safety by giving extra algorithmic weight to the angry reaction button. It made this change even though the system led users to more “misinformation, toxicity, and low-quality news.” Meta users were consuming media catered to their ideologies, often being handed hateful claims from sketchy sources.

 

Target’s Pregnancy Algorithm: One of my favorite algorithm stories is how Target predicted a teenage girl was pregnant before her father knew. Target’s algorithm could predict, with surprising precision, when shoppers were pregnant based on their purchases of certain supplements and creams, and it would send them coupons for baby items.

One day, an angry man went into a Target outside Minneapolis, demanding to talk to a manager: “My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?” Later, the store manager called to apologize, but it was the father who apologized instead, admitting that he had not known his daughter was pregnant.
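The underlying idea is simple enough to sketch. Below is a minimal, hypothetical version: certain purchases add to a “pregnancy score,” and shoppers above a threshold get baby-item coupons. The specific products, weights, and threshold are invented for illustration; Target’s actual model has never been published.

```python
# Hypothetical purchase-pattern scoring (illustrative only; not Target's model).

PREGNANCY_SIGNALS = {
    "unscented lotion": 0.30,
    "calcium supplement": 0.25,
    "zinc supplement": 0.20,
    "oversized cotton balls": 0.25,
}
COUPON_THRESHOLD = 0.60

def pregnancy_score(basket: list[str]) -> float:
    # Sum the signal weights for every recognized item in the basket.
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in basket)

def send_baby_coupons(basket: list[str]) -> bool:
    return pregnancy_score(basket) >= COUPON_THRESHOLD

basket = ["unscented lotion", "calcium supplement", "oversized cotton balls"]
print(send_baby_coupons(basket))  # True: 0.30 + 0.25 + 0.25 = 0.80
```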

We know that these large tech firms can affect how we form opinions, and we give up that power in return for short-term convenience. With all this data tracking and selling, companies like Google can predict what we want to see, so we never have to question how our search results reach us or where they originate. But society as a whole suffers when we are lazy about checking where our information comes from, because the habit compounds until everyone follows their feeds blindly. The less dependent we are on these media engines, and the more we pause to question our sources, the closer we get to truth, diversity of opinion, democracy, and resistance to radicalization.

 

What Can We Do To Fix These Problems?

Since Google holds well over 50% of the market, we can consider it a monopoly. Because of economies of scale, there can be no serious competitors: it would be too expensive to compete with Google, and its data allows it to innovate and cater to users in ways that smaller companies cannot. After all, Google is a free service that makes its money from our usage, so a competing firm would have to stay profitable while offering a free-to-use search engine.

This recent settlement was a measly drop in the bucket for a company valued at roughly $1.2 trillion, but what if the government gets more involved with Google? Google offers many free services to its users and supports many smaller businesses, but the societal problems Google creates should be dealt with by Google. Splitting Google into separate firms would create artificial competition, but that would not completely solve the problem and could ruin many of the services Google offers by cutting into their profitability.

There needs to be widespread awareness of disinformation so we can continue to think, believe, and act independently of what big tech wants us to see. Social media platforms could take on an educational role by slipping videos into the scrolling feed that remind users to fact-check what they see, since the algorithm does not fact-check for them. This strategy could work for younger users who scroll through social media regularly and are more easily manipulated.

We must stay aware of the algorithms when we use Google or scroll through TikTok, since they profit from our usage. We also need to educate each other about the manipulative power of social media and the information it serves us. It is important, especially on the internet, to keep asking whether we are reading the truth or just somebody else’s opinion.