Your Online Privacy Is In The Hands Of Big Data

Bernardo Montes de Oca
6.4.23

Think back to the last video you watched on YouTube. Chances are there was a highly personalized ad somewhere, eerily similar to that product you googled a few days back. There’s no other way to put it: YouTube knows you very well, and if you feel watched, you’re not the only one. Around 60% of internet users don’t feel safe online.

The truth is that YouTube needs to know you: knowing you means more retention, viewership, and subscribers. This, in turn, translates into more efficient ad placement, which is a win-win for both channels and the platform.

The dilemma is that YouTube, and its owner, Alphabet, aren't the only ones working hard to learn everything about you. Everyone wants your data. There's a saying: if you're not paying, you're the product. It's a controversial statement, but a true one. So, should you be concerned? The answer isn’t black and white, but one thing is certain: your data is no longer yours.

Is there a price for data? 

So, how much would you sell your data for? That’s what Fast Company asked thousands of people in 2020. The theoretical data package included everything we like, dislike, follow, and consume, and the results were fascinating.

Source: Fast Company

Whether these figures are high or low is up to the individual, but the reasoning behind them relies heavily on one argument: companies already know too much about us, probably more than we think, so why bother charging a lot? Ultimately, this data should make everyday life much more enjoyable.

On the other hand, authorities don’t love this idea. In 2022, the FTC announced plans to address data-driven ads, and the term "commercial surveillance" has been gaining traction since. The problem becomes even more complex when regulation comes into play. Private companies love our data (as do governments, which we’ll discuss later), but authorities argue there should be a limit. So, where does that limit begin?

The dilemma with regulation 

One of the main roles of regulatory authorities is to control how private companies behave. Still, when it comes to technology, they have one distinct disadvantage: they're too slow. Given its fast-evolving nature, technology can change quicker than the laws trying to regulate it.

On the other hand, those who advocate less regulation argue that technology needs to stay ahead of regulation; otherwise, new ideas wouldn’t get developed. Examples include cryptocurrency regulation, blockchain development, and even the internet itself, which faced scrutiny during the nineties and early 2000s.

So, what about the citizens? Chances are that most people go about their daily lives, seldom questioning whether they've given up too much data. But let’s say you’re intrigued: you can check for yourself how much Google knows about you. Chances are, you’ll be surprised.

Now imagine if that information ended up in the wrong hands; that’s where regulators want to put the hammer down and get stricter. For years, regions such as Europe have made massive efforts to update and establish the means to control what companies can ask for and, most importantly, what individuals can demand from companies.

Source: Statista

That's how we land on the General Data Protection Regulation (GDPR). The regulation was enacted to give individuals in Europe more control over the data that companies hold on them. GDPR demands that organizations collecting data guarantee its safety and follow the rules; otherwise, they face expensive fines.

Data is vital for our lives, but how much is too much? 

Right now, there's not much we can do without data if we want to live in a connected world, and that world is inescapable. Every country requires you to provide personal information for everything from public services to buying necessary medication.

If you want to buy a home, there are personal credit scores. If you want to take out a credit card, the bank already knows who you are before you walk into a branch. Even travel uses more data than we can imagine. Like it or not, data is necessary.

The good side of data

Our lives are much more comfortable thanks to data, though that comfort comes at the partial sacrifice of our privacy, at least from a collective standpoint. For example, Waze built its maps thanks to users providing location data. Google Maps also relies on us to improve its navigation: every time we register a location or ask for directions, we’re helping it become even more efficient at guiding us. In turn, these services become a platform for apps such as Uber, where your comfort and ease of transportation improve daily.

Now, is this something that people want? It's hard to answer. On one side, our data is ours, or it should be, but on the other, without it, companies wouldn't be able to provide us with these services that have made our lives better. 

Our data has helped companies keep our lives up to date, at the cost of our privacy. To some degree, that's the price we have to pay: every digital service requires our data, even if we don't want to give it up. Some people are uncomfortable with that thought, but data has also saved countless lives.

The pandemic was one of the most recent examples of data driving decisions on a global scale. Case registries allowed local and national governments to ease or tighten restrictions. And this isn't new: censuses are an excellent example of how our data helps define a nation's identity.

Data plays a critical role in informing government decisions related to our health, and to make those decisions, governments must have a deep understanding of who we are. Take one example: insulin. Diabetes is becoming a global problem, and insulin could soon become one of the most valuable commodities around. Data becomes essential for understanding who needs insulin and when. That implies collecting more information so pharmaceutical companies can optimize production and, yes, make a profit while doing so, which is another dilemma.

How Coca-Cola used data to become a giant

Coca-Cola is one of the world’s favorite beverages. It has been at the forefront of marketing and advertising for over one hundred years, and since 1887, the company has exploited data in the form of coupons. 

John Pemberton, its creator, followed his accountant's advice and began offering coupons, where people could trade one for a free glass of Coca-Cola at a local pharmacy. Though the idea worked, Pemberton didn't love it, so he didn't exploit it.

Eventually, Pemberton sold the company to Asa Griggs Candler, who decided to go in the opposite direction. An avid businessman, Candler loved the idea of coupons. So, he approached local pharmacies. He would provide them with two gallons of coke syrup in exchange for the names and addresses of nearby consumers. 

This data was gold for him: he mailed coupons for a free glass of Coca-Cola to hundreds of people. It was a win-win. Pharmacies got more customers, and more people tried the famous beverage. Thus, the world was hooked on Coca-Cola.

Coupons were so effective that Coca-Cola only stopped using them recently. For decades, this simple method proved valuable, as hordes of people traded them for beverages, and some of them were hooked for life.

How one invention changed the Internet

Nowadays, every time we log into a new website, we get a question: do we accept all the cookies? While most of us say yes, there's a lot of history behind that prompt. The birth of the cookie was so consequential that the New York Times summed it up like this: before cookies, the web was essentially private.

We owe it all to Lou Montulli. Before his invention, we could navigate with considerable privacy; the web was a safe place. So, what did he do that changed everything?

In 1994, Montulli worked at a small, nine-employee company called Netscape Communications. They had created Netscape Navigator, the first widely used browser, and needed a way to track people. Montulli's invention changed our lives forever, and it was a simple text file.

Before Montulli's creation, there was no way of knowing how often a user visited a website. So Montulli made the browser leave a trace: a small, unobtrusive text file stored on the user's computer. That file reminded the website of a previous visit. If the user came back, the site could detect the file and say: hey, this is a repeat visitor, so we must be doing something right.
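To make that mechanism concrete, here is a minimal sketch of the idea in TypeScript, using Node's built-in http module. It's an illustration of the concept, not Netscape's actual code, and the cookie name visitor_id is just a made-up example: the server hands a first-time visitor a small identifier and recognizes it when the browser sends it back.

```typescript
// Minimal sketch of the idea behind the cookie: the server leaves a small
// identifier on the visitor's machine and recognizes it on return.
// Illustrative only -- not Netscape's original implementation.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

const server = createServer((req, res) => {
  // The browser sends back any cookie this site previously set,
  // e.g. "visitor_id=abc123".
  const cookies = req.headers.cookie ?? "";
  const match = cookies.match(/visitor_id=([^;]+)/);

  if (match) {
    // The stored identifier came back: this is a repeat visitor.
    res.end(`Welcome back, visitor ${match[1]}`);
  } else {
    // First visit: hand the browser a small identifier to keep.
    const id = randomUUID();
    res.setHeader("Set-Cookie", `visitor_id=${id}; Path=/; HttpOnly`);
    res.end("Hello, first-time visitor");
  }
});

server.listen(3000);
```

Notice that the browser only sends the cookie back to the site that set it, a limitation that matters in what comes next.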

Montulli had created the cookie, and we get a daily reminder of his work, even today. The problem (or benefit, depending on how you see it) was that Montulli only created the base. Nevertheless, it was flexible, with potential, and another company took notice: Microsoft. 

But, before discussing the evil corporation, let's clarify one thing. The first cookies were relatively safe. Montulli designed them so that they didn't identify the user by name. Cookies also didn't use a single ID for all sites, as this would be easy to trace. But they were also straightforward to modify. 

Microsoft did precisely that. It decided to take the cookie and improve it while keeping the changes hidden from the average user. Initially, you had to opt out of cookies, and it wasn't easy. By design, a cookie could only be read by the site that created it or by a related site. So, what would happen if companies partnered and started reading each other's cookies? That's how the third-party cookie was born. Companies could sell us a product, or even deny us medical treatment, based on our browsing history. Cookies became so controversial that, by 1997, the first effort to ban them started. That's when the world realized one thing: our privacy wasn't unbreachable.
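Here is a rough sketch of how that cross-site trick works, again in TypeScript and purely hypothetical: imagine a domain called tracker.example whose invisible pixel is embedded on many unrelated sites. Because the browser sends tracker.example its own cookie on every one of those page loads, along with a Referer header naming the embedding site, a single company can stitch together a browsing profile.

```typescript
// Sketch of a third-party tracking pixel (hypothetical "tracker.example").
// Any site that embeds <img src="https://tracker.example/pixel.gif"> makes
// the browser send tracker.example its cookie plus a Referer header, letting
// one company link the same visitor across unrelated sites.
import { createServer } from "node:http";
import { randomUUID } from "node:crypto";

// In-memory profile store: visitor id -> sites this visitor was seen on.
const profiles = new Map<string, Set<string>>();

createServer((req, res) => {
  const match = (req.headers.cookie ?? "").match(/track_id=([^;]+)/);
  const id = match ? match[1] : randomUUID();

  // The Referer header reveals which site embedded the pixel.
  const site = req.headers.referer ?? "unknown";
  const seen = profiles.get(id) ?? new Set<string>();
  seen.add(site);
  profiles.set(id, seen);

  // Set (or refresh) the third-party cookie and return an empty "pixel".
  // Modern browsers require SameSite=None; Secure for cross-site cookies.
  res.setHeader("Set-Cookie", `track_id=${id}; Path=/; SameSite=None; Secure`);
  res.setHeader("Content-Type", "image/gif");
  res.end();
}).listen(4000);
```

Many modern browsers now restrict exactly this pattern by default, which is part of why the third-party cookie is gradually being phased out.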

Netscape Navigator and Internet Explorer ignored RFC 2109's recommendation to block third-party cookies. The recommendation was simple: all it asked was that browsers not accept third-party cookies automatically. Both rejected the idea.

Source: Statista

Most websites are harmless. They want to track what you buy to sell you more of it. But even on harmless websites, the stored data is worth a pretty penny. For example, at the peak of the dot-com bubble, Toysmart was an online toy company that couldn't compete with larger retailers such as Walmart and Toys R’ Us. After the crash, Toysmart found itself in bankruptcy, and to try and survive, it came up with a plan: selling its data. Worse still, the offer was tempting. Disney, a 60% owner, wanted to pay for the database, which totaled 260,000 customers and included children's names. The sale didn't happen because the FTC intervened, but this wouldn't be the last such event. Our privacy became an afterthought the moment Montulli's invention changed the world.

The true value of content

Content in itself has no value as a business. It's merely a form of entertainment or education. It's how companies use it to their advantage that is the actual business. Therefore, tracking users is essential for marketers as they learn what we will buy and when. 

From Montulli's harmless cookie, we've evolved to a point where Amazon or Facebook can show us an ad for what we were googling just minutes before. Sometimes, these websites even seem to read our minds when we've merely been thinking about a particular product. Let's be clear: they're not reading our minds or listening to our conversations. They don't have to; they've become that good at tracking, which is concerning in itself. Technology doesn't need to snoop on us through the microphone because it does it in other ways, and that's why social media knows exactly what to suggest.

That's how we get back to videos. The more videos we watch on YouTube, the bigger an ad target we become. As companies share their databases, we see more ads aimed precisely at what we need. Content thus becomes the perfect advertising vehicle.

But is that harmless? While most people won't bat an eye at this idea, some believe we will eventually be defined by our data. In this dystopian future, our data becomes who we are, reducing us to mindless drones responding only to advertising while robots take over our jobs. Does that sound far-fetched? Some people think it's not.

What's the future of data and our lives?

The question of whether data will turn our lives into a dystopia has been around for years, and it's now more intense than ever; all we have to do is look at entertainment. Blade Runner and Black Mirror are thirty years apart, yet both depict a reality we might be headed toward: countries with complete control over their citizens and possible crimes stopped before they occur, all thanks to data. But things might change.

Not everyone loves that idea, and people are fighting back. Local and regional activists have pushed for more independence over our data, in a consistent effort to stop our countries from becoming authoritarian dystopias. Still, the reality is that we're already being tracked in countless ways.

Fortunately, the possibilities for more control are growing. While the promise of Web3 has lost a bit of steam, the decentralization of data could still have massive potential.

Movements to educate people about data freedom, privacy, and related topics have gained strength, at a time when more and more people are aware that governments could drift toward authoritarianism. Surveys have shown that 60% of people are concerned about their privacy, and 53% feel they have no control over their digital identity.

Statista research shows that 68% of global internet users feel more vulnerable to identity theft now than ever before. At the same time, 70% of all users have taken some action to prevent fraud and scams. We've discussed how GDPR has helped Europe become safer, but the picture isn't the same everywhere: 66% of internet users in LATAM feel unsafe online, compared to 38% in Europe.

There's no denying that we're being tracked every day and that the limits keep shifting. Most of the time, we don’t think about it, but every once in a while, we ask ourselves: how much of our privacy have we lost? The answer seems to be “a lot”, and there’s still more to lose.
