Today, we live in an increasingly interconnected world thanks to the digital revolution. However, this progress brings with it a series of ethical and legal dilemmas that lead us to question to what extent we truly own our personal information. Privacy has gone from being a fundamental right to becoming a scarce commodity, subject to the rules imposed by large technology corporations and government authorities.

To better understand this phenomenon, it is first necessary to explore what we mean by privacy. Traditionally, it has been considered the right to keep certain information out of public view or away from unauthorized persons. As our lives move into the digital environment, that definition becomes more complex. Today, a single click can open the door to our browsing history, our preferences, and in some cases even sensitive data such as our location and financial details.

Data collection: a necessary evil?

Digital platforms, from social networks to streaming services, operate primarily on an advertising-based business model. To deliver truly effective targeted ads, however, these companies rely on the massive collection of user data. According to a study by McKinsey & Company, around 70% of successful brands have adopted a data-driven approach to optimize their advertising. The following table illustrates the types of data most commonly collected by various platforms:

Platform    Data collected
--------    --------------
Facebook    Interactions, likes, location
Google      Search history, emails, location
Amazon      Purchase history, searches, preferences

However, there is a very fine line between the legitimate use of this information to improve the user experience and an invasion of privacy. Many users are not fully aware of how extensive this collection is or of how their data is ultimately used, and this lack of transparency can breed distrust toward the platforms.

Legal and ethical aspects: who protects our privacy?

Although laws exist to protect consumer privacy, such as the General Data Protection Regulation (GDPR) in Europe, they can be ineffective if not properly enforced. For example, many companies include complex clauses in their terms and conditions that limit their liability for privacy-related abuses.

Surprisingly, only 18% of users say they read all the terms and conditions before accepting a service, which raises a critical question about informed consent. While it is true that we bear some responsibility for protecting our own data, since we are the ones who accept these terms, it is equally undeniable that companies must be more proactive about presenting this information clearly.

Social implications: mistrust and alienation

As these problems continue to go unresolved, new concerns arise regarding public trust in institutions. In a context where our personal data can be used to manipulate our political choices and our social perceptions, as evidenced by the Cambridge Analytica scandal, it is reasonable to ask how much control we truly have over our digital lives.

The fear of being observed, or what some call omnipresent surveillance, can lead to self-limiting behavior among citizens. For example, users may choose not to express their opinions on social media or may avoid certain topics for fear of negative repercussions. This chilling effect can erode democratic health and narrow public debate.

However, there is also hope. Projects like Tor, which anonymizes web browsing by routing traffic through multiple encrypted relays, and services like Signal, which offers end-to-end encrypted messaging, provide viable alternatives for those who wish to protect their privacy. Likewise, community initiatives promoting decentralized technologies are gaining ground as a response to the market dominance of a handful of technology companies. Of course, these solutions require both collective and individual action; it is not enough to simply activate privacy settings without being aware of the broader context in which we operate.