By: RAYAN ALI KHAN
In recent years, technology has developed rapidly, reshaping how people live. The development of Artificial Intelligence, the migration to digital platforms, and powerful computing resources have led to technology’s wider use across government and private sectors, often at the expense of privacy. Solove defines privacy as an individual’s right to control their personal information and to live without unwanted observation or interference, an idea reinforced by its recognition as a fundamental human right (UN, 2022). At the same time, technological progress has enabled surveillance on a larger scale, defined by David Lyon as “the focused, systematic and routine attention to personal details for purposes of influence, management, protection or direction” (Lyon, 2008). This surveillance, conducted by government intelligence agencies and private entities, threatens individual privacy. Shoshana Zuboff coined the term “surveillance capitalism” in 2014, and it fittingly describes what occurs today (Zuboff, 2014). Private corporations, such as the tech giants Google, Meta, and Amazon, routinely collect vast amounts of data from users. This data is not only stored but analyzed using algorithms to predict preferences and influence behavior. The process occurs with little transparency or meaningful consent, converting an individual’s digital footprint into corporate profit. Governments also draw on this data as a form of surveillance, justified in the name of public safety. As “surveillance capitalism” becomes increasingly embedded in the tools people depend on, it raises the question: is the loss of privacy a fair trade-off for technological advancement?
Firstly, technological advancement enables personalised services based on extensive data collection, which allows for greater convenience. Digital platforms are a prime example: they use behavioral data to adapt user experiences in real time for greater engagement. Netflix, for instance, uses machine learning algorithms that analyze watch history, device type, and time of day to recommend content, and this recommendation system is reported to account for over 80% of viewing on the platform (Gomez-Uribe and Hunt, 2016). This research provides strong first-hand evidence of how such a platform works and optimizes for user preferences. Because the researchers were affiliated with Netflix, bias may exist, but the information remains valuable for its detailed explanation of the recommendation system’s design and impact, offering insights that external analyses may lack.
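To illustrate the general idea behind such personalisation, the sketch below scores candidate titles for a user by combining behavioural signals such as watch history, device type, and time of day. This is not Netflix’s actual system, whose details are proprietary; the titles, signals, and weights are illustrative assumptions only.

```python
# Minimal, illustrative sketch of behaviour-based recommendation scoring.
# NOT Netflix's algorithm: the features, weights, and data are hypothetical,
# chosen only to show how behavioural signals can be combined to rank content.

from dataclasses import dataclass

@dataclass
class Title:
    name: str
    genre: str
    avg_watch_minutes: int  # how long this title typically holds attention

def score(title: Title, watch_history: list[str], device: str, hour: int) -> float:
    """Combine simple behavioural signals into a single relevance score."""
    s = 0.0
    # Signal 1: genre affinity inferred from recent watch history.
    s += 2.0 * watch_history.count(title.genre)
    # Signal 2: device context (shorter titles suit mobile viewing, say).
    if device == "mobile" and title.avg_watch_minutes <= 30:
        s += 1.0
    # Signal 3: time of day (favour lighter content late at night, say).
    if hour >= 22 and title.genre == "comedy":
        s += 0.5
    return s

catalog = [
    Title("Documentary A", "documentary", 90),
    Title("Sitcom B", "comedy", 25),
    Title("Thriller C", "thriller", 50),
]
history = ["comedy", "comedy", "thriller"]  # genres the user watched recently

# Rank the catalogue for a mobile user at 23:00.
ranked = sorted(catalog, key=lambda t: score(t, history, "mobile", 23), reverse=True)
print([t.name for t in ranked])  # ['Sitcom B', 'Thriller C', 'Documentary A']
```

The point of the sketch is simply that each additional behavioural signal sharpens the ranking, which is precisely why platforms have a commercial incentive to collect ever more user data.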
Further evidence for data-driven personalisation’s commercial impact comes from a 2021 McKinsey report, which quantifies the revenue lift attributable to personalisation. Its findings suggest that corporations using data collection to personalise products see a 10-15% revenue lift, with company-specific gains ranging from 5% to 25%.
McKinsey may have a commercial motive, but its conclusion that consumers are willing to trade some privacy for convenience is corroborated by multiple market reports. A 2023 Statista survey found that in New Zealand, 74% of respondents would accept certain privacy risks for more convenient digital services, followed by Australia at 70% and the United States at 69% (Statista, 2023).
Similarly, Epsilon found that 80% of consumers were more likely to purchase from brands offering personalised experiences, showing a clear commercial incentive for corporations to personalise their products using consumer data (Epsilon, 2018). While Epsilon’s marketing orientation could introduce commercial bias, its findings, taken together with Statista’s more neutral polling data, reinforce the conclusion that personalisation resonates with consumers across demographics.
Personalisation through technology has also had measurable impacts on public health. Personalised health interventions using targeted reminders have increased adherence to treatment plans and promoted healthier lifestyles, which in turn reduces strain on healthcare systems (Nahum-Shani et al., 2018). Overall, the wide range of sources and examples demonstrating personalisation’s benefits suggests that while an individual’s privacy may be harmed in the process, the associated benefits can be significant.
Furthermore, modern technology supports surveillance for public safety and crime prevention. Police forces are increasingly using systems that combine CCTV with facial recognition to detect threats and prevent crime. Take London, for example, one of the most surveilled cities in the world (US News, 2020). The Metropolitan Police Service (MPS) readily deploys Live Facial Recognition (LFR), and between January and July 2024 its LFR deployments resulted in 312 arrests, with 233 individuals charged or sent directly to court for their warrants (Metropolitan Police, 2024). Notably, some individuals had been evading capture for extended periods, including one suspect sought for over 16 years, although the average pre-arrest pursuit duration was 384 days, suggesting LFR was less effective at locating long-term fugitives (ibid.). Even so, the strategic use of LFR has proven efficient, achieving an average of one arrest for every two hours of deployment, and officers supporting LFR deployments also engage in proactive duties, yielding an additional 10-12 arrests per month (ibid.).
Criminologist Thaddeus Johnson’s research supports these results. His team’s analysis of the adoption of police facial recognition across 268 US cities found that violent crime rates dropped significantly in cities using facial recognition, with no evidence of increased over-policing (Johnson et al., 2024). The study, however, was not able to establish direct causality, but it remains a useful data point on the use of facial recognition.
Thus, varied sources suggest that the strategic application of facial recognition may contribute to safer communities.
However, these technologies can also suppress freedom and privacy. In China, the government has implemented one of the most comprehensive surveillance systems in the world (US News, 2020), and Human Rights Watch has documented how such systems have been used for the mass surveillance of minorities in Xinjiang (HRW, 2019). These findings are corroborated by a 2022 Reuters report showing Chinese firms developing AI software to further enhance state surveillance (Reuters, 2022). US News is a respected and long-standing publication known for its in-depth analysis and broad coverage of national and global issues, while Reuters is a globally trusted news organization known for its accurate reporting on international developments.
This pattern of eroding civil freedoms and privacy appears globally; Pakistan is a prime example, where advances in state surveillance technology also raise major concerns. The Federal Investigation Agency, for example, increasingly relies on digital monitoring under the Prevention of Electronic Crimes Act (PECA) to target critics (Digital Rights Foundation, 2023). Amnesty International also found that the government used digital evidence obtained through intrusive methods to prosecute dissenting voices in Pakistan’s courts (Amnesty International, 2022).
Furthermore, the security benefits cited to justify widespread surveillance are often contested, particularly when weighed against the erosion of fundamental rights. The vast scale of data collection in many modern surveillance systems has not been shown to be superior to targeted investigation. A 2014 Human Rights Watch report analyzing US surveillance practices highlighted concerns that such programs, which collect immense amounts of data, lack clear evidence of being indispensable for security (HRW, 2014). Though from 2014, its findings on the necessity of mass surveillance remain relevant today, as governments continue expanding their use of personal data.
Thus, even in countries with democratic institutions, the legal mechanisms to protect individual privacy may not have kept pace with technological capability, and an individual’s private data may be weaponized against them. Therefore, while public safety is important, the expansion of surveillance technologies reveals that the loss of privacy in such contexts could pose a threat to one’s personal autonomy.
Moreover, the commodification of personal data by private corporations reveals that the loss of privacy is often also exploited for commercial profit. As technological platforms grow increasingly dependent on user data to generate revenue, individuals are frequently reduced to behavioral datasets for economic gain. As more services move online, consumers are forced to adapt, and corporations may be exploiting these platforms in ways users do not realise. A 2019 investigation by Privacy International revealed that popular depression-related websites in France, Germany, and the UK were sharing sensitive user data, including answers to depression tests, with third parties without explicit consent, and had embedded third-party trackers (Privacy International, 2019). The report, however, does not differentiate between harmful and benign uses of third-party trackers, such as those used for analytics rather than advertising. Nonetheless, it does demonstrate a lack of user control over deeply personal data.
Similarly, a 2020 report by the Norwegian Consumer Council, a nationally recognized watchdog group, examined ten popular dating apps and found that they were sharing personal data, such as GPS location, IP address, age, gender, and sexual orientation, with various third-party advertising and analytics companies without obtaining valid user consent (Norwegian Consumer Council, 2020). Although consumer councils are not strictly academic bodies, their research draws attention to another way in which user privacy is commodified in the name of convenience.
A more recent example comes from a 2022 investigation by The Markup, which revealed that popular tax-filing services in the United States, such as H&R Block, TaxAct, and TaxSlayer, had embedded Meta’s tracking pixel in their websites, transmitting sensitive financial information such as income levels, filing status, and refund amounts to Facebook without users’ explicit knowledge or consent (The Markup, 2022). The investigation may lack formal peer review, but the evidence reinforces previously established patterns: the commercial exploitation of personal data often occurs with minimal transparency or oversight, with technological improvement used as a guise.
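To make the mechanism concrete, the sketch below shows, in simplified form, how an embedded third-party tracker can bundle page data into a request to an advertising endpoint. The endpoint, parameter names, and values are hypothetical assumptions; real pixels, including Meta’s, differ in detail, but the underlying pattern of piggybacking user data onto a third-party request is the same.

```python
# Simplified illustration of how a third-party "tracking pixel" can leak data.
# Everything here is hypothetical: the endpoint, parameter names, and values
# are invented to show the pattern, not to reproduce any real vendor's pixel.

from urllib.parse import urlencode

def build_pixel_url(tracker_endpoint: str, page_event: dict) -> str:
    """Encode page data into the query string of an image/beacon request.

    On a real site, an embedded script does this in the user's browser, so
    the third party receives the data directly.
    """
    return f"{tracker_endpoint}?{urlencode(page_event)}"

# Data a tax-filing page might hold when the user clicks "submit".
form_data = {
    "event": "form_submitted",
    "filing_status": "single",
    "income_bracket": "50k-75k",
    "refund_amount": "1200",
    "page": "/file/summary",
}

url = build_pixel_url("https://tracker.example.com/collect", form_data)
print(url)
# https://tracker.example.com/collect?event=form_submitted&filing_status=single&...
```

Because the request goes directly from the user’s browser to the third party, the data never needs to pass through the site’s own servers, which is why such sharing can be hard for users to detect.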
Collectively, these cases show that while data-driven practices might offer convenience and surface-level benefits, they may come at the cost of users’ control over their own private information.
Both perspectives on the relationship between technological advancement and privacy offer logical claims backed by credible sources. The case for technological advancement is strengthened by detailed statistics on consumer engagement, commercial profitability, and public health improvements, logically connecting these to tangible societal benefits. The research draws on multiple market reports and health studies showcasing the positive impacts of personalisation. However, this argument is weakened by potential commercial bias in some sources, particularly those from industry-affiliated researchers and consultancy firms. The case against rests on compelling evidence of surveillance abuse and data exploitation, though some investigations lacked thorough peer review or failed to differentiate between varying degrees of data collection.
The research revealed significant implications of technology for individual freedoms and democratic accountability, as demonstrated by cases in multiple regions. My perspective before researching this topic was that a loss of privacy is not worth any technological advancement, and as my investigation progressed, this conviction only grew stronger. I encountered evidence of significant downsides, especially where surveillance is conducted through technology. China’s mass surveillance of minorities in Xinjiang and the use of digital tracking by authorities in Pakistan showcase the harm surveillance inflicts on individual privacy. The exploitation of personal data, shown through cases of sensitive health and financial information being shared without explicit user consent, demonstrates the industry’s lack of transparency. However, I also believe that some parts of society now, through our own doing, require a certain loss of privacy to remain safe, such as facial recognition in crowded public areas used to identify suspicious individuals.
Reflecting on these insights, it can be concluded that while technological advancement presents undeniable benefits, the erosion of privacy cannot be justified as an acceptable trade-off. Technological progress should not come at the cost of individual autonomy and privacy rights; it should improve an individual’s quality of life rather than put it at risk. Privacy and progress should go hand in hand as new inventions are brought forward. Moving forward, I wish to investigate specific countries and firms and develop ideas on how systems can be built while respecting privacy rights. Specifically, I will research technologies such as zero-knowledge proofs, a cryptographic method through which one party can prove to another that a statement is true without revealing any information beyond the truth of that statement. By conducting case studies of firms that have piloted these technologies, I aim to identify design patterns that can help shape future systems at scale, for both governments and corporations.
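As a brief illustration of the kind of technology I intend to study, the sketch below implements a toy Schnorr-style zero-knowledge identification exchange. The parameters are deliberately tiny and insecure, chosen only to make the commit/challenge/response steps visible; real deployments use large prime-order groups or elliptic curves.

```python
# Toy Schnorr-style zero-knowledge identification protocol.
# Parameters are tiny and insecure; they only illustrate how a verifier
# becomes convinced the prover knows x without ever learning x itself.

import secrets

# Public parameters (illustrative): g generates a subgroup of prime order q
# inside the integers modulo the prime p.
p, q, g = 23, 11, 4

# Prover's secret and the matching public value.
x = 7                      # secret "witness" only the prover knows
y = pow(g, x, p)           # public key, shared with the verifier

def prove_and_verify() -> bool:
    # 1. Commitment: prover picks a random nonce r and sends t = g^r mod p.
    r = secrets.randbelow(q)
    t = pow(g, r, p)
    # 2. Challenge: verifier replies with a random challenge c.
    c = secrets.randbelow(q)
    # 3. Response: prover sends s = r + c*x mod q; the random r masks x.
    s = (r + c * x) % q
    # 4. Check: verifier accepts iff g^s == t * y^c (mod p).
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# A prover who really knows x convinces the verifier every time.
print(all(prove_and_verify() for _ in range(20)))  # True
```

The relevance to this essay is that the verifier learns only that the prover knows the secret, never the secret itself, a property that could allow identity or eligibility checks without handing over raw personal data.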
Essay Bibliography
Amnesty International (2022). Available at: https://www.amnesty.org/en/latest/news/2022/02/pakistan-repeal-draconian-cybercrime-law/
Andrejevic, M. (2014). Surveillance and Alienation in the Online Economy. Surveillance & Society, 12(3). Available at: https://espace.library.uq.edu.au/data/UQ_348586/UQ348586_OA.pdf
Gao, B., et al. (2023). Artificial Intelligence in Advertising: Advancements, Challenges, and Ethical Considerations in Targeting, Personalization, Content Creation, and Ad Optimization. Available at: https://journals.sagepub.com/doi/full/10.1177/21582440231210759
Lyon, D. (2008). Surveillance Studies: An Overview. Available at: https://www.researchgate.net/publication/26526923_David_Lyon_Surveillance_Studies_An_Overview
DigWatch. Available at: https://wp.dig.watch/updates/pakistan-implements-ai-powered-criminal-identification-system
Epsilon (2018).
Gómez-Uribe, C. A., & Hunt, N. (2016). The Netflix Recommender System: Algorithms, Business Value, and Innovation. ACM Transactions on Management Information Systems, 6(4). Available at: https://dl.acm.org/doi/10.1145/2843948
Human Rights Watch (2019). China’s Algorithms of Repression. Available at: https://www.hrw.org/report/2019/05/01/chinas-algorithms-repression/reverse-engineering-xinjiang-police-mass
Human Rights Watch (2014).
Johnson, T. L., Johnson, N. N., Topalli, V., McCurdy, D., & Wallace, A. (2024). Police Facial Recognition Applications and Violent Crime Control in U.S. Cities. Available at SSRN: https://ssrn.com/abstract=4796951 or http://dx.doi.org/10.2139/ssrn.4796951
McKinsey (2021). The value of getting personalization right, or wrong, is multiplying. Available at: https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/the-value-of-getting-personalization-right-or-wrong-is-multiplying
Metropolitan Police (2024).
Nahum-Shani, I., Smith, S. N., Spring, B. J., Collins, L. M., Witkiewitz, K., Tewari, A., & Murphy, S. A. (2018). Just-in-Time Adaptive Interventions (JITAIs) in Mobile Health: Key Components and Design Principles for Ongoing Health Behavior Support. Annals of Behavioral Medicine, 52(6). Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC5364076/
Reuters (2022).
Solove, D. J. (2008). Understanding Privacy. Harvard University Press. Available at: https://ssrn.com/abstract=1127888
Statista (2023). Available at: https://www.statista.com/statistics/1023952/global-privacy-risks-accept-convenienceconvenience/
The Guardian (2017). Available at: https://www.theguardian.com/uk-news/2017/jun/07/london-bridge-attack-cctv-shows-fatal-clash-between-police-and-terrorists
The Markup (2022).
US News (2020). Available at: https://www.usnews.com/news/cities/articles/2020-08-14/the-top-10-most-surveilled-cities-in-the-world
Zuboff, S. (2014). A Digital Declaration: Big Data as Surveillance Capitalism. Frankfurter Allgemeine Zeitung. Available at: https://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshan-zuboff-on-big-data-as-surveillance-capitalism-13152525.html