By: RAYYAN JAMIL KHAN
Artificial intelligence (AI) is a rapidly expanding technology sector that aims to build computers capable of performing tasks that require human intelligence, such as reasoning, analysing, and even learning (Google Cloud, 2025). Many technology companies, including Google, Meta, and X, are currently developing their own AI models, which are trained on vast quantities of data drawn from diverse sources. This data acquisition raises ethical concerns; for example, it was recently uncovered that Meta torrented 82 terabytes of pirated books for AI training, an acquisition widely viewed as theft (The Express Tribune, 2025). The development of this technology therefore raises questions about the methods used to create it, and about whether it should be controlled, i.e., regulated. The topic spans both ethical and technological lenses, as many believe regulation must operate at a technical level to address moral concerns in the sector. Those in favour argue that regulation is necessary to better protect personal data and to reduce bias in AI outputs. Opponents counter that regulation could hinder technological innovation and would be complicated to execute. This paper explores the positives and negatives of regulating AI development and considers whether it should be done.
When discussing regulation, the first point that comes to mind is how unregulated AI may perpetuate biases in its answers. According to the European Council, AI can “unintentionally perpetuate biases and discrimination”, and the Council argues that its AI Act, which regulates development in this sector, prevents such biases and promotes “fairness and equality” (2024). The risk that unregulated AI may promote discrimination is illustrated by Amazon, which in 2018 had to scrap an AI recruitment tool after it became biased against female applicants and favoured male candidates, a preference the system “effectively taught itself” (BBC, 2018). This incident shows how unsupervised and unregulated AI algorithms can form assumptions and biases with adverse outcomes for marginalised groups, in this case women seeking employment. It supports the Council’s position: under the AI Act, such a system would be classified as an “Unacceptable Risk” and effectively banned from use for posing a threat to people’s “rights or livelihoods” (European Council, 2024). As a governing institution of the EU, the Council has a vested interest in driving policy for the betterment of its citizens, which lends it credibility as a reliable source. Moreover, because the Council addresses both the benefits and the risks of artificial intelligence in its argument for regulation, rather than the risks alone, it offers a fair comparison and analysis that strengthens its overall case. For example, while it states that AI could pose serious health and safety risks, it also notes that AI may improve healthcare and help address global challenges. However, the Council provides no clear sources to support these claims, which lowers the credibility of the article, makes further research harder for the reader, and makes the argument harder to verify. Nevertheless, since AI has demonstrably developed biases through unsupervised learning that cause real-world discrimination in key fields such as the workplace, as reported by the BBC, the Council’s justification for regulation holds weight.
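To make the mechanism behind the Amazon case concrete, the sketch below shows how a model can “teach itself” bias. It is a minimal illustration using invented, synthetic data, not a reconstruction of Amazon’s actual system: the historical hiring labels encode a penalty against one group, and the trained model learns to reproduce it.

```python
# Illustrative sketch only: how a model trained on historically biased
# hiring decisions reproduces that bias. Synthetic data; not Amazon's system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: one genuine skill feature, one gender flag
# (1 = female). Gender is irrelevant to actual job performance.
skill = rng.normal(size=n)
is_female = rng.integers(0, 2, size=n)

# Historical hiring labels: driven by skill, but with a penalty applied
# to female applicants -- the bias hidden inside the training data.
hired = (skill - 0.8 * is_female + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, is_female]), hired)

# The model "teaches itself" the bias: the gender coefficient comes out
# strongly negative even though gender carries no real signal.
print("skill weight: ", model.coef_[0][0])
print("gender weight:", model.coef_[0][1])  # negative => penalises women
```

Nothing in this code explicitly instructs the model to discriminate; the bias enters entirely through the historical data. This is precisely the failure mode that the AI Act’s risk categories are intended to catch.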
Moreover, Thomson Reuters, a Canadian media and information conglomerate, argues that AI must be effectively regulated to ensure its benefits outweigh its risks (2024). On data privacy, Reuters argues that AI makes data easier to steal, for instance by making phishing emails more “believable”, and highlights numerous accusations by influencers that AI developers use their data without due credit, indicating that AI may be trained on user data without consent. Similarly, Aditya Sinha, an Indian author and journalist, writes in the New Indian Express that a proactive approach to AI regulation is needed (2024). Corroborating Reuters, he cites new research from MIT indicating that AI threatens privacy and personal security through the compromise of personal data and through system vulnerabilities in the algorithms that could put data at risk (ibid). While MIT is a credible source, Sinha does not explain how the risks identified by the research were determined, merely stating them, which leaves the claim vague and potentially confusing for readers.
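One concrete way an algorithm itself can put personal data at risk is memorisation: a model that overfits its training set effectively stores individual records, and an observer can probe it to learn whose data was used. The sketch below is a minimal illustration on synthetic data (the dataset and scenario are invented for demonstration), not a description of the MIT research or any specific system.

```python
# Minimal sketch of training-data memorisation, using synthetic data.
# The labels are pure noise, so the only way the model can predict them
# is by memorising the training records themselves.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 10))    # "personal records" seen in training
y_train = rng.integers(0, 2, size=200)  # random labels: nothing to generalise
X_unseen = rng.normal(size=(200, 10))   # records the model never saw
y_unseen = rng.integers(0, 2, size=200)

# An unconstrained decision tree memorises its training data perfectly.
model = DecisionTreeClassifier().fit(X_train, y_train)

# ~100% accuracy on training records vs ~50% on unseen ones: correct
# predictions betray membership, i.e. whose data was used for training.
print("accuracy on training records:", model.score(X_train, y_train))
print("accuracy on unseen records:  ", model.score(X_unseen, y_unseen))
```

This gap between training and unseen performance is the intuition behind membership-inference attacks, one of the privacy risks that proponents argue regulation should require developers to test for.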
Additionally, Sinha argues that more regulation should be introduced in the field, writing that even the EU AI Act is not comprehensive enough and leaves room for malicious systems to be developed (ibid). As a journalist with no direct stake in the industry, Sinha’s interest lies in providing factual and correct information to his readers. Moreover, by evaluating global approaches to AI regulation, such as those of China and the US, in the light of quantitative data, for instance the 54,000 generative-AI inventions filed in China between 2014 and 2023, he strengthens the credibility of his argument and gives it global relevance and significance. However, Sinha argues only for why regulation should be carried out and does not consider how regulation may become excessive and bottleneck AI development, making his argument appear one-sided. Despite this, his evaluation of the risks of AI, along with the weaknesses of current regulation, serves as effective reasoning for why AI development needs to be regulated.
On the other hand, Nikhil Sharma, a student at the University of Michigan, writes in the Michigan Daily that AI should not be regulated, arguing that regulation risks stifling innovation in the sector (2023). In a similar light, The National, a UAE-based news company, argues that over-regulating AI risks a decline in innovation, since this rapidly developing industry requires developers to have the freedom to experiment and innovate, a freedom that regulation may diminish (2024). Sharma also discusses the geopolitical risks of AI regulation, arguing that if the U.S. imposes rules, developers may move to jurisdictions with lighter regulation, showing the global and geopolitical ramifications of such policy. As a former investment associate and geophysicist, Sharma has no vested interest in the field, which adds credibility to his views. In his introduction, he also uses anecdotal evidence, such as the testimony of Sam Altman, co-founder and CEO of OpenAI, a leading U.S. AI company, to argue against regulation; he evaluates Altman’s call for swift regulation as driven by vested interest, arguing that regulation would suppress AI startups and limit competition for OpenAI. Additionally, Sharma supports his claims with a range of credible sources, such as the New York Times, a reputable news organisation, which raises the overall quality of the article’s argument. Nevertheless, Sharma evaluates only the negatives of regulation and how it could harm the industry’s development. While conceding that regulation will eventually prove necessary in the field, he does not adequately explain why and instead sidesteps the point. Consequently, the article leans towards one side of the argument, which could weaken its credibility in the eyes of the reader, since a holistic evaluation may not have been done. This is further exacerbated by the author’s use of sweeping statements, such as the claim that stymying AI growth at this stage could be “catastrophic” (2023). Nevertheless, Sharma’s article serves as a key source of insight into why AI regulation should not currently be pursued, owing to its strong evaluation of the negatives of such policy.
Additionally, many argue that AI should not be regulated because it is difficult to categorise AI models effectively, making regulation complicated to implement. For example, Matt O’Shaughnessy, a former visiting fellow at the Carnegie Endowment for International Peace, writes there that AI is “really challenging” to define, so regulation either restricts powerful and effective AI algorithms too broadly or is not nuanced enough to identify indirect harms, essentially rendering it ineffective (2022). He adds that even “simple algorithms” can pose harms hidden “behind a veneer of mathematical objectivity” (ibid). Similarly, Adrien Book, a strategic manager in emerging technologies, writes in the Austrian magazine WeAreDevelopers that AI is not like the atomic bomb: it is not purely destructive and can produce many positives, yet regulations are often too “vague or broad to be applicable” (2024). The article’s discussion of multiple viewpoints and perspectives is a strength of its credibility, since it indicates a lack of bias in the author’s stance and allows a fair comparison; by evaluating all sides, the author presents the information holistically. However, Book may have a potential conflict of interest, since he manages emerging-technology projects, including AI, so any argument he makes could serve vested interests, raising concerns about the article’s credibility. Moreover, the author provides a range of sources to justify and explain his arguments, primarily reputable news outlets. While this enhances the article’s credibility, the majority of the sources are from the Financial Times, a British newspaper, and this limited source diversity may suggest a lack of broader research. Ultimately, the collective argument of both sources, that AI regulation is difficult to execute, is relevant to the case against regulation, as it shows that any attempt at controlling the industry could be difficult, futile, or counterproductive.
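O’Shaughnessy’s point about “simple algorithms” can be made concrete. The sketch below (with invented weights and a hypothetical postcode proxy; it is not an example from his article) shows a plain weighted sum that never looks at a protected attribute, yet still discriminates through a correlated feature, the kind of indirect harm a narrow legal definition of “AI” would miss.

```python
# Illustrative sketch with invented numbers: a "simple algorithm" whose
# harm hides behind a veneer of mathematical objectivity. The score never
# sees a protected attribute, yet discriminates via a correlated proxy.

def loan_score(income: float, in_postcode_a: bool) -> float:
    # Looks like a neutral weighted sum of objective inputs...
    return 0.7 * income - 30.0 * in_postcode_a

# ...but if postcode A is where a marginalised group mostly lives,
# two equally qualified applicants receive very different outcomes:
print(loan_score(income=50.0, in_postcode_a=False))  # 35.0
print(loan_score(income=50.0, in_postcode_a=True))   # 5.0
```

Because this is ordinary arithmetic rather than machine learning, a regulation covering only systems that match a technical definition of “AI” would not apply to it, which is precisely the definitional gap both authors describe.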
After evaluating this issue from an array of globally diverse viewpoints, including governmental, scholarly, and industry perspectives, I now believe that while AI is a force for change and innovation, it requires a degree of regulation to ensure that users’ privacy and data are secured and that any algorithms produced do not generate results containing bias and prejudice. However, while regulation is a must, it has to be done in a way that does not hinder innovation and development in the field. Regulatory definitions must also be specific, and constantly updated to reflect changes in the field.
Before this essay, I thought AI development needed to be heavily regulated to safeguard privacy and prevent malicious models from being developed. While I still believe this to a degree, given the strong arguments on how AI may source its training data unethically or produce biased results that could affect a person’s socioeconomic standing, I now also concede that development requires extensive testing and experimentation, for which regulation may act as a barrier, limiting the growth of the field.
During my research, I noticed that while the societal impact of AI was widely discussed, the algorithms that produce AI capabilities were not fully elaborated, and their intricacies were not made evident. I would therefore conduct further research into the algorithms used to develop AI, exploring exactly how data shapes AI development and how certain algorithms can mitigate or intensify the risks in a model. Moreover, I would explore how regulation impacts and alters the original algorithms, to gauge the effectiveness of regulation in AI development.
Bibliography
- BBC (2018). Amazon scrapped ‘sexist AI’ tool. BBC News. [online] 10 Oct. Available at: https://www.bbc.com/news/technology-45809919 [Accessed 12 Feb. 2025].
- Book, A. (2024). Should AI be Regulated? The Arguments For and Against. [online] Wearedevelopers.com. Available at: https://www.wearedevelopers.com/en/magazine/271/eu-ai-regulation-artificial-intelligence-regulations [Accessed 19 Dec. 2024].
- Consilium (2024). Artificial intelligence act. [online] Available at: https://www.consilium.europa.eu/en/policies/artificial-intelligence/#what [Accessed 18 Jan. 2025].
- Google Cloud (2025). What Is Artificial Intelligence (AI)? [online] Google Cloud. Available at: https://cloud.google.com/learn/what-is-artificial-intelligence [Accessed 2 Feb. 2025].
- News Desk (2025). Meta torrented 82TB of pirated books for AI training. [online] The Express Tribune. Available at: https://tribune.com.pk/story/2527649/meta-torrented-82tb-of-pirated-books-for-ai-training [Accessed 5 Mar. 2025].
- O’Shaughnessy, M. (2022). One of the Biggest Problems in Regulating AI Is Agreeing on a Definition. [online] carnegieendowment.org. Available at: https://carnegieendowment.org/posts/2022/10/one-of-the-biggest-problems-in-regulating-ai-is-agreeing-on-a-definition?lang=en [Accessed 1 Apr. 2025].
- Pimentel, B. (2024). Why AI still needs regulation despite impact. [online] Thomson Reuters Law Blog. Available at: https://legal.thomsonreuters.com/blog/why-ai-still-needs-regulation-despite-impact/ [Accessed 31 Mar. 2025].
- Sharma, N. (2023). AI should not be regulated. [online] The Michigan Daily. Available at: https://www.michigandaily.com/opinion/regulating-ai-is-a-mistake/ [Accessed 17 Mar. 2025].
- Sinha, A. (2024). Why we need to be proactive on AI laws. [online] The New Indian Express. Available at: https://www.newindianexpress.com/opinions/2024/Sep/12/why-we-need-to-be-proactive-on-ai-laws [Accessed 10 Apr. 2025].