By Neeraja Kulkarni

AI and Philosophy: Will Computers Become Our Overlords?

In this paper, I discuss the shifting power relations among humans, technologies, and other actors. I begin by presenting Ramakrishnan’s views on, and fears about, emerging technologies, including Big Data and the militarization of AI. I then compare his arguments with Pinker’s views on the nature of human minds and the AI hype. To conclude, I overlay a Foucauldian lens on Ramakrishnan’s and Pinker’s arguments to answer the question, “Will Computers Become Our Overlords?”





Ramakrishnan argues that human intelligence evolved historically for survival; AI seems threatening only because we make intelligence the primary characteristic that defines us. What concerns Ramakrishnan is the rationality we delegate to technology rather than to humans. He points to technologies engineered today to “mimic” our neural networks: processes such as ‘reinforcement learning,’ in which systems “learn” from large data sets, identifying patterns and generating insights. Further, distinguishing older computing from machine learning (ML), he emphasizes that new programming has distanced scientists from their technology, as computers have begun performing complex calculations on their own. This loss of control grows as technologies gather ever larger data sets.


Ramakrishnan warns of several data-related issues: increasing distrust in evidence, the “monopolistic role of data,” whereby Big Tech commodifies its users to entrench its influence, and the uncertainty of data-based decision-making, which tends to perpetuate social biases. His fears regarding data are congruent with Zuboff’s arguments on surveillance capitalism. Among other things, Zuboff argues that surveillance, through the collection, analysis, and utilization of Big Data, is a newly emerged extension of capitalism: corporations profit from user data by selling it not only to other private actors but also to states. This increased control is susceptible to manipulation, threatening the health of democracies. My own fears lie primarily with the global implications of data ownership: would certain Western economies gain more control if they end up owning the personal data of people in developing countries? In contrast, Pinker would harshly dismiss the AI hype, arguing that such reinforcing pessimism leads governments to enhance security measures against imagined dangers rather than real threats. In the context of data, he would add that governments have neither the will nor the means for such extensive surveillance mechanisms.

 

In a similarly dystopian vein, Ramakrishnan paints a fearful picture of militarized AI, warning that the increasing loss of control might result in AI-automated wars. According to him, these wars would not be nuclear but would attack states’ vital digital infrastructures, with grave humanitarian consequences. Here, Pinker would refute Ramakrishnan, arguing that it overestimates human ability to ever engineer such a perfect AI. Pinker claims that if these technologies were truly so brilliant, they would not cause harm. He emphasizes that social institutions are themselves a form of technology, producing patterns and ideas in our brains, and that these intricacies make human intelligence multidimensional, requiring computational abilities different from those built into AI. Hence, it seems that AI will evolve only at the pace at which we come to understand the functioning of our own minds.


As a solution to these pressing socio-economic issues, Pinker suggests reparative legal mechanisms for holding specific actors accountable. According to Pinker, what we make of technology is what matters; hence, legislative innovation is necessary to govern the actors who own such technologies and to prevent misuse. While I agree that legislative measures will be crucial in safeguarding humans, the responsibility in this context does not fall on humans alone. As Verbeek argues in his piece “Do Artifacts Have Morality?,” AI embeds certain choices presented to its users and thereby becomes a moral agent. In the case of Big Data, without the deep-learning technologies that allow us to mine data, we could not perform these invasive processes in pursuit of maximum utility. In the national security context, automated drones allow nations to establish a presence in other countries, surveil, and even strike targets, killing civilians who are then dismissed as “collaterals.” Therefore, the mechanisms of the technologies themselves should be governed too.


Pinker and Ramakrishnan arrive at a similar conclusion: computers will not become our overlords. Ramakrishnan, for instance, is more certain that bacteria would overpower us than machines. While Ramakrishnan is disappointed by how little we understand AI, Pinker is optimistic that the causal power of our ideas can evaluate future risks and prevent dystopian prophecies. Considering their views, immediate tech governance measures should be directed toward (1) universally ratified principles on AI ethics to guide engineering, innovation, and data transparency; (2) reform and inculcation of universal human rights in the context of emerging technologies; and (3) accountability and transparency mechanisms that enable criminal legal action against corporate or state leaders. Over the longer term, mechanisms for structurally transitioning away from the current exploitative economic order can be explored. Zuboff, for instance, argues that the socio-economic order that led to the commodification of humans needs to be “reinvented and reestablished” to tackle issues such as disinformation.

 

Overlaying a Foucauldian lens, it is first important to note that AI would penetrate all digital spaces globally. Today, there are arguably two primary sovereign powers: states and tech corporations. Tech corporations and their governing states would diffuse power interchangeably onto each other and onto their citizens. This diffusion is not limited to developing states collaborating with tech corporations to exercise power over their citizens; the corporations would also establish control over the citizens of such states directly. Hence, depending on context, diverse sovereign powers would also become each other’s ‘disciplinary powers,’ while AI would become a globalized disciplinary power, with sovereign powers diffusing control among each other and through AI worldwide. Yet as users willingly give away their personal data, Foucault would argue that these systems would be transparent, producing a geopolitical anatomy in which coercion is consensual and subtle.


At the very beginning of his argument, Ramakrishnan proclaims that “computers have already overtaken” and that they make us “do things we don’t want to do.” In terms of power relations, AI would exercise ‘biopower’ over all its users, sovereign or mass: (1) technologies establish control over the masses (through their physical and digital interfaces) by limiting and guiding how they are used, and (2) they penetrate our mixed realities digitally, shaping our language: how we think, speak, and, most importantly, how we act, as our interactions with technologies, and with people through technology, increase. Perhaps the aforementioned governance mechanisms might be valuable in resisting such forms of repression.


This paper is limited in that, should AI ever become sentient (whether through our engineering or independently), it might reshape the socio-economic order entirely, drastically changing all power relations. To answer the question, “Will computers become our overlords?,” I have therefore overlaid a Foucauldian lens, arguing that power relations between humans and computers would continue to shift, and that there would be no true overlords.



The following paper was written for the class ‘Philosophy of Technology: From Marx and Heidegger to AI, Genome Editing, and Geoengineering,’ taught by Prof. Mathias Risse at the Harvard Kennedy School.


References

Jasanoff, Sheila. States of Knowledge: The Co-Production of Science and Social Order. London: Routledge, 2004.


Pinker, Steven. “Tech Prophecy and the Underappreciated Causal Power of Ideas.” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman. New York: Penguin Press, 2019.


Ramakrishnan, Venki. “Will Computers Become Our Overlords?” In Possible Minds: Twenty-Five Ways of Looking at AI, edited by John Brockman. New York: Penguin Press, 2019.


Scharff, R. C., & Dusek, V. “Panopticism. Michel Foucault.” In Philosophy of technology : the technological condition : an anthology (Second edition.). (Wiley Blackwell, 2014).


Scharff, R. C., & Dusek, V. “The New Forms of Control. Herbert Marcuse” In Philosophy of technology : the technological condition : an anthology (Second edition.). (Wiley Blackwell, 2014).


Smith, Tony. “Marx, Technology, and the Pathological Future of Capitalism.” In The Oxford Handbook of Karl Marx. New York: Oxford University Press, 2019.


Verbeek, Peter-Paul. “Do Artifacts Have Morality?” In Moralizing Technology: Understanding and Designing the Morality of Things. Chicago: University of Chicago Press, 2011.


Zuboff, Shoshana. “Surveillance Capitalism or Democracy? The Death Match of Institutional Orders and the Politics of Knowledge in Our Information Civilization.” Organization Theory 3, no. 3 (2022).


Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019.







