By Sadia Meherin
AI sounds fancy and magical at work, but it is ultimately a political concern: it has the potential to reshape society and the economy, and governments around the world are taking steps to address its consequences and to ensure that it is developed and used in ways that align with the values and interests of their citizens. Political decisions about AI will shape how it is developed and used, and governments must weigh the consequences of those decisions carefully.
That said, there are a number of challenges to implementing AI in the political landscape, and we need to acknowledge them.
Politics in its simplest form is the decision of who gets what, when, and how, and our first challenge comes in the form of ‘who.’ If and when AI is manufactured, who gets to control its production? Should it be the government? If so, the technology will inevitably embody the values of that government.
Imagine a powerful AI under the Chinese or North Korean regime versus under a democratic one. It is not so much a question of whether one system of governance is better than the other; these systems are inherently different, driven by different ideologies, and are bound to pass their doctrines on to anything they produce.
Furthermore, AI implementation is going to change the way we view our lives. If the government pulls the strings by monopolizing it, we will end up ceding too much authority to an already powerful institution that sits beyond our reach. AI would allow governments to run surveillance programmes beyond our imagination, putting human rights at stake.
AI needs the kind of investment and resources that only huge MNCs may be able to pour in. How do we even begin to unravel the problem we face if something that can manipulate lives and thinking is entrusted to the hands of a very few who operate under the profit motive? Another danger in that scenario is that no human is without bias, a bias they are bound to pass on to the AI they build, which in turn will be biased in its operations. For example, if AI were used in a company’s HR department to sift through résumés, and the person designing the algorithm had an inherent bias against women or people of colour, the AI would automatically rule out candidates who don’t fit the bill, as the sketch below illustrates. If that happens, at what point does the government intervene, and under what regulatory body?
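To make that concrete, here is a minimal sketch of how bias can be baked into an automated screener. Everything in it is hypothetical: the field names, the scoring rules, and the weights are invented for illustration and do not come from any real hiring system.

```python
# Hypothetical résumé scorer: the rules below are invented for illustration.

def score_resume(resume: dict) -> float:
    """Score a candidate using signals a (biased) designer chose."""
    score = 0.0
    if resume["years_experience"] >= 5:
        score += 2.0
    # A seemingly neutral proxy: favouring a handful of universities
    # can quietly exclude whole demographics.
    if resume["university"] in {"University A", "University B"}:
        score += 3.0
    # Penalising career gaps disproportionately filters out carers,
    # who are statistically more often women.
    if resume["career_gap_years"] > 1:
        score -= 2.5
    return score

candidates = [
    {"name": "X", "years_experience": 7, "university": "University A", "career_gap_years": 0},
    {"name": "Y", "years_experience": 9, "university": "University C", "career_gap_years": 2},
]

# Y has more experience, yet the encoded proxies rule them out.
shortlist = [c["name"] for c in candidates if score_resume(c) >= 2.0]
print(shortlist)  # ['X']
```

The point is that no rule here mentions gender or race; the bias rides in on proxies the designer chose, which is exactly what makes it hard for a regulator to detect after the fact.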
On that note, another significant aspect of politics is accountability: assigning responsibility and facing the consequences when things go wrong. Consider the most common dilemma, the one in which the lives of a bus full of passengers are pitted against those of pedestrians. If an AI were to decide the outcome, and in such a scenario some harm is unavoidable, whom do we hold accountable? Should we blame the driver? The pedestrians? The AI itself? What standards do we even have for machine reasoning? And what do consequences look like for a machine?
Next, AI-based systems are not creations of common-sense intelligence. Let’s consider an experiment.
In one instance, an AI was tasked with assembling a robot toy’s body parts and then making it travel from point A to point B. Do you know what the AI did? It stacked the toy’s body parts into a tower and let the tower topple over so that it fell to point B.
The action it took is not based on common sense; more importantly, it is not what the humans wanted when they designed the task, and therein lies the problem. This was a small exhibit. We, the people designing the algorithm, can give instructions but cannot always predict precisely what these machines will do, and that is terrifying! An AI will do whatever satisfies its instructions, not what we actually want it to do; the sketch below makes the dynamic concrete. In real life, things like this could cost lives, because human life would hold no value to an AI programme. If our intentions are not aligned with the machine’s objective, the consequences will be terrible. Remember Midas and his golden touch?
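This failure mode is often called specification gaming: the reward measures only the outcome, not the intended behaviour. Here is a minimal sketch of the dynamic; the numbers and strategy names are hypothetical and do not reproduce the actual experiment.

```python
# Hypothetical reward for the toy-assembly task: it measures only how
# close the robot ends up to the target, not whether it walked there.

def reward(final_position: float, target: float = 10.0) -> float:
    """Higher reward the closer the end position is to the target."""
    return -abs(target - final_position)

# Two strategies an optimiser might discover, with their end positions:
strategies = {
    "walk_as_intended": 8.0,   # an imperfect gait falls a little short of B
    "stack_and_topple": 10.0,  # a tall tower falls over and lands exactly on B
}

best = max(strategies, key=lambda s: reward(strategies[s]))
print(best)  # 'stack_and_topple': the letter of the objective, not its spirit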
AI specialists commonly divide AI into three categories: narrow intelligence (NI), general AI, and superintelligence. We are surrounded by NI, general AI is upon us, and superintelligence, intelligence that surpasses our own, is fast approaching too. It all sounds like science fiction. It was science fiction for the generations before us, who never had to face these questions. But for us it is a harsh reality, and we are the ones who have to deal with it.
In its more traditional definition, politics deals with the distribution of power and how that power operates within societal institutions, which makes AI and its related elements a political concern. AI is not built with the native intelligence we humans are born with, so it is easy to give it the wrong problem to solve, and the consequences of doing so will be vast but directionless. Technological change will not wait for anyone, and our policy institutions are not evolving fast enough to address the issues raised by modern technology. There are many ways to address the problems of AI implementation, depending on the perspective we judge them from. Within a political scope, however, the most significant thing we must confront is that, although the world is ready for AI, our governmental and legal frameworks are not.
Here, in brief, are several challenges in implementing AI in the political landscape:
Ensuring ethical development and use: One challenge is ensuring that AI is developed and used ethically, particularly in areas such as facial recognition and other forms of biometric data that can be used for surveillance and tracking. Governments must also address the potential for biased algorithms, which can reinforce existing inequalities and discrimination.
Managing the impact on employment: AI has the potential to disrupt the job market, leading to job displacement in certain sectors. Governments must consider how to address these impacts and ensure that workers have the skills and support they need to adapt to a changing job market.
Balancing regulation and innovation: Governments must also consider how to balance the need for regulation and oversight with the need to support innovation in the AI sector. Overly strict regulations could stifle innovation, but a lack of regulation could lead to negative consequences for society.
Ensuring accessibility and inclusivity: It is important to ensure that AI technologies are accessible and inclusive, particularly for people with disabilities. Governments must consider ways to promote the development of accessible AI technologies and ensure that the benefits of AI are shared widely across society.
Managing the potential for misuse: There is also the potential for AI to be used for malicious purposes, such as in cyber attacks or propaganda. Governments must consider how to address these risks and ensure that AI is used responsibly.